Multiple Regression or Separate Simple Regressions?
One way of thinking about why least squares regression (and other methods, but I'm assuming this is what you're asking about) is useful is to think about the problem of distinguishing different effects. In other words, regression allows us to determine the unique effect that X has on Y and the unique effect that Z has on Y. If X and Z are statistically related, then simply regressing Y on X will give an erroneous estimate of the effect of X on Y, because some of the effect of Z will be caught up in the regression. The same thing happens if you only regress Y on Z. The cool thing about regression is that it allows us to see the unique effect that each predictor has on the response variable, even if our predictors are themselves related. That being said, it sounds like you need to read up on, or review, the basics of regression itself. This is especially true if you're using regression methods to make arguments in a thesis.
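A small simulation sketches the point (the data and coefficients here are made up for illustration): when a predictor z is correlated with x and also affects y, the simple regression of y on x absorbs part of z's effect, while the multiple regression separates the two.

```r
# Sketch: z is correlated with x and also affects y (coefficients are illustrative)
set.seed(1)
n <- 1e4
z <- rnorm(n)
x <- 0.7 * z + rnorm(n)        # x and z are statistically related
y <- 2 * x + 3 * z + rnorm(n)  # true unique effects: 2 for x, 3 for z

coef(lm(y ~ x))      # simple regression: slope on x is inflated well above 2
coef(lm(y ~ x + z))  # multiple regression: recovers roughly 2 and 3
```

The inflation in the simple regression is exactly the "caught up" effect of z described above.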
Multiple Regression or Separate Simple Regressions?
This answer to another question, along with the other discussion, may help your understanding. A big part of it is that x and z may be correlated with each other and you need to take that relationship into account to fully understand how they relate to y. Even if x and z are perfectly orthogonal, accounting for the variance explained by z when looking at the relationship between x and y can reduce the variation and give more precise estimates. That said, sometimes there are advantages to looking at the individual relationships as well as the multiple regression. You need to think about what question(s) you are trying to answer and what models would answer them.
Is a weighted average of two correlation matrices again a correlation matrix?
Here is an argument in two steps.

Firstly, $M = pR + (1-p)Q$ is the variance-covariance matrix of some vector of random variables.

Proof: we require to show that $M$ is positive semi-definite. We know that $R$ and $Q$ are, since they are correlation matrices. Hence for any non-zero vector $x$, $x^tRx \geq 0$ and $x^tQx \geq 0$. Since $p\geq0$ and $(1-p)\geq0$ we have, as required:

$$x^tMx = p(x^tRx) + (1-p)(x^tQx) \geq 0$$

For this covariance matrix to be a correlation matrix we additionally require that the random variables that form the components of the vector have variance one. The variances are the diagonal elements, and we already know that the diagonal elements of $R$ and $Q$ are one since they are correlation matrices. So:

$$M_{ii} = pR_{ii} + (1-p)Q_{ii} = p(1) + (1-p)(1) = 1$$

This establishes that "the weighted average of two correlation matrices is a correlation matrix", which is the question in the title. It doesn't address the question implicit in the body of the post, though:

I claim that $\sqrt pX+\sqrt{1-p}Y$ is a random vector that has correlation matrix $pR+(1-p)Q$. Here, $X$ is the random vector with correlation $R$ and $Y$ is the random vector with correlation $Q$. Also, $X$ and $Y$ are independent. I can get the covariance part to match but I can't get the standard deviation in the denominator to match. I'm not sure why.

The covariance part is correct, which is why that part matched. But the claim about the correlation matrix of $\sqrt pX+\sqrt{1-p}Y$ is incorrect, which is why the algebra doesn't work out. I will demonstrate this with a simulation from the bivariate normal distribution using $p=\frac{1}{2}$. The relevant intuition in my choice of parameters is that the variances/covariance of $X$ swamp any variation in $Y$, so that the correlation structure of $\frac{1}{\sqrt{2}}X + \frac{1}{\sqrt{2}}Y$ is largely determined by $X$.

Here's the R code for the set-up:

```r
library(MASS)  # has mvrnorm to simulate multivariate normals
n <- 1e7       # simulated sample size
mu <- c(0, 0)
Sigma1 <- matrix(c(100, 50, 50, 100), nrow=2)
Sigma2 <- matrix(c(1, 0, 0, 1), nrow=2)

# covariance = 50, correlation = 0.5
x <- mvrnorm(n, mu, Sigma1)

# covariance = correlation = 0
y <- mvrnorm(n, mu, Sigma2)

# covariance = ?, correlation = ?
z <- sqrt(0.5)*x + sqrt(0.5)*y
```

For $X$ and $Y$ we find the covariance between the first and second components of the vector is pretty close to the specified population covariance. For $\frac{1}{\sqrt{2}}X + \frac{1}{\sqrt{2}}Y$ we find the covariance is indeed the weighted average ($p = \frac{1}{2}$) of the covariances for $X$ and $Y$.

```r
> cov(x[,1], x[,2])
[1] 50.0234
> cov(y[,1], y[,2])
[1] -0.0004923819
> cov(z[,1], z[,2])
[1] 25.01153
```

For the correlation between first and second components, again the sample of simulated $X$ and $Y$ behaves in the expected manner, but $\frac{1}{\sqrt{2}}X + \frac{1}{\sqrt{2}}Y$ largely reflects the behaviour of $X$.

```r
> cor(x[,1], x[,2])
[1] 0.5001478
> cor(y[,1], y[,2])
[1] -0.0004925274
> cor(z[,1], z[,2])
[1] 0.4951888
```

Let's visualise the problem by looking at the relationship between the first and second components for our samples of each of the three random vectors. To get it to plot in a reasonable time I resampled with n <- 1e5 first, but this didn't substantially alter the results.

```r
require(ggplot2)
df <- data.frame(variable = rep(c("X", "Z", "Y"), each=n),
                 first  = c(x[,1], z[,1], y[,1]),
                 second = c(x[,2], z[,2], y[,2]))
g <- ggplot(df, aes(x=first, y=second, colour=variable)) +
  geom_point(alpha = 0.1) +
  coord_fixed() +
  theme_bw() +
  guides(colour = guide_legend(override.aes = list(alpha = 1))) +
  xlab("First component") + ylab("Second component")
print(g)
g <- g + guides(colour=FALSE) + facet_grid(. ~ variable)
print(g)
```

We can see that the components of $Y$ are not correlated (the points from the sample of $Y$ form a roughly circular blob) while the components of $X$ are (its points form a roughly elliptical blob with its axes not aligned with horizontal and vertical). Their weighted average $Z$ resembles a scaled-down version of $X$ because the contribution from $Y$ is so small. This produces the required averaging of covariances, but leaves the correlation almost unchanged from that of $X$.

So if the original poster wants to make headway with their $\sqrt pX+\sqrt{1-p}Y$ approach (which is rather neat) then it's important to prevent such "swamping". This is feasible, since the covariance matrix does behave in the manner they desire, and a correlation matrix is just the covariance matrix for a vector whose components each have unit variance. So rather than setting $X$ as any random vector with correlation $R$, and $Y$ as any random vector independent of $X$ and with correlation $Q$, try being stricter by demanding that $R$ and $Q$ are the covariance matrices. Can you make your algebra work now? Look at this simulation with $p=\frac{1}{4}$, so that $Z$ is mostly weighted towards $Y$:

```r
> Sigma1 <- matrix(c(1, 0.4, 0.4, 1), nrow=2)  # variance = 1 for each component
> Sigma2 <- matrix(c(1, 0.8, 0.8, 1), nrow=2)
>
> # covariance = correlation = 0.4
> x <- mvrnorm(n, mu, Sigma1)
>
> # covariance = correlation = 0.8
> y <- mvrnorm(n, mu, Sigma2)
>
> # p = 1/4
> # covariance = ?, correlation = ?
> z <- sqrt(0.25)*x + sqrt(0.75)*y
>
> cor(x[,1], x[,2])
[1] 0.4001048
> cor(y[,1], y[,2])
[1] 0.8001744
> cor(z[,1], z[,2])  # weighted average .25*.4 + .75*.8 = 0.7
[1] 0.7000963
```
Is a weighted average of two correlation matrices again a correlation matrix?
To be a non-degenerate correlation matrix, $pR+(1-p)Q$ must have two properties:

1. All the diagonal elements of $pR+(1-p)Q$ must be $1$.
2. $pR+(1-p)Q$ must be positive definite.

Clearly (1) is met: each diagonal element of $R$ and $Q$ is $1$, so each diagonal element of the weighted average is $p(1)+(1-p)(1)=1$. Noting that both $R$ and $Q$ are positive definite, (2) holds since any positive definite matrix multiplied by a positive scalar is still positive definite, and the sum of any two positive definite matrices is also positive definite. Hope that simplifies things!
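A quick numerical check of the two properties, using two small illustrative correlation matrices:

```r
# Average two 2x2 correlation matrices (entries are made up) and verify
# that the result still has a unit diagonal and positive eigenvalues
R <- matrix(c(1,  0.4,  0.4, 1), nrow = 2)
Q <- matrix(c(1, -0.3, -0.3, 1), nrow = 2)
p <- 0.6
M <- p * R + (1 - p) * Q

diag(M)          # both entries are 1, so property (1) holds
eigen(M)$values  # both positive, so property (2) holds
```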
impose an intercept on lm in r [duplicate]
Something like this should do it:

```r
fit <- lm(I(y - 9.81) ~ 0 + x1 + x2 + I(x3^2) + x4 + x5 + x6,
          data = data[i:(i+k), ])
```

Something similar should be possible in many packages. An alternative:

```r
interc <- rep(9.81, k+1)
fit <- lm(y ~ 0 + x1 + x2 + I(x3^2) + x4 + x5 + x6 + offset(interc),
          data = data[i:(i+k), ])
```

While the coefficients and standard errors should be the same, one advantage of the second one is that it actually gives a model for y rather than a shifted y. In some cases that may be useful. (If you want to test the intercept value, remove the "0 +".)

AIC should be fine working this way. $R^2$ won't really work - at least not without some thought, and even then probably not the way you'd like. Its meaning will change from a model with an intercept, since a pre-specified intercept is effectively a no-intercept model (in fact it is one, for a shifted y). Depending on the exact form of calculation of $R^2$, you might get values outside $[0,1]$, for example, and different forms that were equivalent may no longer be. Not having a free intercept makes the comparison with an intercept-only model tricky. If you need an $R^2$, you need to think carefully about which properties of $R^2$ you most need to preserve, because you're going to have to give some up.
Is it valid to use quantile regression with only categorical predictors?
A quantile regression model establishes a relationship between the percentiles of a continuous outcome and a set of predictors. In the simplest situation the outcome needs to be a continuous variable, but both categorical and continuous predictors can be included. If you want to evaluate the impact of 2 dichotomous predictors on a specific percentile $p$ of $Y$ (which is analogous to a two-way ANOVA), you can build the model $$Q_{Y|X=x}(p)=\beta_0(p)+\beta_1(p)x_1+\beta_2(p)x_2$$ where $\beta_1(p)$ and $\beta_2(p)$ express the change in the $p$th percentile of $Y$ associated with the dichotomous variables $x_1$ and $x_2$, respectively.
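As a sketch of how this model is fit in practice (the data here are simulated and the effect sizes are made up), the quantreg package's rq function accepts dichotomous predictors directly:

```r
# Median regression (p = 0.5) on two simulated dichotomous predictors
library(quantreg)
set.seed(1)
n  <- 500
x1 <- rbinom(n, 1, 0.5)
x2 <- rbinom(n, 1, 0.5)
y  <- 1 + 0.5 * x1 - 0.8 * x2 + rnorm(n)

fit <- rq(y ~ x1 + x2, tau = 0.5)  # tau is the percentile p being modelled
summary(fit)
```

Changing tau (e.g. to 0.1 or 0.9) fits the same model for other percentiles of $Y$.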
Is it valid to use quantile regression with only categorical predictors?
Binary predictors (e.g. male vs female) and categorical variables (e.g. color) can enter into quantile regression alone or in combination with continuous predictors. Anything you can do in multiple regression, ANOVA or ANCOVA -- that is, any general linear model (GLM) -- should work with quantile regression.

Petscher and Logan (2014) provide a wonderful intro, explicitly discussing how to interpret the results of quantile regression with a single continuous predictor, with a single binary predictor, and with both. They also discuss an application to longitudinal research. They compare all quantile regression results to standard regression. With regard to interpreting quantile regression coefficients, they note: "While linear regression posits the question 'What is the relation between X and Y?' quantile regression extends this to, 'For whom does a relation between X and Y exist' as well as testing for whom a relation is stronger or weaker" (p. 864, emphasis added).

Gaining intuition for quantile regression can take practice. The introductions associated with software in R are very useful; this short post also provides some insights.

Software notes: The lqmm package in R provides facilities for quantile regression with random effects and quantile Poisson regression. As is generally applicable for random-effects/hierarchical models, lqmm is better behaved if you center continuous predictors. Koenker's quantreg package in R, which has a nice vignette, handles independent data and has facilities for non-linear quantile (nlrq) models and "non-parametric quantile smoothing", which looks to be similar to traditional smoothing/GAM methods.
Should train and test datasets have similar variance?
You have to first figure out why you are splitting the data. The only reason that comes immediately to mind is that fitting the model is so laborious that you can only do it once. Otherwise, resampling methods are far better, starting with the Efron-Gong optimism bootstrap (see e.g. the R rms package) or 10-fold cross-validation repeated 100 times.
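The repeated cross-validation mentioned above can be sketched in a few lines of base R (this assumes a data frame d with response y already exists; the fold logic, not the model, is the point):

```r
# 10-fold cross-validation repeated 100 times for a linear model;
# a data frame `d` with response column `y` is assumed to exist
set.seed(1)
cv_mse <- replicate(100, {
  folds <- sample(rep(1:10, length.out = nrow(d)))  # random fold assignment
  mean(sapply(1:10, function(k) {
    fit <- lm(y ~ ., data = d[folds != k, ])        # train on 9 folds
    mean((d$y[folds == k] -
          predict(fit, newdata = d[folds == k, ]))^2)  # test on held-out fold
  }))
})
mean(cv_mse)  # averaged out-of-sample error estimate
```

Because every observation is used for both fitting and testing across the repeats, this makes far better use of the data than a single split.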
Should train and test datasets have similar variance?
Not necessarily. What matters more is that the conditional distribution of $Y|X$ is consistent across both data sets. In other words, if the $Y$ variance in the test data set is higher, it could be that the $X$ variance is also higher, and the fitted coefficients will explain the $Y$ variance equally well. Plot Y ~ X on both data sets and fit a regression line on each plot. What do you see?
Solving linear regression with weights and constraints
You're looking for the mgcv package. With the toy data we used before, it works just fine. (I'm uncertain why rstan is so confident in its results... I'm still looking into it.)

```r
set.seed(1880)
N <- 1500
d <- c(1/2, 2/pi, 2/3)
x <- c(2, 1, 3)
limit <- 5
d %*% x <= limit

A <- cbind(1, rnorm(N), rnorm(N))
b.hat <- A %*% x
wgt <- rexp(N)
b <- rnorm(N, mean=b.hat, sd=wgt)

library(mgcv)
pin <- c(1.5, .75, 2.5)
Ain <- matrix(d, nrow=1)
M <- list(y=b, w=wgt, X=A, p=pin, Ain=-Ain, bin=-limit,
          C=matrix(1, ncol=0, nrow=0))
pcls(M)
# 1.8844996 0.9421333 2.9770852
```

The inequality in this package is flipped the other direction by default, so we have to multiply both sides by $-1$.
Solving linear regression with weights and constraints
Whenever I have a complicated model to fit, I usually just fit it directly in rstan, because it's great at fitting highly constrained coefficients and because it's easy to include penalties and transformations of variables. This is true even when I'm not explicitly fitting a Bayesian model. This is what I've worked up for your particular problem.

```r
library(rstan)

set.seed(1880)
N <- 1500
d <- c(1/2, 2/pi, 2/3)
x <- c(2, 1, 3)
limit <- 5
d %*% x <= limit
# TRUE

A <- cbind(1, rnorm(N), rnorm(N))
b.hat <- A %*% x
tau <- 5
wgt <- rexp(N)
Sigma <- tau*wgt
b <- rnorm(N, mean=b.hat, sd=Sigma)

constrained.reg <- "
data{
  int<lower=1> N;
  int<lower=1> K;
  vector<lower=0>[N] wgt;
  matrix[N,K] A;
  vector[N] b;
  vector[K] d;
  real limit;  // s.t. d*x<=limit
}
parameters{
  real<upper=limit> c;  // this is the largest possible value of x%*%d.
  simplex[K] sim_x;
  real<lower=0> tau;
}
transformed parameters {
  vector[K] x;
  vector[N] b_hat;
  vector[N] Sigma;
  x <- d .* sim_x / c;
  b_hat <- A*x;
  Sigma <- tau*wgt;
}
model{
  b ~ normal(b_hat, Sigma);
  increment_log_prob(-2*log(tau));  // uniform prior on beta, noninformative prior on tau
}
generated quantities{
  vector[N] resid;
  resid <- (b_hat-b) ./ Sigma;
}
"

fake.data <- list(N=N, A=A, K=3, b=b, wgt=wgt, d=d, limit=limit)
fit.test <- stan(model_code=constrained.reg, data=fake.data, iter=10)
system.time(fit <- stan(fit=fit.test, iter=1000, data=fake.data))
print(fit, c("x", "tau")); x
```

I realized that I was being dense, and that we can enforce the inequality by sampling a value as large as the maximum permissible dot product result and then transforming appropriately.

```r
     mean se_mean   sd 2.5%  25%  50%  75% 97.5% n_eff Rhat
x[1] 1.99       0 0.01 1.98 1.98 1.99 1.99  2.00  1645 1.00
x[2] 0.99       0 0.01 0.97 0.98 0.99 0.99  1.00   624 1.00
x[3] 3.00       0 0.01 2.98 2.99 3.00 3.01  3.02   945 1.00
tau  4.82       0 0.09 4.62 4.76 4.82 4.88  5.00   558 1.01
```

These results look fine to me.
MCMC packages in R
The t-walk package, implementing the t-walk algorithm, allows you to define the support for your (log-)likelihood function, if that is what you are after. From the documentation of its Supp argument:

    Supp: a function that takes a vector of length dim and returns TRUE if the vector is within the support of the objective and FALSE otherwise. Supp is *always* called right before Obj.

It also seems to be a pretty general sampling algorithm. From the package description: the t-walk is "a general purpose sampling algorithm for continuous distributions", built to sample from many objective functions (especially suited for posterior distributions using non-standard models that would make the use of common algorithms and software difficult); it is an MCMC method that does not require tuning. R package here: www.cimat.mx/~jac/twalk/
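To illustrate the calling convention (a Python sketch of the interface, not the package's actual code): Supp is a cheap indicator of the support, and Obj is only ever evaluated at points where Supp returns TRUE, so Obj can safely take logs, divide, etc.

```python
import math

def supp(x):
    """TRUE iff x lies inside the support of the objective."""
    return len(x) == 2 and x[0] > 0 and x[1] > 0   # e.g. both parameters positive

def obj(x):
    """Negative log-density; only called when supp(x) is TRUE, so log() is safe."""
    return x[0] - math.log(x[0]) + x[1] - math.log(x[1])

proposal = (0.5, 2.0)
if supp(proposal):        # the sampler checks Supp right before Obj
    print(obj(proposal))
else:
    print("rejected: outside support")
```

The point of the two-function split is that proposals outside the support can be rejected without ever touching the (possibly undefined) objective.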
MCMC packages in R
You should also check out Mamba, a new MCMC package. It's not in R, but rather Julia: https://github.com/brian-j-smith/Mamba.jl. It relies on the Julia Distributions package, which allows you to create your own distributions. Package documentation and examples: http://mambajl.readthedocs.org/en/latest/
MCMC packages in R
Stan allows user-defined functions (including likelihoods) as part of the model's "functions" block. These may not be quite as fast as the language's built-in likelihoods (and they won't automatically drop constant terms), but they will still be fairly fast. The specific details of writing functions are in the Stan manual, and examples can be found on the stan-users mailing list.
PCA on train and test datasets: do I need to merge them? [duplicate]
Principal component analysis will provide you with a set of principal components $W$; these components qualitatively represent the principal, mutually orthogonal modes of variation in your sample. You use (some of) these components $W$ to project your original dataset $X$ onto a lower-dimensional subspace $T$. This is your new dataset, and the PCs are in effect an axis system over which we can represent the data $X$ in compact form.

Now, as @RobertKubrick mentions, you need to make sure that information from your test dataset is not "leaked" into your training dataset. If that happens, you will be using information that "should be unknown" during prediction; your error estimates will be wrong and the generalization of your model will suffer.

For your case in particular you should do the following: calculate the principal components $W$ on the training dataset, and then use the training-sample $W$ to reduce the dimensions of the test dataset. I say this because:

1. If you merged your training and test datasets to calculate your PCs, you would evidently be using information from the test set. This is clearly wrong.
2. If you did two independent PCAs, you would be comparing data registered on different axes (if nothing else, principal components are not sign-identifiable, so parameters estimated from them will have the same issue). The axes over which you project your data should be the same; otherwise you are in a typical apples-and-oranges situation.

Clearly, if you do $k$-fold cross-validation, or something similar (e.g. jackknifing), you will need to calculate new principal components $W$ each time.

I.T. Jolliffe's Principal Component Analysis is a standard and great reference on PCA; I would strongly recommend it.
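In case it helps, here is a minimal sketch of that workflow, assuming an SVD-based PCA (the helper names fit_pca and project are mine, not from any package): fit $W$ on the training set only, then project both sets with it.

```python
import numpy as np

def fit_pca(X, k):
    """Return (mean, W): the training mean and the top-k principal axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T              # columns of W are the principal components

def project(X, mu, W):
    """Project (possibly new) data onto axes learned from the training data."""
    return (X - mu) @ W

rng = np.random.default_rng(0)
mix = rng.normal(size=(5, 5))        # induce correlation between features
X_train = rng.normal(size=(100, 5)) @ mix
X_test  = rng.normal(size=(20, 5)) @ mix

mu, W = fit_pca(X_train, k=2)        # W estimated on the training set ONLY
T_train = project(X_train, mu, W)
T_test  = project(X_test, mu, W)     # test set uses the *training* axes
print(T_train.shape, T_test.shape)   # (100, 2) (20, 2)
```

The key point is that mu and W never see X_test, so no test information leaks into the fitted subspace.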
PCA on train and test datasets: do I need to merge them? [duplicate]
The test set should never be included in your modeling decisions, or else you will lose the benefit of unfitted data. This is true for regression, PCA, or any other fitting technique. You want to calculate the prediction error on data "unseen" by your model.
Population parameters of a regression
The problem is with this:

I had always interpreted the betas as the partial derivative of X on Y 'in reality'

That's not always true in a model with interactions or various other forms of complexity. Take a simpler example. Assume your model is $$ E[Y] = \beta_0 + \beta_1 X + \beta_2 Z + \beta_3 XZ $$ Here the partial derivative of $E[Y]$ with respect to $X$ is $\beta_1 + \beta_3 Z$. Put another way, $\beta_1$ is the partial derivative of $E[Y]$ with respect to $X$ only when $Z = 0$. Your model is a special case of this one. The population marginal effect of $X$ (the partial derivative you're talking about) is indeed one of the things you're interested in modeling with this regression. But think of it as just a happy coincidence when this quantity corresponds to a particular model parameter. Generally speaking, it won't.
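If in doubt, this is easy to check numerically with finite differences (the coefficient values below are arbitrary, purely for illustration):

```python
# Numeric check that dE[Y]/dX = b1 + b3*Z for E[Y] = b0 + b1*X + b2*Z + b3*X*Z
b0, b1, b2, b3 = 1.0, 2.0, -0.5, 3.0
Ey = lambda X, Z: b0 + b1*X + b2*Z + b3*X*Z

def dEy_dX(X, Z, h=1e-6):
    """Central-difference approximation to the partial derivative w.r.t. X."""
    return (Ey(X + h, Z) - Ey(X - h, Z)) / (2 * h)

for Z in (0.0, 1.0, 2.5):
    # only at Z = 0 does the derivative equal b1 alone
    print(Z, dEy_dX(X=1.7, Z=Z), b1 + b3*Z)
```

The two printed columns agree for every $Z$, and only the $Z = 0$ row returns $\beta_1$ by itself.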
Population parameters of a regression
Your understanding is correct--provided we look at the model in the right way. Because the question concerns interpreting a predictive model, we may focus on its predictions and ignore the error term. The example is sufficiently general that we might as well address it directly, so consider a model of the form $$Y = \beta_0 + \beta_1 X + \beta_2 X^2.$$ This can be viewed as the composition of two functions, $Y = g(f(X)),$ where $$f:\mathbb{R}\to \mathbb{R}^3,\quad f(x) = (1, x, x^2)$$ and $$g:\mathbb{R}^3\to \mathbb{R},\quad g((x,y,z)) = \beta_0 x + \beta_1 y + \beta_2 z = (\beta_0,\beta_1,\beta_2)(x,y,z)^\prime.$$ This figure (which suppresses the unvarying first coordinate) depicts the graph of $1 + 10y - 2z$ as a blue planar surface, shows hypothetical data as red points, and plots the graph of $x\to (x, x^2)$ as a black curve. The points all lie along this curve and the planar surface, which is fit to the points, contains the curve. The following discussion distinguishes between moving about in the plane (which is described by the partial derivatives of $g$) and motion constrained to the curve (which is described by the partial derivatives of the composite function $g\circ f$.) It is indeed the case that the betas are the partial derivatives of $g$ with respect to its arguments: $$\beta_0 = \frac{\partial g}{\partial x},\ \beta_1 = \frac{\partial g}{\partial y},\ \beta_2 = \frac{\partial g}{\partial z},$$ all of which are constant (because $g$ is a linear transformation). In this sense, it is indeed correct to understand the betas as partial derivatives. 
However, the partial derivatives of $Y$ with respect to $X$ are obtained via the Chain Rule from those of $g$ and those of $f$: $$\frac{\partial Y}{\partial X}(X) = Dg(f(X)) Df = (\beta_0, \beta_1, \beta_2) (0,1,2X)^\prime = \beta_1 + 2\beta_2 X.$$ The function $f$ captures the fact that the three variables in the model--the constant, $X$, and $X^2$--are not functionally independent: the third is determined by the second. This lack of independence means that $X$ and $X^2$ cannot be changed separately, the way unrelated variables $X$ and $Z$ could be changed in a model of the form $Y = \beta_0 + \beta_1 X + \beta_2 Z$. In general, this is exactly what it means for any model to be "curvilinear." In practice, $f$ is realized by the dataset itself: a separate column of values $X^2$ has to be created (either explicitly by the user or internally in response to a nonlinear model formula) out of other data columns, in this case that of $X$. The function $g$--specifically, its coefficients $(\beta_0,\beta_1,\beta_2)$--is what least squares regression estimates. By separating the nonlinear behavior ($f$) from the linear behavior ($g$) in this fashion, least squares techniques can fit nonlinear functional forms. Only by considering these two aspects of the model--$f$ and $g$--can the coefficients be properly and fully interpreted.
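The chain-rule computation is easy to verify numerically; here is a small sketch using the plane $1 + 10y - 2z$ from the figure (finite differences, my own illustration):

```python
# Chain-rule check for Y = g(f(X)) with f(x) = (1, x, x^2) and g linear:
# dY/dX should equal b1 + 2*b2*X.
b0, b1, b2 = 1.0, 10.0, -2.0       # the plane 1 + 10y - 2z from the figure

def f(x):
    """The nonlinear map x -> (1, x, x^2) realized by the dataset."""
    return (1.0, x, x * x)

def g(v):
    """The linear map whose coefficients least squares estimates."""
    return b0 * v[0] + b1 * v[1] + b2 * v[2]

def dY_dX(x, h=1e-6):
    """Central-difference derivative of the composite g(f(x))."""
    return (g(f(x + h)) - g(f(x - h))) / (2 * h)

for x in (-1.0, 0.0, 2.0):
    print(x, dY_dX(x), b1 + 2 * b2 * x)   # the last two columns agree
```

The betas are the (constant) partials of $g$, yet the derivative of $Y$ with respect to $X$ varies with $X$, exactly as the chain rule says.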
Random Forest - Need help understanding the rfcv function
The rfcv function creates multiple models based on the number of predictors and the "step" argument (default = 0.5). In your case you began with 9 predictors with step = 0.7, which corresponds to the first row of your output: first value = 9, second value = round(9 × 0.7) = 6, third value = round(6 × 0.7) = 4, and so on. So the first row of the output is just the number of predictors used in each model. The second row of your output is the cross-validation error of each of the models. It becomes clear that as the number of predictors is reduced the error generally increases, but the difference between using 9 predictors and using 6 predictors is small, which suggests the 6-predictor model is about as good as the 9-predictor model.
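For illustration, the sequence of model sizes can be reproduced like this (a sketch mirroring the description above, not the package's internal code):

```python
def rfcv_counts(n_var, step=0.5):
    """Sequence of predictor counts: n_var, round(n_var*step), ... down to 1."""
    counts = [n_var]
    while counts[-1] > 1:
        nxt = round(counts[-1] * step)
        counts.append(max(1, min(nxt, counts[-1] - 1)))  # always shrink by >= 1
    return counts

print(rfcv_counts(9, step=0.7))  # [9, 6, 4, 3, 2, 1]
```

This matches the first row of the rfcv output: 9, 6, 4, 3, 2, 1 predictors per model.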
Textbooks pertaining to creating models?
With your background I would look at "The Elements of Statistical Learning" (Springer) by Trevor Hastie, Robert Tibshirani, and Jerome Friedman. Another good book is A. C. Davison: "Statistical Models" (Cambridge). But the one book you REALLY, REALLY should study is this one: David A. Freedman: "Statistical Models: Theory and Practice. Revised Edition" (Cambridge). From the foreword by some friends: "Some books are correct. Some are clear. Some are useful. Some are entertaining. Few are even two of these. This book is all four. Statistical Models: Theory and Practice is lucid, candid and insightful, a joy to read. We are fortunate that David Freedman finished this new edition before his death in late 2008. We are deeply saddened by his passing, and we greatly admire the energy and cheer he brought to this volume—and many other projects—during his final months." This book is low on mathematics (which does NOT mean "easy") but high on the conceptual side, and not only presents models but criticizes them too. You will love it!
Textbooks pertaining to creating models?
If you want a mixture of application and rigor, I would recommend the two Wooldridge books. One is a graduate-level text, and the other is aimed at undergraduate students. I would try the first one given your background. There are proofs, but there are also empirical examples, with the datasets readily available. The focus is mainly on cross-sectional and panel data topics, though everything you mention is covered.
Textbooks pertaining to creating models?
If you are looking for time series in finance, here is a great book: Tsay, R. S. (2010) Analysis of Financial Time Series. Third Edition. New York: Wiley.
Textbooks pertaining to creating models?
I just finished a data mining class at university and we used "Data Mining for Business Intelligence: Concepts, Techniques, and Applications in Microsoft Office Excel with XLMiner" by Shmueli, Patel, and Bruce. The professor also assigned readings from Hastie, Tibshirani, and Friedman, which can be found here. These gave a pretty good introduction, and it was a pretty mathematically rigorous class.
Textbooks pertaining to creating models?
If you're interested in learning about different econometric methodologies (how to go about creating models and dealing with the issues encountered), then I'd recommend the following books:

Modelling Economic Series: Readings in Econometric Methodology (Advanced Texts in Econometrics) by C. W. J. Granger.
Modelling Nonlinear Economic Time Series (Advanced Texts in Econometrics) by Timo Terasvirta, Dag Tjostheim, and Clive W. J. Granger.
Dynamic Econometrics (Advanced Texts in Econometrics) by David F. Hendry.
Statistical Foundations of Econometric Modelling by Aris Spanos.
Specification Searches: Ad Hoc Inference with Nonexperimental Data by Edward E. Leamer.
Forecasting, Structural Time Series Models and the Kalman Filter by Andrew C. Harvey.

For a history of the evolution of econometrics, see The Foundations of Econometric Analysis (Econometric Society Monographs) by David F. Hendry and Mary S. Morgan.

There's a good deal of math and notation to get through in these books. (Unfortunately, a unified notation is not really used in the field.) I'd suggest starting with the first book on the list, which is a collection of papers, because it gives an introduction to and overview of the issues in econometric methodology. Indeed, it will help put the material in the other books into context and make things more digestible.

If you'd like a shorter read, you can get a taste of the econometric methodology literature by checking out a recent article by Ray C. Fair: Reflections on Macroeconometric Modeling.

For more recommendations, check out the references listed in each of the aforementioned books. There's tonnes of (exciting) stuff for you to sink your teeth into.
Textbooks pertaining to creating models?
It has been three years since I wrote the question above. Here are some additional suggestions I can make: Data Analysis Using Regression and Multilevel/Hierarchical Models by Gelman and Hill (note: I believe this text will be updated into two texts within the next few years. Follow Gelman's blog for further details.) Regression Modeling Strategies by Harrell is a must-have.
Textbooks pertaining to creating models?
I liked the book "The Practice of Business Statistics" as a good verbose introduction to the application of creating models with some real-world data and real-world problems. The mathematics in the book is probably elementary for your calibre/background, but I would still recommend it. Here is a good list of books which deal with the application of modelling to more real-world problems and applications. HTH
Way of measuring students' performance
1) The problem is that the chi-square arises because it's a sum of squares of standardized deviations of (approximately) normally distributed variables. The numerator you propose is fine - under the null hypothesis it will be small. The problem arises with the denominator. In the case of sets of Poisson (or multinomial) counts, a sum of squares of standardized deviations will be (or will simplify to) dividing by the expected values. The $E_i$ in the denominator of the chi-square doesn't seem to apply to your situation. To make it a chi-square test in your problem, you'd need to specify the variance of $O_i-E_i$. You seem to be doing this on a per-student basis, so you'd need to have a variance per student. You might assume they have equal variance (which I doubt can be true, since the variability in scores near the limits of 0 and 120 will be smaller than the variability in scores near the middle).

2) I am also concerned that your choice of statistic might not correspond to a question of interest. What is the underlying question you're trying to answer? Or, more directly, what are the alternatives you're most interested in being able to identify?
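To make the denominator issue concrete: under the (strong) assumption that each student's deviation is approximately normal with a known standard deviation $\sigma_i$, the standardized squared deviations would indeed sum to a chi-square,

$$z_i = \frac{O_i - E_i}{\sigma_i}, \qquad X^2 = \sum_{i=1}^{N} z_i^2 \;\sim\; \chi^2_{N} \quad \text{under } H_0 .$$

With the equal-variance simplification $\sigma_i \equiv \sigma$, this reduces to $X^2 = \sum_i (O_i-E_i)^2/\sigma^2$, which makes explicit that what is needed is an estimate of $\sigma$, not the $E_i$ in the denominator.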
Way of measuring students' performance
You can't use the $\chi^2$ test here, because it is for count (frequency) data. $E_i$ in this test is the expected frequency of observing value $i$. In your case it is a single score of a student, i.e. the outcome of exactly one observation. The motivation for the $\chi^2$ test is that you know the probability $P_i$ of an outcome $i$; then you conduct N experiments and observe $O_i$ occurrences of outcome $i$, where $\sum_iO_i=N$, so you compare these to the expected frequencies $E_i=N\times P_i$.

UPDATE: If this were the USA, the measurement errors of the test scores would have been available from the College Board; they have statistical tables available, such as these. They claim that the measurement error is ~30 points. So, you can use this sort of information to see whether an individual student's score is different from the target score.

You could also test whether the entire group of students scored differently than the target. In this case the standard deviation of the mean score of N students is $\sigma_N=\sigma/\sqrt{N}$. So, you can get the t-statistic by $t=\frac{\bar{T}-\bar{S}}{\sigma_N}$, where the numerator is the difference between the averages of the target scores and the test results. Based on the t-statistic you can say whether your test scores are significantly different from the target or not.

In your case, you don't have the measurement error $\sigma$. You can try to estimate it under reasonable assumptions. The mechanics are simple: $\hat\sigma^2=Var[T_i-S_i]$, where $T_i,S_i$ are the target and test scores of individual students. Basically, get the variance of the deviations from the target scores. This will give you an estimate of the measurement error, which you can plug into the $\hat\sigma_N$ equation to get the estimate of the measurement error of the average class score, similar to the first case.

Now, how would you interpret this result? Let's say that you got the class average lower than the target. Does it mean that you are teaching worse than the other schools? It would depend on how the scores are computed. For instance, if it is possible that all colleges had lower scores than the target in the entire UK, then it would be possible that your college fared as well as the others. On the other hand, if they somehow rescale the test scores so they match the target on average across the UK, then it's a different story.
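The estimate-then-test recipe above is easy to sketch. A minimal illustration in Python/numpy (the answers here use R, but this keeps it self-contained; the scores and the common-measurement-error assumption are purely hypothetical):

```python
import numpy as np

# Hypothetical target and actual scores for N = 8 students (made-up data).
target = np.array([95, 88, 102, 110, 78, 91, 85, 99], dtype=float)
actual = np.array([90, 85, 98, 104, 80, 87, 84, 95], dtype=float)
n = len(actual)

# Estimate the measurement error from the deviations, as suggested above:
# sigma_hat^2 = Var(T_i - S_i).
sigma_hat = np.std(target - actual, ddof=1)

# Standard error of the class-average deviation, and the t-statistic.
se_mean = sigma_hat / np.sqrt(n)
t_stat = (target.mean() - actual.mean()) / se_mean
print(round(t_stat, 2))
```

The resulting statistic would then be compared to a t-distribution with N-1 degrees of freedom, with the interpretation caveats discussed above.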
Way of measuring students' performance
I wonder if a simple rank sum test for stochastic dominance (or, if the assumptions of same shape and distributions differing only with respect to central location hold, a test for median difference) would work. You have paired observations, and two measures that are not strictly normal (i.e. possible scores do not range from $-\infty$ to $\infty$). Seems a straightforward application. Added advantage that it is implemented in all the major software packages.
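To show what such a rank-based procedure computes, here is the Wilcoxon signed-rank statistic on paired scores, done by hand in Python/numpy (toy data; no tie correction, which a real implementation such as R's wilcox.test or scipy.stats.wilcoxon handles properly):

```python
import numpy as np

# Toy paired scores: target vs. actual for 8 students (made-up numbers).
target = np.array([95., 88., 102., 110., 78., 91., 85., 99.])
actual = np.array([90., 85., 98., 104., 80., 87., 84., 95.])

d = actual - target
d = d[d != 0]                                  # drop zero differences, as usual
ranks = np.argsort(np.argsort(np.abs(d))) + 1  # ranks of |d| (no tie correction)
w_plus = ranks[d > 0].sum()                    # signed-rank statistic W+

# Under H0 (differences symmetric around 0), E[W+] = n(n+1)/4.
n = len(d)
print(w_plus, n * (n + 1) / 4)
```

A W+ far below its null expectation indicates the actual scores tend to fall below the targets; software turns this into a p-value (and, as in the pseudomedian answer further down, a confidence interval).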
Way of measuring students' performance
Well, I'm not sure, but you could wonder if the target score can predict the actual score. I think that a positive correlation between target and actual scores is a reasonable assumption, so you could try $O_i=\alpha + \beta E_i + \varepsilon$. A toy example in R:

> set.seed(123)
> e <- rnorm(20, 80, 20)
> range(e)
[1]  40.67 115.74
> o <- e - rnorm(20, 20, 10)
> range(o)
[1]  21.29 111.17
> fit <- lm(o ~ e)
> summary(fit)
[...]
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   -22.73       8.51   -2.67    0.016 *
e               1.04       0.10   10.38    5e-09 ***

In this example, you get: $$O_i = -\underset{(8.51)}{22.73}+\underset{(0.10)}{1.04}\;E_i+\varepsilon$$ (standard errors under the estimates.) This would mean that: actual scores are less than target scores by 22.7 on average; high actual scores are slightly more likely when the target score is high. If a regression doesn't look absurd to you, and if you get a reasonable explanation (i.e., a reasonable $R^2$), you could add some predictors, e.g. gender.
Interactions between random effects
Have you tried it? That sounds like it should be fine.

set.seed(101)
## generate fully crossed design:
d <- expand.grid(Year=2000:2010,Site=1:30)
## sample 70% of the site/year comb to induce lack of balance
d <- d[sample(1:nrow(d),size=round(0.7*nrow(d))),]
## now get Poisson-distributed number of obs per site/year
library(plyr)
d <- ddply(d,c("Site","Year"),transform,rep=seq(rpois(1,lambda=10)))
library(lme4)
d$ticks <- simulate(~1+(1|Year)+(1|Site)+(1|Year:Site),
                    family=poisson,newdata=d,
                    newparams=list(beta=2,   ## mean(log(ticks))=2
                                   theta=c(1,1,1)))[[1]]
mm <- glmer(ticks~1+(1|Year)+(1|Site)+(1|Year:Site),
            family=poisson,data=d)

We get out approximately what we put in -- equal variances at each level:

## Generalized linear mixed model fit by maximum likelihood (Laplace
##   Approximation) [glmerMod]
##  Family: poisson  ( log )
## Formula: ticks ~ 1 + (1 | Year) + (1 | Site) + (1 | Year:Site)
##    Data: d
##
##      AIC      BIC   logLik deviance df.resid
##  12487.3  12510.2  -6239.7  12479.3     2267
##
## Scaled residuals:
##     Min      1Q  Median      3Q     Max
## -2.9944 -0.6842 -0.0726  0.6010  3.8532
##
## Random effects:
##  Groups    Name        Variance Std.Dev.
##  Year:Site (Intercept) 1.0818   1.0401
##  Site      (Intercept) 1.0490   1.0242
##  Year      (Intercept) 0.9787   0.9893
## Number of obs: 2271, groups:  Year:Site, 231; Site, 30; Year, 11
##
## Fixed effects:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)   2.1952     0.3593   6.109    1e-09 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

You may want to include an observation-level random effect to allow for overdispersion (see the "grouse ticks" example in http://rpubs.com/bbolker/glmmchapter)
Homoscedastic and heteroscedastic data and regression models
In R when you fit a regression or glm (though GLMs are themselves typically heteroskedastic), you can check the model's variance assumption by plotting the model fit. That is, when you fit the model you normally put it into a variable from which you can then call summary on it to get the usual regression table for the coefficients. If you plot the same variable you get some diagnostic plots. For example, consider:

carmdl <- lm(dist~speed,cars)
plot(carmdl)

The third of the default plots that it produces is the scale-location plot: [Other common choices for the y-axis in such a plot are the absolute residual and the log of the squared residual.] That's a basic visual diagnostic of the spread of standardized (for model-variance) residuals against fitted values, which is suitable for seeing if there's variability related to the mean (not already accounted for by the model). If the assumption of homoskedasticity is true, we should see roughly constant spread. In this case the indication of increase with fitted values is fairly mild. A common form of heteroskedasticity to look for would be where there's an increase in spread against fitted values. That would show as an increasing trend in the plot above. It can also be formally tested by the Breusch-Pagan test (though formal hypothesis tests of model assumptions aren't necessarily the best choice). There are other forms of heteroskedasticity that are possible, but that's the most common one to check for. For example, if changing spread against a particular predictor was expected, that would suggest plotting the residual spread measure above against that predictor.
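The Breusch-Pagan idea itself is simple enough to do by hand: regress the squared residuals on the predictors and compare n·R² to a chi-square. A sketch in Python/numpy (the heteroskedastic data-generating process here is invented for the demo; real work would use a packaged implementation such as R's lmtest::bptest or statsmodels):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
# Simulated heteroskedastic data: the error spread grows with x.
y = 2 + 3 * x + rng.normal(0, 0.5 + 0.5 * x, n)

# Fit the mean model by least squares.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Breusch-Pagan LM statistic: regress squared residuals on X, LM = n * R^2,
# compared to chi-square with (number of non-constant regressors) df.
u2 = resid ** 2
g = np.linalg.lstsq(X, u2, rcond=None)[0]
fitted = X @ g
r2 = 1 - ((u2 - fitted) ** 2).sum() / ((u2 - u2.mean()) ** 2).sum()
lm = n * r2
print(lm)   # compare to chi2(1): 5% critical value is about 3.84
```

Here the spread is built to increase with x, so LM comes out far above the 3.84 cutoff and the test rejects homoskedasticity, matching what the scale-location plot would show.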
Question about the error term in a simple linear regression
It seems that you're confused about the relation of the sample size to the application of the CLT. The distribution of $\epsilon_{it}$ has nothing to do with the sample size. I'm assuming that the subscript $i$ refers to the subject (a person), and the subscript $t$ refers to the time of the observation. In a simple linear regression we don't make a lot of assumptions about $\epsilon$ to estimate $\beta_i$. The errors don't have to be normal, and with increasing sample size they will not tend to become normal. The CLT is applied in two different ways:

1) When the sample size increases, the distribution of the estimate of $\beta_i$, often denoted $\hat{\beta}_i$, will tend to become normal, i.e. $\hat{\beta}_i\sim\mathcal{N}(\beta_i,\sigma^2_\beta)$, where $\sigma^2_\beta$ is a function of the error variance $\sigma^2$. Again, we do not require $\epsilon_{it}\sim\mathcal{N}(0,\sigma^2)$; we only need $var[\epsilon_{it}]=\sigma^2$ for this. This is one of the large-sample properties of linear regressions.

2) Often, when we deal with physical experiments, one could argue that there are many sources of error; when they all add up, they make $\epsilon_{it}$ - the noise in a single observation - distributed normally. This is not related to the sample size; it is simply the many sources of error influencing a single observation. In this case we often make the reasonable assumption $\epsilon_{it}\sim\mathcal{N}(0,\sigma^2)$.
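The first point is easy to see by simulation. A sketch in Python/numpy (the data-generating process is invented for the demo): each sample has strongly skewed errors, yet across many samples the slope estimates come out nearly unbiased and close to normal:

```python
import numpy as np

rng = np.random.default_rng(42)
beta_true = 2.0
n = 200        # observations per sample
reps = 2000    # number of simulated samples

x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
slopes = np.empty(reps)
for r in range(reps):
    # Strongly skewed (exponential) errors, centred at zero.
    eps = rng.exponential(1.0, n) - 1.0
    y = beta_true * x + eps
    slopes[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]

# The sampling distribution of the slope is nearly unbiased and nearly
# symmetric, even though each individual error is far from normal.
z = (slopes - slopes.mean()) / slopes.std()
skewness = (z ** 3).mean()
print(slopes.mean(), skewness)
```

The errors themselves have skewness 2 (exponential), but the slope estimates' skewness is close to zero: the CLT acts on the estimator, not on the individual errors.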
Question about the error term in a simple linear regression
Depending on the nature of the response variable, I would suggest checking out either the GLM or Tobit models: GLM for when the response is non-normal (e.g. counts), and Tobit if it could be normal except that it is getting censored (e.g. negative incomes get reported as zero).
Question about the error term in a simple linear regression
The central limit theorem does not imply that the errors are normal if you have a large data set. The CLT applies to sums of random variables (under certain conditions). As the other poster says, you might look at generalized linear models, which allow for non-normal error distributions. However, note that linear regression does not require normally distributed errors. Regardless of the distribution, the least squares estimator is the Best Linear Unbiased Estimator (BLUE) by the Gauss-Markov theorem. The errors only need to be uncorrelated and have the same variance. The normal distribution is only required if you want to claim that the least squares estimate is also the maximum likelihood estimate.
Variance of absolute value of a rv
The general calculation for both quantities can be obtained by the application of LOTUS. For $\operatorname{var}(|X|)$, note that $$\begin{align} \operatorname{var}(|X|) &= E[|X|^2] - \left(E[|X|]\right)^2\\ &= E[X^2] - \left(E[|X|]\right)^2\\&= \operatorname{var}(X) + \left(E[X]\right)^2- \left(E[|X|]\right)^2 \end{align}$$ and so only $E[|X|]$ might need to be computed if you already know $\operatorname{var}(X)$ and $E[X]$.
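As a worked instance of the identity above (a standard case, not from the original answer): if $X\sim\mathcal{N}(0,\sigma^2)$, then $|X|$ is half-normal with $E[|X|]=\sigma\sqrt{2/\pi}$, so

$$\operatorname{var}(|X|) = \operatorname{var}(X) + \left(E[X]\right)^2 - \left(E[|X|]\right)^2 = \sigma^2 + 0 - \frac{2\sigma^2}{\pi} = \sigma^2\left(1 - \frac{2}{\pi}\right) \approx 0.3634\,\sigma^2 .$$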
Deriving confidence interval from standard error of the mean when the data are non-normal
This is somewhat tricky. There are several approaches:

1. Assume the distribution isn't 'too far' from the normal (in a particular sense), and that the t-interval will give close to the desired coverage. The t is at least reasonably robust to mild deviations from the assumptions, so if the population distribution isn't particularly skewed or especially heavy tailed, that should at least work reasonably well.

2. Assume the distribution is symmetric* and construct an interval for the pseudomedian (Hodges-Lehmann estimate, median of pairwise averages) via a Wilcoxon signed-rank-type procedure. If the t-distribution would have been right, on average you lose very little by doing this. This can be done in many packages. [With a symmetric distribution whose mean exists, the mean, pseudomedian, the ordinary median (and many other location measures) coincide. An interval that contains one with a particular probability will also contain the others.]

*(or at least 'sufficiently' close to it)

Here's an example of this done in R:

y <- rlogis(8,50,1)
wilcox.test(y,conf.int=TRUE)

        Wilcoxon signed rank test

data:  y
V = 36, p-value = 0.007813
alternative hypothesis: true location is not equal to 0
95 percent confidence interval:
 47.49677 52.22811
sample estimates:
(pseudo)median
      49.55069

So the interval given there is (47.50, 52.23). The purple vertical line segment is the sample mean and the centre blue one is the sample pseudomedian. The outer blue segments mark the ends of the confidence interval. You see that in this example the interval includes the true population mean of 50.

3. Assume symmetry and construct a CI from the values for the mean that would not be rejected by a permutation test (this can be done from a single permutation test distribution, and 8 observations is few enough to get the whole permutation distribution rather than sample it).

4. Use bootstrapping to construct a CI for the mean.
The bootstrap is justified by an asymptotic argument (so it may not work very well for small samples), but you can make various distributional assumptions and check its coverage properties for plausible distributions via simulation. This paper (pdf is downloadable at that link) suggests that the bootstrap-t intervals often get better coverage properties than the usual t-intervals -- but may have poor coverage when samples are small and the distributions are skew. If you have some additional information that would help guide a choice of distribution, you can get somewhere with other distributional assumptions. For example, if you know that the distribution is skew and continuous, you might try using a Gamma or lognormal model (say) to construct a CI for the mean. Or if you have count data you might use a Poisson, binomial or negative binomial model to try to construct an interval.
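As an illustration of the bootstrap option, here is a sketch in Python (using numpy rather than R, and the simple percentile variant rather than the bootstrap-t intervals the linked paper studies); the logistic sample of size 8 stands in for the data:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.logistic(loc=50, scale=1, size=8)  # small non-normal sample, as in the R example

# percentile bootstrap: resample with replacement, take the mean each time
B = 10_000
boot_means = np.array([rng.choice(y, size=y.size, replace=True).mean()
                       for _ in range(B)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {y.mean():.2f}, 95% percentile bootstrap CI = ({lo:.2f}, {hi:.2f})")
```

With only 8 observations the bootstrap's asymptotic justification is weak, as noted above, so an interval like this should be treated as rough; checking its coverage by simulation under plausible population distributions is a good idea.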
46,238
Deriving confidence interval from standard error of the mean when the data are non-normal
If you don't know the distribution, nothing can be done with 8 observations. Report your standard deviation. You can try using Chebyshev or similar inequalities, but they are usually so wide that they are used only in theoretical papers. Think about 95%: I know that it's fashionable to try to squeeze as much information out of the data as possible, but, c'mon, let's be reasonable; with 8 data points you can hope for something like the 12.5% and 87.5% percentiles. Maybe you can do something fancy and move the edges a bit, but to 95%?!
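To see just how wide a Chebyshev-type interval gets, here is a small Python sketch. Note two caveats: the data values below are made up for illustration, and plugging the sample standard deviation in for the unknown population σ is itself only an approximation, so this is a rough bound rather than an exact 95% interval:

```python
import math

y = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9, 5.1, 4.4]  # hypothetical sample of n = 8
n = len(y)
mean = sum(y) / n
s = math.sqrt(sum((v - mean) ** 2 for v in y) / (n - 1))  # sample standard deviation

# Chebyshev: P(|Xbar - mu| >= k * sigma/sqrt(n)) <= 1/k^2; for 95% set 1/k^2 = 0.05
k = 1 / math.sqrt(0.05)          # ~4.47, versus ~2.36 for the t with 7 df
half = k * s / math.sqrt(n)
print(f"mean = {mean:.2f}, Chebyshev-style 95% interval = "
      f"({mean - half:.2f}, {mean + half:.2f})")
```

The multiplier 4.47 is nearly twice the t-based 2.36, which illustrates the point above: distribution-free bounds at 95% from 8 points are very wide.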
46,239
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics
Since you are a tutor, any knowledge is always for a good cause. So I will provide some bounds for the MLE. We have arrived at $$(1-\lambda x_{(n)})e^{\lambda x_{(n)} } + \lambda n x_{(n)} - 1 = 0$$ with $x_{(n)}\equiv M_n$. So $$(1-\hat \lambda x_{(n)})e^{\hat \lambda x_{(n)}} = 1-\hat \lambda x_{(n)}n $$ Assume first that $1-\hat \lambda x_{(n)} >0$. Then we must also have $1-\hat \lambda x_{(n)}n>0$ since the exponential is always positive. Moreover since $x_{(n)}, \hat \lambda > 0\Rightarrow e^{\hat \lambda x_{(n)}}>1$. Therefore we should have $$\frac {1-\hat \lambda x_{(n)}n}{1-\hat \lambda x_{(n)}}>1 \Rightarrow \hat \lambda x_{(n)}>\hat \lambda x_{(n)}n$$ which is impossible. Therefore we conclude that $$\hat \lambda >\frac 1{x_{(n)}},\;\; \hat \lambda = \frac c{x_{(n)}}, \;\; c>1$$ Inserting into the log-likelihood we get $$\ell(\hat\lambda(c)\mid x_{(n)}) = \log \frac c{x_{(n)}} + \log n - \frac c{x_{(n)}} x_{(n)} + (n-1) \log (1 - e^{-\frac c{x_{(n)}} x_{(n)}})$$ $$= \log \frac n{x_{(n)}} + \log c - c + (n-1) \log (1 - e^{-c})$$ We want to maximize this likelihood with respect to $c$. Its 1st derivative is $$\frac{d\ell}{dc}=\frac 1c -1 +(n-1)\frac 1{e^{c}-1}$$ Setting this equal to zero, we require that $$e^{c}-1 - c\left(e^{c}-1\right)+(n-1)c =0$$ $$\Rightarrow \left(n-e^c\right)c = 1-e^c$$ Since $c>1$ the RHS is negative. Therefore we must also have $n-e^c <0 \Rightarrow c > \ln n$. 
For $n\ge 3$ this provides a tighter lower bound for the MLE, but it doesn't cover the $n=2$ case, so $$\hat \lambda > \max \left\{\frac 1{x_{(n)}}, \frac {\ln n}{x_{(n)}}\right\}$$ Moreover (for $n\ge 3$), rearranging the 1st-order condition we have that $$c= \frac{e^c-1}{e^c-n} > \ln n \Rightarrow e^c -1 > e^c\ln n -n\ln n $$ $$\Rightarrow n\ln n-1>e^c(\ln n -1) \Rightarrow c< \ln{\left[\frac{n\ln n-1}{\ln n -1}\right]}$$ So for $n\ge 3$ we have that $$\frac 1{x_{(n)}}\ln n < \hat \lambda < \frac 1{x_{(n)}}\ln{\left[\frac{n\ln n-1}{\ln n -1}\right]}$$ This is a narrow interval, especially if $x_{(n)}\ge 1$. For example (truncated at the 3rd digit): $$\begin{align} n=10 & &\frac 1{x_{(n)}}2.302 < \hat \lambda < \frac 1{x_{(n)}}2.827\\ n=100 & & \frac 1{x_{(n)}}4.605 < \hat \lambda < \frac 1{x_{(n)}}4.847\\ n=1000 & & \frac 1{x_{(n)}}6.907 < \hat \lambda < \frac 1{x_{(n)}}7.063\\ n=10000 & & \frac 1{x_{(n)}}9.210< \hat \lambda < \frac 1{x_{(n)}}9.325\\ \end{align}$$ Numerical examples indicate that the MLE tends to be close to the upper bound, matching it up to the second decimal digit as $n$ grows.

ADDENDUM: A CLOSED-FORM EXPRESSION

This is just an approximate solution (it only approximately maximizes the likelihood), but here it is: manipulating the 1st-order condition we want to have $$\lambda = \frac 1{x_{(n)}}\ln \left[\frac {\lambda x_{(n)}n -1}{\lambda x_{(n)} -1}\right]$$ Now, one can show (see for example here) that $$E[X_{(n)}] = \frac {H_n}{\lambda},\;\; H_n = \sum_{k=1}^n\frac 1k$$ Solving for $\lambda$ and inserting into the RHS of the implicit 1st-order condition, we obtain $$\lambda = \frac 1{x_{(n)}}\ln \left[\frac {nH_n\frac {x_{(n)}}{E[X_{(n)}]} -1}{ H_n\frac {x_{(n)}}{E[X_{(n)}]} -1}\right]$$ We want an estimate of $\lambda$ given that $X_{(n)}=x_{(n)}$, i.e. $\hat \lambda \mid \{X_{(n)}=x_{(n)}\}$. But in such a case, we also have $E[X_{(n)}\mid \{X_{(n)}=x_{(n)}\}] =x_{(n)}$.
This simplifies the expression and we obtain $$\hat \lambda = \frac 1{x_{(n)}}\ln \left[\frac {nH_n -1}{ H_n -1}\right]$$ One can verify that this closed-form expression stays close to the upper bound derived previously, but is a bit less than the actual (numerically obtained) MLE.
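The bounds above are easy to check numerically. Here is a stdlib-only Python sketch that solves the first-order condition $\frac 1c - 1 + \frac{n-1}{e^c-1} = 0$ by bisection on the bracket $(\ln n,\ \ln[(n\ln n-1)/(\ln n-1)])$ derived above, and compares it with the closed-form approximation:

```python
import math

def mle_c(n, tol=1e-12):
    """Solve d ell/dc = 1/c - 1 + (n-1)/(e^c - 1) = 0 by bisection.
    The bracket comes from the bounds derived above (valid for n >= 3):
    g is decreasing, positive at ln n and negative at the upper bound."""
    g = lambda c: 1.0 / c - 1.0 + (n - 1) / (math.exp(c) - 1.0)
    lo = math.log(n)
    hi = math.log((n * math.log(n) - 1) / (math.log(n) - 1))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid          # root is above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def closed_form_c(n):
    """Approximate closed form: c = ln[(n*H_n - 1)/(H_n - 1)]."""
    H = sum(1.0 / k for k in range(1, n + 1))
    return math.log((n * H - 1) / (H - 1))

for n in (10, 100, 1000):
    print(n, round(mle_c(n), 3), round(closed_form_c(n), 3))
```

Multiplying either value of $c$ by $1/x_{(n)}$ gives the corresponding estimate of $\lambda$; the bisection root always lands strictly inside the interval tabulated above.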
46,240
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics
Q1. Trivial: differentiate $G$ to obtain $$f_{M_n}(x) = \lambda n e^{-\lambda x}(1-e^{-\lambda x})^{n-1}, \quad x > 0.$$ Q2. The likelihood of $\lambda$ given the single observation $M_n = x$ is $L(\lambda \mid x) = f_{M_n}(x)$, consequently the log-likelihood is $$\ell(\lambda \mid x) = \log \lambda + \log n - \lambda x + (n-1) \log (1 - e^{-\lambda x}).$$ Differentiation with respect to $\lambda$ gives $$\frac{d\ell}{d\lambda} = \frac{1}{\lambda} - x + \frac{(n-1) x e^{-\lambda x}}{1 - e^{-\lambda x}},$$ which we require to be zero; i.e., $$(1-\lambda x)e^{\lambda x} + n \lambda x - 1 = 0.$$ Such an equation does not, to the best of my knowledge, admit an elementary closed form solution for $\lambda$. I would very much like to see what this professor's idea of $\hat\lambda_n$ is, because I can almost assure you that whatever he thinks it is, he is wrong.
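A quick numerical sanity check of Q1 in Python: a central finite difference of the CDF of the maximum, $G(x) = (1-e^{-\lambda x})^n$, should match the stated density $f_{M_n}$. The parameter values here are arbitrary:

```python
import math

lam, n = 1.3, 5  # hypothetical parameter values for the check

G = lambda x: (1 - math.exp(-lam * x)) ** n                     # CDF of the maximum
f = lambda x: lam * n * math.exp(-lam * x) * (1 - math.exp(-lam * x)) ** (n - 1)

# central finite difference of G should agree with f at each point
h = 1e-6
for x in (0.5, 1.0, 2.0):
    approx = (G(x + h) - G(x - h)) / (2 * h)
    print(x, approx, f(x))
```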
46,241
Coding categorical variables for regression
Here is an example using the employee data.sav data set, which comes with the standard installation. Suppose salary is the dependent variable, job category (jobcat) is the categorical independent variable, and beginning salary (salbegin) is the continuous independent variable. Using GLM, you can perform pairwise comparisons between each pair of job categories. The steps are as follows:

1. With the data set open, go to Analyze > General Linear Model > Univariate. Put the dependent variable and independent variables into the correct slots. Categorical independent variables go to "Fixed Factor(s)" and continuous ones go to "Covariate(s)." Do not worry about the Random Factors. When it's all set, click the "Model" button.

2. In the Model panel, highlight the two independent variables, change the build term to "Main effects," and then click the arrow button (indicated by the red circle in the screenshot) to bring the two variables over. When all set, click "Continue."

3. Now click the "Options" button. In the Options panel, do the following: 1) highlight jobcat, 2) bring it over to the right by clicking the arrow button, 3) check "Compare Main Effects," 4) specify the adjustment you'd like to make for the multiple pairwise comparisons (I left it as LSD, which does not adjust for multiple tests), and 5) check "Parameter Estimates" so that you'll also get the regression coefficients.

4. When it's all done, click Continue and then OK to submit the analysis.

The output includes the regression coefficient table; scroll down a bit and you'll find the pairwise comparisons table (both shown as screenshots in the original answer).
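For readers outside SPSS, what the GLM fits under the hood is an ordinary least-squares regression with the categorical predictor dummy-coded (one indicator column per non-reference category) alongside the continuous covariate. A hypothetical sketch in Python with numpy; the variable names mimic the employee-data example, but the data and true coefficients are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical data: 3 job categories, a continuous covariate, and a response
n = 90
jobcat = rng.integers(0, 3, size=n)            # categories 0, 1, 2
salbegin = rng.normal(20_000, 4_000, size=n)
salary = (25_000 + 5_000 * (jobcat == 1) + 12_000 * (jobcat == 2)
          + 1.5 * salbegin + rng.normal(0, 1_000, size=n))

# dummy coding: category 0 is the reference level
X = np.column_stack([
    np.ones(n),        # intercept = reference-category level (at salbegin = 0)
    jobcat == 1,       # offset of category 1 vs the reference
    jobcat == 2,       # offset of category 2 vs the reference
    salbegin,          # continuous covariate
]).astype(float)

beta, *_ = np.linalg.lstsq(X, salary, rcond=None)
print(beta)  # approximately [25000, 5000, 12000, 1.5]
```

The dummy coefficients are exactly the pairwise differences of each category from the reference (adjusted for the covariate); SPSS's "Compare Main Effects" output reports these comparisons, plus the one between the two non-reference categories.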
46,242
Coding categorical variables for regression
Since you want to compare all groups with each other, the tests will not be orthogonal, even if they are a priori. So you should use a test that addresses that. Tukey's honestly significant difference (HSD) test will do that, and is familiar to many people. You needn't worry about the type of coding used. First, as @Scortchi notes, you can perform this test with any regular coding method (reference level, effect, etc.). Second, SPSS will probably take care of the coding for you. It's been a long time since I've used SPSS, but I gather you would use the GLM Univariate Analysis option, since you have both continuous and categorical variables. The SPSS documentation for post-hoc comparisons after running a GLM can be found here.
46,243
Coding categorical variables for regression
The Wikipedia article on post hoc analyses lists several tests/options for comparing groups after a factor has been found significant. I don't know SPSS well anymore, but I expect that it would implement one or more of the tests on that list. You can search for those terms in the SPSS documentation and that should tell you how to specify that you want those comparisons. Googling for "SPSS post hoc" brings up several promising links as well.
46,244
Find the Fisher information $I(\theta)$ of the gamma distribution with $\alpha=4$ and $\beta=\theta>0$
I'm doing this to work through it myself as much as to help you. Let's give it a go. The pdf of a Gamma (shape $\alpha$, scale $\theta$) is $\frac{x^{\alpha-1}}{\Gamma(\alpha)\theta^{\alpha}}e^{-x/\theta}$. The log-likelihood is then: \begin{align} L(\theta) &= (\alpha - 1) \Sigma \log X_i - n \log(\Gamma (\alpha)) - n\alpha \log(\theta) - \frac{1}{\theta} \Sigma X_i \\[5pt] \frac{\partial}{\partial \theta} &= -\frac{n\alpha}{\theta} + \frac{\Sigma X_i}{\theta^2} \\[5pt] \frac{\partial^2}{\partial \theta^2} &= \frac{n\alpha}{\theta^2} - \frac{2\Sigma X_i}{\theta^3} \end{align} What is the expectation of a Gamma distribution? It is $\alpha \theta$, so \begin{align} -E\left[\frac{\partial^2}{\partial \theta^2}\right] &= -\frac{n\alpha}{\theta^2} + \frac{2\alpha n}{\theta^2} = \frac{n\alpha}{\theta^2} \end{align} With $\alpha = 4$ this is $\frac{4n}{\theta^2}$, so if $n = 1$ (i.e., a single observation from a gamma distribution, as this problem seems to be asking), then in fact the answer is: $$I(\theta) = \frac{4}{\theta^2}$$ Feel free to correct/critique my errors.
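As a sanity check on this result, the Fisher information can also be estimated by Monte Carlo: average the observed information $-\partial^2 \log f/\partial\theta^2 = -\alpha/\theta^2 + 2X/\theta^3$ over simulated draws. A stdlib-only Python sketch, with parameter values chosen arbitrarily:

```python
import random

random.seed(42)
alpha, theta = 4.0, 2.0
N = 200_000

# observed information for one draw: -alpha/theta^2 + 2x/theta^3
acc = 0.0
for _ in range(N):
    x = random.gammavariate(alpha, theta)   # shape alpha, scale theta
    acc += -alpha / theta**2 + 2 * x / theta**3
mc_info = acc / N

print(mc_info, alpha / theta**2)  # Monte Carlo estimate vs theoretical alpha/theta^2
```

With these values the theoretical information is $4/\theta^2 = 1$, and the Monte Carlo average should land close to it.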
46,245
When was the k-means clustering algorithm first used?
To the best of my knowledge, the name 'k-means' was first used in MacQueen (1967). The name refers to the improved algorithm proposed in that paper and not to the original one. Section 3 of that paper contains an application (which is missing from earlier papers such as Steinhaus (1956)). J. MacQueen (1967). Some methods for classification and analysis of multivariate observations. Proc. Fifth Berkeley Symp. on Math. Statist. and Prob., Vol. 1 (Univ. of Calif. Press, 1967), 281-297. Steinhaus, H. (1956). Sur la division des corps matériels en parties. Bulletin de l'Académie Polonaise des Sciences, Classe III, vol. IV, no. 12, 801-804.
46,246
When was the k-means clustering algorithm first used?
I have recently reproduced a version of Hugo Steinhaus's paper Sur la division des corps matériels en parties (On the division of material bodies into parts). The conclusion (originally in French) reads, roughly: "Diverse questions, for instance those about types in anthropology, or others with practical motivations, like those of industrial object normalization, require a solution based on the determination of $n$ fictitious representatives of a numerous population, chosen so as to minimize as much as possible the deviations between population elements and those from the sample. The deviation is measured between every actual element and the closest fictitious element." I can only guess that the method was used as such soon thereafter, but history did not keep track. In his paper, H. Steinhaus uses $A_i$ to name the centroids (means), and $K_i$ refers to each of the $n$ sub-bodies (possibly from the German Körper, the letter $K$ having been in use for fields in mathematics since R. Dedekind). MacQueen's 1967 paper motivated the name: "The $k$-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the $k$-means."
46,247
When was the k-means clustering algorithm first used?
Another early paper showing K-Means clustering was published by Ball and Hall in 1965 [1]. A K-Means-like algorithm was part of their ISODATA algorithm. They went further and implemented an iterative cluster split/merge phase in order to arrive at a "best" number of clusters, whereas pure K-Means takes the number of centroids as a given. [1] Ball, G.H. and Hall, D.J. (1965) "ISODATA, a Novel Method of Data Analysis and Pattern Classification." Stanford Research Institute, Menlo Park.
46,248
Explanation of cubic spline interpolation
If you have a function $f(x)$ on some interval $[a,b]$, which is divided into subintervals $[x_{i-1}, x_i]$ such that $a=x_0< x_1< ... <x_N=b$, then you can interpolate this function by a cubic spline $S(x)$. $S(x)$ is a piecewise function: on each subinterval of length $h_i = x_i - x_{i-1}$ it is a cubic polynomial, which can be written for simplicity as $S_i(x) = a_i + b_i(x - x_i) + {c_i\over2}(x-x_i)^2 + {d_i\over6}(x - x_i)^3 \,\!$. It has to satisfy the following constraints: 1) passing through the knots: $S_i\left(x_{i}\right) = f(x_{i})$ 2) being continuous up to the 2nd derivative: $S_i\left(x_{i-1}\right) = S_{i-1}(x_{i-1}) \\ S'_i\left(x_{i-1}\right) = S'_{i-1}(x_{i-1}) \\ S''_i\left(x_{i-1}\right) = S''_{i-1}(x_{i-1})$ 3) for natural splines: $S''(a) = S''(b) = 0.$ These equations uniquely define the spline coefficients. A good way to understand this is to take e.g. 3 points and manually solve the systems for the coefficients of $S_1(x)$ and $S_2(x)$. Finally you should get the following system: $a_i = f\left(x_{i}\right) \,\!$ $h_ic_{i-1} + 2(h_i + h_{i+1})c_i + h_{i+1}c_{i+1} = 6\left({{f_{i+1} - f_i}\over{h_{i+1}}} - {{f_{i} - f_{i-1}}\over{h_{i}}}\right) \,\!$ $d_i = {{c_i - c_{i-1}}\over{h_i}} \,\!$ $b_i = {1\over2}h_ic_i - {1\over6}h_i^2d_i + {{f_i - f_{i-1}}\over{h_i}}= {{f_i - f_{i-1}}\over{h_i}} + {{h_i(2c_i + c_{i-1})}\over6} \,\!$
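The system above translates directly into code. Here is a minimal NumPy sketch (function names are mine) with natural boundary conditions $c_0 = c_N = 0$; a dense solve is used instead of the usual tridiagonal algorithm for brevity:

```python
import numpy as np

def natural_cubic_spline(x, f):
    """Solve the system above for a natural cubic spline:
    S_i(t) = a_i + b_i(t-x_i) + c_i/2 (t-x_i)^2 + d_i/6 (t-x_i)^3 on [x_{i-1}, x_i],
    with S''(x_0) = S''(x_N) = 0."""
    N = len(x) - 1
    h = np.diff(x)                                   # h_i = x_i - x_{i-1}
    a = np.asarray(f, dtype=float).copy()            # a_i = f(x_i)
    c = np.zeros(N + 1)                              # c_0 = c_N = 0 (natural)
    if N > 1:
        # Tridiagonal system for c_1 .. c_{N-1}
        A = np.zeros((N - 1, N - 1))
        rhs = np.zeros(N - 1)
        for i in range(1, N):
            if i > 1:
                A[i - 1, i - 2] = h[i - 1]           # h_i (subdiagonal)
            A[i - 1, i - 1] = 2 * (h[i - 1] + h[i])  # 2(h_i + h_{i+1})
            if i < N - 1:
                A[i - 1, i] = h[i]                   # h_{i+1} (superdiagonal)
            rhs[i - 1] = 6 * ((a[i + 1] - a[i]) / h[i] - (a[i] - a[i - 1]) / h[i - 1])
        c[1:N] = np.linalg.solve(A, rhs)
    d = np.diff(c) / h                               # d_i = (c_i - c_{i-1}) / h_i
    b = np.diff(a) / h + h * (2 * c[1:] + c[:-1]) / 6
    return a, b, c, d

def evaluate(x, a, b, c, d, t):
    """Evaluate the spline at a point t inside [x_0, x_N]."""
    i = int(np.clip(np.searchsorted(x, t), 1, len(x) - 1))  # interval [x_{i-1}, x_i]
    dt = t - x[i]
    return a[i] + b[i - 1] * dt + c[i] / 2 * dt**2 + d[i - 1] / 6 * dt**3

x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
a, b, c, d = natural_cubic_spline(x, np.sin(x))
print(evaluate(x, a, b, c, d, 2.0))   # interpolated value near sin(2.0)
```

By construction the spline reproduces the data at the knots exactly, and (since the right-hand sides of the tridiagonal system vanish) it reproduces linear data exactly everywhere.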
46,249
Bayes-factor for testing a null-hypothesis?
You could try the approach recommended by Steve Goodman and calculate the minimum Bayes factor: Toward Evidence-Based Medical Statistics 2: The Bayes Factor. To get this from MCMC results, you can subtract the estimates for the group-level parameters at each step to get a posterior distribution of the difference, as was done by John Kruschke in this paper: Bayesian Estimation Supersedes the t Test. He does not calculate a Bayes factor there and recommends against it (see appendix D). Instead he designates a region of practical equivalence around the null hypothesis (zero) and sees whether your credible interval overlaps it. To get the minimum Bayes factor, I believe what you can do is then divide the probability at the mode of your estimate of the difference between means by the probability at zero. I have not seen this done anywhere but it makes sense to me. Hopefully someone else can comment on that.
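For reference, Goodman's minimum Bayes factor for an (approximately) Gaussian test statistic $z$ is $\exp(-z^2/2)$ — the strongest evidence against the null that any simple alternative can provide. A one-liner makes the point that $p = 0.05$ corresponds to weaker evidence than it looks:

```python
import math

# Goodman's minimum Bayes factor for an (approximately) Gaussian test
# statistic z: BF_min = exp(-z^2 / 2).
def min_bayes_factor(z):
    return math.exp(-z * z / 2)

print(min_bayes_factor(1.96))   # ~0.15: the null is at best ~7x less likely
```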
46,250
Bayes-factor for testing a null-hypothesis?
You can use the BayesFactor package in R to easily compute Bayesian t tests. See the examples here: http://bayesfactorpcl.r-forge.r-project.org/#twosample for details. The web calculator at http://pcl.missouri.edu/bayesfactor uses the same models (see the Rouder et al 2009 reference on the web calculator page). Note that the Kruschke reference given above does not actually allow you to test a null hypothesis.
46,251
Is the F-1 score symmetric?
Let's normalize the confusion matrix, i.e. $TP + FP + FN + TN = 1$. We have: $F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = 2 \cdot \frac{\frac{TP}{TP+FP} \cdot \frac{TP}{TP+FN}}{\frac{TP}{TP+FP} + \frac{TP}{TP+FN}} = 2 \frac{TP} {2 TP + FP + FN} = 2 \frac{TP} {TP + 1 - TN} $ Therefore: $\text{F-1 score symmetric} \leftrightarrow 2 \frac{TP} {TP + 1 - TN} = 2 \frac{TN} {TN + 1 - TP} \leftrightarrow TN(1-TN) = TP(1-TP) \leftrightarrow (TN = TP) \vee (TN = 1 - TP)$. So the F-1 score is symmetric only in some special cases, namely when $TN = TP$ or $TN = 1 - TP$. By the same token, precision and recall are generally not symmetric, but the AUROC (Area Under an ROC Curve) always is. As a result, when presenting results, one would typically distinguish the positive class (-P) from the negative one (-N).
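A small numerical check (the confusion-matrix entries are made up for illustration) confirms both the simplified formula and the asymmetry:

```python
# Hypothetical normalized confusion-matrix entries (TP + FP + FN + TN = 1);
# the specific numbers are invented for illustration.
def f1_from_counts(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

TP, FP, FN, TN = 0.50, 0.10, 0.05, 0.35
f1_pos = f1_from_counts(TP, FP, FN)   # positives as the target class
f1_neg = f1_from_counts(TN, FN, FP)   # classes swapped: TP<->TN, FP<->FN
print(f1_pos, f1_neg)                 # unequal, since TN != TP and TN != 1 - TP
```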
46,252
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out?
The short answer is that if you used leave-one-out CV for time series, you would be fitting model parameters based on data from the future. The easiest way to see this is to write out what both procedures look like on the same data. This makes the difference glaringly obvious. Following Hyndman's notation, let $y(1), \ldots, y(T)$ be the time series and $m$ the minimum number of points needed to build a model. Then the procedure described by Hyndman works as follows:
For t = m to T-1:
    Fit model with y(1), ..., y(t)
    e(t+1) = y(t+1) - y*(t+1)
Calculate MSE of e(m+1) to e(T)
For leave-one-out CV the procedure looks like:
For t = 1 to T:
    Fit model with y(1), ..., y(t-1), y(t+1), ..., y(T)
    e(t) = y(t) - y*(t)
Calculate MSE of e(1) to e(T)
Notice how in the time series version we're actually using a different number of points to fit each model, namely $m, m+1, \ldots, T-1$. Compare this to the other version, where one always uses $T-1$ points.
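The first loop can be turned into a runnable sketch; the mean forecaster and the random-walk data below are illustrative stand-ins for a real model and real data:

```python
import numpy as np

# Hyndman's rolling-origin procedure with a deliberately trivial forecaster:
# predict y(t+1) as the mean of y(1..t).
def rolling_origin_mse(y, m):
    errors = []
    for t in range(m, len(y)):       # 0-based: the first forecast uses m points
        y_hat = np.mean(y[:t])       # fit on y(1), ..., y(t) -- never on the future
        errors.append(y[t] - y_hat)  # e(t+1) = y(t+1) - y*(t+1)
    return np.mean(np.square(errors))

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=50))   # a random walk
print(rolling_origin_mse(y, m=10))
```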
46,253
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out?
The explanations are both right, but they are for different situations. As usual, it all boils down to the question of how to obtain statistically independent splits of your data. The image you linked and your description are for a situation where you have repeated measurements of time series. In this situation you can leave out complete time series from your data. Imagine you want to predict some property based on a new complete measurement of another time series, e.g. classification of EEG readings. You can assume EEG readings of different patients to be statistically independent, and a scenario where only complete readings are used is sensible. In that case the natural way of splitting the data would be by patient. Hyndman discusses a situation where you essentially have only one (ongoing) measurement of a time series, and you want to predict future values of the time series from past measurements. Thus, you split by time, and the future implies that none of the following time points is known. In the EEG example, this corresponds to trying to predict what the next seconds/minutes of the EEG of the given patient would be. This type of splitting is also important when you want to measure how long a model is valid, see e.g. Esbensen, K. H. and Geladi, P.: Principles of Proper Validation: use and abuse of re-sampling for validation, J Chemom, 2010, 24, 168-187. Another situation where you'd need to split by time and also by case would be: imagine you'd like to do predictions on future values of stocks. Again, you need to split by case (stock). But of course, the tested stock's value at a given time may well be (and probably is) correlated with the value of other stocks at that time. Thus, you also need to leave out all "future" data of all stocks from model training.
46,254
Kolmogorov-Smirnov test strange output
Yes. Neither of these distributions is a good fit for your data by that criterion. There are some other distributions you could try, but it strikes me as (ultimately) unlikely that real data come from any of the well-studied distributions, and you have 6k data, so even a trivial discrepancy will make the test 'significant'. (For more along those lines, see: Is normality testing 'essentially useless'?) On the other hand, instead of checking to see if your data significantly diverge from these distributions, you could see how well your data correlate with the distributions you are interested in--the fit may well be 'good enough' for your purposes. (For more along these lines, see my answer here: Testing randomly generated data against its intended distribution.)
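To see the sample-size effect concretely, here is a hand-rolled sketch of the one-sample KS statistic against a standard normal reference (the function name is mine). With $n = 6000$ the asymptotic 5% critical value is roughly $1.36/\sqrt{n} \approx 0.018$, so even a tiny systematic discrepancy crosses it:

```python
import math, random

# One-sample KS statistic D_n against a standard normal: the largest gap
# between the empirical CDF and Phi(x), checked on both sides of each step.
def ks_statistic(data):
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 0.5 * (1 + math.erf(x / math.sqrt(2)))     # Phi(x)
        d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))
    return d

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(6000)]
print(ks_statistic(sample))   # small, as these data really are normal
```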
46,255
Correct use of cross validation in LibsSVM
It seems like you are mixing a couple of things up. First of all, cross-validation is used to get an accurate idea of the generalization error when certain tuning parameters are used. You can use svm-train in k-fold cross-validation mode using the -v k flag. In this mode, svm-train does not output a model -- just a cross-validated estimate of the generalization performance. grid.py is basically a wrapper around svm-train in cross-validation mode. It allows you to easily assess the best parameter tuple out of a given set of options via cross-validation. It is essentially a loop over the specified parameter tuples which performs cross-validation. a. Can the -v 10 cross-validation option replace the testing step? Not entirely. Cross-validation is indeed used to get an estimate of the generalization performance of a model, but when performing cross-validation the entire training set is never used to construct a single model. The typical steps are (i) find optimal tuning parameters using cross-validation, (ii) train a model using these optimal parameters on the full training set and (iii) test this model on the test set. b. The result given by the steps above is suspiciously high (96%), so I'm wondering if I am doing something wrong? Don't worry, be happy. Such classification accuracies are quite feasible for a wide range of problems. c. Could the use of grid.py for parameter selection before the train + cross-validation damage the results (as if I were testing on data I've already trained on)? grid.py does cross-validation for you. There is no point in performing cross-validation again after you ran grid.py.
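The three steps (tune by CV, refit on the full training set, test once) can be sketched without LIBSVM at all. The toy example below substitutes ridge regression for the SVM and synthetic data for yours, purely to show the flow:

```python
import numpy as np

# Illustrative tune -> refit -> test pipeline (data and model are stand-ins).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=120)
X_train, y_train, X_test, y_test = X[:100], y[:100], X[100:], y[100:]

def fit_ridge(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(X, y, lam, k=10):
    """k-fold cross-validated MSE at a given tuning parameter (what -v k does)."""
    errs = []
    for fold in np.array_split(np.arange(len(y)), k):
        mask = np.ones(len(y), bool)
        mask[fold] = False
        w = fit_ridge(X[mask], y[mask], lam)
        errs.append(np.mean((y[fold] - X[fold] @ w) ** 2))
    return np.mean(errs)

# (i) choose the tuning parameter by cross-validation -- the loop grid.py automates
lams = [0.01, 0.1, 1.0, 10.0]
best = min(lams, key=lambda lam: cv_mse(X_train, y_train, lam))
# (ii) refit on the FULL training set with the chosen parameter
w = fit_ridge(X_train, y_train, best)
# (iii) a single final evaluation on the held-out test set
test_mse = np.mean((y_test - X_test @ w) ** 2)
print(best, test_mse)
```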
46,256
How high must logistic covariates' predictive accuracy be for a reversal effect to show up?
(This answer uses results from W.H. Greene (2003), Econometric Analysis, 5th ed. ch.21) I will answer the following modified version, which I believe accomplishes the goals of the OP's question : "If we only estimate a logit model with one binary regressor of interest and some (dummy or continuous) control variables, can we tell whether dropping the control variables will result in a change of sign for the (coefficient of) the regressor of interest?" Notation: Let $RA\equiv Y$ be the dependent variable, $HHS \equiv X$ the binary regressor of interest and $\mathbf Z$ a matrix of control variables. The size of the sample is $n$. Denote $n_0$ the number of zero-realizations of $X$ and $n_1$ the number of non-zero realizations, $n_0+n_1=n$. Denote $\Lambda()$ the cdf of the logistic distribution. Let the model including the control variables (the "unrestricted" model) be $$M_U : \begin{align} &P(Y=1\mid X,\mathbf Z)=\Lambda(X, \mathbf Z,b,\mathbf c)\\ &P(Y=0\mid X,\mathbf Z)=1-\Lambda(X, \mathbf Z,b,\mathbf c) \end{align}$$ where $b$ is the coefficient on the regressor of interest. Let the model including only the regressor of interest (the "restricted" model) be $$M_R : \begin{align} &P(Y=1\mid X)=\Lambda(X, \beta)\\ &P(Y=0\mid X)=1-\Lambda(X,\beta) \end{align}$$ STEP 1 Consider the unrestricted model. 
The first derivative of the log-likelihood w.r.t. $b$ and the condition for a maximum is $$\frac {\partial \ln L_U}{\partial b}= \sum_{i=1}^n\left[y_i-\Lambda_i(x_i, \mathbf z_i,b,\mathbf c)\right]x_i=0 \Rightarrow b^*: \sum_{i=1}^ny_ix_i=\sum_{i=1}^n\Lambda_i(x_i, \mathbf z_i,b^*,\mathbf c^*)x_i \;[1]$$ The analogous relation for the restricted model is $$\frac {\partial \ln L_R}{\partial \beta}= \sum_{i=1}^n\left[y_i-\Lambda_i(x_i,\beta)\right]x_i=0 \Rightarrow \beta^*: \sum_{i=1}^ny_ix_i=\sum_{i=1}^n\Lambda_i(x_i, \beta^*)x_i \qquad[2]$$ We have $$\Lambda_i(X,\beta^*) = \frac {1}{1+e^{-x_i\beta^*}}$$ and since $X$ is a zero/one binary variable, relation $[2]$ can be written $$\beta^*: \sum_{i=1}^ny_ix_i=\frac {n_1}{1+e^{-\beta^*}} \qquad[2a]$$ Combining $[1]$ and $[2a]$ and using again the fact that $X$ is binary, we obtain the following equality relation between the estimated coefficients of the two models: $$\frac {n_1}{1+e^{-\beta^*}} = \sum_{i=1}^n\Lambda_i(x_i, \mathbf z_i,b^*,\mathbf c^*)x_i $$ $$\Rightarrow \frac {1}{1+e^{-\beta^*}} = \frac {1}{n_1}\sum_{x_i=1}\Lambda_i(x_i=1, \mathbf z_i,b^*,\mathbf c^*) \qquad [3]$$ $$\Rightarrow \hat P_R(Y=1\mid X=1) = \hat {\bar P_U}(Y=1\mid X=1,\mathbf Z) \qquad [3a]$$ or in words, the estimated probability from the restricted model will equal the restricted average estimated probability from the model that includes the control variables.
STEP 2 For a sole binary regressor in a logistic regression, its marginal effect $m_R(X)$ is $$ \hat m_R(X)= \hat P_R(Y=1\mid X=1) - \hat P_R(Y=1\mid X=0)$$ $$ \Rightarrow \hat m_R(X) = \frac {1}{1+e^{-\beta^*}} - \frac 12$$ and using $[3]$ $$ \hat m_R(X) = \frac {1}{n_1}\sum_{x_i=1}\Lambda_i(x_i=1, \mathbf z_i,b^*,\mathbf c^*) - \frac 12 \qquad [4]$$ For the unrestricted model that includes the control variables we have $$ \hat m_U(X)= \hat P_U(Y=1\mid X=1, \bar {\mathbf z}) - \hat P_U(Y=1\mid X=0, \bar {\mathbf z})$$ $$\Rightarrow \hat m_U(X) = \frac {1}{1+e^{-b^*-\bar {\mathbf z}'\mathbf c^*}} - \frac {1}{1+e^{-\bar {\mathbf z}'\mathbf c^*}} \qquad [5]$$ where $\bar {\mathbf z}$ contains the sample means of the control variables. It is easy to see that the marginal effect of $X$ has the same sign as its estimated coefficient. Since we have expressed the marginal effect of $X$ from both models in terms of the estimated coefficients from the unrestricted model, we can estimate only the latter, and then calculate the above two expressions ($[4]$ and $[5]$), which will tell us whether we will observe a sign reversal for the coefficient of $X$ or not, without the need to estimate the restricted model.
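The score-equation identity behind relation $[3a]$ is easy to verify numerically. The sketch below simulates data and fits the unrestricted model with a bare-bones Newton-Raphson solver (all names, data, and coefficient values are illustrative):

```python
import numpy as np

# At the unrestricted MLE, the score equation with respect to the coefficient
# on X forces the average fitted probability over the X = 1 subsample to equal
# the sample frequency of Y = 1 among X = 1 -- which is exactly the restricted
# model's fitted P(Y=1|X=1).
rng = np.random.default_rng(7)
n = 500
x = rng.integers(0, 2, n).astype(float)    # binary regressor of interest
z = rng.normal(size=n) + 0.8 * x           # a control correlated with x
y = (rng.random(n) < 1 / (1 + np.exp(-(0.5 * x - 1.2 * z)))).astype(float)

def logit_mle(D, y, iters=50):
    """Newton-Raphson for logistic regression coefficients."""
    beta = np.zeros(D.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-D @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve((D.T * W) @ D, D.T @ (y - p))
    return beta

D = np.column_stack([x, z, np.ones(n)])    # unrestricted: X, control, intercept
p_hat = 1 / (1 + np.exp(-D @ logit_mle(D, y)))
print(p_hat[x == 1].mean(), y[x == 1].mean())   # equal, by score equation [1]
```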
46,257
How high must logistic covariates' predictive accuracy be for a reversal effect to show up?
This is for OLS regression. Consider a geometric representation of three variables -- two predictors, $X_1$ and $X_2$, and a dependent variable, $Y$. Each variable is represented by a vector from the origin. The length of the vector equals the standard deviation of the corresponding variable. The cosine of the angle between any two vectors equals the correlation of the corresponding two variables. I will take all the standard deviations to be 1. The picture shows the plane determined by $X_1$ and $X_2$ when they correlate positively with one another. $Y$ is a vector coming out of the screen; the dashed line is its projection into the predictor space and is the regression estimate of $Y$, $\hat{Y}$. The length of the dashed line equals the multiple correlation, $R$, of $Y$ with $X_1$ and $X_2$. If the projection is in any of the colored sectors then both predictors correlate positively with $Y$. The signs of the regression coefficients $\beta_1$ and $\beta_2$ are immediately apparent visually, because $\hat{Y}$ is the vector sum of $\beta_1 X_1$ and $\beta_2 X_2$. If the projection is in the yellow sector then both $\beta_1$ and $\beta_2$ are positive, but if the projection is in either the red or the blue sector then we have what appears to be suppression; that is, the sign of one of the regression weights is opposite to the sign of the corresponding simple correlation with $Y$. In the picture, $\beta_1$ is positive and $\beta_2$ is negative. Since the length of the projection can vary between 0 and 1 no matter where it is in the predictor space, there is no minimum $R^2$ for suppression.
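The "no minimum $R^2$" claim is easy to check numerically. In this sketch (the correlation, coefficients and noise level are my own choices), $X_2$ correlates positively with $Y$, yet its regression weight is negative, while $R^2$ stays tiny:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
# two positively correlated predictors with unit variance (corr = 0.8)
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
# beta2 is truly negative, and heavy noise keeps R^2 near zero
y = 1.0 * x1 - 0.3 * x2 + 5.0 * rng.normal(size=n)

X = np.column_stack([x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

r2 = 1.0 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
simple_corr = np.corrcoef(x2, y)[0, 1]   # positive, despite beta2 < 0
print(beta, r2, simple_corr)
```

The projection lands in the "suppression" sector even though it is very short, which is exactly the geometric point made above.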
How high must logistic covariates' predictive accuracy be for a reversal effect to show up?
This is for OLS regression. Consider a geometric representation of three variables -- two predictors, $X_1$ and $X_2$, and a dependent variable, $Y$. Each variable is represented by a vector from the
How high must logistic covariates' predictive accuracy be for a reversal effect to show up? This is for OLS regression. Consider a geometric representation of three variables -- two predictors, $X_1$ and $X_2$, and a dependent variable, $Y$. Each variable is represented by a vector from the origin. The length of the vector equals the standard deviation of the corresponding variable. The cosine of the angle between any two vectors equals the correlation of the corresponding two variables. I will take all the standard deviations to be 1. The picture shows the plane determined by $X_1$ and $X_2$ when they correlate positively with one another. $Y$ is a vector coming out of the screen; the dashed line is its projection into the predictor space and is the regression estimate of $Y$, $\hat{Y}$. The length of the dashed line equals the multiple correlation, $R$, of $Y$ with $X_1$ and $X_2$. If the projection is in any of the colored sectors then both predictors correlate positively with $Y$. The signs of the regression coefficients $\beta_1$ and $\beta_2$ are immediately apparent visually, because $\hat{Y}$ is the vector sum of $\beta_1 X_1$ and $\beta_2 X_2$. If the projection is in the yellow sector then both $\beta_1$ and $\beta_2$ are positive, but if the projection is in either the red or the blue sector then we have what appears to be suppression; that is, the sign of one of the regression weights is opposite to the sign of the corresponding simple correlation with $Y$. In the picture, $\beta_1$ is positive and $\beta_2$ is negative. Since the length of the projection can vary between 0 and 1 no matter where it is in the predictor space, there is no minimum $R^2$ for suppression.
How high must logistic covariates' predictive accuracy be for a reversal effect to show up? This is for OLS regression. Consider a geometric representation of three variables -- two predictors, $X_1$ and $X_2$, and a dependent variable, $Y$. Each variable is represented by a vector from the
46,258
How high must logistic covariates' predictive accuracy be for a reversal effect to show up?
There is no obvious relationship between $R^2$ and reversal of the sign of a regression coefficient. Assume you have data for which the true model is, for example, $$ y_i = 0+5x_i -z_i + \epsilon_i $$ with $\epsilon_i \sim N(0, sd_\text{error}^2)$. I show the zero to make explicit that the intercept of the true model is zero; this is just a simplification. When x and z are highly correlated and centered about zero, the coefficient of z when regressing on just z will be positive instead of negative. Note that the true model coefficients do not change with $sd_\text{error}$, but you can make $R^2$ vary between zero and one by changing the magnitude of the residual error. Look for example at the following R code:

require(MASS)
sd.error <- 1
x.and.z <- mvrnorm(1000, c(0, 0), matrix(c(1, 0.9, 0.9, 1), nrow = 2))  # set correlation to 0.9
x <- x.and.z[, 1]
z <- x.and.z[, 2]
y <- 5*x - z + rnorm(1000, 0, sd.error)  # true model
modell1 <- lm(y ~ x + z)
modell2 <- lm(y ~ z)
print(summary(modell1))  # coefficient of z should be negative
print(summary(modell2))  # coefficient of z should be positive

and play a bit with sd.error. Look for example at $sd_\text{error}=50$. Note that with a very large sd.error the coefficient estimates will become more unstable and the reversal might not show up every time. But that's a limitation of the sample size. A short summary: the variance of the error does not affect the expectations and thus the reversal; therefore neither does $R^2$.
How high must logistic covariates' predictive accuracy be for a reversal effect to show up?
There is no obvious relationship between $R^2$ and reversal of the sign of a regression coefficient. Assume you have data for which the true model is for example $$ y_i = 0+5x_i -z_i + \epsilon_i $$ w
How high must logistic covariates' predictive accuracy be for a reversal effect to show up? There is no obvious relationship between $R^2$ and reversal of the sign of a regression coefficient. Assume you have data for which the true model is, for example, $$ y_i = 0+5x_i -z_i + \epsilon_i $$ with $\epsilon_i \sim N(0, sd_\text{error}^2)$. I show the zero to make explicit that the intercept of the true model is zero; this is just a simplification. When x and z are highly correlated and centered about zero, the coefficient of z when regressing on just z will be positive instead of negative. Note that the true model coefficients do not change with $sd_\text{error}$, but you can make $R^2$ vary between zero and one by changing the magnitude of the residual error. Look for example at the following R code:

require(MASS)
sd.error <- 1
x.and.z <- mvrnorm(1000, c(0, 0), matrix(c(1, 0.9, 0.9, 1), nrow = 2))  # set correlation to 0.9
x <- x.and.z[, 1]
z <- x.and.z[, 2]
y <- 5*x - z + rnorm(1000, 0, sd.error)  # true model
modell1 <- lm(y ~ x + z)
modell2 <- lm(y ~ z)
print(summary(modell1))  # coefficient of z should be negative
print(summary(modell2))  # coefficient of z should be positive

and play a bit with sd.error. Look for example at $sd_\text{error}=50$. Note that with a very large sd.error the coefficient estimates will become more unstable and the reversal might not show up every time. But that's a limitation of the sample size. A short summary: the variance of the error does not affect the expectations and thus the reversal; therefore neither does $R^2$.
How high must logistic covariates' predictive accuracy be for a reversal effect to show up? There is no obvious relationship between $R^2$ and reversal of the sign of a regression coefficient. Assume you have data for which the true model is for example $$ y_i = 0+5x_i -z_i + \epsilon_i $$ w
46,259
How to interpret Weka Logistic Regression output?
Let me explain what odds mean in general. Odds are the ratio of the probability of success to the probability of failure, that is, $\displaystyle \frac{p_{i}}{1-p_{i}}$. Let's say $p_{i}$ for a given event is 0.6; then the odds for that event are $0.6/0.4=1.5$. 1- As you said, since logistic regression outputs probabilities based on the following equation: $$\text{logit}(p_{i}) = \log{\frac{p_{i}}{1-p_{i}}} = \beta_{0} + \beta_{1}x_{1} + ... + \beta_{k}x_{k}$$ the coefficients refer to each $\beta_{i}$. 2- Odds ratios are simply the exponentials of the weights you found before. For example, the first coefficient you have is outlook=sunny: -6.4257. If you calculate $\exp(-6.4257)$ you get 0.0016, which is the corresponding value in the odds ratio table. The relation between the coefficient for outlook=sunny and its odds ratio is, in this case, the logarithm of the odds of outlook=sunny over the odds of outlook=¬sunny: $$\displaystyle \log{\frac{Odds(outlook=sunny)}{Odds(outlook=¬sunny)}}$$ For instance, the odds of outlook=sunny is the probability of a sunny day on which you can play over the probability of a sunny day on which you can't play. Similarly, you can calculate the odds for outlook=¬sunny. The log of this ratio is the value of the coefficient attached to the variable outlook=sunny in the logistic regression. However, in this particular example, since you have more than one variable as predictors, it's necessary to fix the values of the other variables. Now you can see why outlook=overcast has such a value: the odds for outlook=overcast are extremely favorable to the yes outcome, producing a high positive value. A simpler example of this can be found here. 3- The confusion matrix is very simple. The first row tells you the number of instances whose actual class is yes that the model classified as yes (that is, 7) and the number of actual yes instances that the model classified as no (2). The second row is equivalent for instances whose actual class is no.
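The arithmetic in point 2 is easy to verify (the coefficient value is the one Weka reported above; everything else is just the definition of odds):

```python
import math

# Weka's coefficient for outlook=sunny, from the output discussed above
coef_sunny = -6.4257
odds_ratio = math.exp(coef_sunny)   # matches the 0.0016 in the odds-ratio table

# odds <-> probability round trip for the p = 0.6 example
p = 0.6
odds = p / (1 - p)                  # 1.5
p_back = odds / (1 + odds)          # back to 0.6
print(round(odds_ratio, 4), odds, p_back)
```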
How to interpret Weka Logistic Regression output?
Let me explain what odds mean in general. Odds are the ratio between the probability of success over the probability of failure, that is, $\displaystyle \frac{p_{i}}{1-p_{i}}$. Let's say $p_{i}$ for
How to interpret Weka Logistic Regression output? Let me explain what odds mean in general. Odds are the ratio of the probability of success to the probability of failure, that is, $\displaystyle \frac{p_{i}}{1-p_{i}}$. Let's say $p_{i}$ for a given event is 0.6; then the odds for that event are $0.6/0.4=1.5$. 1- As you said, since logistic regression outputs probabilities based on the following equation: $$\text{logit}(p_{i}) = \log{\frac{p_{i}}{1-p_{i}}} = \beta_{0} + \beta_{1}x_{1} + ... + \beta_{k}x_{k}$$ the coefficients refer to each $\beta_{i}$. 2- Odds ratios are simply the exponentials of the weights you found before. For example, the first coefficient you have is outlook=sunny: -6.4257. If you calculate $\exp(-6.4257)$ you get 0.0016, which is the corresponding value in the odds ratio table. The relation between the coefficient for outlook=sunny and its odds ratio is, in this case, the logarithm of the odds of outlook=sunny over the odds of outlook=¬sunny: $$\displaystyle \log{\frac{Odds(outlook=sunny)}{Odds(outlook=¬sunny)}}$$ For instance, the odds of outlook=sunny is the probability of a sunny day on which you can play over the probability of a sunny day on which you can't play. Similarly, you can calculate the odds for outlook=¬sunny. The log of this ratio is the value of the coefficient attached to the variable outlook=sunny in the logistic regression. However, in this particular example, since you have more than one variable as predictors, it's necessary to fix the values of the other variables. Now you can see why outlook=overcast has such a value: the odds for outlook=overcast are extremely favorable to the yes outcome, producing a high positive value. A simpler example of this can be found here. 3- The confusion matrix is very simple. The first row tells you the number of instances whose actual class is yes that the model classified as yes (that is, 7) and the number of actual yes instances that the model classified as no (2). The second row is equivalent for instances whose actual class is no.
How to interpret Weka Logistic Regression output? Let me explain what odds mean in general. Odds are the ratio between the probability of success over the probability of failure, that is, $\displaystyle \frac{p_{i}}{1-p_{i}}$. Let's say $p_{i}$ for
46,260
How to approach forecasting time-series data
A simple approach is to post at the hour slot you expect to receive the most likes. Your description suggests that the only expected component of your time series is seasonal by hours of the day. To be more precise, suppose that the influence is multiplicative. A parametrized realization of that model for 30 days is given below. If we normalize and overlay each day, we can perform regression on it. As if by cheating, we've recovered our seasonal component. The code:

import numpy as np
import pandas
from matplotlib import pyplot as plt
from sklearn.neighbors import KNeighborsRegressor

def generate_ts(hours=24, days=30):
    np.random.seed(123)
    # Generate some iid like data
    x = np.random.binomial(10, .5, hours * days)
    # Generate your trend
    grid = np.linspace(-np.pi, np.pi, hours)
    hourly_trend = np.round(np.cos(grid) * 5)
    hourly_trend -= hourly_trend.min()
    rep_hourly_trend = np.tile(hourly_trend, days)
    data = x * rep_hourly_trend
    # Generate an index
    ind = pandas.date_range(start='2013-09-29 00:00:00', periods=days * hours, freq='h')
    return pandas.Series(data, index=ind), hourly_trend

def recover_trend(ts, hours=24, days=30):
    obs_trend = ts.values.reshape(-1, hours)
    obs_trend = (obs_trend.T - obs_trend.mean(axis=1)) / obs_trend.std(axis=1)
    y = obs_trend.ravel()
    x = (np.repeat(np.arange(hours), days)).reshape(-1, 1)
    model = KNeighborsRegressor()
    model.fit(x, y)
    rec_trend = model.predict(np.arange(hours).reshape(-1, 1))
    return x, y, rec_trend

def main():
    hours, days = 24, 30
    ts, true_trend = generate_ts(hours=hours, days=days)
    true_trend = (true_trend - true_trend.mean()) / true_trend.std()
    ts.plot()
    plt.title("Run Sequence Plot of Likes")
    plt.ylabel("Likes")
    plt.xlabel("Time")
    plt.show()
    x, y, rec_trend = recover_trend(ts, hours=hours, days=days)
    plt.scatter(x.ravel(), y, c='k', label='Observed Trend')
    plt.plot(np.arange(hours), rec_trend, 'g', label='Recovered Trend', linewidth=5)
    plt.plot(np.arange(hours), true_trend, 'r', label='True Trend', linewidth=5)
    plt.grid()
    plt.title("Trend Regression")
    plt.ylabel("Normalized Like Influence")
    plt.xlabel("Hours")
    plt.legend()
    plt.show()
    season_comp = pandas.Series(np.tile(rec_trend, days), index=ts.index)
    season_comp.plot()
    plt.title("Run Sequence Plot of Seasonal Component of Likes")
    plt.ylabel("Likes")
    plt.xlabel("Time")
    plt.show()

if __name__ == '__main__':
    main()

Before using this, I must caution that there are several issues. If there is a trend component, it must be dealt with first. Low-order polynomial regression or the lag operator are popular options. Careful inspection of the autocorrelation and partial autocorrelation plots may reveal additional components of the time series to consider. After detrending your time series, you should inspect the residuals for stationarity. No information is given on the distribution of times at which the posts were made in the collected data. Though it may seem obvious that the optimal posting time is prior to the maximum of the recovered seasonal trend, this may not be the case. Changing the posting time may change the seasonality of the likes. Clumping all the posts on the hour that receives the most likes will likely change the user behaviour. This problem is better suited to reinforcement learning. The principled approach is to perform sequential optimization of the post time with contextual bandits.
How to approach forecasting time-series data
A simple approach is to post at the hour slot you expect to receive the most likes. Your description suggests that the only expected component of your time series is seasonal by hours of the day. To b
How to approach forecasting time-series data A simple approach is to post at the hour slot you expect to receive the most likes. Your description suggests that the only expected component of your time series is seasonal by hours of the day. To be more precise, suppose that the influence is multiplicative. A parametrized realization of that model for 30 days is given below. If we normalize and overlay each day, we can perform regression on it. As if by cheating, we've recovered our seasonal component. The code:

import numpy as np
import pandas
from matplotlib import pyplot as plt
from sklearn.neighbors import KNeighborsRegressor

def generate_ts(hours=24, days=30):
    np.random.seed(123)
    # Generate some iid like data
    x = np.random.binomial(10, .5, hours * days)
    # Generate your trend
    grid = np.linspace(-np.pi, np.pi, hours)
    hourly_trend = np.round(np.cos(grid) * 5)
    hourly_trend -= hourly_trend.min()
    rep_hourly_trend = np.tile(hourly_trend, days)
    data = x * rep_hourly_trend
    # Generate an index
    ind = pandas.date_range(start='2013-09-29 00:00:00', periods=days * hours, freq='h')
    return pandas.Series(data, index=ind), hourly_trend

def recover_trend(ts, hours=24, days=30):
    obs_trend = ts.values.reshape(-1, hours)
    obs_trend = (obs_trend.T - obs_trend.mean(axis=1)) / obs_trend.std(axis=1)
    y = obs_trend.ravel()
    x = (np.repeat(np.arange(hours), days)).reshape(-1, 1)
    model = KNeighborsRegressor()
    model.fit(x, y)
    rec_trend = model.predict(np.arange(hours).reshape(-1, 1))
    return x, y, rec_trend

def main():
    hours, days = 24, 30
    ts, true_trend = generate_ts(hours=hours, days=days)
    true_trend = (true_trend - true_trend.mean()) / true_trend.std()
    ts.plot()
    plt.title("Run Sequence Plot of Likes")
    plt.ylabel("Likes")
    plt.xlabel("Time")
    plt.show()
    x, y, rec_trend = recover_trend(ts, hours=hours, days=days)
    plt.scatter(x.ravel(), y, c='k', label='Observed Trend')
    plt.plot(np.arange(hours), rec_trend, 'g', label='Recovered Trend', linewidth=5)
    plt.plot(np.arange(hours), true_trend, 'r', label='True Trend', linewidth=5)
    plt.grid()
    plt.title("Trend Regression")
    plt.ylabel("Normalized Like Influence")
    plt.xlabel("Hours")
    plt.legend()
    plt.show()
    season_comp = pandas.Series(np.tile(rec_trend, days), index=ts.index)
    season_comp.plot()
    plt.title("Run Sequence Plot of Seasonal Component of Likes")
    plt.ylabel("Likes")
    plt.xlabel("Time")
    plt.show()

if __name__ == '__main__':
    main()

Before using this, I must caution that there are several issues. If there is a trend component, it must be dealt with first. Low-order polynomial regression or the lag operator are popular options. Careful inspection of the autocorrelation and partial autocorrelation plots may reveal additional components of the time series to consider. After detrending your time series, you should inspect the residuals for stationarity. No information is given on the distribution of times at which the posts were made in the collected data. Though it may seem obvious that the optimal posting time is prior to the maximum of the recovered seasonal trend, this may not be the case. Changing the posting time may change the seasonality of the likes. Clumping all the posts on the hour that receives the most likes will likely change the user behaviour. This problem is better suited to reinforcement learning. The principled approach is to perform sequential optimization of the post time with contextual bandits.
How to approach forecasting time-series data A simple approach is to post at the hour slot you expect to receive the most likes. Your description suggests that the only expected component of your time series is seasonal by hours of the day. To b
46,261
How to approach forecasting time-series data
It sounds like you only care about what day of week and what hour of that day will likely garner the most attention. You can format your data into hour of week, and treat each week as a set of observations, like you have done. From here you can calculate the data-derived expected likes by hour of week. If you normalize the data, then the likes for each hour over the total likes for the week will give you the probability for that hour of the week. You can regress on that data, but utilizing a clustering algorithm like k-NN, or a neural network that predicts based on latent features, will require more than an x and y. Adding features like the general topic, perhaps some term frequency or semantic analysis, maybe the format of the post (images or not, links or not, question or not, etc.), will give you data to cluster by. You will likely need to adjust for overall activity, and will need many more weeks to gain any kind of confidence in the output. However, if you get a good set of features and can remove general unrelated trends in activity, you might be best served by generating a self-organizing map (a type of neural network) in which each hour of the week is a node whose response correlates best with a specific combined set of features. A good, simple implementation in Python is here. Then, when you get a specific post and decompose it into the feature set, you can see which node responds well and post on that hour of the week. Afterward, add the true response back into your training data and retrain the map to include the new data.
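The hour-of-week normalization described above can be sketched with made-up counts (the Poisson-generated data and the 12-week horizon are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
weeks, hours = 12, 168                   # 168 hour-of-week slots
likes = rng.poisson(lam=5.0, size=(weeks, hours)).astype(float)

# each week's likes divided by that week's total -> per-hour probabilities
weekly_probs = likes / likes.sum(axis=1, keepdims=True)
# data-derived expected like probability for each hour of the week
hour_profile = weekly_probs.mean(axis=0)
best_hour = int(hour_profile.argmax())
print(best_hour)
```

With real data, `best_hour` is the slot you would target; the richer feature-based approaches in the text refine this baseline.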
How to approach forecasting time-series data
It sounds like you only care about what day of week and what hour of that day will likely garner the most attention. You can format your data into hour of week, and treat each week as a set of observ
How to approach forecasting time-series data It sounds like you only care about what day of week and what hour of that day will likely garner the most attention. You can format your data into hour of week, and treat each week as a set of observations, like you have done. From here you can calculate the data-derived expected likes by hour of week. If you normalize the data, then the likes for each hour over the total likes for the week will give you the probability for that hour of the week. You can regress on that data, but utilizing a clustering algorithm like k-NN, or a neural network that predicts based on latent features, will require more than an x and y. Adding features like the general topic, perhaps some term frequency or semantic analysis, maybe the format of the post (images or not, links or not, question or not, etc.), will give you data to cluster by. You will likely need to adjust for overall activity, and will need many more weeks to gain any kind of confidence in the output. However, if you get a good set of features and can remove general unrelated trends in activity, you might be best served by generating a self-organizing map (a type of neural network) in which each hour of the week is a node whose response correlates best with a specific combined set of features. A good, simple implementation in Python is here. Then, when you get a specific post and decompose it into the feature set, you can see which node responds well and post on that hour of the week. Afterward, add the true response back into your training data and retrain the map to include the new data.
How to approach forecasting time-series data It sounds like you only care about what day of week and what hour of that day will likely garner the most attention. You can format your data into hour of week, and treat each week as a set of observ
46,262
Making box plots when analyzing a case with 3 predictor variables?
Thanks for the clarification. You can capitalize on the paneling and clustering designs and put together a compact boxplot like this: The boxplot will be useful for assessing group-wise distribution and outliers. However, since it's an ANOVA, I'd also recommend visualizing the mean and 95% CI using an error plot: By comparing and contrasting the positions of each mean and CI across panels and across clusters, one may gain a bit more insight into what the interactions between the group means will be like. Start from just two variables (uranium vs. temperature, uranium vs. time, etc.) and then build up from there. If your class has not covered interaction yet, then I'd suggest asking the instructor if he/she will allow you to experiment.
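The mean-and-95%-CI values behind such an error plot can be computed along these lines (the data are hypothetical, and the 1.96 multiplier is a normal approximation rather than the exact t quantile):

```python
import numpy as np

rng = np.random.default_rng(3)
levels = ["low", "med", "high"]
# hypothetical uranium measurements for each temperature level
data = {lvl: rng.normal(loc=mu, scale=2.0, size=30)
        for lvl, mu in zip(levels, [10.0, 12.0, 15.0])}

def mean_ci(x, z=1.96):
    """Group mean with a normal-approximation 95% confidence interval."""
    m = x.mean()
    sem = x.std(ddof=1) / np.sqrt(len(x))
    return m, m - z * sem, m + z * sem

for lvl in levels:
    m, lo, hi = mean_ci(data[lvl])
    print(f"{lvl}: mean={m:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```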
Making box plots when analyzing a case with 3 predictor variables?
Thanks for the clarification. You can capitalize on the paneling and clustering designs and put together a compact boxplot like this: The boxplot will be useful for assessing group-wise distribution
Making box plots when analyzing a case with 3 predictor variables? Thanks for the clarification. You can capitalize on the paneling and clustering designs and put together a compact boxplot like this: The boxplot will be useful for assessing group-wise distribution and outliers. However, since it's an ANOVA, I'd also recommend visualizing the mean and 95% CI using an error plot: By comparing and contrasting the positions of each mean and CI across panels and across clusters, one may gain a bit more insight into what the interactions between the group means will be like. Start from just two variables (uranium vs. temperature, uranium vs. time, etc.) and then build up from there. If your class has not covered interaction yet, then I'd suggest asking the instructor if he/she will allow you to experiment.
Making box plots when analyzing a case with 3 predictor variables? Thanks for the clarification. You can capitalize on the paneling and clustering designs and put together a compact boxplot like this: The boxplot will be useful for assessing group-wise distribution
46,263
Making box plots when analyzing a case with 3 predictor variables?
So I understand that your DV is numerical and your 3 IVs are categorical (3 levels). Boxplots are a good choice. You will have 9 boxplots, 3 for each IV. Plot each IV separately. On the y-axis will always be the DV (uranium). On the x-axis will be the IVs. For example, temp low, temp med, temp high. Do this for all 3 IVs. If you want to look at the interaction between the IVs, the plots will be more complicated (as will be your analysis). There's no easy way. You're just going to have to divide up the data into 9 groups when looking at 2 IVs, and 27 when looking at all 3 at once, and make boxplots for each. I don't suggest you do this. Given your skill level and because it is for a class, looking at one IV at a time is probably good enough.
Making box plots when analyzing a case with 3 predictor variables?
So I understand that your DV is numerical and your 3 IVs are categorical (3 levels). Boxplots are a good choice. You will have 9 boxplots, 3 for each IV. Plot each IV separately. On the y axis will alw
Making box plots when analyzing a case with 3 predictor variables? So I understand that your DV is numerical and your 3 IVs are categorical (3 levels). Boxplots are a good choice. You will have 9 boxplots, 3 for each IV. Plot each IV separately. On the y-axis will always be the DV (uranium). On the x-axis will be the IVs. For example, temp low, temp med, temp high. Do this for all 3 IVs. If you want to look at the interaction between the IVs, the plots will be more complicated (as will be your analysis). There's no easy way. You're just going to have to divide up the data into 9 groups when looking at 2 IVs, and 27 when looking at all 3 at once, and make boxplots for each. I don't suggest you do this. Given your skill level and because it is for a class, looking at one IV at a time is probably good enough.
Making box plots when analyzing a case with 3 predictor variables? So I understand that your DV is numerical and your 3 IVs are categorical (3 levels). Boxplots are a good choice. You will have 9 boxplots, 3 for each IV. Plot each IV separately. On the y axis will alw
46,264
Making box plots when analyzing a case with 3 predictor variables?
Here's the '9x boxplot' approach in R:

### make reproducible
set.seed(1)
pred1 <- factor(c("low", "med", "high"), levels=c("low", "med", "high"))
df1 <- data.frame(ur=10*abs(runif(100)),
                  time=sample(pred1, 100, replace=TRUE),
                  temp=sample(pred1, 100, replace=TRUE),
                  str=sample(pred1, 100, replace=TRUE))
library(ggplot2)
g1 <- ggplot(data=df1, aes(y=ur, x=time, fill=time))
g1 + geom_boxplot() + facet_grid(facets = str ~ temp, scale="free_y", labeller=label_both)

giving: (Note y-axis scales vary per row).
Making box plots when analyzing a case with 3 predictor variables?
Here's the '9x boxplot' approach in R: ### make reproducible set.seed(1) pred1 <- factor(c("low", "med", "high"), levels=c("low", "med", "high")) df1 <- data.frame(ur=10*abs(runif(100)),
Making box plots when analyzing a case with 3 predictor variables? Here's the '9x boxplot' approach in R:

### make reproducible
set.seed(1)
pred1 <- factor(c("low", "med", "high"), levels=c("low", "med", "high"))
df1 <- data.frame(ur=10*abs(runif(100)),
                  time=sample(pred1, 100, replace=TRUE),
                  temp=sample(pred1, 100, replace=TRUE),
                  str=sample(pred1, 100, replace=TRUE))
library(ggplot2)
g1 <- ggplot(data=df1, aes(y=ur, x=time, fill=time))
g1 + geom_boxplot() + facet_grid(facets = str ~ temp, scale="free_y", labeller=label_both)

giving: (Note y-axis scales vary per row).
Making box plots when analyzing a case with 3 predictor variables? Here's the '9x boxplot' approach in R: ### make reproducible set.seed(1) pred1 <- factor(c("low", "med", "high"), levels=c("low", "med", "high")) df1 <- data.frame(ur=10*abs(runif(100)),
46,265
Sample size and power detection
When computing power, you have to state what hypothetical effect size you are trying to detect. As Peter mentioned, computing the power to detect the results you actually detected is rarely useful. Here is a page I wrote: http://graphpad.com/support/faq/why-it-is-not-helpful-to-compute-the-power-of-an-experiment-to-detect-the-difference-actually-observed-why-is-post-hoc-power-analysis-futile/ The key paragraph: If your study reached a conclusion that the difference is not statistically significant, then -- by definition -- its power to detect the effect actually observed is very low. You learn nothing new by such a calculation. It can be useful to compute the power of the study to detect a difference that would have been scientifically or clinically worth detecting. It is not worthwhile to compute the power of the study to detect the difference (or effect) actually observed. Here are five related peer-reviewed articles:

SN Goodman and JA Berlin, The Use of Predicted Confidence Intervals When Planning Experiments and the Misuse of Power When Interpreting the Results, Annals of Internal Medicine 121: 200-206, 1994.
Hoenig JM, Heisey DM, The Abuse of Power, The American Statistician 55(1): 19-24, 2001. doi:10.1198/000313001300339897.
Lenth, R. V. (2001), Some Practical Guidelines for Effective Sample Size Determination, The American Statistician, 55, 187-193.
M Levine and MHH Ensom, Post Hoc Power Analysis: An Idea Whose Time Has Passed, Pharmacotherapy 21: 405-409, 2001.
Thomas, L., Retrospective Power Analysis, Conservation Biology 11(1): 276-280, 1997.
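To make the useful calculation concrete, here is a sketch of power for a prespecified, scientifically meaningful effect size, using a two-sided two-sample z-approximation (the effect size and sample sizes are illustrative; a t-based calculation would be slightly more conservative):

```python
import math
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for a
    hypothesized standardized effect size (Cohen's d)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)
    ncp = effect_size * math.sqrt(n_per_group / 2.0)   # noncentrality
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)

# power to detect a hypothetical d = 0.5 with 64 subjects per group
print(round(two_sample_power(0.5, 64), 3))
```

This is the prospective question -- how large a study is needed for an effect worth detecting -- not the futile retrospective one discussed above.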
Sample size and power detection
When computing power, you have to state what hypothetical effect size you are trying to detect. As Peter mentioned, computing the power to detect the results you actually detected is rarely useful. He
Sample size and power detection When computing power, you have to state what hypothetical effect size you are trying to detect. As Peter mentioned, computing the power to detect the results you actually detected is rarely useful. Here is a page I wrote: http://graphpad.com/support/faq/why-it-is-not-helpful-to-compute-the-power-of-an-experiment-to-detect-the-difference-actually-observed-why-is-post-hoc-power-analysis-futile/ The key paragraph: If your study reached a conclusion that the difference is not statistically significant, then -- by definition -- its power to detect the effect actually observed is very low. You learn nothing new by such a calculation. It can be useful to compute the power of the study to detect a difference that would have been scientifically or clinically worth detecting. It is not worthwhile to compute the power of the study to detect the difference (or effect) actually observed. Here are five related peer-reviewed articles:

SN Goodman and JA Berlin, The Use of Predicted Confidence Intervals When Planning Experiments and the Misuse of Power When Interpreting the Results, Annals of Internal Medicine 121: 200-206, 1994.
Hoenig JM, Heisey DM, The Abuse of Power, The American Statistician 55(1): 19-24, 2001. doi:10.1198/000313001300339897.
Lenth, R. V. (2001), Some Practical Guidelines for Effective Sample Size Determination, The American Statistician, 55, 187-193.
M Levine and MHH Ensom, Post Hoc Power Analysis: An Idea Whose Time Has Passed, Pharmacotherapy 21: 405-409, 2001.
Thomas, L., Retrospective Power Analysis, Conservation Biology 11(1): 276-280, 1997.
Sample size and power detection When computing power, you have to state what hypothetical effect size you are trying to detect. As Peter mentioned, computing the power to detect the results you actually detected is rarely useful. He
46,266
Sample size and power detection
First, post-hoc power analysis is problematic (see, e.g., this).

Second, if you decide to proceed anyway, there are two general approaches to power calculation. The simpler choice is to find a program that will calculate power for you. The more complex one is to simulate the data. The former makes assumptions (sometimes unwarranted assumptions); if you go this route, you'd probably want to use the power programs for a one-way ANOVA and then note that in the limitations. The latter requires you to create hypothetical data. Both have been discussed here a lot. How to simulate will depend on what software you are using.

Third, regarding power for the KW test, this article seems apropos, but I have not read it (beyond the abstract) as it is behind a pay wall.
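The simulation approach can be sketched in a few lines. Below is a minimal version in Python (standard library only; the Kruskal-Wallis H statistic is computed by hand without a tie correction, and the group shifts, per-group sample size, and repetition count are invented for illustration): generate data under a hypothesized alternative, run the test many times, and report the rejection rate.

```python
import random

CHI2_CRIT_DF2 = 5.991  # 95th percentile of chi-square with 2 df (3 groups)

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic; no tie correction (fine for continuous data)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (n + 1)

def simulated_power(shifts, n_per_group=30, reps=500, seed=1):
    """Fraction of simulated datasets in which the KW test rejects at alpha=0.05."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        groups = [[rng.gauss(mu, 1.0) for _ in range(n_per_group)]
                  for mu in shifts]
        if kruskal_wallis_h(groups) > CHI2_CRIT_DF2:
            rejections += 1
    return rejections / reps
```

Under the null (all shifts zero) the rejection rate should hover near 0.05, which doubles as a sanity check of the simulation.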
46,267
Why is coefficient of determination used to assess fit of a least squares line?
This is a very broad question, although it may not seem so. Two comments:

1. You say "The coefficient of determination is" but whether the formula you give acts as a fundamental definition for anyone is unclear. I'd characterise it rather as one of several available computing formulas.
2. You ask "Why is this used" but that confuses or conflates the question of why the coefficient of determination is used at all with why the particular formula you cite might be used.

For me, the attractions of $R^2$ lie in being (a) a simple and single measure linked to the correlation coefficient $r$ or an analogue of that and (b) free of the units of measurement of the original variable. In multiple regression, the correlation concerned is between the values observed and those predicted from the model.

The disadvantages of $R^2$ are precisely the same points: no summary measure can capture all the virtues and limitations of a regression, and there is often much point in summarising lack of fit on the scale of the measured response. To that end, $SS_\text{res}/n$ is, contrary to your implication, often used, if indirectly. Summarising the residuals by mean square is at base a good idea, although its square root is a better one on dimensional grounds, and for detailed technical reasons there is a case for using a divisor which is the sample size minus the number of parameters fitted. (Looking at the detailed pattern of the residuals is usually an even better idea.)

More broadly, $R^2$ is often over-valued in that a low $R^2$ may be a worthwhile achievement and a high $R^2$ a scientific or practical failure. Much depends on what is interesting, useful and possible scientifically or practically.
46,268
Why is coefficient of determination used to assess fit of a least squares line?
The $SS$ can be considered a sum quantity of variability. The $SS_\text{tot}$ is all of the variability when the very simplest model is used, the mean. Look at the equation, it's the sum of each squared deviation, all of that variability not explained by the mean (any value exactly at the mean contributes 0 to $SS$). The $SS_\text{res}$ is the variability that your more complex model didn't explain, whatever that model is. For example, if you have two means in the more complex model they should explain more of the data / have a smaller $SS$. Therefore $SS_\text{res}/SS_\text{tot}$ is the proportion of variability that you didn't explain. If you subtract what's unexplained from 1 then you get the remaining portion of variability you did explain. It means something. The reason it's used is because it means something sensible and useful. $SS_\text{res}/n$, or some other value, may mean something too, but not the same thing. If you come up with a more useful number for your purposes then use that.
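This proportion-of-variability reading is easy to verify numerically. The sketch below (Python, standard library only; the toy data are made up) computes $1 - SS_\text{res}/SS_\text{tot}$ for a least-squares line and checks that it coincides with the squared correlation between $x$ and $y$, which is the other common route to the same number.

```python
def r_squared(x, y):
    """R^2 of a least-squares line, computed as 1 - SS_res / SS_tot."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                     # slope
    a = my - b * mx                   # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def corr_squared(x, y):
    """Squared Pearson correlation, for comparison."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy * sxy / (sxx * syy)
```

For simple linear regression the two quantities are mathematically identical; in multiple regression the analogous identity holds between $R^2$ and the squared correlation of observed and fitted values.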
46,269
Bivariate normal distribution and its distribution function as correlation coefficient $\rightarrow \pm 1$
Yes, it's well-defined. For convenience and ease of exposition I'm changing your notation to use two standard normal distributed random variables $X$ and $Y$ in place of your $X1$ and $X2$. I.e., $X = (X1 - \mu_1)/\sigma_1$ and $Y = (X2 - \mu_2)/\sigma_2$. To standardize you subtract the mean and then divide by the standard deviation (in your post you only did the latter).

When $\rho=1$, the variables are perfectly correlated (which in the case of a normal distribution means perfectly dependent), so $X=Y$. And when $\rho=-1$, $X=-Y$.

The cumulative distribution function $\Phi(x,y,\rho)$ is defined to be the probability $\rm{Pr}(X\le x \cap Y\le y)$ given the correlation $\rm{Corr}(X,Y)=\rho$. I.e., $X$ has to be $\le x\,$ AND $\;Y$ has to be $\le y$. So in the case $\rho=1$,
$$\begin{eqnarray*} \Phi(x,y) &=& \rm{Pr}(X\le x \cap Y\le y) \\ &=& \rm{Pr}(X\le x \cap X\le y) \\ &=& \rm{Pr}(X\le \rm{min}(x,y)) \\ &=& \Phi_X(\rm{min}(x,y)) \\ &=& \Phi_Y(\rm{min}(x,y)). \\ \end{eqnarray*}$$
Here, $0 < \Phi(x,y) < 1$, so long as $|x|, |y| < \infty$.

The case $\rho=-1$ is much more interesting (and I'm not 100% sure I've got this right so would welcome corrections):
$$\begin{eqnarray*} \Phi(x,y) &=& \rm{Pr}(X\le x \cap Y\le y) \\ &=& \rm{Pr}(X\le x \cap -X\le y) \\ &=& \rm{Pr}(X\le x \cap X \ge -y) \\ &=& \rm{Pr}(-y \le X \le x) \\ &=& \Phi_X(x) - \Phi_X(-y) \;\;\;\mbox{(*)}\\ &=& \Phi_X(x) - (1 - \Phi_X(y)) \\ &=& \Phi_X(x) + \Phi_X(y) -1. \end{eqnarray*}$$
Note that the step marked * assumes $-y < x$, or equivalently, $y > -x$. If this doesn't hold, then $\Phi = 0$. Here, $0 \le \Phi(x,y) < 1$, so long as $|x|, |y| < \infty$. Compared to the case of $\rho=1$, it is now possible to get $\Phi(x,y) = 0$ with finite values of $x$ and $y$. E.g. if $x=1$ and $y=-2$, it's impossible to get both $X\le x$ and $X\ge -y$ ($X\le 1$ and $X\ge 2$).

To get some intuition for how the cumulative distribution functions look, I've plotted 3D plots and contour plots for the two cases below.
R code for these plots:

grid = expand.grid(x=seq(-3,3,0.05), y=seq(-3,3,0.05))
grid$phi1 = with(grid, pnorm(pmin(x,y)))
grid$phi2 = with(grid, ifelse(-y<x, pnorm(x) + pnorm(y) - 1, 0))
library(lattice)
wireframe(data=grid, phi1 ~ x*y, shade=TRUE, main="X=Y", scales=list(arrows=FALSE))
contourplot(data=grid, phi1 ~ x*y, main="X=Y")
wireframe(data=grid, phi2 ~ x*y, shade=TRUE, main="X=-Y", scales=list(arrows=FALSE))
contourplot(data=grid, phi2 ~ x*y, main="X=-Y", cuts=10)

There are plenty of web pages which cover the bivariate standard normal distribution. Which one you find best is going to depend on you. I had a quick search and rather liked the following: http://webee.technion.ac.il/people/adler/lec36.pdf, as it has some nice diagrams on p8 of what happens as $\rho \rightarrow \pm 1$.

In the case of $\rho = \pm 1$, plotting $X$ against $Y$ will give you a straight line through the origin, either $y=\pm x$. If you plot this yourself, you should get a good intuition as to why $\rm{min}$ occurs in the formula for $\rho = 1$.
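The two closed forms can also be sanity-checked numerically. The sketch below (Python standard library rather than R; the function names are mine) compares each formula against a Monte Carlo estimate obtained by drawing $X$ and setting $Y = \rho X$ with $\rho = \pm 1$.

```python
import random
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal CDF

def phi_rho_plus1(x, y):
    """Joint CDF when rho = +1, i.e. Y = X: Phi(min(x, y))."""
    return PHI(min(x, y))

def phi_rho_minus1(x, y):
    """Joint CDF when rho = -1, i.e. Y = -X: max(0, Phi(x) + Phi(y) - 1)."""
    return max(0.0, PHI(x) + PHI(y) - 1.0)

def monte_carlo(x, y, rho, n=100_000, seed=7):
    """Estimate Pr(X <= x, Y <= y) with Y = rho * X by simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        if z <= x and rho * z <= y:
            hits += 1
    return hits / n
```

The impossible case from the text ($x=1$, $y=-2$ with $\rho=-1$) comes out exactly zero from the formula, as it should.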
46,270
How to identify which predictors should be included in a multiple regression?
The model should be formulated by subject matter expertise. It is not a good idea to use the data to tell you which data to use. The data are not information-rich enough to be able to do this reliably. Should you have too few subjects per parameter (one rule of thumb is to have at least 15 subjects per parameter in the model), strongly consider data reduction methods that are blinded to $Y$. These include principal components, variable clustering, and redundancy analysis. Examples are in my course notes at http://biostat.mc.vanderbilt.edu/CourseBios330.
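A sketch of the principal-components flavour of this idea (Python with numpy rather than any particular course's code; the data, dimensions, and the choice of three components are all invented): the predictor matrix is summarised by a few component scores without ever looking at the outcome, and the outcome is then regressed on those scores.

```python
import numpy as np

def pc_scores(X, k):
    """First k principal-component scores of X (standardised columns).

    The reduction uses only X, never the outcome, so it is
    "blinded to Y" in the sense described above.
    """
    Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # n x k score matrix

# Toy use: 100 subjects, 12 highly correlated predictors -> 3 scores
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 3))
X = np.repeat(latent, 4, axis=1) + 0.3 * rng.normal(size=(100, 12))
Z = pc_scores(X, 3)  # regress Y on these 3 columns instead of all 12
```

The model then spends 3 parameters instead of 12, which matters when the subjects-per-parameter budget is tight.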
46,271
How to identify which predictors should be included in a multiple regression?
There are lots of methods that can be used for variable selection. The LASSO is one of the better data-driven variable selection methods. Do not, whatever you do, use forward stepwise selection. You'll be glad you didn't: http://www.nesug.org/proceedings/nesug07/sa/sa07.pdf
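For intuition about why the LASSO selects variables at all, here is a minimal coordinate-descent implementation (Python/numpy, written from scratch for illustration rather than taken from any package; the penalty value and toy data are invented). The soft-thresholding update sets small coefficients exactly to zero, which is what performs the selection.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimise 0.5 * ||y - X b||^2 + lam * ||b||_1 by coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j's contribution removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j
            # soft-thresholding: coefficients with |rho| <= lam become exactly 0
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Toy data: only the first two of five predictors actually matter
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * rng.normal(size=200)
beta = lasso_cd(X, y, lam=30.0)
```

In practice you would use a tuned implementation (e.g. glmnet in R) with the penalty chosen by cross-validation; this sketch only shows the mechanism.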
46,272
How to identify which predictors should be included in a multiple regression?
It is probably important not to let the analysis drive the theory. Which variables are the best predictors should be based on previous research or, as a minimum, on a consensus of the opinions of subject matter experts. Some of the decision will rest on how large your sample is. If it is sufficiently large, you could take a subgroup and check for associations between the independent variables and the dependent variables. When you run multiple regression, you risk an error with each step of the analysis, so it is important not to just throw everything you have into the regression. If you are able to work with a subgroup, you can then verify what you think you have found with a different group for confirmation. Could you tell us a little more about your sample?
46,273
How to identify which predictors should be included in a multiple regression?
In conducting a regression analysis, it is useful to examine correlations between the independent variables to avoid the problem of multicollinearity. If you have multiple IVs that are highly correlated, this can indicate that different IVs are accounting for the same portion of variance in the dependent variable or outcome, which can bias the estimated regression coefficients. One indication of this problem is a very high $R^2$ value with very few significant IVs. In other words, having highly correlated IVs in the regression model can mask their actual relationship with the DV.

There are several remedies for the problem of multicollinearity, such as excluding one (or more) of the correlated IVs or combining the IVs (an additive approach). It is useful to obtain and check the variance inflation factor (VIF) value for each predictor, as high VIF values can indicate variables contributing to multicollinearity.

In constructing regression models, it is acceptable to exclude non-significant IVs from your model after running a regression model with all of the relevant IVs included, but the decision about whether to exclude variables from analysis is generally not based on the correlation matrix.
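Computing the VIF directly makes the diagnostic concrete: $\mathrm{VIF}_j = 1/(1 - R^2_j)$, where $R^2_j$ comes from regressing predictor $j$ on all the other predictors. A sketch (Python/numpy for illustration rather than any specific package; the toy data with a deliberately near-collinear pair are invented):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X: VIF_j = 1 / (1 - R^2_j)."""
    n, p = X.shape
    out = []
    for j in range(p):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # intercept + other IVs
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        r2 = 1.0 - (resid ** 2).sum() / ((X[:, j] - X[:, j].mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Toy data: x3 is nearly a copy of x1, while x2 is independent
rng = np.random.default_rng(3)
x1 = rng.normal(size=300)
x2 = rng.normal(size=300)
x3 = x1 + 0.1 * rng.normal(size=300)
v = vif(np.column_stack([x1, x2, x3]))
```

A common rough rule flags VIF values above 5 or 10; here the collinear pair shows values on the order of 100 while the independent predictor stays near 1.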
46,274
Iterative PCA R
As I understand your problem, the main issue is the size of the data set, not that it contains missing values (i.e. "sparse"). For such a problem, I would recommend doing a partial PCA in order to solve for a subset of leading PCs. The package irlba allows for this by performing a "Lanczos bidiagonalization". It is much faster for large matrices when you are only interested in returning a few of the leading PCs.

In the following example, I have adapted a bootstrapping technique that I discussed here into a function that incorporates this method as well as a variable sub-sampling parameter. In the function bootpca, you can define the number of variables to sample, n, the number of PCs to return, npc, and the number of iterations, B, for the sub-sampling routine. For this method, I have centered and scaled the sub-sampled matrix in order to standardize the variance of the data set and allow for comparability among the singular values of the matrix decomposition. By making a boxplot of these bootstrapped singular values, lam, you should be able to differentiate between PCs that carry signals and those that are dominated by noise.
Example

Generate data:

m = 50
n = 100
x <- (seq(m)*2*pi)/m
t <- (seq(n)*2*pi)/n

# field
Xt <- outer(sin(x), sin(t)) + outer(sin(2.1*x), sin(2.1*t)) +
  outer(sin(3.1*x), sin(3.1*t)) + outer(tanh(x), cos(t)) +
  outer(tanh(2*x), cos(2.1*t)) + outer(tanh(4*x), cos(0.1*t)) +
  outer(tanh(2.4*x), cos(1.1*t)) + tanh(outer(x, t, FUN="+")) +
  tanh(outer(x, 2*t, FUN="+"))
Xt <- t(Xt)
image(Xt)

# Noisy field
set.seed(1)
RAND <- matrix(runif(length(Xt), min=-1, max=1), nrow=nrow(Xt), ncol=ncol(Xt))
R <- RAND * 0.2 * Xt

# True field + noise field
Xp <- Xt + R
image(Xp)

Load the bootpca function:

library(irlba)
bootpca <- function(mat, n=0.5*nrow(mat), npc=10, B=40*nrow(mat)){
  lam <- matrix(NaN, nrow=npc, ncol=B)
  for(b in seq(B)){
    samp.b <- sample(nrow(mat), n, replace=TRUE)  # sub-sample rows with replacement
    mat.b <- scale(mat[samp.b,], center=TRUE, scale=TRUE)
    E.b <- irlba(mat.b, nu=npc, nv=npc)
    lam[,b] <- E.b$d
    print(paste(round(b/B*100), "%", " completed", sep=""))
  }
  lam
}

Result and plot:

# 50% of rows used in each iteration, 15 PCs computed, 999 iterations
res <- bootpca(Xp, n=0.5*nrow(Xp), npc=15, B=999)
par(mar=c(4,4,1,1))
boxplot(t(res), log="y", col=8, outpch="", ylab="Lambda [log-scale]")

It's obvious that the leading 5 PCs carry the most information, although there were technically 9 signals in the example data set. For your very large data set, you may want to use a smaller fraction of variables (i.e. rows) in each iteration, but do many iterations.
46,275
Iterative PCA R
Why don't you directly do a PCA on the full set and see where it takes you? PCA is computationally very fast, and you will be able to quickly determine how many variables seem to be important for the first few components. I have been successful with that number of variables (albeit on a smaller sample size). Alternatively, you can try an approach like regularized PCA or sparse PCA. If you are using R, take a look at the packages "elasticnet" and "mixOmics".
46,276
How do I interpret the figure output from package dlnm in R?
Interpretation of the graph in your case

Note: the y-axis is not always the relative risk as in the example given in the vignette of the dlnm package. It is the relative risk in their example only because they used mortality data and Poisson regression models. In their framework, the exponentiated regression coefficient from the Poisson model, $RR=\exp(\hat{\beta})$, is the relative risk. This is analogous to exponentiating the regression coefficients in logistic regression, which yields the odds ratio.

Can I still use the model? Yes, you can still use such a model. Let's summarize what you do:

- You use natural cubic B-splines as basis functions instead of polynomials to model the relationship between temperature and $\mathrm{CO}_{2}$ (arglag with option type="ns" instead of type="poly").
- You assume that the effect of temperature is non-linear, as you specify argvar as splines. One important thing you have to know for the interpretation of the plots is that the function crossbasis automatically centers the values at the predictor mean (i.e. the mean temperature) if not specified otherwise. This is the reference value with which the predictions are later compared in the graphics.
- You consider lags up to 12 (option lag=12 in crossbasis). (Btw: why do you suppress the warnings?)
- You calculate a GLM with Gaussian errors and the identity link function, which is equivalent to a simple linear regression (OLS). You could have used the lm function instead.

The plot that you have provided is interpreted as follows: the x-axis is the lag, and the y-axis depicts the changes in $\mathrm{CO}_{2}$ concentration associated with an increase of 10, 20 or 30°C compared to the mean temperature.
If the predicted change is 0, this means that an increase in temperature is not associated with a change in $\mathrm{CO}_{2}$ concentration compared to the $\mathrm{CO}_{2}$ concentration at mean temperature: the predicted $\mathrm{CO}_{2}$ concentration is the same at $\bar{x}_{Temp}+z$ degrees (where $z$ is any amount, say 10 or 20 degrees) and at the mean temperature $\bar{x}_{Temp}$.

This means that for an increase in temperature of 10°C, the temperature at lag 0 (in the same hour) increases the $\mathrm{CO}_{2}$ concentration compared to the mean temperature. Because you specified cumul=TRUE in crosspred, the effects are cumulative. The cumulative effects of an increase of 10°C are practically nonexistent after 4 hours compared to the mean temperature. This suggests that the non-cumulative effects are negative at lags 1-4 and null from then on. For temperature increases of 20 or 30°C, the cumulative effects on the $\mathrm{CO}_{2}$ concentration are lower in the first 1-4 hours compared to $\mathrm{CO}_{2}$ at the mean temperature. As with temperature increases of 10°C, the cumulative effects are practically nonexistent after 4 or 5 hours. Again: $\mathrm{CO}_{2}$ concentrations are the same at the mean temperature and at an increase in temperature of 20 or 30°C after 4 or 5 hours.

I think a contour plot would be easier to interpret. Try the following code:

plot(cp, xlab="Temperature", col="red", zlab="CO2", shade=0.6, main="3D graph of temperature effect")

Interpretation of the example given in the vignette of the dlnm package

First, a little something about distributed lag models. They have the form: $$ Y_{t}=\alpha + \sum_{l=0}^{K}\beta_{l}x_{t-l} + \text{other predictors} +\epsilon_{t} $$ where $K$ is the maximum lag and $x$ is a predictor. This is just fitted using a multiple linear regression. So the coefficient $\beta_{1}$ would estimate the effect of $x_{t-1}$, the value of the day before, on $Y_{t}$. 
In essence, multiple lags of the predictors are included in the model simultaneously. This obviously has the problem that the lagged predictors are highly correlated (autocorrelation). A more advanced method is the polynomial distributed lag model. It has the same basic formula as above, but the impulse-response function is forced to lie on a polynomial of degree $q$ (link to a paper for Stata): $$ \beta_{i} = a_{0} + a_{1}i + a_{2}i^2 +\ldots+a_{q}i^q $$ where $q$ is the degree of the polynomial and $i$ the lag length. Another formulation is $$ \beta_{i} = a_{0} + \sum_{j=1}^{q}a_{j}f_{j}(i) $$ where $f_{j}(i)$ is a polynomial of degree $j$ in the lag length $i$. A good introduction to the dlnm package and polynomial distributed lag models can be found here. These models are often used in studies about air pollution and health because air pollution has lagged effects on health outcomes.

Let's look at this graph from the vignette of the dlnm package (page 13): The degree of the polynomial was $q=4$ in this case, so the green line is a polynomial of 4th degree. The y-axis is the relative risk (RR) estimated via Poisson regression and the x-axis is the considered lag. The relative risk has the following interpretation: persons who were exposed have a $(RR-1)\cdot100\%$ higher/lower chance of getting the outcome (e.g. death, lung cancer, etc.) compared to people who were not exposed. If $RR>1$ this means a positive association, and $RR<1$ means a protective association. $RR=1$ means no association. We see that for every increase of $\textrm{PM}_{10}$ by 10 units ($\mu \mathrm{g}/m^{3}$), there is a $(1.001-1)\cdot100\%=0.1\%$ increase in the risk of dying at lag 0 (i.e. on the same day as the exposure). Strangely, the exposure from about 9 days ago is protective: an increase of 10 $\mu \mathrm{g}/m^{3}$ is associated with a decreased risk of dying compared to people with 10 units less exposure. We can also see that the exposure from 15 days before doesn't play a role (i.e. 
$RR\approx1$).

Let's look at the cumulative relative risk: This is the same as before, but the effects are cumulated over time (i.e. summing all contributions from the lags up to the maximum lag). The red line starts at the same point as the green line in the first graphic (i.e. $\approx1.001$). We can see that people who have been exposed for five days have an increased cumulative risk of about $(1.005-1)\cdot100\%=0.5\%$ of dying compared to non-exposed people. Because the green line goes below the relative risk of $1$ after a lag of about 5 days, the cumulative association after 15 days is nearly $1$. This means that the protective effects of $\textrm{PM}_{10}$ from lag 5 on have compensated for the harmful effects from earlier lags. Whether that is scientifically reasonable is another question.
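The unconstrained distributed lag regression described above can be sketched directly with lm(). Everything below is simulated for illustration; the coefficients, lag length and variable names are invented, not taken from the question:

```r
# Unconstrained distributed lag model: y_t = alpha + sum_l beta_l * x_{t-l} + e_t,
# fitted as an ordinary multiple regression on the lagged predictor columns.
set.seed(1)
n <- 500; K <- 3                                  # series length and maximum lag
x <- rnorm(n)
beta <- c(0.8, 0.4, 0.2, 0.1)                     # true lag coefficients beta_0..beta_3
# lag matrix: column l+1 holds x_{t-l} (the first K rows contain NAs)
X <- sapply(0:K, function(l) c(rep(NA, l), head(x, n - l)))
y <- drop(1 + X %*% beta) + rnorm(n, sd = 0.5)
fit <- lm(y ~ X)                                  # lm() drops the incomplete rows
round(coef(fit), 2)                               # roughly recovers 1 and beta_0..beta_3
```

In a polynomial distributed lag model one would additionally constrain the columns of the lag matrix through the polynomial basis, rather than estimating each $\beta_l$ freely as here.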
46,277
Are the relations in fixed, random and mixed effect models and multilevel models causal?
Whether a coefficient from a model has a causal interpretation mostly depends on the other variables included or on the way that unobserved but relevant variables are controlled for. For example, consider an earnings regression of the type $$\ln(y_{i}) = \alpha + \delta S_{i} + \gamma A_{i} + X'\beta + \epsilon$$ where the dependent variable is log earnings, $S_{i}$ is years of education, $A_{i}$ is ability and $X$ are other relevant variables that affect wages, like parental background, age, gender, etc. Assume $A_{i}$ and $S_{i}$ are correlated and that there are no other endogeneity issues or measurement error. If you can observe $S_{i}$, $A_{i}$ and $X$, then the coefficient $\delta$ has a causal interpretation, i.e. it is the causal effect of an additional year of education on earnings - holding all else constant. This ceteris paribus assumption is what makes causality.

To extend this example to your fixed effects model: if you have panel data and you don't observe $A_{i}$, you can still consistently estimate $\delta$ using fixed effects. Suppose $S_{i}$ varies over time and $A_{i}$ does not vary over time; then in $$\ln(y_{i}) = \eta + \delta S_{i} + X'\beta + \epsilon$$ the absorbing variable $\eta = \alpha + A_{i} + G_{i}$ includes all observed and unobserved variables that do not vary over time, like the intercept or $G_{i} =$ gender, place of birth, etc. So it pulls $A_{i}$ out of the error and hence removes the endogeneity problem (remember $A_{i}$ and $S_{i}$ are correlated, so if $A_{i}$ is in the error, $S_{i}$ will be correlated with the error). The problem is that $A_{i}$ is likely not to be fixed over time, as, for instance, mental capabilities and productivity diminish with old age.

In theory, I could go on providing examples for each type of your models, but I guess you get the idea. Whether or not you estimate a causal effect depends on the included (and omitted!) variables AND on the assumptions of the model. 
So see what kind of data you have at hand, what you can control for in terms of relevant variables for the relationship you are after (perhaps you don't even have an endogeneity problem), and what assumptions are the most realistic for your analysis to be credible. If you want to dig a little deeper into the topic of causal effects estimation, Mostly Harmless Econometrics by Angrist and Pischke is an excellent book. Otherwise you will find plenty of lecture notes online.
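A small simulation makes the fixed-effects point concrete. All numbers below are invented (ability raises both schooling and wages); the names $S$, $A$ and $\delta=0.1$ just mirror the notation of the answer:

```r
# Simulated panel: lny = 1 + 0.1*S + 0.5*A + noise, with A unobserved and
# correlated with S. Pooled OLS is biased; demeaning within individuals is not.
set.seed(42)
N <- 300; Tt <- 5                           # individuals and time periods
id <- rep(1:N, each = Tt)
A  <- rep(rnorm(N), each = Tt)              # time-invariant unobserved ability
S  <- 12 + A + rnorm(N * Tt)                # schooling varies over time, tied to A
lny <- 1 + 0.1 * S + 0.5 * A + rnorm(N * Tt, sd = 0.2)
coef(lm(lny ~ S))["S"]                      # pooled OLS: biased upward (around 0.35)
dm <- function(v) v - ave(v, id)            # within transformation (demeaning)
fe <- lm(dm(lny) ~ 0 + dm(S))               # A and the intercept drop out
coef(fe)                                    # close to 0.10, the causal delta
```

The demeaning step is exactly the "absorbing" of $\eta$ described above: anything constant within an individual, observed or not, is wiped out before the regression.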
46,278
Benchmark data for Random Forest evaluation [closed]
I think random forests are still mostly used in the form they were introduced by Breiman in his 2001 paper. There have been some attempts to improve them, e.g. by moving beyond majority voting (http://link.springer.com/chapter/10.1007/978-3-540-30115-8_34), but my impression is that this stuff isn't mainstream practice. You can find a good recent review of random forests in Elements of Statistical Learning (http://www-stat.stanford.edu/~tibs/ElemStatLearn/). The datasets used by Breiman can be found at http://archive.ics.uci.edu/ml/. These datasets are well-known classics. The downside is that they are not very large compared to some other datasets out there. That being said, I think the UCI datasets are a great place to start your investigations. Finally, I think there's still a lot of good work to be done on random forests; the field is far from complete. Good luck!
46,279
Benchmark data for Random Forest evaluation [closed]
One very relevant paper is Fernández-Delgado, Cernadas, Barro & Amorim, "Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?", JMLR, 2014. The authors evaluated many classifiers, among them multiple versions of Random Forests, on the entire UCI repository as of that time and find that Random Forest variants indeed perform best. It seems like specific variants of Random Forests may work better for specific classes of problems, but overall, plain vanilla Random Forests work very well indeed. Of course, the UCI repository has grown from the 121 datasets the authors used to (currently) 394 datasets (although probably not all of these are classification), so it might make sense to update that study.
46,280
Fisher's method for combining p-values - what about the lower tail?
i) First, a recommendation: use

pchisq(-2 * sum(log(pvals)), df, lower.tail = FALSE)

instead of 1 - pchisq(...) - you're likely to end up with more accuracy for small p-values. (Here pvals is the vector of p-values to be combined; note that a name like p-values is not valid in R.) To see that the two forms sometimes give different results, try this:

x <- 70; c(1 - pchisq(x, 1), pchisq(x, 1, lower.tail = FALSE))

ii) Yes, it's one-sided. Small values of the chi-square statistic indicate that the component p-values tend to be large (that is, a lack of evidence against the overall null). Imagine you were doing a t-test and the sample means were really, really close together... i.e. $|t|$ was unusually small. Would you reject the null hypothesis that they were equal because they were unusually close together? Clearly not. You might conclude something else was wrong (like one of your assumptions could be faulty, or you used a really bad test, or your calculation might be wrong, or someone fiddled the data, or ...) - but you wouldn't conclude the means were different because they were surprisingly close! Indeed, what would you do in that situation?

> t.test(x, y, var.equal=TRUE)

        Two Sample t-test

data:  x and y
t = 1e-04, df = 18, p-value = 0.9999
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.7213824  0.7214315
sample estimates:
 mean of x  mean of y
-0.2161466 -0.2161711

So there's a two-sample t-test with $p$ really close to 1 (~0.999944). What do you conclude? So now, with a goodness of fit, what kinds of things might a p-value really close to 1 tell you?
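For completeness, here is Fisher's statistic end to end, plus the numerical point from (i). The three p-values are made up purely for illustration:

```r
# Fisher's method: X2 = -2 * sum(log(p)) ~ chi-squared with 2k df under the
# overall null; the combined test is one-sided in the upper tail.
p  <- c(0.01, 0.02, 0.20)                      # k = 3 illustrative p-values
X2 <- -2 * sum(log(p))
df <- 2 * length(p)
p_comb <- pchisq(X2, df, lower.tail = FALSE)   # combined p-value
# why lower.tail = FALSE beats 1 - pchisq(...): the subtraction from 1
# cancels catastrophically once the upper tail is below machine epsilon
x <- 70
c(1 - pchisq(x, 1), pchisq(x, 1, lower.tail = FALSE))
# the first underflows to exactly 0; the second is about 6e-17
```

Note that only small p_comb values (large X2) count as evidence against the overall null; a chi-square statistic far in the lower tail would instead suggest the component p-values are suspiciously large, as discussed in (ii).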
46,281
What to do with a variable that loads equally on two factors in a factor analysis?
Using factor analysis for scale construction is a bit of an art. It is common to drop items that load to a substantial degree on more than one factor after factor rotation. That said, a few alternative ideas:

Consider whether you have extracted enough factors. Sometimes when you extract more factors, cross-loading items or items that don't load much at all can load cleanly on one factor.

If this is only the initial phase of data collection and you are planning on generating more items, or you already have a large item pool, then it makes more sense to drop cross-loading items. If this is a single shot, then you might be more reluctant to drop items.

You also need to consider what your threshold is for cross-loadings (.3, .4, .5). If you set it too high, then you might fail to identify problematic items. If you set it too low, then you may pick up cross-loadings that either reflect a little noise in the data or are more generally not going to substantively affect the purity of your factors.

Don't forget to think. Think about why the items are cross-loading. What is it about the two factors and the nature of the items that is leading to this cross-loading? There may be theoretical or other reasons why you want to model and retain cross-loading items.

References

You may want to read some of the following articles about factor analysis and scale construction: Clark and Watson's Constructing Validity: Basic Issues in Objective Scale Development. 
PDF
Gerbing and Anderson's An Updated Paradigm for Scale Development Incorporating Unidimensionality and Its Assessment PDF
Reise, Waller, and Comrey's Factor Analysis and Scale Revision PDF
Hinkin's A Review of Scale Development Practices in the Study of Organizations PDF
Ford, MacCallum, and Tait's The Application of Exploratory Factor Analysis in Applied Psychology: A Critical Review and Analysis
Fabrigar, Wegener, MacCallum, and Strahan's Evaluating the Use of Exploratory Factor Analysis in Psychological Research
Costello and Osborne's Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most From Your Data Analysis
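The flagging step discussed above (checking which items exceed a chosen cross-loading threshold after rotation) can be sketched with base R's factanal. The data, loadings and the .3 cut-off below are all illustrative; as noted, the choice of threshold is a judgment call:

```r
# Simulate five items from two factors; item i5 is built to cross-load.
# Then flag any item whose absolute loading exceeds .3 on more than one factor.
set.seed(7)
n  <- 500
f1 <- rnorm(n); f2 <- rnorm(n)
items <- cbind(
  i1 = 0.8 * f1 + rnorm(n, sd = 0.5),
  i2 = 0.7 * f1 + rnorm(n, sd = 0.5),
  i3 = 0.8 * f2 + rnorm(n, sd = 0.5),
  i4 = 0.7 * f2 + rnorm(n, sd = 0.5),
  i5 = 0.5 * f1 + 0.5 * f2 + rnorm(n, sd = 0.5)   # deliberate cross-loader
)
fa <- factanal(items, factors = 2, rotation = "varimax")
L  <- unclass(fa$loadings)                        # 5 x 2 loading matrix
cross <- rowSums(abs(L) > 0.3) > 1                # > .3 on both factors?
names(which(cross))                               # flags the cross-loading item
```

Before dropping whatever gets flagged, it is worth re-running with one more factor extracted, per the first suggestion above.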
46,282
Assumptions and contraindications of conjoint analysis
Conjoint analysis is not an analysis method per se, but rather a family of choice-based methods for collecting preference data. These methods include (among others) best-worst scaling (MaxDiff), full profile rating, binary choice or discrete choice experiments, graded pairs and constant sum paired comparisons. They are typically based on random utility theory, which holds that latent utility ($U$) is a combination of a systematic component ($v$) and a stochastic component ($\epsilon$): $$ U=v+\epsilon $$ The systematic component, $v$, is assumed to be a function of the attribute levels presented in the conjoint choice tasks, i.e. $v=f(x_1,...,x_n)$. $\epsilon$ is not observable and makes the analysis probabilistic from the perspective of the analyst. The assumed distribution of $\epsilon$ determines the model used to analyze the choice data. For example, in a binary choice analysis, assuming a normal distribution leads to a probit model, while an EV1 distribution leads to a logit. The coefficients from the choice model represent the part-worth utility associated with a 1-unit change in each attribute level.

If you want more information on CA, you might be interested in this book: Applied Choice Analysis. For more detail on discrete choice specifically, the second edition of Kenneth Train's fantastic book is available online for free: Discrete Choice Methods with Simulation.

Obviously the validity of these estimates depends on the quality of the experimental design, but the theory of experimental design is another matter altogether. If you're interested in this, you should visit http://support.sas.com/resources/papers/tnote/tnote_marketresearch.html.
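The EV1-to-logit link can be verified with a few lines of simulation. Everything here (the single price attribute and the part-worths 2 and -0.8) is invented for illustration:

```r
# U = v + eps with independent EV1 (Gumbel) errors: the difference of two
# Gumbel draws is logistic, so the binary choice follows a logit model and
# glm() recovers the part-worth utilities.
set.seed(123)
n <- 5000
price  <- runif(n, 1, 5)                 # one illustrative attribute level
v1     <- 2 - 0.8 * price                # systematic utility of option 1
v0     <- 0                              # option 0 normalized to zero
gumbel <- function(m) -log(-log(runif(m)))  # standard EV1 draws
choice <- as.integer(v1 + gumbel(n) > v0 + gumbel(n))
fit <- glm(choice ~ price, family = binomial)
round(coef(fit), 2)                      # close to the true part-worths (2, -0.8)
```

Assuming normal rather than Gumbel errors here and fitting family = binomial(link = "probit") instead would give the probit counterpart mentioned above.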
Assumptions and contraindications of conjoint analysis
Conjoint analysis is not an analysis method per se, but rather a family of choice-based methods for collecting preference data. These methods include (among others) best-worst scaling (MaxDiff), full
Assumptions and contraindications of conjoint analysis Conjoint analysis is not an analysis method per se, but rather a family of choice-based methods for collecting preference data. These methods include (among others) best-worst scaling (MaxDiff), full profile rating, binary choice or discrete choice experiments, graded pairs and constant sum paired comparisons. They are typically based on random utility theory, which holds that latent utility ($U$) is a combination of a systematic component ($v$) and a stochastic component ($ \epsilon $): $$ U=v+\epsilon $$ The systematic component, $v$, is assumed to be a function of the attribute levels presented in the conjoint choice tasks, i.e. $ v=f(x_1,...,x_n)$. $\epsilon$ is not observable and makes the analysis probabilistic from the perspective of the analyst. The assumed distribution of $\epsilon$ determines the model used to analyze the choice data. For example, in a binary choice analysis, assuming a normal distribution leads to a probit model, while an EV1 distribution leads to a logit. The coefficients from the choice model represent the part-worth utility associated with a 1-unit change in each attribute level. If you want more information on CA, you might be interested in this book... Applied Choice Analysis. For more detail on discrete choice specifically, the second edition of Kenneth Train's fantastic book is available online for free... Discrete Choice Methods with Simulation Obviously the validity of these estimates depends on the quality of the experimental design, but the theory of experimental design is another matter altogether. If you're interested in this, you should visit http://support.sas.com/resources/papers/tnote/tnote_marketresearch.html.
Assumptions and contraindications of conjoint analysis Conjoint analysis is not an analysis method per se, but rather a family of choice-based methods for collecting preference data. These methods include (among others) best-worst scaling (MaxDiff), full
46,283
Assumptions and contraindications of conjoint analysis
And to add to DarkPrivateer's excellent answer, almost all conjoint studies are based on factorial designs. In your case I take it that your 2*2 factorial design creates options something like: Red Nike Red Adidas Blue Nike Blue Adidas The respondents' evaluation task (choice, ranking, best-worst) gives a score for each option. Depending on the task, the underlying assumptions about the task & choice process & error term, and whether data are treated at the individual level or aggregated to create a single model for a whole group, the scores are simply regressed against the components (attributes) of the options (here colour and brand) to infer the value of each attribute level. As with regression, ANOVA, etc., the technique is robust to moderate violations of the underlying statistical assumptions. If your problem is a 2*2 factorial as I have presented, then the final data-gathering process may be the same whether you call it conjoint analysis or not. Conjoint analysis is more often used when there are many attributes, each with several attribute levels. This usually calls for fractional factorial designs, or some sort of dynamically changing evaluation set if you want to get fancy. If your team is advocating using a branded conjoint analysis technique such as Sawtooth then they're just using a sledgehammer to crack a walnut. Or they don't understand what they're doing, but they do know how to press buttons on a black box.
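As a sketch of this idea (Python rather than a conjoint package, and the respondent scores are hypothetical): in a balanced full 2*2 factorial, the part-worth gap for each attribute can be read straight off the mean scores, which is exactly what the regression on dummy-coded attributes reduces to:

```python
from itertools import product

# Full 2x2 factorial: every colour-brand combination
colours = ["Red", "Blue"]
brands = ["Nike", "Adidas"]
profiles = list(product(colours, brands))

# Hypothetical respondent scores for each profile
scores = {("Red", "Nike"): 8, ("Red", "Adidas"): 6,
          ("Blue", "Nike"): 5, ("Blue", "Adidas"): 3}

def level_mean(level):
    """Mean score over all profiles containing the given attribute level."""
    vals = [s for p, s in scores.items() if level in p]
    return sum(vals) / len(vals)

# Part-worth gaps: difference in mean score between the two levels of each attribute
red_effect = level_mean("Red") - level_mean("Blue")
nike_effect = level_mean("Nike") - level_mean("Adidas")
```

With only two attributes this hand calculation is trivial, which is the point of the walnut remark: the heavy machinery only pays off with many attributes and fractional designs.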
Assumptions and contraindications of conjoint analysis
And to add to DarkPrivateer's excellent answer, almost all conjoint studies are based on factorial designs. In your case I take it that your 2*2 factorial design creates options something like: Red Ni
Assumptions and contraindications of conjoint analysis And to add to DarkPrivateer's excellent answer, almost all conjoint studies are based on factorial designs. In your case I take it that your 2*2 factorial design creates options something like: Red Nike Red Adidas Blue Nike Blue Adidas The respondents' evaluation task (choice, ranking, best-worst) gives a score for each option. Depending on the task, the underlying assumptions about the task & choice process & error term, and whether data are treated at the individual level or aggregated to create a single model for a whole group, the scores are simply regressed against the components (attributes) of the options (here colour and brand) to infer the value of each attribute level. As with regression, ANOVA, etc., the technique is robust to moderate violations of the underlying statistical assumptions. If your problem is a 2*2 factorial as I have presented, then the final data-gathering process may be the same whether you call it conjoint analysis or not. Conjoint analysis is more often used when there are many attributes, each with several attribute levels. This usually calls for fractional factorial designs, or some sort of dynamically changing evaluation set if you want to get fancy. If your team is advocating using a branded conjoint analysis technique such as Sawtooth then they're just using a sledgehammer to crack a walnut. Or they don't understand what they're doing, but they do know how to press buttons on a black box.
Assumptions and contraindications of conjoint analysis And to add to DarkPrivateer's excellent answer, almost all conjoint studies are based on factorial designs. In your case I take it that your 2*2 factorial design creates options something like: Red Ni
46,284
Muthén's robust weighted least squares factoring method for binary items...in R?
What you want is in the lavaan package, the function is named sem. Try writing an argument estimator = "WLSMV". For more information read this.
Muthén's robust weighted least squares factoring method for binary items...in R?
What you want is in the lavaan package, the function is named sem. Try writing an argument estimator = "WLSMV". For more information read this.
Muthén's robust weighted least squares factoring method for binary items...in R? What you want is in the lavaan package, the function is named sem. Try writing an argument estimator = "WLSMV". For more information read this.
Muthén's robust weighted least squares factoring method for binary items...in R? What you want is in the lavaan package, the function is named sem. Try writing an argument estimator = "WLSMV". For more information read this.
46,285
Skewness of a mixture density
Skewness is a vague concept which allows its formalisation in several ways. The most popular measure of skewness is the one you mention, which was proposed more than 100 years ago. However, there are better (more interpretable) measures nowadays. The validity of using a measure of skewness for multimodal distributions has been discussed at length, since its interpretation becomes unclear. This is the case for finite mixtures. If your mixture looks (or is) unimodal, then you can use this value to understand a bit how asymmetric it is. In R, this quantity is implemented in the library moments, in the command skewness(). The moments of a mixture $X$, with density $g=\sum_{j=1}^n \pi_j f_j$, can be calculated as $E[X^k] = \sum_{j=1}^n \pi_j\int x^k f_j(x)dx$. A numerical solution in R: # Sampling from a 2-gaussian mixture gaussmix <- function(n,m1,m2,s1,s2,alpha) { I <- runif(n)<alpha rnorm(n,mean=ifelse(I,m1,m2),sd=ifelse(I,s1,s2)) } # A simulated sample samp <- gaussmix(100000,0,0,1,1,0.5) library(moments) # Approximate skewness and kurtosis from the simulated sample skewness(samp) kurtosis(samp)
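The moment formula $E[X^k] = \sum_{j=1}^n \pi_j\int x^k f_j(x)dx$ can also be evaluated exactly instead of by simulation. A sketch in Python (the Gaussian raw moments used are standard; the example mixtures are hypothetical):

```python
def mixture_skewness(pis, mus, sigmas):
    """Exact skewness of a Gaussian mixture via E[X^k] = sum_j pi_j E_j[X^k]."""
    # Raw moments of N(mu, s^2): E[X] = mu, E[X^2] = mu^2 + s^2,
    # E[X^3] = mu^3 + 3*mu*s^2
    m1 = sum(p * m for p, m, s in zip(pis, mus, sigmas))
    m2 = sum(p * (m ** 2 + s ** 2) for p, m, s in zip(pis, mus, sigmas))
    m3 = sum(p * (m ** 3 + 3 * m * s ** 2) for p, m, s in zip(pis, mus, sigmas))
    var = m2 - m1 ** 2
    third_central = m3 - 3 * m1 * m2 + 2 * m1 ** 3   # E[(X - E X)^3]
    return third_central / var ** 1.5

sym = mixture_skewness([0.5, 0.5], [0.0, 0.0], [1.0, 1.0])    # symmetric mixture
asym = mixture_skewness([0.8, 0.2], [0.0, 3.0], [1.0, 1.0])   # right-shifted minority component
```

A symmetric mixture gives exactly zero skewness, while shifting one component to the right produces a positive value, matching the intuition about the longer right tail.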
Skewness of a mixture density
Skewness is a vague concept which allows its formalisation in several ways. The most popular measure of skewness is the one you mention, which was proposed more than 100 years ago. However, there are
Skewness of a mixture density Skewness is a vague concept which allows its formalisation in several ways. The most popular measure of skewness is the one you mention, which was proposed more than 100 years ago. However, there are better (more interpretable) measures nowadays. The validity of using a measure of skewness for multimodal distributions has been discussed at length, since its interpretation becomes unclear. This is the case for finite mixtures. If your mixture looks (or is) unimodal, then you can use this value to understand a bit how asymmetric it is. In R, this quantity is implemented in the library moments, in the command skewness(). The moments of a mixture $X$, with density $g=\sum_{j=1}^n \pi_j f_j$, can be calculated as $E[X^k] = \sum_{j=1}^n \pi_j\int x^k f_j(x)dx$. A numerical solution in R: # Sampling from a 2-gaussian mixture gaussmix <- function(n,m1,m2,s1,s2,alpha) { I <- runif(n)<alpha rnorm(n,mean=ifelse(I,m1,m2),sd=ifelse(I,s1,s2)) } # A simulated sample samp <- gaussmix(100000,0,0,1,1,0.5) library(moments) # Approximate skewness and kurtosis from the simulated sample skewness(samp) kurtosis(samp)
Skewness of a mixture density Skewness is a vague concept which allows its formalisation in several ways. The most popular measure of skewness is the one you mention, which was proposed more than 100 years ago. However, there are
46,286
What does 'same distribution' mean?
It is more general than this. It means that $F_X(t)={\mathbb P}(X\leq t) = {\mathbb P}(Y\leq t) = F_Y(t)$, for all $t$. Then, in particular, if the mean and variance exist, then their values coincide for these variables. The functions $F_X(t)={\mathbb P}(X\leq t)$ and $F_Y(t)= {\mathbb P}(Y\leq t)$ are termed the distribution functions of the variables $X$ and $Y$, respectively. See http://en.wikipedia.org/wiki/Random_variable#Equality_in_distribution
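A small illustration in Python (using the standard normal for concreteness): if $X \sim N(0,1)$ and $Y = -X$, then $X$ and $Y$ have the same distribution function even though $Y \neq X$ pointwise:

```python
from statistics import NormalDist

Z = NormalDist(mu=0.0, sigma=1.0)

def F_X(t):
    # CDF of X ~ N(0,1)
    return Z.cdf(t)

def F_Y(t):
    # CDF of Y = -X:  P(-X <= t) = P(X >= -t) = 1 - F_X(-t),
    # which equals F_X(t) by the symmetry of the normal density
    return 1.0 - Z.cdf(-t)

diffs = [abs(F_X(t) - F_Y(t)) for t in (-2.0, -0.5, 0.0, 1.3)]
```

So equality in distribution is a statement about the CDFs, not about the random variables themselves: here $X$ and $Y$ are perfectly negatively correlated yet identically distributed.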
What does 'same distribution' mean?
It is more general than this. It means that $F_X(t)={\mathbb P}(X\leq t) = {\mathbb P}(Y\leq t) = F_Y(t)$, for all $t$. Then, in particular, if the mean and variance exist, then their values coincide
What does 'same distribution' mean? It is more general than this. It means that $F_X(t)={\mathbb P}(X\leq t) = {\mathbb P}(Y\leq t) = F_Y(t)$, for all $t$. Then, in particular, if the mean and variance exist, then their values coincide for these variables. The functions $F_X(t)={\mathbb P}(X\leq t)$ and $F_Y(t)= {\mathbb P}(Y\leq t)$ are termed the distribution functions of the variables $X$ and $Y$, respectively. See http://en.wikipedia.org/wiki/Random_variable#Equality_in_distribution
What does 'same distribution' mean? It is more general than this. It means that $F_X(t)={\mathbb P}(X\leq t) = {\mathbb P}(Y\leq t) = F_Y(t)$, for all $t$. Then, in particular, if the mean and variance exist, then their values coincide
46,287
What does 'same distribution' mean?
Strictly speaking, it means that the CDF is the same. That is, the type of distribution, the mean, the variance, and all parameters are all the same, if they are well-defined. For most of the commonly seen distributions, like the normal distribution, if you can verify that the type of distribution and all parameters are the same, then the distributions are the same. However, be aware that the mean and variance can be undefined; for example, see the Cauchy distribution. In fact, the PDF and any other parameters can be undefined (credit to Whuber). Note, however, that even when two variables have the same distribution, the correlation between them can still be arbitrary (or undefined).
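To see why the Cauchy mean is undefined, note that $E|X|$ diverges. A sketch in Python comparing a numerical truncated integral with the closed form $\int_{-a}^{a}|x|\,f(x)\,dx=\ln(1+a^2)/\pi$, which grows without bound as $a$ increases:

```python
import math

def cauchy_pdf(x):
    # Standard Cauchy density
    return 1.0 / (math.pi * (1.0 + x * x))

def truncated_abs_mean(a, n=200_000):
    """Trapezoid-rule approximation of the integral of |x| f(x) over [-a, a]."""
    h = 2.0 * a / n
    total = 0.5 * (abs(-a) * cauchy_pdf(-a) + abs(a) * cauchy_pdf(a))
    for i in range(1, n):
        x = -a + i * h
        total += abs(x) * cauchy_pdf(x)
    return h * total

def closed_form(a):
    # integral of |x| f(x) over [-a, a] = ln(1 + a^2) / pi  ->  infinity as a grows
    return math.log(1.0 + a * a) / math.pi
```

Because $E|X|=\infty$, the defining integral for $E[X]$ does not converge absolutely, so the mean (and hence the variance) simply does not exist, even though the density is perfectly symmetric about zero.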
What does 'same distribution' mean?
Strictly speaking, it means that the CDF is the same. That is, the type of distribution, the mean, the variance, and all parameters are all the same, if they are well-defined. For most of the commonly
What does 'same distribution' mean? Strictly speaking, it means that the CDF is the same. That is, the type of distribution, the mean, the variance, and all parameters are all the same, if they are well-defined. For most of the commonly seen distributions, like the normal distribution, if you can verify that the type of distribution and all parameters are the same, then the distributions are the same. However, be aware that the mean and variance can be undefined; for example, see the Cauchy distribution. In fact, the PDF and any other parameters can be undefined (credit to Whuber). Note, however, that even when two variables have the same distribution, the correlation between them can still be arbitrary (or undefined).
What does 'same distribution' mean? Strictly speaking, it means that the CDF is the same. That is, the type of distribution, the mean, the variance, and all parameters are all the same, if they are well-defined. For most of the commonly
46,288
Does adjustment completely remove the effect of the confounding variables?
I don't have a complete answer but can provide some thoughts: 1) Adjustment does remove the confounding effect, but only if the underlying causal pathways are correctly specified. There are occasions where adjustment can cause bias rather than decreasing biases. For more information on this issue, search for collider bias and directed acyclic graphs. 2) Adjustment does remove the confounding effect, but only if the operationalization is correct. In other words, you have chosen the correct variable to represent the construct. There are multiple reasons why age may not be a good indicator of aging (the actual construct that is related to mortality.) For example, fatal heart disease can be lifestyle-related. It can also be related to immunological response and how the body mediates inflammation. All these factors can differ substantially within the same age. On the reporting side, under-reporting of one's age tends to go up with age, introducing some error that is correlated with age as well. If you control for age thinking that you have controlled for age-related factors, chances are this assumption is over-ambitious. It's always important to know what the control variables really mean. 3) There are also other dynamics which can cause adjustment alone to be insufficient. For example, interaction between age and other variable(s) in the model can bias the estimate of age. A non-linear relationship between age and mortality can also make simple adjustment for age alone an imperfect method. My guess is that in epidemiology, it's better to say "no" whenever someone asks if something can completely remove whatever... perhaps except "can a randomized controlled trial completely remove biases?" Then "theoretically yes."
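Point 3 can be illustrated deterministically. A sketch in Python (the quadratic age-risk relationship and its coefficient are hypothetical) showing that a purely linear adjustment for age leaves systematic age-related structure behind in the residuals — residual confounding:

```python
# Hypothetical: true risk is quadratic in age, but we "adjust" only linearly.
ages = list(range(20, 81))
risk = [0.001 * a * a for a in ages]            # true (nonlinear) age effect

# Ordinary least squares fit of risk ~ age, slope and intercept by hand
n = len(ages)
mean_a = sum(ages) / n
mean_r = sum(risk) / n
beta = (sum((a - mean_a) * (r - mean_r) for a, r in zip(ages, risk))
        / sum((a - mean_a) ** 2 for a in ages))
alpha = mean_r - beta * mean_a

# Residual age effect left over after the linear adjustment
resid = [r - (alpha + beta * a) for a, r in zip(ages, risk)]
max_leftover = max(abs(e) for e in resid)
```

The residuals are largest at the extremes of the age range: anything else correlated with being very young or very old would still be confounded with the exposure of interest, despite age being "in the model".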
Does adjustment completely remove the effect of the confounding variables?
I don't have a complete answer but can provide some thoughts: 1) Adjustment does remove the confounding effect, but only if the underlying causal pathways are correctly specified. There are occasions
Does adjustment completely remove the effect of the confounding variables? I don't have a complete answer but can provide some thoughts: 1) Adjustment does remove the confounding effect, but only if the underlying causal pathways are correctly specified. There are occasions where adjustment can cause bias rather than decreasing biases. For more information on this issue, search for collider bias and directed acyclic graphs. 2) Adjustment does remove the confounding effect, but only if the operationalization is correct. In other words, you have chosen the correct variable to represent the construct. There are multiple reasons why age may not be a good indicator of aging (the actual construct that is related to mortality.) For example, fatal heart disease can be lifestyle-related. It can also be related to immunological response and how the body mediates inflammation. All these factors can differ substantially within the same age. On the reporting side, under-reporting of one's age tends to go up with age, introducing some error that is correlated with age as well. If you control for age thinking that you have controlled for age-related factors, chances are this assumption is over-ambitious. It's always important to know what the control variables really mean. 3) There are also other dynamics which can cause adjustment alone to be insufficient. For example, interaction between age and other variable(s) in the model can bias the estimate of age. A non-linear relationship between age and mortality can also make simple adjustment for age alone an imperfect method. My guess is that in epidemiology, it's better to say "no" whenever someone asks if something can completely remove whatever... perhaps except "can a randomized controlled trial completely remove biases?" Then "theoretically yes."
Does adjustment completely remove the effect of the confounding variables? I don't have a complete answer but can provide some thoughts: 1) Adjustment does remove the confounding effect, but only if the underlying causal pathways are correctly specified. There are occasions
46,289
Likelihood ratio tests on linear mixed effect models
You just use an ANOVA test for this, like Stéphane and the help file of the package suggest! >fm1 <- lmer(intdiff ~ stress * vowel_group + (1|speaker) + (1|word), data) >fm2 <- lmer(intdiff ~ stress * vowel_group + (1|speaker), data) >anova(fm1,fm2) It doesn't matter whether you set the model with the fewest df first or second in the anova command; for the interpretation of the results, however, note that a significant difference means the larger model (here fm1, which carries the extra (1|word) term) fits significantly better and is preferred, while a non-significant difference favours the simpler model on parsimony grounds.
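Under the hood, anova(fm1, fm2) computes a likelihood-ratio statistic. A sketch in Python with hypothetical log-likelihood values (1 df here, matching the single dropped (1|word) term; note that for a variance component tested on the boundary of its parameter space the plain chi-square reference distribution is known to be conservative):

```python
import math

def lrt_pvalue_df1(loglik_full, loglik_reduced):
    """Likelihood-ratio test with 1 degree of freedom."""
    lr = 2.0 * (loglik_full - loglik_reduced)
    # chi-square(1) survival function via the normal: P(chi2_1 > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(lr / 2.0))
    return lr, p

# Hypothetical log-likelihoods for the larger and the smaller model
lr, p = lrt_pvalue_df1(-1230.4, -1237.9)
```

A small p-value means the extra random effect improves the fit by more than chance alone would predict, so the larger model is retained.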
Likelihood ratio tests on linear mixed effect models
You just use an ANOVA test for this like Stéphane and the help file of the package suggest! >fm1 <- lmer(intdiff ~ stress * vowel_group + (1|speaker) + (1|word), data) >fm2 <- lmer(intdiff ~ stress *
Likelihood ratio tests on linear mixed effect models You just use an ANOVA test for this, like Stéphane and the help file of the package suggest! >fm1 <- lmer(intdiff ~ stress * vowel_group + (1|speaker) + (1|word), data) >fm2 <- lmer(intdiff ~ stress * vowel_group + (1|speaker), data) >anova(fm1,fm2) It doesn't matter whether you set the model with the fewest df first or second in the anova command; for the interpretation of the results, however, note that a significant difference means the larger model (here fm1, which carries the extra (1|word) term) fits significantly better and is preferred, while a non-significant difference favours the simpler model on parsimony grounds.
Likelihood ratio tests on linear mixed effect models You just use an ANOVA test for this like Stéphane and the help file of the package suggest! >fm1 <- lmer(intdiff ~ stress * vowel_group + (1|speaker) + (1|word), data) >fm2 <- lmer(intdiff ~ stress *
46,290
Identify the parameters of the model $Y=\exp(\beta_0 + \beta_1 X + \beta_2 Z)+u_i$
There appears to be some discrepancy here regarding what a proof of identification entails and what you are trying to prove. Let me rewrite your model as $$ Y_i = \exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) + U_i $$ where $\boldsymbol{X}_i = [X_{1i},\ldots, X_{Ki}]'$, together with the assumption that $\mathbb{E}(U_i \mid \boldsymbol{X}_i)=0$. It appears you are trying to prove that $$ \mathbb{E}(Y_i \mid \boldsymbol{X}_i) = \exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) $$ in order to prove identification. This is tautological from your model, and does not amount to proving that the parameters are identified. Let us first work through some definitions of identification, isolating the exact form of identification that applies to nonlinear regression models, where only the conditional mean, and not the entire conditional distribution is specified. Parametric identification of distributions The most primitive notion of identification, is the notion of identification of distributions, usually just called identification. If the conditional density $Y_i \mid \boldsymbol{X}_i$ is written as $$ Y_i \mid \boldsymbol{X}_i \sim f(Y_i \mid \boldsymbol{X}_i; \boldsymbol{\theta}^0) $$ Then the conditional distribution is said to be identified if $$ \boldsymbol{\theta}^0 \neq \boldsymbol{\theta} \implies f(Y_i \mid \boldsymbol{X}_i; \boldsymbol{\theta}^0) \neq f(Y_i \mid \boldsymbol{X}_i; \boldsymbol{\theta}) $$ Parametric identification of the conditional mean function In the case of regression models, we are typically interested in the identification of the parameters that enter the conditional mean. 
Let the conditional mean function be $$ \mathbb{E}(Y_i \mid \boldsymbol{X}_i) = m(\boldsymbol{X}_i;\boldsymbol{\beta}^0) $$ The parameters of the conditional mean function are said to be identified, or indeed, the conditional mean model is said to be identified if $$ \boldsymbol{\beta}^0 \neq \boldsymbol{\beta} \implies m(\boldsymbol{X}_i;\boldsymbol{\beta}^0) \neq m(\boldsymbol{X}_i;\boldsymbol{\beta}) $$ In your case, this is the notion of identification that you are interested in. Note that identification does not imply conditional mean identification. Conditional mean identification in the exponential regression model Now that the definitions are in place, we can solve the problem at hand, that is, specify primitive conditions for the conditional mean identification in the exponential regression model. Recall that the exponential regression model is a conditional mean model such that $$ m(\boldsymbol{X}_i;\boldsymbol{\beta}) = \exp(\boldsymbol{X}_i'\boldsymbol{\beta} ) $$ Make the assumption that $\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')$ is nonsingular (this is essentially what your positive variance assumptions are). 
Then, we have that if $\boldsymbol{\beta}^0 \neq \boldsymbol{\beta}$, $$ \begin{alignat}{1} &\mathbb{E}((\boldsymbol{X}_i'(\boldsymbol{\beta}^0 - \boldsymbol{\beta}))^2) &= (\boldsymbol{\beta}^0 -\boldsymbol{\beta})'\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')(\boldsymbol{\beta}^0 -\boldsymbol{\beta}) \\ &&>0 \\ \implies & \boldsymbol{X}_i'(\boldsymbol{\beta}^0 -\boldsymbol{\beta}) &\neq 0 \text{ on a set of positive measure} \\ \implies & \boldsymbol{X}_i'\boldsymbol{\beta}^0 &\neq \boldsymbol{X}_i'\boldsymbol{\beta} \text{ on a set of positive measure }\\ \implies & \exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) &\neq \exp(\boldsymbol{X}_i'\boldsymbol{\beta}) \text{ on a set of positive measure} \\ \end{alignat} $$ Thus, you have shown that under the nonsingularity of the $\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')$ matrix, the conditional mean function (that is, its parameters) are identified. Added explanation: The theory of integration (Lebesgue, in this case) states that if $\mathbb{E}(f)>0$ and $ f\geq 0$, then, $f>0$ on a set of positive measure. So, if I can prove that $\mathbb{E}((\boldsymbol{X}_i' (\boldsymbol{\beta}^0 -\boldsymbol{\beta}))^2)>0$, I can claim that $(\boldsymbol{X}_i' (\boldsymbol{\beta}^0 -\boldsymbol{\beta}))^2>0$ on a set of positive measure, in turn that $\boldsymbol{X}_i' (\boldsymbol{\beta}^0 -\boldsymbol{\beta})\neq0$ on a set of positive measure. Note that all this work has to be done to rule out the one case that could spoil the party, that is when the expectation and hence its argument is zero almost everywhere. That would mean that the conditional mean functions corresponding to the two parameter sets are the same almost everywhere. Now, $\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')$ is positive semidefinite by construction, so nonsingularity makes it positive definite: $\boldsymbol{x}'\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')\boldsymbol{x}>0$ for every nonzero vector $\boldsymbol{x}$. 
So, I write the expectation of interest $\mathbb{E}((\boldsymbol{X}_i' (\boldsymbol{\beta}^0 -\boldsymbol{\beta}))^2)$ as $(\boldsymbol{\beta}^0 -\boldsymbol{\beta})'\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')({\boldsymbol{\beta}}^0-\boldsymbol{\beta})$, from where, using the fact that $\boldsymbol{\beta}\neq\boldsymbol{\beta}^0$, and the nonsingularity of $\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')$, I have the desired positivity of the expectation. This is where you should end the answer to your homework question. From here, I discuss the links of conditional mean identification with the notion of asymptotic identification by estimators. (Asymptotic) parameter identification Recall (from Newey & McFadden (1994), pg. 2124) that the parameters of a model are (asymptotically) identified by an estimator (in this case, the NLS estimator) if the limit of the objective function has a unique minimum (maximum) at the truth. We show that if the conditional mean of the distribution is identified in the sense above, then the parameters of the model are asymptotically identified by the NLS estimator. Consider the nonlinear least squares objective function for the model at hand, where $\boldsymbol{Y}=[Y_1,\ldots, Y_n]$, and $\mathbf{X}=[\boldsymbol{X}_1, \ldots, \boldsymbol{X}_n]'$. 
$$ \begin{align} q_n(\boldsymbol{Y},\mathbf{X}; \boldsymbol{\beta}) &= \sum_{i=1}^n\left(Y_i - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)^2 \\ &= \sum_{i=1}^n\left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) + U_i - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)^2 \\ &=\sum_{i=1}^n \left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)^2+\sum_{i=1}^n U_i^2 \\ &\quad+ 2\sum_{i=1}^n U_i\left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right) \end{align} $$ Dividing by $n$, each of these sample averages (provided the corresponding expectations are well defined) converges in probability to its expectation, using appropriate laws of large numbers. A simple expedient that allows considerable simplifications is to write $\mathbb{E}_{U_i, \boldsymbol{X}_i}\equiv \mathbb{E}_{\boldsymbol{X}_i}\mathbb{E}_{U_i\mid \boldsymbol{X}_i}$, where the subscripts denote marginal densities with respect to which the expectations are taken, and the equivalence is by the law of iterated expectations. Then, $$ \begin{align} n^{-1}q_n(\boldsymbol{Y},\mathbf{X}; \boldsymbol{\beta}) &\to^p q_\infty(\boldsymbol{\beta})\\ &=\mathbb{E}_{\boldsymbol{X}_i}\left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)^2 + \mathbb{E}_{U_i\mid \boldsymbol{X}_i}(U_i^2) \\ &\quad+ 2\,\mathbb{E}_{\boldsymbol{X}_i}\left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)\underbrace{\mathbb{E}_{U_i\mid \boldsymbol{X}_i}(U_i) }_{=0} \end{align} $$ From here, it is easy to see that the last term is zero, and the second term is independent of the chosen parameter value. The first term is uniquely minimized to zero, when $\boldsymbol{\beta} = \boldsymbol{\beta}^0$. As is obvious, the last step assumes conditional mean identification.
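The NLS objective can also be minimised directly. A minimal Gauss-Newton sketch in Python (hypothetical noiseless data, so the minimiser recovers the true parameters; note the 2x2 normal-equations matrix is invertible precisely because the regressor values vary, echoing the nonsingularity condition on $\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')$):

```python
import math

def nls_exp_fit(xs, ys, b0=0.0, b1=0.0, iters=50):
    """Gauss-Newton for the NLS objective sum_i (y_i - exp(b0 + b1*x_i))^2."""
    for _ in range(iters):
        m = [math.exp(b0 + b1 * x) for x in xs]       # fitted conditional means
        r = [y - mi for y, mi in zip(ys, m)]          # residuals
        # Jacobian rows are (m_i, x_i * m_i); build the 2x2 normal equations
        a11 = sum(mi * mi for mi in m)
        a12 = sum(mi * mi * x for mi, x in zip(m, xs))
        a22 = sum(mi * mi * x * x for mi, x in zip(m, xs))
        g1 = sum(mi * ri for mi, ri in zip(m, r))
        g2 = sum(mi * ri * x for mi, ri, x in zip(m, r, xs))
        det = a11 * a22 - a12 * a12                   # nonzero because the x_i vary
        b0 += (a22 * g1 - a12 * g2) / det
        b1 += (a11 * g2 - a12 * g1) / det
    return b0, b1

xs = [i / 10 for i in range(11)]                      # regressor with variation
ys = [math.exp(0.5 - 1.0 * x) for x in xs]            # noiseless: true beta = (0.5, -1.0)
b0_hat, b1_hat = nls_exp_fit(xs, ys)
```

With a degenerate regressor (all $x_i$ equal) the determinant would vanish and the two coefficients could not be separated — a finite-sample analogue of the identification failure discussed above.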
Identify the parameters of the model $Y=\exp(\beta_0 + \beta_1 X + \beta_2 Z)+u_i$
There appears to be some discrepancy here regarding what a proof of identification entails and what you are trying to prove. Let me rewrite your model as $$ Y_i = \exp(\boldsymbol{X}_i'\boldsymbol{\b
Identify the parameters of the model $Y=\exp(\beta_0 + \beta_1 X + \beta_2 Z)+u_i$ There appears to be some discrepancy here regarding what a proof of identification entails and what you are trying to prove. Let me rewrite your model as $$ Y_i = \exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) + U_i $$ where $\boldsymbol{X}_i = [X_{1i},\ldots, X_{Ki}]'$, together with the assumption that $\mathbb{E}(U_i \mid \boldsymbol{X}_i)=0$. It appears you are trying to prove that $$ \mathbb{E}(Y_i \mid \boldsymbol{X}_i) = \exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) $$ in order to prove identification. This is tautological from your model, and does not amount to proving that the parameters are identified. Let us first work through some definitions of identification, isolating the exact form of identification that applies to nonlinear regression models, where only the conditional mean, and not the entire conditional distribution is specified. Parametric identification of distributions The most primitive notion of identification, is the notion of identification of distributions, usually just called identification. If the conditional density $Y_i \mid \boldsymbol{X}_i$ is written as $$ Y_i \mid \boldsymbol{X}_i \sim f(Y_i \mid \boldsymbol{X}_i; \boldsymbol{\theta}^0) $$ Then the conditional distribution is said to be identified if $$ \boldsymbol{\theta}^0 \neq \boldsymbol{\theta} \implies f(Y_i \mid \boldsymbol{X}_i; \boldsymbol{\theta}^0) \neq f(Y_i \mid \boldsymbol{X}_i; \boldsymbol{\theta}) $$ Parametric identification of the conditional mean function In the case of regression models, we are typically interested in the identification of the parameters that enter the conditional mean. 
Let the conditional mean function be $$ \mathbb{E}(Y_i \mid \boldsymbol{X}_i) = m(\boldsymbol{X}_i;\boldsymbol{\beta}^0) $$ The parameters of the conditional mean function are said to be identified, or indeed, the conditional mean model is said to be identified if $$ \boldsymbol{\beta}^0 \neq \boldsymbol{\beta} \implies m(\boldsymbol{X}_i;\boldsymbol{\beta}^0) \neq m(\boldsymbol{X}_i;\boldsymbol{\beta}) $$ In your case, this is the notion of identification that you are interested in. Note that identification does not imply conditional mean identification. Conditional mean identification in the exponential regression model Now that the definitions are in place, we can solve the problem at hand, that is, specify primitive conditions for the conditional mean identification in the exponential regression model. Recall that the exponential regression model is a conditional mean model such that $$ m(\boldsymbol{X}_i;\boldsymbol{\beta}) = \exp(\boldsymbol{X}_i'\boldsymbol{\beta} ) $$ Make the assumption that $\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')$ is nonsingular (this is essentially what your positive variance assumptions are). 
Then, we have that if $\boldsymbol{\beta}^0 \neq \boldsymbol{\beta}$, $$ \begin{alignat}{1} &\mathbb{E}((\boldsymbol{X}_i'(\boldsymbol{\beta}^0 - \boldsymbol{\beta}))^2) &= (\boldsymbol{\beta}^0 -\boldsymbol{\beta})'\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')(\boldsymbol{\beta}^0 -\boldsymbol{\beta}) \\ &&>0 \\ \implies & \boldsymbol{X}_i'(\boldsymbol{\beta}^0 -\boldsymbol{\beta}) &\neq 0 \text{ on a set of positive measure} \\ \implies & \boldsymbol{X}_i'\boldsymbol{\beta}^0 &\neq \boldsymbol{X}_i'\boldsymbol{\beta} \text{ on a set of positive measure }\\ \implies & \exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) &\neq \exp(\boldsymbol{X}_i'\boldsymbol{\beta}) \text{ on a set of positive measure} \\ \end{alignat} $$ Thus, you have shown that under the nonsingularity of the $\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')$ matrix, the conditional mean function (that is, its parameters) are identified. Added explanation: The theory of integration (Lebesgue, in this case) states that if $\mathbb{E}(f)>0$ and $ f\geq 0$, then, $f>0$ on a set of positive measure. So, if I can prove that $\mathbb{E}((\boldsymbol{X}_i' (\boldsymbol{\beta}^0 -\boldsymbol{\beta}))^2)>0$, I can claim that $(\boldsymbol{X}_i' (\boldsymbol{\beta}^0 -\boldsymbol{\beta}))^2>0$ on a set of positive measure, in turn that $\boldsymbol{X}_i' (\boldsymbol{\beta}^0 -\boldsymbol{\beta})\neq0$ on a set of positive measure. Note that all this work has to be done to rule out the one case that could spoil the party, that is when the expectation and hence its argument is zero almost everywhere. That would mean that the conditional mean functions corresponding to the two parameter sets are the same almost everywhere. Now, $\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')$ is positive semidefinite by construction, so nonsingularity makes it positive definite: $\boldsymbol{x}'\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')\boldsymbol{x}>0$ for every nonzero vector $\boldsymbol{x}$. 
So, I write the expectation of interest $\mathbb{E}((\boldsymbol{X}_i' (\boldsymbol{\beta}^0 -\boldsymbol{\beta}))^2)$ as $(\boldsymbol{\beta}^0 -\boldsymbol{\beta})'\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')({\boldsymbol{\beta}}^0-\boldsymbol{\beta})$, from where, using the fact that $\boldsymbol{\beta}\neq\boldsymbol{\beta}^0$, and the nonsingularity of $\mathbb{E}(\boldsymbol{X}_i\boldsymbol{X}_i')$, I have the desired positivity of the expectation. This is where you should end the answer to your homework question. From here, I discuss the links of conditional mean identification with the notion of asymptotic identification by estimators. (Asymptotic) parameter identification Recall (from Newey & McFadden (1994), pg. 2124) that the parameters of a model are (asymptotically) identified by an estimator (in this case, the NLS estimator) if the limit of the objective function has a unique minimum (maximum) at the truth. We show that if the conditional mean of the distribution is identified in the sense above, then the parameters of the model are asymptotically identified by the NLS estimator. Consider the nonlinear least squares objective function for the model at hand, where $\boldsymbol{Y}=[Y_1,\ldots, Y_n]$, and $\mathbf{X}=[\boldsymbol{X}_1, \ldots, \boldsymbol{X}_n]'$. 
$$ \begin{align} q_n(\boldsymbol{Y},\mathbf{X}; \boldsymbol{\beta}) &= \sum_{i=1}^n\left(Y_i - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)^2 \\ &= \sum_{i=1}^n\left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) + U_i - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)^2 \\ &=\sum_{i=1}^n \left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)^2+\sum_{i=1}^n U_i^2 \\ &\quad+ 2\sum_{i=1}^n U_i\left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right) \end{align} $$ Dividing by $n$, and given that each of the expectations is well defined, the sample averages converge to their expectations in probability, using appropriate laws of large numbers. A simple expedient that allows considerable simplifications is to write $\mathbb{E}_{U_i, \boldsymbol{X}_i}\equiv \mathbb{E}_{\boldsymbol{X}_i}\mathbb{E}_{U_i\mid \boldsymbol{X}_i}$, where the subscripts denote marginal densities with respect to which the expectations are taken, and the equivalence is by the law of iterated expectations. Then, $$ \begin{align} n^{-1}q_n(\boldsymbol{Y},\mathbf{X}; \boldsymbol{\beta}) &\to^p q_\infty(\boldsymbol{\beta})\\ &=\mathbb{E}_{\boldsymbol{X}_i}\left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)^2 + \mathbb{E}_{\boldsymbol{X}_i}\mathbb{E}_{U_i\mid \boldsymbol{X}_i}(U_i^2) \\ &\quad+ 2\,\mathbb{E}_{\boldsymbol{X}_i}\left[\left(\exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)\underbrace{\mathbb{E}_{U_i\mid \boldsymbol{X}_i}(U_i) }_{=0}\right] \end{align} $$ From here, it is easy to see that the last term is zero, and the second term does not depend on the chosen parameter value. The first term is uniquely minimized, at the value zero, when $\boldsymbol{\beta} = \boldsymbol{\beta}^0$. As is obvious, the last step assumes conditional mean identification.
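As a numerical illustration of this asymptotic identification, here is a small simulation sketch in R (my own illustration, with made-up parameter values); `nls()` minimizes the finite-sample objective $q_n$:

```r
# Simulation sketch: NLS recovers beta in y = exp(x'beta) + u
# when E(X_i X_i') is nonsingular and E(U_i | X_i) = 0.
set.seed(42)
n <- 5000
x2 <- rnorm(n); x3 <- rnorm(n)      # regressors (plus a constant)
beta0 <- c(0.2, 0.5, -0.3)          # true parameter values (made up)
y <- exp(beta0[1] + beta0[2] * x2 + beta0[3] * x3) + rnorm(n, sd = 0.5)
fit <- nls(y ~ exp(b1 + b2 * x2 + b3 * x3),
           data = data.frame(y, x2, x3),
           start = list(b1 = 0, b2 = 0, b3 = 0))
coef(fit)  # close to (0.2, 0.5, -0.3)
```

With a larger $n$ the estimates concentrate ever more tightly around $\boldsymbol{\beta}^0$, as the uniqueness of the minimum of $q_\infty$ suggests.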
46,291
Which logit or probit model should I use for multiple response / dependent variables?
You have two decent suggestions, but I don't think either of them is optimal. If you turn your individual, five-element vectors into a single, ordinal scalar, you will lose information. This can be acceptable if it's necessary, but if a better way is possible, you may want to avoid it. Multivariate generalized linear models treat your response as a single point in a multidimensional space, rather than five points ordered in time. Multivariate methods are typically used / understood for cases where you have five different kinds of measurements (here binary) that are all related to each other, but I gather you have a sequence of five instances of the same kind of measurement. It would be better to fit a model that is designed for that. Fortunately, there are models that are designed exactly for this type of situation. You will want to use a Generalized Linear Mixed effects Model or Generalized Estimating Equations. Which you should choose depends on the question you want to ask: GLiMMs provide information on the effects of the covariates for the individual study units, whereas the GEE provides information on the effects of the covariates for the population average. There are several threads on CV that discuss these: I provide a fairly conceptual explanation here: Difference between generalized linear models generalized linear mixed models in SPSS; there is also an explanation here: What is the difference between generalized estimating equations and GLMM; and a little more mathematical explanation here: When to use generalized estimating equations vs. mixed effects models? Regarding whether to use the logit link or the probit link, I discussed that fairly extensively here: Difference between logit and probit models. (Actually, the answer there is a little more fundamental in nature, so it may be worth reading that one first.)
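To make this concrete, here is a minimal R sketch of both approaches (my own illustration, not from the original answer). It assumes the data have been reshaped to long format with one row per subject-occasion, a hypothetical binary response `resp`, a predictor `x`, and a subject identifier `id`; the simulated responses here are generated independently, so this shows syntax rather than a realistic correlation structure:

```r
library(lme4)     # for the GLMM (glmer)
library(geepack)  # for the GEE (geeglm)

# Hypothetical long-format data: 300 subjects x 5 occasions
set.seed(1)
d <- data.frame(id   = rep(1:300, each = 5),
                time = rep(1:5, times = 300),
                x    = rnorm(1500))
d$resp <- rbinom(1500, 1, plogis(0.5 * d$x))

# Subject-specific (conditional) effects: random-intercept logistic GLMM
m_glmm <- glmer(resp ~ x + time + (1 | id), data = d, family = binomial)

# Population-average (marginal) effects: GEE with exchangeable working correlation
m_gee <- geeglm(resp ~ x + time, id = id, data = d,
                family = binomial, corstr = "exchangeable")
```

Swapping `family = binomial` for `binomial(link = "probit")` gives the probit versions of both models.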
46,292
Which logit or probit model should I use for multiple response / dependent variables?
Actually, I don't believe that either logit or probit regression is needed here. First, I would reduce the Y matrix to a simple 300x1 column vector of scores. This R code, for example, will reduce each row of Y to a number between -3 and +3, where larger values correspond to "more negative, later": f <- function(r) sum(r * c(-2, -1, 0, +1, +2)) Z <- apply(Y, 1, f) Then, use linear regression to model those scores based on your predictors. model <- lm.fit(X, Z) (Here, X is your 300x8 matrix of predictor values, not the 5x300 matrix mentioned in your first paragraph.) The coefficients of the regression will have the interpretation you desire: larger values indicate stronger odds of "more negative, later". If you really prefer the logistic model, the R code becomes model <- glm.fit(X, (Z + 3)/6, family=binomial()) (note the rescaling of Z to the unit interval, which the binomial family requires). The question for you is simply which model works better for your application. The application does not strike me as intrinsically categorical; rather, you constructed the Y matrix to be categorical.
46,293
Which logit or probit model should I use for multiple response / dependent variables?
You could try multivariate generalized linear models, if you wish to follow a regression approach. See the SABRE package and http://www.amazon.com/Multivariate-Generalized-Linear-Mixed-Models/dp/1439813264.
46,294
Identifying fraudulent questionnaires
This is a fairly large topic in social psychology and questionnaire design. Here are some ideas: The person could be faking it, either good or bad. People do this in order to appear "good" to the person doing the study. There are scales to detect this sort of faking, such as the Crowne-Marlowe scale. These essentially ask questions to which virtually no one could truthfully answer "yes" (e.g. "I have never told a lie in my life"). Often, people designing questionnaires will ask the same question in different ways. One well-known issue is that people will give different age answers if you ask "How old are you?" and "What is your birth date?" The latter has been found to be more accurate. Another type of pattern is to answer all the questions with one answer on multiple choice questionnaires. One way to detect this is to have some questions that are reverse coded. Then someone who answers (say) "nearly all the time" to both "I am happy" and "I am sad" may be suspect. You can also look at correlations among the questions and then identify people who have very different patterns. Of course, none of these are fool-proof. But they are ways to investigate the issue.
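Two of these checks, straight-lining and inconsistent answers to reverse-coded items, are easy to automate. A small R sketch (my own illustration; the response matrix and the choice of reverse-coded items are hypothetical):

```r
# Hypothetical Likert responses (rows = respondents, columns = items on a 1-5 scale);
# items 3 and 5 are reverse-coded versions of the others.
responses <- rbind(
  c(5, 4, 2, 5, 1),   # plausible respondent
  c(5, 5, 5, 5, 5),   # straight-liner: the same answer everywhere
  c(4, 5, 1, 4, 2)    # plausible respondent
)
reversed <- c(3, 5)

# Straight-lining: zero variance across a respondent's answers
straight_liners <- apply(responses, 1, var) == 0

# Inconsistency: after reverse-scoring, all items should roughly agree,
# so a large within-respondent spread suggests careless responding
scored <- responses
scored[, reversed] <- 6 - scored[, reversed]  # reverse a 1..5 scale
inconsistency <- apply(scored, 1, sd)

straight_liners
inconsistency
```

Note that a straight-liner has zero raw variance but, after reverse-scoring, a large spread, so the two flags catch different patterns and are worth examining together.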
46,295
Lognormal distribution from world bank quintiles PPP data
Here is an example of quick and dirty R code to illustrate what Michael suggested. Define the quantiles available: q <- c(0.1,0.2,0.4,0.6,0.8,0.9) Create artificial data and add some noise: data <- jitter(qlnorm(q)) Create the function to minimise: fitfun <- function(p) sum(abs(data - qlnorm(q, p[1], p[2]))) Run the optimiser with an initial guess for the parameters of the log-normal distribution: opt <- optim(c(0.1, 1.1), fitfun) The fitted parameters are in opt$par. Display the fit visually: aa <- seq(0, 0.95, by=0.01) plot(aa, qlnorm(aa, opt$par[1], opt$par[2]), type="l") points(q, data) Note, I intentionally plotted only up to the 95% quantile, since the log-normal distribution is unbounded, i.e. the 100% quantile is infinite. Usual caveats apply; real-life examples might look much uglier than this one, i.e. the fit might be much worse. Also try the Singh-Maddala distribution instead of the log-normal; it works better for income distributions.
46,296
Lognormal distribution from world bank quintiles PPP data
I'm giving another answer, since more details about the data were given. From the initial question it seemed that some quantiles were observed, but that is not the case. The data are calculated in the following way. Calculate the total income of the whole population. Divide the population into income groups. Calculate the total income of the population in the groups defined in the previous step. Report for each group the proportion of the total income in the group relative to the total income of the whole population. Suppose the population income is distributed according to an unknown distribution function $F$. For the data the following income groups are defined: Population with income in a range of $[0,F^{-1}(0.1))$ Population with income in a range of $[F^{-1}(0.1),F^{-1}(0.2))$ Population with income in a range of $[F^{-1}(0.2),F^{-1}(0.4))$ Population with income in a range of $[F^{-1}(0.4),F^{-1}(0.6))$ Population with income in a range of $[F^{-1}(0.6),F^{-1}(0.8))$ Population with income in a range of $[F^{-1}(0.8),F^{-1}(0.9))$ Population with income in a range of $[F^{-1}(0.9),\infty)$ For each of these ranges $[l_r,u_r)$ the following proportion is reported: $$\frac{\int_{l_r}^{u_r}x\,dF(x)}{\int_{0}^{\infty}x\,dF(x)}.$$ To see that this is the group's share of total income, multiply the numerator and the denominator by the total population $N$: the numerator then equals the number of people in the range, $n_r$, times their average income, and the denominator equals $N$ times the overall average income. Since the ranges defined are quantiles, the proportions $n_r/N$ are known, i.e. for the first two and the last two ranges the proportion is equal to 0.1, for the rest 0.2. The integral in the numerator can be expressed in a more convenient form: $$\int_{l_r}^{u_r}x\,dF(x)=\int_{F(l_r)}^{F(u_r)}F^{-1}(u)\,du$$ The most obvious way to fit the data would be to integrate $F^{-1}$ numerically over a given range (or calculate the integrals analytically, which might be a challenge).
Then calculate the proportions and fit them using your criterion of choice, least squares, least absolute deviations, etc. Note that one proportion is redundant since the proportions sum to one. Another caveat is that you need to know average income of the population, which is not given in the data.
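For the log-normal case specifically, these quantile-function integrals have a closed form: the Lorenz curve of a log-normal is $L(q)=\Phi(\Phi^{-1}(q)-\sigma)$, so the model-implied income shares depend only on $\sigma$. A short R sketch (my own illustration, with a hypothetical $\sigma=1$):

```r
# Lorenz curve of a log-normal: L(q) = pnorm(qnorm(q) - sigma)
lorenz <- function(q, sigma) pnorm(qnorm(q) - sigma)

# Bin boundaries matching the groups above (bottom 10%, next 10%, two 20% bins, ...)
breaks <- c(0, 0.1, 0.2, 0.4, 0.6, 0.8, 0.9, 1)

# Model-implied share of total income in each group, for a hypothetical sigma = 1
shares <- diff(lorenz(breaks, sigma = 1))
round(shares, 3)
sum(shares)  # the shares sum to 1 by construction
```

Matching these model-implied shares to the reported ones (by least squares, say) yields an estimate of $\sigma$; the mean income, if known, then pins down $\mu$.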
46,297
Lognormal distribution from world bank quintiles PPP data
A lognormal distribution is determined by two parameters, the mean and the variance of the related normal distribution. If you have raw data you could fit a lognormal distribution by maximum likelihood. If not you can use a fit criterion such as least squares or minimum sum of absolute errors to fit the given percentiles (quantiles) to values of a lognormal fit for these percentiles.
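As a small R sketch of the least-squares idea (my own illustration, using synthetic percentiles generated from a known log-normal rather than real data):

```r
# Probabilities and 'reported' percentiles (here simulated from meanlog=1, sdlog=0.5)
p <- c(0.1, 0.25, 0.5, 0.75, 0.9)
y <- qlnorm(p, meanlog = 1, sdlog = 0.5)

# Least-squares fit of the two log-normal parameters to the given percentiles
fit <- optim(c(0, 1), function(par) sum((y - qlnorm(p, par[1], par[2]))^2))
fit$par  # approximately recovers (1, 0.5)
```

Replacing the squared differences with absolute differences gives the minimum-sum-of-absolute-errors fit mentioned above.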
46,298
Lognormal distribution from world bank quintiles PPP data
A log-normal distribution is fully defined by the pair of parameters $\mu$ and $\sigma$. Since you want to fit this distribution to your data, it's sufficient to estimate these two values. Normally, you would have access to the raw data, and would apply the standard maximum likelihood estimators (MLEs) for $\mu$ and $\sigma$, which are straightforward: $$\hat{\mu} = \frac{1}{n}\sum_i \ln(y_i) = \langle \ln y \rangle\\ \hat{\sigma}^2 = \frac{1}{n}\sum_i (\ln(y_i)-\hat{\mu})^{2} \enspace .$$ That is, $\mu$ is the mean of the logarithm of your observed data $\{y_i\}$, and $\sigma$ is the standard deviation of the logarithm of the data. But in this case, you don't have the raw data. Instead, you have some sketchy information about the cumulative distribution function (CDF). Very roughly, you know the fraction of the distribution that lies at or below some value $y$, for some set of values $\{y_i\}$. You can still estimate the log-normal parameters (or those of any other distribution) from this kind of information, but there are subtleties. Two approaches come to mind. The first is a quick and dirty one that will not produce entirely accurate parameter estimates, but will get you close enough to get a sense of what the distribution looks like and, if you want, roughly what the Gini coefficient would be. The second is more complicated and more accurate for the kind of data you have. Quick and dirty approximation Here's the quick and dirty solution. The information you have is a "binned" version of the CDF, represented by a set of pairs $(q_i,y_i)$, where $q_i$ is the fraction of the distribution at or below the value $y_i$ (note: you said that the PPP is an average within the bin, which is technically distinct from the CDF, but for our calculation, that distinction doesn't make a difference). Now, recall that the definition of the mean is $$\langle x \rangle = \sum_i x_i \Pr(x_i)\enspace ,$$ where $\Pr(x_i)$ is the probability of observing $x_i$.
We don't have $\Pr(x)$, but we can approximate it using the binned CDF information, like this $$\hat{\mu} \approx \sum_{i=1}^k \Delta q_i\, x_i$$ where $\Delta q_i=q_{i+1} -q_i$ is the size or width of the $i$th bin, out of $k$ bins. Similarly, for the variance, the definition is $$\sigma^2 = \sum_i (x_i-\langle x \rangle)^2 \Pr(x_i)\enspace,$$ which becomes $$\hat{\sigma}^2 \approx \sum_{i=1}^{k} \Delta q_i (x_i-\hat{\mu})^2 \enspace .$$ To apply these to your data, you'll need to let $x_i=\ln y_i$ since you're working with the log-normal distribution, rather than the normal (or Gaussian) distribution. Coding up these estimators should be fairly easy. In my numerical experiments with these estimators, I consistently get slight errors in the estimates relative to the underlying or "population" values I used to generate synthetic log-normal data. If you use these with your data, you should not treat the estimated values as being highly accurate. To get those, you'd need to apply a more mathematically sophisticated approach, which I'll sketch for you now. Maximum likelihood approach The more complicated and more accurate solution is to derive the maximum likelihood parameter estimate for the particular representation of the log-normal distribution you have, i.e., the binned CDF. The definition of the log-normal PDF is $$\Pr(x) = \frac{1}{x\sigma\sqrt{2\pi}}{\rm e}^{-\frac{(\ln x - \mu)^2}{2\sigma^2} } \enspace ,$$ and the CDF is $$\Pr(X\leq x) = F(x) = \frac{1}{2}\left(1+{\rm erf}\left( \frac{\ln x - \mu}{\sigma\sqrt{2}} \right) \right) \enspace ,$$ where $\textrm{erf}()$ is the error function, and where we let $F(x)$ be a short-hand representation for the CDF. (Normally, we would say $F(x\,|\,\mu,\sigma)$ to indicate that $F$ depends on your parameter choices, but I'm going to drop that notation henceforth; just remember that it's implied.) This is relevant because you want to assume your quantile data were drawn from a binned version of this distribution.
If $F(x)$ is the CDF, i.e., the integral of $\Pr(x)$ from $-\infty$ to $x$, then let $F(x\,|\,a,b)$ be the integral of $\Pr(x)$ from $a$ to $b$. (Mathematically, $F(x\,|\,a,b)=F(b)-F(a)$.) The log-likelihood of your observed quantile information is then $$\ln \mathcal{L} = \sum_{i=1}^k \ln F(x\,|\,y_i,y_{i+1}) = \sum_{i=1}^k \ln\left[F(y_{i+1})-F(y_i)\right]\enspace ,$$ where the $y_i$ are the bin boundaries. The more sophisticated approach would be to estimate $\mu$ and $\sigma$ by maximizing this function over these parameters. This would give you the maximum likelihood estimate for your log-normal model, given the observed information you have. For arbitrary choices of $\{q_i\}$, an analytic solution for the MLE is not possible, but for regularly spaced choices of the bin boundaries, it may be. Regardless, however, you may always numerically maximize the function (which many numerical software packages can do for you, if you whisper the right words to them). What makes this approach more complicated is that you need to get the mathematics correct when you code up the numerical routine to do the estimation with the data. If the accuracy of your answers is really important, then this approach might be worth the extra effort.
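The quick-and-dirty moment estimates are easy to check on synthetic data. A small R sketch (my own illustration; the bin representatives are taken at the bin midpoints of a known log-normal with $\mu=0$, $\sigma=1$, so the exact answers are known):

```r
# Bin boundaries on the probability scale (matching the World Bank groups)
q <- c(0, 0.1, 0.2, 0.4, 0.6, 0.8, 0.9, 1)
# Hypothetical bin representatives: log-normal quantiles at the bin midpoints
mid <- (head(q, -1) + tail(q, -1)) / 2
y <- qlnorm(mid, meanlog = 0, sdlog = 1)

dq <- diff(q)            # bin widths, Delta q_i
x  <- log(y)             # work on the log scale, x_i = ln y_i
mu_hat     <- sum(dq * x)
sigma2_hat <- sum(dq * (x - mu_hat)^2)
c(mu_hat, sqrt(sigma2_hat))  # rough estimates of (mu, sigma); exact values are (0, 1)
```

The estimate of $\sigma$ comes out noticeably below 1, illustrating the "slight errors" of the binned approximation; the full binned-likelihood approach avoids that bias at the cost of more work.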
Lognormal distribution from world bank quintiles PPP data
A log-normal distribution is fully defined by the pair of parameters $\mu$ and $\sigma$. Since you want to fit this distribution to your data, it's sufficient to estimate these two values. Normally, y
Lognormal distribution from world bank quintiles PPP data A log-normal distribution is fully defined by the pair of parameters $\mu$ and $\sigma$. Since you want to fit this distribution to your data, it's sufficient to estimate these two values. Normally, you would have access to the raw data, and would apply the standard the maximum likelihood estimators (MLEs) for $\mu$ and $\sigma$, which are straightforward: $$\hat{\mu} = \frac{1}{n}\sum_i \ln(y_i) = \langle \ln y \rangle\\ \hat{\sigma}^2 = \frac{1}{n}\sum_i (\ln(y_i)-\hat{\mu})^{2} \enspace .$$ That is, $\mu$ is the mean of the logarithm of your observed data $\{y_i\}$, and $\sigma$ is the standard deviation of the logarithm of the data. But in this case, you don't have the raw data. Instead, you have some sketchy information about the cumulative distribution function (CDF). Very roughly, what you know the fraction of the distribution $\Pr(y)$ that is below some $y$ for some set of values $\{y_i\}$. You can still estimate the log-normal parameters (or those of any other distribution) from this kind of information, but there are subtleties. Two approaches come to mind. The first is a quick and dirty one that will not produce entirely accurate parameter estimates, but will get you close enough to get a sense of what the distribution looks like and, if you want, roughly what the Gini coefficient would be. The second is more complicated and more accurate for the kind of data you have. Quick and dirty approximation Here's the quick and dirty solution. The information you have is a "binned" version of the CDF, represented by a set of pairs $(q_i,y_i)$, where $q_i$ is the fraction of the distribution at or below the value $y_i$ (note: you said that the PPP is an average within the bin, which is technically distinct from the CDF, but for our calculation, that distinction doesn't make a difference). 
Now, recall that the definition of the mean is $$\langle x \rangle = \sum_i x_i \Pr(x_i)\enspace ,$$ where $\Pr(x_i)$ is the probability of observing $x_i$. We don't have $\Pr(x)$, but we can approximate it using the binned CDF information, like this $$\hat{\mu} \approx \sum_{i=1}^k \Delta q_i \ln x_i$$ where $\Delta q_i=q_{i+1} -q_i$ is the size or width of the $i$th bin, out of $k$ bins. Similarly, for the standard deviation, the definition is $$\sigma = \sum_i (x_i-\langle x \rangle)^2 \Pr(x_i)\enspace,$$ which becomes $$\hat{\sigma} \approx \sum_{i=1}^{k} \Delta q_i (x_i-\hat{\mu})^2 \enspace .$$ To apply these to your data, you'll need to let $x_i=\ln y_i$ since you're working with the log-normal distribution, rather than the normal (or Gaussian) distribution. Coding up these estimators should be fairly easy. In my numerical experiments with these estimators, I consistently get slight errors in the estimates relative to the underlying or "population" values I used to generate synthetic log-normal data. If you use these with your data, you should not treat the estimated values as being highly accurate. To get those, you'd need to apply a more mathematically sophisticated approach, which I'll sketch for you now. Maximum likelihood approach The more complicated and more accurate solution is to derive the maximum likelihood parameter estimate for the particular representation of the log-normal distribution you have, i.e., the binned CDF. The definition of the log-normal PDF is $$\Pr(x) = \frac{1}{x\sigma\sqrt{2\pi}}{\rm e}^{-\frac{(\ln x - \mu)^2}{2\sigma^2} } \enspace ,$$ and the CDF is $$\Pr(x<X) = F(x) = \frac{1}{2}\left(1+{\rm erf}\left( \frac{\ln x - \mu}{\sigma\sqrt{2}} \right) \right) \enspace ,$$ where $\textrm{erf}()$ is the error function, and where we let $F(x)$ be a short-hand representation for the CDF. 
(Normally, we would say $F(x\,|\,\mu,\sigma)$ to indicate that $F$ depends on your parameter choices, but I'm going to drop that notation henceforth; just remember that it's implied.) This is relevant because you want to assume your quantile data were drawn from a binned version of this distribution. If $F(x)$ is the CDF, i.e., the integral of $\Pr(x)$ from $-\infty$ to $x$, then let $F(x\,|\,a,b)$ be the integral of $\Pr(x)$ from $a$ to $b$. (Mathematically, $F(x\,|\,a,b)=F(b)-F(a)$.) The log-likelihood of your observed quantile information is then $$\ln \mathcal{L} = \sum_{i=1}^k \ln F(x_i\,|\,q_i,q_{i+1})\enspace .$$ The more sophisticated approach would be to estimate $\mu$ and $\sigma$ by maximizing this function over these parameters. This would give you the maximum likelihood estimate for your log-normal model, given the observed information you have. For arbitrary choices of $\{q_i\}$, an analytic solution for the MLE is not possible, but for regularly spaced choices of the bin boundaries, it may be. Regardless, however, you may always numerically maximize the function (which many numerical software packages can do for you, if you whisper the right words to them). What makes this approach more complicated is that you need to get the mathematics correct when you code up the numerical routine to do the estimation with the data. If the accuracy of your answers is really important, then this approach might be worth the extra effort.
Performing multiple linear regressions, in Excel, that have a common x-intercept?
There are several straightforward ways to do this in Excel. Perhaps the simplest uses LINEST to fit the lines conditional on a trial value of the x-intercept. One of the outputs of this function is the mean squared residual. Use Solver to find the x-intercept minimizing the mean squared residual. If you take some care in controlling Solver--especially by constraining the x-intercept within reasonable bounds and giving it a good starting value--you ought to get excellent estimates. The fiddly part involves setting up the data in the right way. We can figure this out by means of a mathematical expression for the implicit model. There are five groups of data: let's index them by $k$ ranging from $1$ to $5$ (from bottom to top in the plot). Each data point can then be identified by means of a second index $j$ as the ordered pair $x_{kj}, y_{kj}$. (It appears that $x_{kj} = x_{k'j}$ for any two indexes $k$ and $k'$, but this is not essential.) In these terms the model supposes there are five slopes $\beta_k$ and an x-intercept $\alpha$; that is, $y_{kj}$ should be closely approximated by $\beta_k (x_{kj}-\alpha)$. The combined LINEST/Solver solution minimizes the sum of squares of the discrepancies. Alternatively--this will come in handy for assessing confidence intervals--we can view the $y_{kj}$ as independent draws from normal distributions having a common unknown variance $\sigma^2$ and means $\beta_k(x_{kj}-\alpha)$. This formulation, with five different coefficients and the proposed use of LINEST, suggests we should set up the data in an array where there is a separate column for each $k$ and these are immediately followed by a column for the $y_{kj}$. I worked up an example using simulated data akin to those shown in the question. 
Here is what the data array looks like:

     [B]   [C]    [D]    [E]    [F]    [G]    [H]     [I]
      k     x      1      2      3      4      5       y
    -----------------------------------------------------
      1    355   7355      0      0      0      0     636
      2    355      0   7355      0      0      0    3705
      3    355      0      0   7355      0      0    6757
      4    355      0      0      0   7355      0    9993
      5    355      0      0      0      0   7355   13092
      1    429   7429      0      0      0      0     539
     ...

The strange values 7355, 7429, etc., as well as all the zeros, are produced by formulas. The one in cell D2, for instance, is

    =IF($B2=D$1, $C2-Alpha, 0)

Here, Alpha is a named cell containing the intercept (currently set to -7000). This formula, when pasted down the full extent of the columns headed "1" through "5", puts a zero in each cell except when the value of $k$ (shown in the leftmost column) corresponds to the column heading, where it puts the difference $x_{kj}-\alpha$. This is what is needed to perform multiple linear regression with LINEST. The expression looks like

    LINEST(I2:I126, D2:H126, FALSE, TRUE)

Range I2:I126 is the column of y-values; range D2:H126 comprises the five computed columns; FALSE stipulates that the y-intercept is forced to $0$; and TRUE asks for extended statistics. The formula's output occupies a range of 6 rows by 5 columns, of which the first three rows might look like

    1.296   0.986   0.678   0.371   0.062
    0.001   0.001   0.001   0.001   0.001
    1.000  51.199    ...

Strangely (you have to put up with the bizarre when doing stats in Excel :-), the output columns correspond to the input columns in reverse order: thus, 1.296 is the estimated coefficient for column H (corresponding to $k=5$, which we have named $\beta_5$) while 0.062 is the estimated coefficient for column D (corresponding to $k=1$, which we have named $\beta_1$). Notice, in particular, the 51.199 in row 3, column 2 of the LINEST output: this is the mean sum of squares of residuals. That's what we would like to minimize. In my spreadsheet this value sits at cell U9. In eyeballing the plots, I figured the x-intercept was surely between $-20000$ and $0$. 
Solver was then set up to minimize U9 by varying $\alpha$ (named XIntercept in this sheet). It returned a reasonable result almost instantly. To see how it can perform, compare the parameters as set in the simulation against the estimates obtained in this fashion:

    Parameter    Value    Estimate
    Alpha       -10000     -9696.2
    Beta1          .05       .0619
    Beta2          .35       .3710
    Beta3          .65       .6772
    Beta4          .95       .9853
    Beta5         1.25      1.2957
    Sigma           50      51.199

Using these parameters, the fit is excellent. One can go further by computing the fit and using that to calculate the log likelihood. Solver can modify a set of parameters (initialized to the LINEST estimates) one parameter at a time to attain any desired value of the log likelihood less than the maximum value. In the usual way--by reducing the log likelihood by a quantile of a $\chi^2$ distribution--you can obtain confidence intervals for each parameter. In fact, if you want--this is an excellent way to learn how the maximum likelihood machinery works--you can skip the LINEST approach altogether and use Solver to maximize the log likelihood. However, using Solver in this "naked" way--without knowing in advance approximately what the parameter estimates should be--is risky: Solver will readily stop at a (poor) local maximum. The combination of an initial estimate, such as that afforded by guessing at $\alpha$ and applying LINEST, along with a quick application of Solver to polish these results, is much more reliable and tends to work well.
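The LINEST-plus-Solver strategy translates directly into any numerical environment: regress conditional on a trial $\alpha$, then minimize the mean squared residual over $\alpha$. Here is a sketch in Python; the simulated data, names, and group sizes are my own illustrative choices, not taken from the spreadsheet:

```python
# Mimic the spreadsheet: for a trial alpha, build the five indicator-style
# columns holding (x - alpha) and solve the no-intercept least squares
# problem (the LINEST step); then minimize the mean squared residual over
# alpha (the Solver step).  Data are simulated to resemble the question's.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
true_alpha = -10000.0
true_betas = np.array([0.05, 0.35, 0.65, 0.95, 1.25])

x = np.tile(np.linspace(300.0, 10000.0, 25), 5)   # 25 x-values per group
k = np.repeat(np.arange(5), 25)                   # group index 0..4
y = true_betas[k] * (x - true_alpha) + rng.normal(0.0, 50.0, x.size)

def mean_squared_residual(alpha):
    X = np.zeros((x.size, 5))
    X[np.arange(x.size), k] = x - alpha           # one column per group
    _, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    return rss[0] / x.size

# Constrain alpha to a plausible range, as recommended for Solver.
fit = minimize_scalar(mean_squared_residual, bounds=(-20000.0, 0.0),
                      method="bounded")
alpha_hat = fit.x
```

The minimizer plays the role of Solver, and `np.linalg.lstsq` the role of LINEST; the recovered `alpha_hat` lands close to the true common x-intercept, just as in the spreadsheet experiment.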
Performing multiple linear regressions, in Excel, that have a common x-intercept?
It is very unlikely that Excel would be able to do this easily or reliably (you should really not use Excel for any but the simplest stats, and sometimes not even then).

If you know (or think you know) what the common x-intercept is (rather than estimating it from the data), then you can subtract that value from all the x variables and do a regression without an intercept (because the lines should now go through (0,0)). You can compare that model to a model in which each line has its own intercept: if they all go through the same fixed point, then all the fitted intercepts should simultaneously be not significantly different from 0.

A quick way to get a feel for whether your x-intercepts are likely to be the same would be to reverse your x and y variables and fit the lines; now it is the y-intercepts that would be common, which is easier to test. However, this also changes the direction of the error, so it answers a somewhat different question and should probably be followed up by something more formal.

You could create bootstrap estimates of the x-intercept (computed as -b/m) and use those to assess whether the intercepts differ.

You could fit a nonlinear least squares model with a common x-intercept and compare it with a model where each line gets its own intercept to see if they are significantly different. The model would be of the form slope*(x-x0), with slope and x0 as the parameters (x0 being the x-intercept).

You could fit a similar model using Bayesian techniques as well and compare.

Any of these would be doable in R or other statistical packages.
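The nonlinear least squares suggestion above can be sketched with SciPy's `curve_fit` (playing the role the answer assigns to `nls` in R). The simulated data, starting values, and parameter names here are all illustrative assumptions:

```python
# Fit y = slope_g * (x - x0) with one slope per group g and a single shared
# x-intercept x0, via nonlinear least squares.  Data and starting values
# below are purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
true_x0 = -10000.0
true_slopes = np.array([0.05, 0.35, 0.65, 0.95, 1.25])

x = np.tile(np.linspace(300.0, 10000.0, 25), 5)      # 25 x-values per group
g = np.repeat(np.arange(5), 25)                      # group labels 0..4
y = true_slopes[g] * (x - true_x0) + rng.normal(0.0, 50.0, x.size)

def model(X, x0, s0, s1, s2, s3, s4):
    x, g = X
    slopes = np.array([s0, s1, s2, s3, s4])
    return slopes[g.astype(int)] * (x - x0)

# Starting values: a rough eyeball guess, as one would make from the plot.
p0 = (-8000.0, 0.1, 0.3, 0.6, 0.9, 1.2)
popt, pcov = curve_fit(model, (x, g), y, p0=p0)
x0_hat, slope_hats = popt[0], popt[1:]
```

`np.sqrt(np.diag(pcov))` gives approximate standard errors for the parameters, and comparing this fit's residual sum of squares against a model with a separate intercept per line (e.g., via an F-test) addresses whether a common x-intercept is tenable.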