Dataset columns (viewer summary): idx: int64 (1 to 56k); question: string (15 to 155 chars); answer: string (2 to 29.2k chars); question_cut: string (15 to 100 chars); answer_cut: string (2 to 200 chars); conversation: string (47 to 29.3k chars); conversation_cut: string (47 to 301 chars).
46,201
Multiple Regression or Separate Simple Regressions?
One way of thinking about why least squares regression (and other methods, but I'm assuming this is what you're asking about) is useful is to think about the problem of distinguishing different effects. In other words, regression allows us to determine the unique effect that X has on Y and the unique effect that Z has ...
46,202
Multiple Regression or Separate Simple Regressions?
This answer to another question, along with the other discussion, may help your understanding. A big part of it is that x and z may be correlated with each other and you need to take that relationship into account to fully understand how they relate to y. Even if x and z are perfectly orthogonal, accounting for the ...
46,203
Is a weighted average of two correlation matrices again a correlation matrix?
Here is an argument in two steps. Firstly, $M = pR + (1-p)Q$ is the variance-covariance matrix of some vector of random variables. Proof: we require to show that $M$ is positive semi-definite. We know that $R$ and $Q$ are, since they are correlation matrices. Hence for any non-zero vector $x$, $x^tRx \geq 0$ and $x^tQx...
46,204
Is a weighted average of two correlation matrices again a correlation matrix?
To be a non-degenerate correlation matrix, $pR+(1-p)Q$ must have two properties: (1) all the diagonal elements of $pR+(1-p)Q$ must be $1$; (2) $pR+(1-p)Q$ must be positive definite. Obviously (1) is met...I think you see that. Noting both $R$ and $Q$ are positive definite, (2) holds since any positive definite matrix multipl...
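Both answers above can be checked numerically. A minimal numpy sketch (the matrices here are made up for illustration) verifying that a convex combination of two correlation matrices keeps a unit diagonal and stays positive semi-definite:

```python
import numpy as np

# Two valid 3x3 correlation matrices (unit diagonal, positive definite).
R = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
Q = np.array([[1.0, -0.4, 0.1],
              [-0.4, 1.0, 0.0],
              [0.1, 0.0, 1.0]])

p = 0.3
M = p * R + (1 - p) * Q

# Property (1): the diagonal is still all ones.
print(np.diag(M))

# Property (2): all eigenvalues are non-negative, so M is PSD.
print(np.linalg.eigvalsh(M))
```

The same check works for any $p \in [0, 1]$, since the diagonal is a convex combination of ones and $x^t M x = p\,x^t R x + (1-p)\,x^t Q x \geq 0$.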
46,205
impose an intercept on lm in r [duplicate]
Something like this should do it:

fit <- lm(I(y - 9.81) ~ 0 + x1 + x2 + I(x3^2) + x4 + x5 + x6, data = data[i:(i+k), ])

Something similar should be possible in many packages. An alternative:

interc <- rep(9.81, k+1)
fit <- lm(y ~ 0 + x1 + x2 + I(x3^2) + x4 + x5 + x6 + offset(interc), data = data[i:(i+k), ])

While the coeffic...
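The same trick (subtract the known intercept from the response, then fit with no intercept) works outside R too. A Python/numpy sketch on simulated data (the coefficients and intercept 9.81 here just mirror the question, not any real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
# Simulated model: y = 9.81 + 1.5*x1 - 2.0*x2 + noise
y = 9.81 + X @ np.array([1.5, -2.0]) + rng.normal(scale=0.1, size=n)

# Impose the intercept 9.81: move it to the left-hand side and
# fit the remaining coefficients without an intercept column.
beta, *_ = np.linalg.lstsq(X, y - 9.81, rcond=None)
print(beta)  # close to [1.5, -2.0]
```

This is algebraically identical to the `I(y - 9.81) ~ 0 + ...` formula in the R answer.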
46,206
Is it valid to use quantile regression with only categorical predictors?
A quantile regression model establishes a relationship between the percentiles of a continuous outcome and a set of predictors. In the simplest situation the outcome needs to be a continuous variable, but both categorical and continuous predictors can be included. If you want to evaluate the impact of 2 dichotomous pre...
46,207
Is it valid to use quantile regression with only categorical predictors?
Binary predictors (e.g. male vs. female) and categorical variables (e.g. color) can enter into quantile regression alone or in combination with continuous predictors. Anything you can do in multiple regression, ANOVA or ANCOVA -- that is, any general linear model (GLM) -- should work with quantile regression. Petscher and Lo...
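With only categorical predictors, quantile regression at level $\tau$ effectively estimates the $\tau$-quantile within each group, because the pinball loss is minimized group by group. A minimal numpy sketch with made-up scores for two levels of a single categorical predictor:

```python
import numpy as np

# Hypothetical scores for two groups of one categorical predictor.
scores = {
    "A": np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
    "B": np.array([10.0, 20.0, 30.0]),
}

# Median regression (tau = 0.5) on group dummies alone just
# recovers the per-group medians.
medians = {g: float(np.quantile(v, 0.5)) for g, v in scores.items()}
print(medians)  # {'A': 3.0, 'B': 20.0}
```

A fitted quantile regression with dummy coding would report group B's median as an offset from group A's (here $20 - 3 = 17$).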
46,208
Should train and test datasets have similar variance?
You have to first figure out why you are splitting the data. The only reason that comes immediately to mind is that fitting the model is so laborious that you can only do it once. Otherwise, resampling methods are far better, starting with the Efron-Gong optimism bootstrap (see e.g. the R rms package) or 10-fold cros...
46,209
Should train and test datasets have similar variance?
Not necessarily. What is more important is the conditional distribution of $Y|X$ being consistent in both data sets. In other words, if $Y$ variance in the test data set is higher, it could be that $X$ variance is also higher and the fitted coefficients will explain $Y$ variance equally well. Plot Y ~ X on both data se...
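The point about the conditional distribution $Y|X$ can be sketched with simulated data: the test set below has higher $Y$ variance only because its $X$ variance is higher, and the coefficient fitted on the training set still explains the test set equally well (the noise variance of 1 is recovered):

```python
import numpy as np

rng = np.random.default_rng(42)

# Same conditional model Y = 2X + noise in both sets, but the test
# set has more spread in X, hence more spread in Y.
x_train = rng.normal(scale=1.0, size=2000)
x_test = rng.normal(scale=2.0, size=2000)
y_train = 2 * x_train + rng.normal(scale=1.0, size=2000)
y_test = 2 * x_test + rng.normal(scale=1.0, size=2000)

slope = np.sum(x_train * y_train) / np.sum(x_train**2)  # no-intercept OLS
resid_var_test = float(np.var(y_test - slope * x_test))

print(float(np.var(y_test) / np.var(y_train)))  # well above 1
print(resid_var_test)                           # close to the noise variance, 1
```

Despite the very different marginal variances of $Y$, the residual variance on the test set matches the training noise, which is what "consistent $Y|X$" means here.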
46,210
Solving linear regression with weights and constraints
You're looking for the mgcv package. With the toy data we used before, it works just fine. (I'm uncertain why rstan is so confident in its results... I'm still looking into it.)

set.seed(1880)
N <- 1500
d <- c(1/2, 2/pi, 2/3)
x <- c(2, 1, 3)
limit <- 5
d %*% x <= limit
A <- cbind(1, rnorm(N), ...
46,211
Solving linear regression with weights and constraints
Whenever I have a complicated model to fit, I usually just fit it directly in rstan because it's great at fitting highly constrained coefficients, and because it's easy to include penalties and transformations of variables. This is true even when I'm not explicitly fitting a Bayesian model. This is what I've worked up ...
46,212
MCMC packages in R
The t-walk package, implementing the t-walk algorithm, allows you to define the support for your (log-)likelihood function, if that is what you are after:

Supp: a function that takes a vector of length dim and returns TRUE if the vector is within the support of the objective and FALSE otherwise. Supp is *always* calle...
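The role of a Supp function can be sketched with a minimal random-walk Metropolis sampler in Python (the t-walk itself is a different, more elaborate algorithm; this only illustrates the support check). A proposal outside the support has density zero, so it is rejected immediately:

```python
import math
import random

random.seed(1)

def supp(x):
    # Support indicator: the target density lives on x > 0 only.
    return x > 0

def log_post(x):
    # Unnormalized log-density of an Exp(1), valid only on the support.
    return -x

x, samples = 1.0, []
lp = log_post(x)
for _ in range(5000):
    prop = x + random.gauss(0, 1.0)
    if supp(prop):  # outside the support: acceptance probability is 0
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
    samples.append(x)

print(all(supp(s) for s in samples))  # True: the chain never leaves the support
```

Rejecting out-of-support proposals (and keeping the current state) is exactly what a valid Metropolis step does when the target density is zero there.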
46,213
MCMC packages in R
You should also check out Mamba, a new MCMC package; it's not in R, but rather Julia: https://github.com/brian-j-smith/Mamba.jl It relies on the Julia Distributions package, which allows you to create your own distributions. Package documentation and examples: http://mambajl.readthedocs.org/en/latest/
46,214
MCMC packages in R
Stan allows user-defined functions (including likelihood) as part of the model's "functions" blocks. These may not be quite as fast as the language's built-in likelihoods (and they won't automatically drop constant terms), but they will still be fairly fast. The specific details of writing functions are found in the S...
46,215
PCA on train and test datasets: do I need to merge them? [duplicate]
Principal component analysis will provide you with a number of principal components $W$; these components will qualitatively represent the principal and orthogonal modes of variation in your sample. You will use (some) of these $W$ to project your original dataset $X$ to a lower dimensional subspace $T$. This is your n...
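A numpy sketch of the correct workflow on simulated data: estimate the mean and the components $W$ on the training set only, then project both sets with those same quantities (never refit on the test set):

```python
import numpy as np

rng = np.random.default_rng(7)
X_train = rng.normal(size=(100, 5))
X_test = rng.normal(size=(40, 5))

# Fit PCA on the training data only.
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
W = Vt[:2].T  # keep the top 2 components

# Project BOTH sets using the training mean and training components.
T_train = (X_train - mu) @ W
T_test = (X_test - mu) @ W

print(T_train.shape, T_test.shape)  # (100, 2) (40, 2)
```

Merging the sets before the SVD would leak test-set information into $W$, which is exactly the problem the answers above warn against.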
46,216
PCA on train and test datasets: do I need to merge them? [duplicate]
The test set should never be included in your modeling decisions, or else you lose the benefit of unfitted data. This is true for regression, PCA, or whatever other fitting technique. You want to calculate the prediction error on data "unseen" by your model.
46,217
Population parameters of a regression
The problem is with this: "I had always interpreted the betas as the partial derivative of X on Y 'in reality'." That's not always true in a model with interactions or various other forms of complexity. Take a simpler example. Assume your model is $$ E[Y] = \beta_0 + \beta_1 X + \beta_2 Z + \beta_{12} XZ $$ Here th...
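To see this concretely, take hypothetical coefficients $\beta_0=1,\ \beta_1=2,\ \beta_2=3,\ \beta_{12}=4$ (made up for illustration). Because $E[Y]$ is linear in $X$, a finite difference recovers $\partial E[Y]/\partial X = \beta_1 + \beta_{12}Z$ exactly, and the answer clearly depends on $Z$:

```python
# E[Y] = b0 + b1*X + b2*Z + b12*X*Z with made-up coefficients.
b0, b1, b2, b12 = 1.0, 2.0, 3.0, 4.0

def ey(x, z):
    return b0 + b1 * x + b2 * z + b12 * x * z

def dE_dx(x, z, h=0.5):
    # Exact here, since E[Y] is linear in X.
    return (ey(x + h, z) - ey(x, z)) / h

print(dE_dx(1.0, 0.0))  # 2.0  = b1           when Z = 0
print(dE_dx(1.0, 5.0))  # 22.0 = b1 + b12 * 5 when Z = 5
```

So $\beta_1$ alone is the partial derivative only at $Z = 0$; in general the marginal effect of $X$ is a function of $Z$.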
46,218
Population parameters of a regression
Your understanding is correct--provided we look at the model in the right way. Because the question concerns interpreting a predictive model, we may focus on its predictions and ignore the error term. The example is sufficiently general that we might as well address it directly, so consider a model of the form $$Y = \...
46,219
Random Forest - Need help understanding the rfcv function
The rfcv function creates multiple models based on the number of predictors and the "step" argument (default = 0.5). In your case you began with 9 predictors with step = 0.7, which corresponds to the first row in your output: first value = 9, second value = round(9 * 0.7) = 6, third value = round(6 * 0.7) = 4, and so ...
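The sequence of predictor counts can be reproduced in Python (rfcv itself is an R function in the randomForest package; this only mirrors its geometric step-down rule, with a guard to force a strict decrease):

```python
def rfcv_steps(n_vars, step=0.5):
    # Repeatedly keep round(step * n) predictors until one remains.
    counts = [n_vars]
    while counts[-1] > 1:
        nxt = int(round(counts[-1] * step))
        nxt = min(nxt, counts[-1] - 1)  # ensure the count strictly decreases
        counts.append(max(nxt, 1))
    return counts

print(rfcv_steps(9, step=0.7))  # [9, 6, 4, 3, 2, 1]
```

This matches the answer's walk-through: 9, then round(9 * 0.7) = 6, round(6 * 0.7) = 4, and so on down to 1.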
46,220
Textbooks pertaining to creating models?
With your background I would look to "The Elements of Statistical Learning" (Springer) by Hastie, Tibshirani, and Friedman. Another good book is A. C. Davison: "Statistical Models" (Cambridge). But the one book you REALLY, REALLY should study is this one: David A. Freedman: "Statistical Model...
46,221
Textbooks pertaining to creating models?
If you want a mixture of application and rigor, I would recommend the two Wooldridge books. One book is a graduate-level text, and the other is aimed at undergraduate students. I would try the first one given your background. There are proofs, but there are also empirical examples, with the datasets readily available. ...
46,222
Textbooks pertaining to creating models?
If you are looking for time series in finance, here is a great book: Tsay, R. S. (2010) Analysis of Financial Time Series. Third Edition. New York: Wiley.
46,223
Textbooks pertaining to creating models?
I just finished a data mining class at university and we used "Data Mining for Business Intelligence: Concepts, Techniques, and Applications in Microsoft Office Excel with XLMiner" by Shmueli, Patel, and Bruce. The professor also had readings in Hastie, Tibshirani, and Friedman, which can be found here. These gave a ...
46,224
Textbooks pertaining to creating models?
If you're interested in learning about different econometric methodologies (how to go about creating models and dealing with issues encountered) then I'd recommend the following books: Modelling Economic Series: Readings in Econometric Methodology (Advanced Texts in Econometrics) by C. W. J. Granger. Modelling Nonline...
46,225
Textbooks pertaining to creating models?
It has been three years since I wrote the question above. Here are some additional suggestions I can make: Data Analysis Using Regression and Multilevel/Hierarchical Models by Gelman and Hill (note: I believe this text will be updated into two texts within the next few years. Follow Gelman's blog for further details.)...
46,226
Textbooks pertaining to creating models?
I liked the book "The Practice of Business Statistics" as a good, verbose introduction to the application of creating models with real-world data and real-world problems. The mathematics in the book is probably elementary for your calibre/background, but I would still recommend it. Here is a good list of books which...
46,227
Way of measuring students' performance
1) The problem is that the chi-square arises because it's a sum of squares of standardized deviations of (approximately) normally distributed variables. The numerator you propose is fine - under the null hypothesis it will be small. The problem arises with the denominator. In the case of sets of Poisson (or multinomia...
46,228
Way of measuring students' performance
You can't use $\chi^2$ test here, because it is for counts (frequency) data. $E_i$ in this test is the frequency of observing value $i$. In your case it is a single score of a student, i.e. the outcome of exactly one observation. The motivation for $\chi^2$ test is that you know the probability $P_i$ of an outcome $i$,...
Way of measuring students' performance
You can't use $\chi^2$ test here, because it is for counts (frequency) data. $E_i$ in this test is the frequency of observing value $i$. In your case it is a single score of a student, i.e. the outcom
Way of measuring students' performance You can't use $\chi^2$ test here, because it is for counts (frequency) data. $E_i$ in this test is the frequency of observing value $i$. In your case it is a single score of a student, i.e. the outcome of exactly one observation. The motivation for $\chi^2$ test is that you know t...
Way of measuring students' performance You can't use $\chi^2$ test here, because it is for counts (frequency) data. $E_i$ in this test is the frequency of observing value $i$. In your case it is a single score of a student, i.e. the outcom
46,229
Way of measuring students' performance
I wonder if a simple rank sum test for stochastic dominance (or, if the assumptions of same shape and of distributions differing only with respect to central location hold, a test for median difference) would work. You have paired observations, and two measures that are not strictly normal (i.e. possible scores do not range from...
Way of measuring students' performance
I wonder if a simple rank sum test for stochastic dominance (or, if the assumptions of same shape and distributions differing only with respect to central location, test for median difference) would w
Way of measuring students' performance I wonder if a simple rank sum test for stochastic dominance (or, if the assumptions of same shape and distributions differing only with respect to central location, test for median difference) would work. You have paired observations, and two measures that are not strictly normal ...
Way of measuring students' performance I wonder if a simple rank sum test for stochastic dominance (or, if the assumptions of same shape and distributions differing only with respect to central location, test for median difference) would w
46,230
Way of measuring students' performance
Well, I'm not sure, but you could wonder if the target score can predict the actual score. I think that a positive correlation between target and actual scores is a reasonable assumption, so you could try $O_i=\alpha + \beta E_i + \varepsilon$. A toy example in R: > set.seed(123) > e <- rnorm(20, 80, 20) > range(e) [1]...
Way of measuring students' performance
Well, I'm not sure, but you could wonder if the target score can predict the actual score. I think that a positive correlation between target and actual scores is a reasonable assumption, so you could
Way of measuring students' performance Well, I'm not sure, but you could wonder if the target score can predict the actual score. I think that a positive correlation between target and actual scores is a reasonable assumption, so you could try $O_i=\alpha + \beta E_i + \varepsilon$. A toy example in R: > set.seed(123) ...
Way of measuring students' performance Well, I'm not sure, but you could wonder if the target score can predict the actual score. I think that a positive correlation between target and actual scores is a reasonable assumption, so you could
46,231
Interactions between random effects
Have you tried it? That sounds like it should be fine. set.seed(101) ## generate fully crossed design: d <- expand.grid(Year=2000:2010,Site=1:30) ## sample 70% of the site/year comb to induce lack of balance d <- d[sample(1:nrow(d),size=round(0.7*nrow(d))),] ## now get Poisson-distributed number of obs per site/year l...
Interactions between random effects
Have you tried it? That sounds like it should be fine. set.seed(101) ## generate fully crossed design: d <- expand.grid(Year=2000:2010,Site=1:30) ## sample 70% of the site/year comb to induce lack of
Interactions between random effects Have you tried it? That sounds like it should be fine. set.seed(101) ## generate fully crossed design: d <- expand.grid(Year=2000:2010,Site=1:30) ## sample 70% of the site/year comb to induce lack of balance d <- d[sample(1:nrow(d),size=round(0.7*nrow(d))),] ## now get Poisson-distr...
Interactions between random effects Have you tried it? That sounds like it should be fine. set.seed(101) ## generate fully crossed design: d <- expand.grid(Year=2000:2010,Site=1:30) ## sample 70% of the site/year comb to induce lack of
46,232
Homoscedastic and heteroscedastic data and regression models
In R when you fit a regression or glm (though GLMs are themselves typically heteroskedastic), you can check the model's variance assumption by plotting the model fit. That is, when you fit the model you normally put it into a variable from which you can then call summary on it to get the usual regression table for the...
Homoscedastic and heteroscedastic data and regression models
In R when you fit a regression or glm (though GLMs are themselves typically heteroskedastic), you can check the model's variance assumption by plotting the model fit. That is, when you fit the model
Homoscedastic and heteroscedastic data and regression models In R when you fit a regression or glm (though GLMs are themselves typically heteroskedastic), you can check the model's variance assumption by plotting the model fit. That is, when you fit the model you normally put it into a variable from which you can then...
Homoscedastic and heteroscedastic data and regression models In R when you fit a regression or glm (though GLMs are themselves typically heteroskedastic), you can check the model's variance assumption by plotting the model fit. That is, when you fit the model
46,233
Question about the error term in a simple linear regression
it seems that you're confused about relation of the sample size to CLT application. the distribution of $\epsilon_{it}$ has nothing to do with the sample size. I'm assuming that subscript $i$ refers to the subject (a person), and a subscript $t$ refers to the time of the observation. in a simple linear regression we d...
Question about the error term in a simple linear regression
it seems that you're confused about relation of the sample size to CLT application. the distribution of $\epsilon_{it}$ has nothing to do with the sample size. I'm assuming that subscript $i$ refers t
Question about the error term in a simple linear regression it seems that you're confused about relation of the sample size to CLT application. the distribution of $\epsilon_{it}$ has nothing to do with the sample size. I'm assuming that subscript $i$ refers to the subject (a person), and a subscript $t$ refers to the ...
Question about the error term in a simple linear regression it seems that you're confused about relation of the sample size to CLT application. the distribution of $\epsilon_{it}$ has nothing to do with the sample size. I'm assuming that subscript $i$ refers t
46,234
Question about the error term in a simple linear regression
Depending on the nature of the response variable, I would suggest checking out either the GLM or Tobit models. GLM for when the response is non-normal (eg counts), and Tobit if it could be normal except it is getting censored (eg negative incomes get reported as zero).
Question about the error term in a simple linear regression
Depending on the nature of the response variable, I would suggest checking out either the GLM or Tobit models. GLM for when the response is non-normal (eg counts), and Tobit if it could be normal exce
Question about the error term in a simple linear regression Depending on the nature of the response variable, I would suggest checking out either the GLM or Tobit models. GLM for when the response is non-normal (eg counts), and Tobit if it could be normal except it is getting censored (eg negative incomes get reported ...
Question about the error term in a simple linear regression Depending on the nature of the response variable, I would suggest checking out either the GLM or Tobit models. GLM for when the response is non-normal (eg counts), and Tobit if it could be normal exce
46,235
Question about the error term in a simple linear regression
The central limit theorem does not imply that the errors are Normal if you have a large data set. The CLT applies to sums of random variables (under certain other conditions). As the other poster says, you might look at generalized linear models which allow for non-normal error distributions. However, note that linea...
Question about the error term in a simple linear regression
The central limit theorem does not imply that the errors are Normal if you have a large data set. The CLT applies to sums of random variables (under other certain conditions). As the other poster say
Question about the error term in a simple linear regression The central limit theorem does not imply that the errors are Normal if you have a large data set. The CLT applies to sums of random variables (under other certain conditions). As the other poster says, you might look at generalized linear models which allow f...
Question about the error term in a simple linear regression The central limit theorem does not imply that the errors are Normal if you have a large data set. The CLT applies to sums of random variables (under other certain conditions). As the other poster say
46,236
Variance of absolute value of a rv
The general calculation for both quantities can be obtained by the application of LOTUS. For $\operatorname{var}(|X|)$, note that $$\begin{align} \operatorname{var}(|X|) &= E[|X|^2] - \left(E[|X|]\right)^2\\ &= E[X^2] - \left(E[|X|]\right)^2\\&= \operatorname{var}(X) + \left(E[X]\right)^2- \left(E[|X|]\right)^2 \end{a...
Variance of absolute value of a rv
The general calculation for both quantities can be obtained by the application of LOTUS. For $\operatorname{var}(|X|)$, note that $$\begin{align} \operatorname{var}(|X|) &= E[|X|^2] - \left(E[|X|]\ri
Variance of absolute value of a rv The general calculation for both quantities can be obtained by the application of LOTUS. For $\operatorname{var}(|X|)$, note that $$\begin{align} \operatorname{var}(|X|) &= E[|X|^2] - \left(E[|X|]\right)^2\\ &= E[X^2] - \left(E[|X|]\right)^2\\&= \operatorname{var}(X) + \left(E[X]\rig...
Variance of absolute value of a rv The general calculation for both quantities can be obtained by the application of LOTUS. For $\operatorname{var}(|X|)$, note that $$\begin{align} \operatorname{var}(|X|) &= E[|X|^2] - \left(E[|X|]\ri
46,237
Deriving confidence interval from standard error of the mean when the data are non-normal
This is somewhat tricky. There are several approaches: Assume the distribution isn't 'too far' from the normal (in a particular sense), and that the t-interval will give close to the desired coverage. The t is at least reasonably robust to mild deviations from the assumptions, so if the population distribution isn't p...
Deriving confidence interval from standard error of the mean when the data are non-normal
This is somewhat tricky. There are several approaches: Assume the distribution isn't 'too far' from the normal (in a particular sense), and that the t-interval will give close to the desired coverage
Deriving confidence interval from standard error of the mean when the data are non-normal This is somewhat tricky. There are several approaches: Assume the distribution isn't 'too far' from the normal (in a particular sense), and that the t-interval will give close to the desired coverage. The t is at least reasonably...
Deriving confidence interval from standard error of the mean when the data are non-normal This is somewhat tricky. There are several approaches: Assume the distribution isn't 'too far' from the normal (in a particular sense), and that the t-interval will give close to the desired coverage
46,238
Deriving confidence interval from standard error of the mean when the data are non-normal
If you don't know the distribution nothing can be done with 8 observations. Report your standard deviation. You can try using Chebyshev or similar inequalities but they are usually so wide that they are used only in theoretical papers; think about 95%. I know that it's fashionable to try to squeeze out as much information from d...
Deriving confidence interval from standard error of the mean when the data are non-normal
If you don't know the distribution nothing can be done with 8 observations. Report your standard deviation. You can try using chebyshev or similar inequalities but they are usually so wide that used o
Deriving confidence interval from standard error of the mean when the data are non-normal If you don't know the distribution nothing can be done with 8 observations. Report your standard deviation. You can try using chebyshev or similar inequalities but they are usually so wide that used only in theoretical papers thin...
Deriving confidence interval from standard error of the mean when the data are non-normal If you don't know the distribution nothing can be done with 8 observations. Report your standard deviation. You can try using chebyshev or similar inequalities but they are usually so wide that used o
46,239
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics
Since you are a tutor, any knowledge is always for a good cause. So I will provide some bounds for the MLE. We have arrived at $$(1-\lambda x_{(n)})e^{\lambda x_{(n)} } + \lambda n x_{(n)} - 1 = 0$$ with $x_{(n)}\equiv M_n$. So $$(1-\hat \lambda x_{(n)})e^{\hat \lambda x_{(n)}} = 1-\hat \lambda x_{(n)}n $$ Assume fir...
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics
Since you are a tutor, any knowledge is always for a good cause. So I will provide some bounds for the MLE. We have arrived at $$(1-\lambda x_{(n)})e^{\lambda x_{(n)} } + \lambda n x_{(n)} - 1 = 0$$
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics Since you are a tutor, any knowledge is always for a good cause. So I will provide some bounds for the MLE. We have arrived at $$(1-\lambda x_{(n)})e^{\lambda x_{(n)} } + \lambda n x_{(n)} - 1 = 0$$ with $x_{(n)}\equiv M_n$. ...
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics Since you are a tutor, any knowledge is always for a good cause. So I will provide some bounds for the MLE. We have arrived at $$(1-\lambda x_{(n)})e^{\lambda x_{(n)} } + \lambda n x_{(n)} - 1 = 0$$
46,240
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics
Q1. Trivial: differentiate $G$ to obtain $$f_{M_n}(x) = \lambda n e^{-\lambda x}(1-e^{-\lambda x})^{n-1}, \quad x > 0.$$ Q2. The likelihood of $\lambda$ given the single observation $M_n = x$ is $L(\lambda \mid x) = f_{M_n}(x)$, consequently the log-likelihood is $$\ell(\lambda \mid x) = \log \lambda + \log n - \lam...
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics
Q1. Trivial: differentiate $G$ to obtain $$f_{M_n}(x) = \lambda n e^{-\lambda x}(1-e^{-\lambda x})^{n-1}, \quad x > 0.$$ Q2. The likelihood of $\lambda$ given the single observation $M_n = x$ is $L
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics Q1. Trivial: differentiate $G$ to obtain $$f_{M_n}(x) = \lambda n e^{-\lambda x}(1-e^{-\lambda x})^{n-1}, \quad x > 0.$$ Q2. The likelihood of $\lambda$ given the single observation $M_n = x$ is $L(\lambda \mid x) = f_{M_n}(...
Maximum Likelihood Estimator of the exponential function parameter based on Order Statistics Q1. Trivial: differentiate $G$ to obtain $$f_{M_n}(x) = \lambda n e^{-\lambda x}(1-e^{-\lambda x})^{n-1}, \quad x > 0.$$ Q2. The likelihood of $\lambda$ given the single observation $M_n = x$ is $L
46,241
Coding categorical variables for regression
Here is an example using the employee data.sav data, which comes with standard installation. Suppose salary is the dependent variable, job category, jobcat, is the categorical independent variable, and beginning salary, salbegin, is the continuous independent variable. Using GLM, you can perform pairwise comparisons be...
Coding categorical variables for regression
Here is an example using the employee data.sav data, which comes with standard installation. Suppose salary is the dependent variable, job category, jobcat, is the categorical independent variable, an
Coding categorical variables for regression Here is an example using the employee data.sav data, which comes with standard installation. Suppose salary is the dependent variable, job category, jobcat, is the categorical independent variable, and beginning salary, salbegin, is the continuous independent variable. Using ...
Coding categorical variables for regression Here is an example using the employee data.sav data, which comes with standard installation. Suppose salary is the dependent variable, job category, jobcat, is the categorical independent variable, an
46,242
Coding categorical variables for regression
Since you want to compare all groups with each other, the tests will not be orthogonal, even if they are a-priori. So you should use a test that addresses that. Tukey's honestly significant differences (HSD) test will do that, and is familiar to many people. You needn't worry about the type of coding used. First, a...
Coding categorical variables for regression
Since you want to compare all groups with each other, the tests will not be orthogonal, even if they are a-priori. So you should use a test that addresses that. Tukey's honestly significant differen
Coding categorical variables for regression Since you want to compare all groups with each other, the tests will not be orthogonal, even if they are a-priori. So you should use a test that addresses that. Tukey's honestly significant differences (HSD) test will do that, and is familiar to many people. You needn't wo...
Coding categorical variables for regression Since you want to compare all groups with each other, the tests will not be orthogonal, even if they are a-priori. So you should use a test that addresses that. Tukey's honestly significant differen
46,243
Coding categorical variables for regression
The Wikipedia article on post hoc analyses lists several tests/options for comparing groups after a factor has been found significant. I don't know SPSS well anymore, but I expect that it would implement one or more of the tests on that list. You can search for those terms in the SPSS documentation and that should te...
Coding categorical variables for regression
The Wikipedia article on post hoc analyses lists several tests/options for comparing groups after a factor has been found significant. I don't know SPSS well anymore, but I expect that it would imple
Coding categorical variables for regression The Wikipedia article on post hoc analyses lists several tests/options for comparing groups after a factor has been found significant. I don't know SPSS well anymore, but I expect that it would implement one or more of the tests on that list. You can search for those terms ...
Coding categorical variables for regression The Wikipedia article on post hoc analyses lists several tests/options for comparing groups after a factor has been found significant. I don't know SPSS well anymore, but I expect that it would imple
46,244
Find the Fisher information $I(\theta)$ of the gamma distribution with $\alpha=4$ and $\beta=\theta>0$
I'm doing this to work through this myself as much as to help you. Let's give it a go. PDF of a Gamma = $\frac{X^{\alpha-1}}{\Gamma(\alpha)\theta^{\alpha}}e^{-\frac{X}{\theta}}$. Log likelihood is then: \begin{align} L(\theta) &= (\alpha - 1) \Sigma \log X_i - n \log(\Gamma (\alpha)) - n\alpha \log(\theta) - \frac{1}{\the...
Find the Fisher information $I(\theta)$ of the gamma distribution with $\alpha=4$ and $\beta=\theta>
I'm doing this to work through this myself as much as to help you. Let's give it a go. PDF of a Gamma = $\frac{X^{\alpha-1}}{\Gamma(\alpha)\theta^{\alpha}}e^{-\frac{X}{\theta}}$. Log likelihood is then:
Find the Fisher information $I(\theta)$ of the gamma distribution with $\alpha=4$ and $\beta=\theta>0$ I'm doing this to work through this myself as much as to help you. Let's give it a go. PDF of a Gamma = $\frac{X^{\alpha-1}}{\Gamma(\alpha)\theta^{\alpha}}e^{-\frac{X}{\theta}}$. Log likelihood is then: \begin{align} L(\...
Find the Fisher information $I(\theta)$ of the gamma distribution with $\alpha=4$ and $\beta=\theta> I'm doing this to work through this myself as much as to help you. Let's give it a go. PDF of a Gamma = $\frac{X^{\alpha-1}}{\Gamma(\alpha)\theta^{\alpha}}e^{-\frac{X}{\theta}}$. Log likelihood is then:
46,245
When was the k-means clustering algorithm first used?
To the best of my knowledge, the name 'k-means' was first used in MacQueen (1967). The name refers to the improved algorithm proposed in that paper and not to the original one. Section 3 of that paper contains an application (which is missing from earlier papers such as Steinhaus (1956)). J. MacQueen (1967). Some m...
When was the k-means clustering algorithm first used?
To the best of my knowledge, the name 'k-means' was first used in MacQueen (1967). The name refers to the improved algorithm proposed in that paper and not to the original one. Section 3 of that pape
When was the k-means clustering algorithm first used? To the best of my knowledge, the name 'k-means' was first used in MacQueen (1967). The name refers to the improved algorithm proposed in that paper and not to the original one. Section 3 of that paper contains an application (which is missing from earlier papers s...
When was the k-means clustering algorithm first used? To the best of my knowledge, the name 'k-means' was first used in MacQueen (1967). The name refers to the improved algorithm proposed in that paper and not to the original one. Section 3 of that pape
46,246
When was the k-means clustering algorithm first used?
I have recently reproduced a version of Hugo Steinhaus's paper: Sur la division des corps matériels en parties (On the division of material bodies into parts). The conclusion (originally in French) is (roughly): Diverse questions, for instance those about types in anthropology, or others with practical motivations, li...
When was the k-means clustering algorithm first used?
I have recently reproduced a version of Hugo Steinhaus paper: Sur la division des corps matériels en parties (On the division of material bodies into parts). The conclusion (originally in French) is (
When was the k-means clustering algorithm first used? I have recently reproduced a version of Hugo Steinhaus paper: Sur la division des corps matériels en parties (On the division of material bodies into parts). The conclusion (originally in French) is (somehow): Diverse questions, for instance those about types in an...
When was the k-means clustering algorithm first used? I have recently reproduced a version of Hugo Steinhaus paper: Sur la division des corps matériels en parties (On the division of material bodies into parts). The conclusion (originally in French) is (
46,247
When was the k-means clustering algorithm first used?
Another early paper showing K-Means clustering was published by Ball and Hall in 1965 [1]. A K-Means like algorithm was part of their ISODATA algorithm. They went further to implement an iterative cluster split/merge phase in order to arrive at a "best" number of clusters. Pure K-Means takes the number of centroids a...
When was the k-means clustering algorithm first used?
Another early paper showing K-Means clustering was published by Ball and Hall in 1965 [1]. A K-Means like algorithm was part of their ISODATA algorithm. They went further to implement an iterative cl
When was the k-means clustering algorithm first used? Another early paper showing K-Means clustering was published by Ball and Hall in 1965 [1]. A K-Means like algorithm was part of their ISODATA algorithm. They went further to implement an iterative cluster split/merge phase in order to arrive at a "best" number of c...
When was the k-means clustering algorithm first used? Another early paper showing K-Means clustering was published by Ball and Hall in 1965 [1]. A K-Means like algorithm was part of their ISODATA algorithm. They went further to implement an iterative cl
46,248
Explanation of cubic spline interpolation
If you have a function $f(x)$ on some interval $[a,b]$, which is divided into subintervals $[x_{i-1}, x_i]$ such that $a=x_0< x_1< ... <x_N=b$, then you can interpolate this function by a cubic spline $S(x)$. $S(x)$ is a piecewise function: on each $h_i = x_i - x_{i-1}$ it's a cubic polynomial, which can be written for simplicity as $S...
Explanation of cubic spline interpolation
If you have a function $f(x)$ on some interval $[a,b]$, which is divided on $[x_{i-1}, x_i]$ such as $a=x_0< x_1< ... <x_N=b$ then you can interpolate this function by a cubic spline $S(x)$. $S(x)$ i
Explanation of cubic spline interpolation If you have a function $f(x)$ on some interval $[a,b]$, which is divided on $[x_{i-1}, x_i]$ such as $a=x_0< x_1< ... <x_N=b$ then you can interpolate this function by a cubic spline $S(x)$. $S(x)$ is a piecewise function: on each $h_i = x_i - x_{i-1}$ it's a cubic polynomial,...
Explanation of cubic spline interpolation If you have a function $f(x)$ on some interval $[a,b]$, which is divided on $[x_{i-1}, x_i]$ such as $a=x_0< x_1< ... <x_N=b$ then you can interpolate this function by a cubic spline $S(x)$. $S(x)$ i
46,249
Bayes-factor for testing a null-hypothesis?
You could try the approach recommended by Steve Goodman and calculate the minimum bayes factor: Toward Evidence Based Medical Statistics 2: The Bayes Factor To get this from mcmc results, you can subtract the estimate for the group level parameters for each step to get a posterior distribution of the difference as was ...
Bayes-factor for testing a null-hypothesis?
You could try the approach recommended by Steve Goodman and calculate the minimum bayes factor: Toward Evidence Based Medical Statistics 2: The Bayes Factor To get this from mcmc results, you can subt
Bayes-factor for testing a null-hypothesis? You could try the approach recommended by Steve Goodman and calculate the minimum bayes factor: Toward Evidence Based Medical Statistics 2: The Bayes Factor To get this from mcmc results, you can subtract the estimate for the group level parameters for each step to get a post...
Bayes-factor for testing a null-hypothesis? You could try the approach recommended by Steve Goodman and calculate the minimum bayes factor: Toward Evidence Based Medical Statistics 2: The Bayes Factor To get this from mcmc results, you can subt
46,250
Bayes-factor for testing a null-hypothesis?
You can use the BayesFactor package in R to easily compute Bayesian t tests. See the examples here: http://bayesfactorpcl.r-forge.r-project.org/#twosample for details. The web calculator at http://pcl.missouri.edu/bayesfactor uses the same models (see the Rouder et al 2009 reference on the web calculator page). Note th...
Bayes-factor for testing a null-hypothesis?
You can use the BayesFactor package in R to easily compute Bayesian t tests. See the examples here: http://bayesfactorpcl.r-forge.r-project.org/#twosample for details. The web calculator at http://pcl
Bayes-factor for testing a null-hypothesis? You can use the BayesFactor package in R to easily compute Bayesian t tests. See the examples here: http://bayesfactorpcl.r-forge.r-project.org/#twosample for details. The web calculator at http://pcl.missouri.edu/bayesfactor uses the same models (see the Rouder et al 2009 re...
Bayes-factor for testing a null-hypothesis? You can use the BayesFactor package in R to easily compute Bayesian t tests. See the examples here: http://bayesfactorpcl.r-forge.r-project.org/#twosample for details. The web calculator at http://pcl
46,251
Is the F-1 score symmetric?
Let's normalize the confusion matrix, i.e. $TP + FP + FN + TN = 1$. We have: $F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = 2 \cdot \frac{\frac{tp}{tp+fp} \cdot \frac{tp}{tp+fn}}{\frac{tp}{tp+fp} + \frac{tp}{tp+fn}} = 2 \frac{TP} {2 TP + FP + FN} = 2 \frac{TP} {...
Is the F-1 score symmetric?
Let's normalize the confusion matrix, i.e. $TP + FP + FN + TN = 1$. We have: $F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = 2 \cdot \frac{\frac{
Is the F-1 score symmetric? Let's normalize the confusion matrix, i.e. $TP + FP + FN + TN = 1$. We have: $F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = 2 \cdot \frac{\frac{tp}{tp+fp} \cdot \frac{tp}{tp+fn}}{\frac{tp}{tp+fp} + \frac{tp}{tp+fn}} = 2 \frac{TP} {2 T...
Is the F-1 score symmetric? Let's normalize the confusion matrix, i.e. $TP + FP + FN + TN = 1$. We have: $F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = 2 \cdot \frac{\frac{
46,252
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out?
The short answer is that if you used leave-one-out CV for time series, you would be fitting model parameters based on data from the future. The easiest way to see this is to just write out what both procedures look like using that same data. This makes the difference glaringly obvious. Following Hyndman's notation let $...
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out?
The short answer is if you used leave-one-out CV for time series, you would be fitting model parameters based on data from the future. The easiest way to see how this is to just write out what both pr
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out? The short answer is if you used leave-one-out CV for time series, you would be fitting model parameters based on data from the future. The easiest way to see how this is to just write out what both procedures look like usi...
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out? The short answer is if you used leave-one-out CV for time series, you would be fitting model parameters based on data from the future. The easiest way to see how this is to just write out what both pr
46,253
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out?
The explanations are both right, but they are for different situations. As usual, it all boils down to the question how to obtain statistically independent splits of your data. The image you linked and your description is for a situation where you have repeated measurements of time series. In this situation you can le...
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out?
The explanations are both right, but they are for different situations. As usual, it all boils down to the question how to obtain statistically independent splits of your data. The image you linked a
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out? The explanations are both right, but they are for different situations. As usual, it all boils down to the question how to obtain statistically independent splits of your data. The image you linked and your description is...
How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out? The explanations are both right, but they are for different situations. As usual, it all boils down to the question how to obtain statistically independent splits of your data. The image you linked a
46,254
Kolmogorov-Smirnov test strange output
Yes. Neither of these distributions is a good fit for your data by that criterion. There are some other distributions you could try, but it strikes me as (ultimately) unlikely that real data come from any of the well-studied distributions, and you have 6k data, so even a trivial discrepancy will make the test 'signif...
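The "6k data make trivial discrepancies significant" point is easy to demonstrate. A self-contained Python sketch, computing the one-sample KS statistic by hand against a fixed N(0, 1) reference (the 1.36/√n cutoff is the usual large-sample 5% critical value; the 0.05 shift is a made-up, practically trivial discrepancy):

```python
import math
import random

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def ks_stat(sample):
    """One-sample KS distance between the empirical CDF and N(0, 1)."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        c = normal_cdf(x)
        d = max(d, (i + 1) / n - c, c - i / n)
    return d

random.seed(0)
shift = 0.05  # a practically trivial departure from N(0, 1)

results = {}
for n in (500, 50_000):
    sample = [random.gauss(shift, 1.0) for _ in range(n)]
    d = ks_stat(sample)
    crit = 1.36 / math.sqrt(n)  # approximate 5% critical value
    results[n] = (d, crit)
    print(n, round(d, 4), round(crit, 4), d > crit)
```

At n = 500 the critical value is much larger than the true discrepancy (~0.02), so the shift typically goes unnoticed; at n = 50,000 the same trivial shift is reliably flagged as "significant".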
46,255
Correct use of cross validation in LibsSVM
It seems like you are mixing a couple of things up. First of all, cross-validation is used to get an accurate idea of the generalization error when certain tuning parameters are used. You can use svm-train in k-fold cross-validation mode using the -v k flag. In this mode, svm-train does not output a model -- just a cr...
46,256
How high must logistic covariates' predictive accuracy be for a reversal effect to show up?
(This answer uses results from W.H. Greene (2003), Econometric Analysis, 5th ed. ch.21) I will answer the following modified version, which I believe accomplishes the goals of the OP's question : "If we only estimate a logit model with one binary regressor of interest and some (dummy or continuous) control variables, ...
46,257
How high must logistic covariates' predictive accuracy be for a reversal effect to show up?
This is for OLS regression. Consider a geometric representation of three variables -- two predictors, $X_1$ and $X_2$, and a dependent variable, $Y$. Each variable is represented by a vector from the origin. The length of the vector equals the standard deviation of the corresponding variable. The cosine of the angle be...
46,258
How high must logistic covariates' predictive accuracy be for a reversal effect to show up?
There is no obvious relationship between $R^2$ and reversal of the sign of a regression coefficient. Assume you have data for which the true model is for example $$ y_i = 0 + 5x_i - z_i + \epsilon_i $$ with $\epsilon_i \sim N(0, sd_\text{error}^2)$. I show the zero to make explicit that the intercept of the true model is ...
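Sign reversal of the kind this thread is about is easy to reproduce by simulation. A Python/numpy sketch (the coefficients 5 and -1 and the 0.9 correlation are illustration values, chosen so the marginal slope of $z$ flips sign: $5 \cdot 0.9 - 1 = +3.5$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# True model: y = 5*x - z + noise, with x and z strongly positively correlated.
z = rng.normal(size=n)
x = 0.9 * z + np.sqrt(1 - 0.9**2) * rng.normal(size=n)
y = 5 * x - z + rng.normal(size=n)

def ols(cols, y):
    """Least-squares fit with an intercept; returns [b0, b1, ...]."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_joint = ols([x, z], y)   # ~ [0, 5, -1]: recovers the true signs
b_margin = ols([z], y)     # z alone: slope ~ +3.5 -- the sign has flipped
print(b_joint.round(2), b_margin.round(2))
```

The coefficient of `z` is negative in the joint fit but positive marginally, simply because `z` proxies for the omitted `x`.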
46,259
How to interpret Weka Logistic Regression output?
Let me explain what odds mean in general. Odds are the ratio between the probability of success over the probability of failure, that is, $\displaystyle \frac{p_{i}}{1-p_{i}}$. Let's say $p_{i}$ for a given event is 0.6, then the odds for that event is $0.6/0.4=1.5$. 1- As you said, since the logistic regression outp...
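The odds arithmetic above, plus how it connects to logistic-regression coefficients, in a few lines of Python (the intercept and coefficient values are hypothetical, for illustration only):

```python
import math

def odds(p):
    """Odds = probability of success over probability of failure."""
    return p / (1 - p)

print(odds(0.6))  # 0.6 / 0.4 = 1.5, as in the example above

# In logistic regression the log-odds are linear in the predictors, so
# exp(beta_j) is the odds RATIO for a one-unit increase in x_j.
def predicted_probability(intercept, coefs, x):
    log_odds = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1 / (1 + math.exp(-log_odds))

p = predicted_probability(-1.0, [0.8], [2.0])  # log-odds = -1 + 0.8*2 = 0.6
print(round(p, 3), round(odds(p), 3))          # odds(p) equals exp(0.6)
```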
46,260
How to approach forecasting time-series data
A simple approach is to post at the hour slot you expect to receive the most likes. Your description suggests that the only expected component of your time series is seasonal by hours of the day. To be more precise, suppose that the influence is multiplicative. A parametrized realization of that model for 30 days is gi...
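A multiplicative hour-of-day model of the kind described can be sketched in Python (the base rate, evening peak, noise level, and 30-day horizon are all made-up illustration values, not part of the original answer):

```python
import math
import random

random.seed(42)

# Hypothetical model: likes(t) = base * s[hour(t)] * noise, with one daily
# seasonal pattern peaking in the evening (hour 20 here).
hours = range(24)
s = [1 + 0.8 * math.sin(2 * math.pi * (h - 14) / 24) for h in hours]
base = 100

likes = []
for day in range(30):
    for h in hours:
        likes.append(base * s[h] * random.lognormvariate(0, 0.2))

# Estimate the seasonal profile by averaging each hour over days,
# then post at the slot with the highest expected likes:
profile = [sum(likes[d * 24 + h] for d in range(30)) / 30 for h in hours]
best_hour = max(hours, key=lambda h: profile[h])
print(best_hour)
```

With 30 days of data the estimated profile recovers the evening peak despite the noise.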
46,261
How to approach forecasting time-series data
It sounds like you only care about what day of week and what hour of that day will likely garner the most attention. You can format your data into hour of week, and treat each week as a set of observations, like you have done. From here you can calculate the data-derived expected likes by hour of week. If you normali...
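The hour-of-week aggregation with per-week normalization can be sketched as follows (the toy counts and hour slots 10/58/130 are hypothetical; normalizing within each week keeps unusually busy weeks from dominating the average):

```python
from collections import defaultdict

# data[(week, hour_of_week)] -> like count; hour_of_week runs 0..167.
data = {
    (0, 10): 4, (0, 58): 20, (0, 130): 9,
    (1, 10): 2, (1, 58): 11, (1, 130): 5,
    (2, 10): 6, (2, 58): 30, (2, 130): 12,
}

# Convert each observation to its share of that week's total likes:
week_tot = defaultdict(float)
for (w, h), v in data.items():
    week_tot[w] += v

share = defaultdict(list)
for (w, h), v in data.items():
    share[h].append(v / week_tot[w])

# Average the shares across weeks to get the expected profile:
expected = {h: sum(v) / len(v) for h, v in share.items()}
best = max(expected, key=expected.get)
print(best, round(expected[best], 3))
```

The shares within a week sum to one, so the averaged profile does too, and the argmax hour is the recommended posting slot.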
46,262
Making box plots when analyzing a case with 3 predictor variables?
Thanks for the clarification. You can capitalize on the paneling and clustering designs and put together a compact boxplot like this: The boxplot will be useful for assessing group-wise distribution and outliers. However, since it's an ANOVA, I'd also recommend visualizing the mean and 95% CI as well using an error plot: ...
46,263
Making box plots when analyzing a case with 3 predictor variables?
So I understand that your DV is numerical and your 3 IVs are categorical (3 levels). Boxplots are a good choice. You will have 9 boxplots, 3 for each IV. Plot each IV separately. On the y-axis will always be the DV (uranium). On the x-axis will be the IVs. For example, temp low, temp med, temp high. Do this for all 3 I...
46,264
Making box plots when analyzing a case with 3 predictor variables?
Here's the '9x boxplot' approach in R:

### make reproducible
set.seed(1)
pred1 <- factor(c("low", "med", "high"), levels=c("low", "med", "high"))
df1 <- data.frame(ur=10*abs(runif(100)),
                  time=sample(pred1, 100, replace=TRUE),
                  temp=sample(pred1, 100, replace=TRUE),
                  s...
46,265
Sample size and power detection
When computing power, you have to state what hypothetical effect size you are trying to detect. As Peter mentioned, computing the power to detect the results you actually detected is rarely useful. Here is a page I wrote: http://graphpad.com/support/faq/why-it-is-not-helpful-to-compute-the-power-of-an-experiment-to-det...
46,266
Sample size and power detection
First, post-hoc power analysis is problematic (see, e.g., this). Second, if you decide to proceed anyway, there are two general approaches to power calculation. The simpler choice is to find a program that will calculate power for you. The more complex is to simulate the data. The former makes assumptions (sometimes unwar...
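The simulation approach mentioned above can be sketched in a few lines of Python. This uses a two-sample z-approximation rather than an exact t-test, and the hypothesized effect size d = 0.5 and n = 50 per group are illustration values, not recommendations:

```python
import random
import statistics

random.seed(7)

def simulate_power(n_per_group, effect, n_sims=2000, alpha_z=1.96):
    """Monte-Carlo power of a two-sample z-test (normal approximation) to
    detect a mean difference of `effect` in SD units."""
    rejections = 0
    for _ in range(n_sims):
        a = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [random.gauss(effect, 1.0) for _ in range(n_per_group)]
        se = ((statistics.variance(a) + statistics.variance(b)) / n_per_group) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        rejections += abs(z) > alpha_z
    return rejections / n_sims

# Power to detect a *hypothesized* medium effect (d = 0.5), n = 50 per group:
pw = simulate_power(50, 0.5)
# Under the null the rejection rate should sit near the nominal 5%:
null_rate = simulate_power(50, 0.0)
print(pw, null_rate)
```

Theory gives roughly 0.70 power for this scenario, and the null rejection rate checks that the simulation is calibrated; note the hypothetical effect size is specified in advance, not taken from the observed data.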
46,267
Why is coefficient of determination used to assess fit of a least squares line?
This is a very broad question, although it may not seem so. Two comments: You say "The coefficient of determination is" but whether the formula you give acts as a definition of fundamentals for anyone is unclear. I'd characterise it rather as one of several available computing formulas. You ask "Why is this used" bu...
46,268
Why is coefficient of determination used to assess fit of a least squares line?
The $SS$ can be considered a sum quantity of variability. The $SS_\text{tot}$ is all of the variability when the very simplest model is used, the mean. Look at the equation, it's the sum of each squared deviation, all of that variability not explained by the mean (any value exactly at the mean contributes 0 to $SS$). ...
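The sums-of-squares view above translates directly into code. A minimal Python sketch with toy numbers:

```python
def r_squared(y, y_hat):
    """R^2 = 1 - SS_res / SS_tot: the fraction of the mean-model's
    variability that the fitted model accounts for."""
    y_bar = sum(y) / len(y)
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)          # variability around the mean
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # left over after fitting
    return 1 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
print(r_squared(y, y))                      # perfect fit -> 1.0
print(r_squared(y, [2.5] * 4))              # the mean model itself -> 0.0
print(r_squared(y, [1.2, 1.9, 3.3, 3.8]))   # something in between
```

A model no better than the mean scores 0, a perfect fit scores 1, and anything in between measures how much of the mean-model's variability the fit removes.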
46,269
Bivariate normal distribution and its distribution function as correlation coefficient $\rightarrow \pm 1$
Yes, it's well-defined. For convenience and ease of exposition I'm changing your notation to use two standard normally distributed random variables $X$ and $Y$ in place of your $X_1$ and $X_2$. I.e., $X = (X_1 - \mu_1)/\sigma_1$ and $Y = (X_2 - \mu_2)/\sigma_2$. To standardize you subtract the mean and then divide by the sta...
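A sketch of the limiting distribution functions, assuming the standardized margins above: as $\rho \to 1$ the pair converges in distribution to $(X, X)$, and as $\rho \to -1$ to $(X, -X)$, so

$$\lim_{\rho \to 1} F(x, y) = P(X \le x,\ X \le y) = \Phi(\min(x, y)),$$

$$\lim_{\rho \to -1} F(x, y) = P(-y \le X \le x) = \max\{0,\ \Phi(x) + \Phi(y) - 1\},$$

where $\Phi$ is the standard normal CDF. Both limits are perfectly good (degenerate) joint distribution functions, which is why the question is well-posed.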
46,270
How to identify which predictors should be included in a multiple regression?
The model should be formulated by subject matter expertise. It is not a good idea to use the data to tell you which data to use. The data are not information-rich enough to be able to reliably do this. Should you have too many events per variable (one rule of thumb is to have at least 15 subjects per parameter in th...
46,271
How to identify which predictors should be included in a multiple regression?
There are lots of methods that can be used for variable selection. LASSO is one of the better data driven variable selection models. Do not, whatever you do, use forward stepwise. You'll be glad you didn't: http://www.nesug.org/proceedings/nesug07/sa/sa07.pdf
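To see *why* LASSO performs variable selection, here is a Python/numpy sketch of the special case with an orthonormal design, where the lasso solution is known in closed form as a soft-thresholding of the OLS coefficients. (In practice you would use a solver such as glmnet or scikit-learn; the penalty 0.5 and the coefficient values here are illustration choices.)

```python
import numpy as np

def soft_threshold(b, lam):
    """Shrink toward zero by lam; coefficients smaller than lam become exactly 0."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

rng = np.random.default_rng(0)
n, p = 200, 5
X, _ = np.linalg.qr(rng.normal(size=(n, p)))       # orthonormal columns: X'X = I
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 0.0])   # only two real predictors
y = X @ beta_true + 0.1 * rng.normal(size=n)

b_ols = X.T @ y                      # OLS is just X'y when X'X = I
b_lasso = soft_threshold(b_ols, lam=0.5)
print(b_ols.round(2))
print(b_lasso.round(2))              # the three noise coefficients are dropped
```

The weak (noise) coefficients land exactly at zero while the real predictors survive, shrunk, which is the behavior that makes LASSO a data-driven selection method.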
46,272
How to identify which predictors should be included in a multiple regression?
It is probably important to not let the analysis drive the theory. Which variables are the best predictors should be based on previous research, or as a minimum, on a consensus of the opinions of subject matter experts. Some of the decision will rest on how large your sample size is. If the size is sufficiently large...
46,273
How to identify which predictors should be included in a multiple regression?
In conducting a regression analysis, it is useful to examine correlations between the independent variables to avoid the problem of multicollinearity. If you have multiple IVs that are highly correlated, this can indicate that different IVs are accounting for the same portion of variance in the dependent variable or out...
46,274
Iterative PCA R
As I understand your problem, the main issue is the size of the data set, and not that it contains missing values (i.e. "sparse"). For such a problem, I would recommend doing a partial PCA in order to solve for a subset of leading PCs. The package irlba allows for this by performing a "Lanczos bidiagonalization". It is ...
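The point of irlba, computing only the leading PCs without forming all of them, can be illustrated without the package. Here is a Python/numpy sketch using orthogonal (block power) iteration, a simpler cousin of the Lanczos bidiagonalization irlba actually uses (the data sizes and the dominant-column trick are illustration choices):

```python
import numpy as np

def leading_pcs(X, k, n_iter=100, seed=0):
    """Top-k principal directions of centered X via block power iteration:
    repeatedly apply the (implicit) covariance and re-orthogonalize."""
    Xc = X - X.mean(axis=0)
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.normal(size=(X.shape[1], k)))[0]
    for _ in range(n_iter):
        Q = np.linalg.qr(Xc.T @ (Xc @ Q))[0]   # never forms the full p x p covariance eigendecomposition
    return Q  # columns ~ leading eigenvectors of the covariance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))
X[:, 0] *= 10                        # make one direction dominate the variance

V = leading_pcs(X, k=2)
# Sanity check against the full SVD (feasible here because the example is small):
full = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)[2]
print(abs(V[:, 0] @ full[0]))        # ~1: same leading direction, up to sign
```

For a data set where the full decomposition is infeasible, you would skip the check and use only the partial result; in R, `irlba::irlba` (or `prcomp_irlba`) plays the role of `leading_pcs` here.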
46,275
Iterative PCA R
Why don't you directly do a PCA on the full set and see where it takes you? PCA is computationally very fast, and you will be able to quickly determine how many variables seem to be important for the first few components. I have been successful with that number of variables (albeit on a smaller sample size). Alterna...
46,276
How do I interpret the figure output from package dlnm in R?
Interpretation of the graph in your case Note: The y-axis is not always the relative risk as in the example given in the vignette of the dlnm package. This is only the case in their example, because they used mortality data and Poisson regression models. In their framework, the exponentiated regression coefficient from...
46,277
Are the relations in fixed, random and mixed effect models and multilevel models causal?
Whether a coefficient from a model has a causal interpretation mostly depends on the other variables included or the way that unobserved but relevant variables are controlled for. For example, in an earnings regression of the type $$\ln(y_{i}) = \alpha + \delta S_{i} + \gamma A_{i} + X'\beta + \epsilon$$ where the depe...
46,278
Benchmark data for Random Forest evaluation [closed]
I think random forests are still mostly used in the form they were introduced by Breiman in his 2001 paper. There have been some attempts to improve them by e.g. moving beyond majority voting (http://link.springer.com/chapter/10.1007/978-3-540-30115-8_34), but my impression is that this stuff isn't mainstream practice. Y...
46,279
Benchmark data for Random Forest evaluation [closed]
One very relevant paper is Fernández-Delgado, Cernadas, Barro & Amorim, "Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?", JMLR, 2014. The authors evaluated many classifiers, among them multiple versions of Random Forests, on the entire UCI repository as of that time and find that Random...
46,280
Fisher's method for combining p-values - what about the lower tail?
i) First, a recommendation: Use pchisq( -2*sum(log(p-values)), df, lower.tail=FALSE) instead of 1- ... - you're likely to end up with more accuracy for small p-values. To see that they're sometimes going to give different results, try this: x=70;c(1-pchisq(x,1),pchisq(x,1,lower.tail=FALSE)) ii) Yes, it's one-sided. ...
Fisher's method for combining p-values - what about the lower tail?
i) First, a recommendation: Use pchisq( -2*sum(log(p-values)), df, lower.tail=FALSE) instead of 1- ... - you're likely to end up with more accuracy for small p-values. To see that they're sometimes g
Fisher's method for combining p-values - what about the lower tail? i) First, a recommendation: Use pchisq( -2*sum(log(p-values)), df, lower.tail=FALSE) instead of 1- ... - you're likely to end up with more accuracy for small p-values. To see that they're sometimes going to give different results, try this: x=70;c(1-pc...
Fisher's method for combining p-values - what about the lower tail? i) First, a recommendation: Use pchisq( -2*sum(log(p-values)), df, lower.tail=FALSE) instead of 1- ... - you're likely to end up with more accuracy for small p-values. To see that they're sometimes g
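Editor's sketch (not part of the answer above): the pchisq(..., lower.tail=FALSE) advice translates directly to Python. Because Fisher's method always produces an even number of degrees of freedom (2 per p-value), the chi-square upper tail has a closed form, so no statistics library is needed; the function names below are illustrative.

```python
# Fisher's method for combining independent p-values, computing the
# upper tail directly (mirroring pchisq(..., lower.tail=FALSE)) for
# better accuracy than 1 - cdf in the far tail.
import math

def chi2_sf_even_df(x, df):
    """Upper-tail chi-square probability for even df (df = 2k):
    P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!"""
    k = df // 2
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

def fisher_combine(pvalues):
    """Combine independent p-values with Fisher's method."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return chi2_sf_even_df(stat, 2 * len(pvalues))

combined = fisher_combine([0.01, 0.04, 0.10])  # roughly 0.0025
```

A quick sanity check on the closed form: with a single p-value the combined p-value equals the input, since -2*log(p) referred to chi-square with 2 df gives back exp(log p) = p.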
46,281
What to do with a variable that loads equally on two factors in a factor analysis?
Using factor analysis for scale construction is a bit of an art. It is common to drop items that load to a substantial degree on more than one factor after factor rotation. That said, a few alternative ideas: Consider whether you have extracted enough factors. Sometimes when you extract more factors cross-loading ite...
What to do with a variable that loads equally on two factors in a factor analysis?
Using factor analysis for scale construction is a bit of an art. It is common to drop items that load to a substantial degree on more than one factor after factor rotation. That said, a few alternati
What to do with a variable that loads equally on two factors in a factor analysis? Using factor analysis for scale construction is a bit of an art. It is common to drop items that load to a substantial degree on more than one factor after factor rotation. That said, a few alternative ideas: Consider whether you have ...
What to do with a variable that loads equally on two factors in a factor analysis? Using factor analysis for scale construction is a bit of an art. It is common to drop items that load to a substantial degree on more than one factor after factor rotation. That said, a few alternati
46,282
Assumptions and contraindications of conjoint analysis
Conjoint analysis is not an analysis method per se, but rather a family of choice-based methods for collecting preference data. These methods include (among others) best-worst scaling (MaxDiff), full profile rating, binary choice or discrete choice experiments, graded pairs and constant sum paired comparisons. They ...
Assumptions and contraindications of conjoint analysis
Conjoint analysis is not an analysis method per se, but rather a family of choice-based methods for collecting preference data. These methods include (among others) best-worst scaling (MaxDiff), full
Assumptions and contraindications of conjoint analysis Conjoint analysis is not an analysis method per se, but rather a family of choice-based methods for collecting preference data. These methods include (among others) best-worst scaling (MaxDiff), full profile rating, binary choice or discrete choice experiments, gr...
Assumptions and contraindications of conjoint analysis Conjoint analysis is not an analysis method per se, but rather a family of choice-based methods for collecting preference data. These methods include (among others) best-worst scaling (MaxDiff), full
46,283
Assumptions and contraindications of conjoint analysis
And to add to DarkPrivateer's excellent answer, almost all conjoint studies are based on factorial designs. In your case I take it that your 2*2 factorial design creates options something like: Red Nike Red Adidas Blue Nike Blue Adidas The respondents' evaluation task (choice, ranking, Best-worst) gives a score for eac...
Assumptions and contraindications of conjoint analysis
And to add to DarkPrivateer's excellent answer, almost all conjoint studies are based on factorial designs. In your case I take it that your 2*2 factorial design creates options something like: Red Ni
Assumptions and contraindications of conjoint analysis And to add to DarkPrivateer's excellent answer, almost all conjoint studies are based on factorial designs. In your case I take it that your 2*2 factorial design creates options something like: Red Nike Red Adidas Blue Nike Blue Adidas The respondents' evaluation t...
Assumptions and contraindications of conjoint analysis And to add to DarkPrivateer's excellent answer, almost all conjoint studies are based on factorial designs. In your case I take it that your 2*2 factorial design creates options something like: Red Ni
46,284
Muthén's robust weighted least squares factoring method for binary items...in R?
What you want is in the lavaan package, the function is named sem. Try writing an argument estimator = "WLSMV". For more information read this.
Muthén's robust weighted least squares factoring method for binary items...in R?
What you want is in the lavaan package, the function is named sem. Try writing an argument estimator = "WLSMV". For more information read this.
Muthén's robust weighted least squares factoring method for binary items...in R? What you want is in the lavaan package, the function is named sem. Try writing an argument estimator = "WLSMV". For more information read this.
Muthén's robust weighted least squares factoring method for binary items...in R? What you want is in the lavaan package, the function is named sem. Try writing an argument estimator = "WLSMV". For more information read this.
46,285
Skewness of a mixture density
Skewness is a vague concept which allows its formalisation in several ways. The most popular measure of skewness is the one you mention, which was proposed more than 100 years ago. However, there are better (more interpretable) measures nowadays. There has been much discussion of the validity of using a measure of skewness ...
Skewness of a mixture density
Skewness is a vague concept which allows its formalisation in several ways. The most popular measure of skewness is the one you mention, which was proposed more than 100 years ago. However, there are
Skewness of a mixture density Skewness is a vague concept which allows its formalisation in several ways. The most popular measure of skewness is the one you mention, which was proposed more than 100 years ago. However, there are better (more interpretable) measures nowadays. There has been much discussion of the validity o...
Skewness of a mixture density Skewness is a vague concept which allows its formalisation in several ways. The most popular measure of skewness is the one you mention, which was proposed more than 100 years ago. However, there are
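Editor's sketch (not from the answer above): the classical third-standardized-moment skewness the answer mentions can be computed in closed form for a two-component normal mixture, because raw moments of a mixture are weighted raw moments of the components. The function name and parameterisation are illustrative.

```python
def mixture_skewness(w, mu1, s1, mu2, s2):
    """Moment skewness of the mixture w*N(mu1, s1^2) + (1-w)*N(mu2, s2^2)."""
    # raw moments of N(mu, s^2): E[X] = mu, E[X^2] = mu^2 + s^2,
    # E[X^3] = mu^3 + 3*mu*s^2
    def raw_moments(mu, s):
        return mu, mu**2 + s**2, mu**3 + 3*mu*s**2

    ma = raw_moments(mu1, s1)
    mb = raw_moments(mu2, s2)
    # mixture raw moments are weighted component raw moments
    m1, m2, m3 = (w * a + (1 - w) * b for a, b in zip(ma, mb))
    variance = m2 - m1**2
    third_central = m3 - 3 * m1 * m2 + 2 * m1**3
    return third_central / variance**1.5
```

A symmetric 50/50 mixture of mirrored components is unskewed under this measure; putting most of the weight on one component makes the minority component act as a long tail on its side.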
46,286
What does 'same distribution' mean?
It is more general than this. It means that $F_X(t)={\mathbb P}(X\leq t) = {\mathbb P}(Y\leq t) = F_Y(t)$, for all $t$. Then, in particular, if the mean and variance exist, then their values coincide for these variables. The functions $F_X(t)={\mathbb P}(X\leq t)$ and $F_Y(t)= {\mathbb P}(Y\leq t)$ are termed the distr...
What does 'same distribution' mean?
It is more general than this. It means that $F_X(t)={\mathbb P}(X\leq t) = {\mathbb P}(Y\leq t) = F_Y(t)$, for all $t$. Then, in particular, if the mean and variance exist, then their values coincide
What does 'same distribution' mean? It is more general than this. It means that $F_X(t)={\mathbb P}(X\leq t) = {\mathbb P}(Y\leq t) = F_Y(t)$, for all $t$. Then, in particular, if the mean and variance exist, then their values coincide for these variables. The functions $F_X(t)={\mathbb P}(X\leq t)$ and $F_Y(t)= {\ma...
What does 'same distribution' mean? It is more general than this. It means that $F_X(t)={\mathbb P}(X\leq t) = {\mathbb P}(Y\leq t) = F_Y(t)$, for all $t$. Then, in particular, if the mean and variance exist, then their values coincide
46,287
What does 'same distribution' mean?
Strictly speaking, it means that the CDF is the same. That is, the type of distribution, the mean, the variance, and all parameters are all the same, if they are well-defined. For most of the commonly seen distributions, like normal distribution, if you can verify that type of distribution and all parameters are the sa...
What does 'same distribution' mean?
Strictly speaking, it means that the CDF is the same. That is, the type of distribution, the mean, the variance, and all parameters are all the same, if they are well-defined. For most of the commonly
What does 'same distribution' mean? Strictly speaking, it means that the CDF is the same. That is, the type of distribution, the mean, the variance, and all parameters are all the same, if they are well-defined. For most of the commonly seen distributions, like normal distribution, if you can verify that type of dist...
What does 'same distribution' mean? Strictly speaking, it means that the CDF is the same. That is, the type of distribution, the mean, the variance, and all parameters are all the same, if they are well-defined. For most of the commonly
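Editor's sketch (not from the answers above): the converse of the first answer's point fails — matching mean and variance does not imply the same distribution, because the CDFs must agree at every $t$. A standard-library illustration with two distributions sharing both moments:

```python
# Uniform(-sqrt(3), sqrt(3)) and N(0, 1) both have mean 0 and
# variance 1 (the uniform's variance is (2a)^2/12 = a^2/3 = 1),
# yet their distribution functions differ.
import math
from statistics import NormalDist

a = math.sqrt(3.0)

def uniform_cdf(t):
    """CDF of Uniform(-a, a), clamped to [0, 1]."""
    return min(max((t + a) / (2 * a), 0.0), 1.0)

normal_cdf = NormalDist(0.0, 1.0).cdf

# Same first two moments, but F_X(1) != F_Y(1):
diff_at_1 = abs(uniform_cdf(1.0) - normal_cdf(1.0))  # about 0.05
```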
46,288
Does adjustement completely remove the effect of the confounding variables?
I don't have a complete answer but can provide some thoughts: 1) Adjustment does remove the confounding effect, but only if the underlying causal pathways are correctly specified. There are occasions where adjustment can cause bias rather than decreasing biases. For more information on this issue, search for collider b...
Does adjustement completely remove the effect of the confounding variables?
I don't have a complete answer but can provide some thoughts: 1) Adjustment does remove the confounding effect, but only if the underlying causal pathways are correctly specified. There are occasions
Does adjustement completely remove the effect of the confounding variables? I don't have a complete answer but can provide some thoughts: 1) Adjustment does remove the confounding effect, but only if the underlying causal pathways are correctly specified. There are occasions where adjustment can cause bias rather than ...
Does adjustement completely remove the effect of the confounding variables? I don't have a complete answer but can provide some thoughts: 1) Adjustment does remove the confounding effect, but only if the underlying causal pathways are correctly specified. There are occasions
46,289
Likelihood ratio tests on linear mixed effect models
You just use an ANOVA test for this like Stéphane and the help file of the package suggest! >fm1 <- lmer(intdiff ~ stress * vowel_group + (1|speaker) + (1|word), data) >fm2 <- lmer(intdiff ~ stress * vowel_group + (1|speaker), data) >anova(fm1,fm2) It doesn't matter whether you set the model with the fewest df first ...
Likelihood ratio tests on linear mixed effect models
You just use an ANOVA test for this like Stéphane and the help file of the package suggest! >fm1 <- lmer(intdiff ~ stress * vowel_group + (1|speaker) + (1|word), data) >fm2 <- lmer(intdiff ~ stress *
Likelihood ratio tests on linear mixed effect models You just use an ANOVA test for this like Stéphane and the help file of the package suggest! >fm1 <- lmer(intdiff ~ stress * vowel_group + (1|speaker) + (1|word), data) >fm2 <- lmer(intdiff ~ stress * vowel_group + (1|speaker), data) >anova(fm1,fm2) It doesn't matte...
Likelihood ratio tests on linear mixed effect models You just use an ANOVA test for this like Stéphane and the help file of the package suggest! >fm1 <- lmer(intdiff ~ stress * vowel_group + (1|speaker) + (1|word), data) >fm2 <- lmer(intdiff ~ stress *
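Editor's sketch (not from the answer above) of what anova(fm1, fm2) computes under the hood: a chi-square likelihood-ratio test on twice the log-likelihood gap between the nested fits. This is restricted to the 1-df case (dropping a single random effect, as in the example), where the chi-square upper tail has the closed form erfc(sqrt(stat/2)); the function name and the log-likelihood values are invented for illustration.

```python
import math

def lrt_pvalue_df1(loglik_full, loglik_reduced):
    """p-value of a 1-df likelihood-ratio test for nested models."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    # chi-square(1) survival function: P(X > stat) = erfc(sqrt(stat/2))
    return math.erfc(math.sqrt(max(stat, 0.0) / 2.0))

# Hypothetical log-likelihoods for the two fits (made-up numbers):
p = lrt_pvalue_df1(-512.3, -518.9)
```

Note that when the dropped term is a variance component, the null value sits on the boundary of the parameter space, so this naive chi-square p-value tends to be conservative.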
46,290
Identify the parameters of the model $Y=\exp(\beta_0 + \beta_1 X + \beta_2 Z)+u_i$
There appears to be some discrepancy here regarding what a proof of identification entails and what you are trying to prove. Let me rewrite your model as $$ Y_i = \exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) + U_i $$ where $\boldsymbol{X}_i = [X_{1i},\ldots, X_{Ki}]'$, together with the assumption that $\mathbb{E}(U_i ...
Identify the parameters of the model $Y=\exp(\beta_0 + \beta_1 X + \beta_2 Z)+u_i$
There appears to be some discrepancy here regarding what a proof of identification entails and what you are trying to prove. Let me rewrite your model as $$ Y_i = \exp(\boldsymbol{X}_i'\boldsymbol{\b
Identify the parameters of the model $Y=\exp(\beta_0 + \beta_1 X + \beta_2 Z)+u_i$ There appears to be some discrepancy here regarding what a proof of identification entails and what you are trying to prove. Let me rewrite your model as $$ Y_i = \exp(\boldsymbol{X}_i'\boldsymbol{\beta}^0) + U_i $$ where $\boldsymbol{X...
Identify the parameters of the model $Y=\exp(\beta_0 + \beta_1 X + \beta_2 Z)+u_i$ There appears to be some discrepancy here regarding what a proof of identification entails and what you are trying to prove. Let me rewrite your model as $$ Y_i = \exp(\boldsymbol{X}_i'\boldsymbol{\b
46,291
Which logit or probit model should I use for multiple response / dependent variables?
You have two decent suggestions, but I don't think either of them is optimal. If you turn your individual, five-element vectors into a single, ordinal scalar, you will lose information. This can be acceptable if it's necessary, but if a better way is possible, you may want to avoid it. Multivariate generalized linea...
Which logit or probit model should I use for multiple response / dependent variables?
You have two decent suggestions, but I don't think either of them is optimal. If you turn your individual, five-element vectors into a single, ordinal scalar, you will lose information. This can be
Which logit or probit model should I use for multiple response / dependent variables? You have two decent suggestions, but I don't think either of them is optimal. If you turn your individual, five-element vectors into a single, ordinal scalar, you will lose information. This can be acceptable if it's necessary, but ...
Which logit or probit model should I use for multiple response / dependent variables? You have two decent suggestions, but I don't think either of them is optimal. If you turn your individual, five-element vectors into a single, ordinal scalar, you will lose information. This can be
46,292
Which logit or probit model should I use for multiple response / dependent variables?
Actually, I don't believe that either logit or probit regression is needed here. First, I would reduce the Y matrix to a simple 300x1 column vector of scores. This R code, for example, will reduce each row of Y to a number between -3 and +3, where larger values correspond to "more negative, later": f <- function(r...
Which logit or probit model should I use for multiple response / dependent variables?
Actually, I don't believe that either logit or probit regression is needed here. First, I would reduce the Y matrix to a simple 300x1 column vector of scores. This R code, for example, will reduce eac
Which logit or probit model should I use for multiple response / dependent variables? Actually, I don't believe that either logit or probit regression is needed here. First, I would reduce the Y matrix to a simple 300x1 column vector of scores. This R code, for example, will reduce each row of Y to a number betwee...
Which logit or probit model should I use for multiple response / dependent variables? Actually, I don't believe that either logit or probit regression is needed here. First, I would reduce the Y matrix to a simple 300x1 column vector of scores. This R code, for example, will reduce eac
46,293
Which logit or probit model should I use for multiple response / dependent variables?
You could try multivariate generalized linear models, if you wish to follow a regression approach. See SABRE package and http://www.amazon.com/Multivariate-Generalized-Linear-Mixed-Models/dp/1439813264.
Which logit or probit model should I use for multiple response / dependent variables?
You could try multivariate generalized linear models, if you wish to follow a regression approach. See SABRE package and http://www.amazon.com/Multivariate-Generalized-Linear-Mixed-Models/dp/143981326
Which logit or probit model should I use for multiple response / dependent variables? You could try multivariate generalized linear models, if you wish to follow a regression approach. See SABRE package and http://www.amazon.com/Multivariate-Generalized-Linear-Mixed-Models/dp/1439813264.
Which logit or probit model should I use for multiple response / dependent variables? You could try multivariate generalized linear models, if you wish to follow a regression approach. See SABRE package and http://www.amazon.com/Multivariate-Generalized-Linear-Mixed-Models/dp/143981326
46,294
Identifying fraudulent questionnaires
This is a fairly large topic in social psychology and questionnaire design. Here are some ideas: The person could be faking it, either good or bad. People do this in order to appear "good" to the person doing the study. There are scales to detect this sort of faking, such as the Crowne-Marlowe scale. These essential...
Identifying fraudulent questionnaires
This is a fairly large topic in social psychology and questionnaire design. Here are some ideas: The person could be faking it, either good or bad. People do this in order to appear "good" to the pe
Identifying fraudulent questionnaires This is a fairly large topic in social psychology and questionnaire design. Here are some ideas: The person could be faking it, either good or bad. People do this in order to appear "good" to the person doing the study. There are scales to detect this sort of faking, such as the ...
Identifying fraudulent questionnaires This is a fairly large topic in social psychology and questionnaire design. Here are some ideas: The person could be faking it, either good or bad. People do this in order to appear "good" to the pe
46,295
Lognormal distribution from world bank quintiles PPP data
Here is an example of quick and dirty R code to illustrate what Michael suggested: Define quantiles available: q<-c(0.1,0.2,0.4,0.6,0.8,0.9) Create artificial data and add some noise data <-jitter(qlnorm(q)) Create function to minimise fitfun <- function(p)sum(abs(data-qlnorm(q,p[1],p[2]))) Run the optimiser wi...
Lognormal distribution from world bank quintiles PPP data
Here is an example of quick and dirty R code to illustrate what Michael suggested: Define quantiles available: q<-c(0.1,0.2,0.4,0.6,0.8,0.9) Create artificial data and add some noise data <-jitt
Lognormal distribution from world bank quintiles PPP data Here is an example of quick and dirty R code to illustrate what Michael suggested: Define quantiles available: q<-c(0.1,0.2,0.4,0.6,0.8,0.9) Create artificial data and add some noise data <-jitter(qlnorm(q)) Create function to minimise fitfun <- function(...
Lognormal distribution from world bank quintiles PPP data Here is an example of quick and dirty R code to illustrate what Michael suggested: Define quantiles available: q<-c(0.1,0.2,0.4,0.6,0.8,0.9) Create artificial data and add some noise data <-jitt
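Editor's sketch, a Python analogue of the R snippet above: rather than a general-purpose optimiser like optim, one can exploit the fact that lognormal quantiles satisfy log q_p = mu + sigma * Phi^{-1}(p), so the two parameters fall out of an ordinary least-squares line fit on the log quantiles. Function names are illustrative.

```python
import math
from statistics import NormalDist

def fit_lognormal_from_quantiles(probs, values):
    """Estimate (mu, sigma) of a lognormal from (probability, quantile) pairs."""
    # log-quantiles are linear in the standard normal quantile:
    # log q_p = mu + sigma * Phi^{-1}(p)
    z = [NormalDist().inv_cdf(p) for p in probs]
    y = [math.log(v) for v in values]
    n = len(z)
    zbar, ybar = sum(z) / n, sum(y) / n
    sigma = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
             / sum((zi - zbar) ** 2 for zi in z))
    mu = ybar - sigma * zbar
    return mu, sigma

# Round trip: quantiles of a known lognormal(mu=1, sigma=0.5) recover
# the parameters (up to floating point).
probs = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
values = [math.exp(1.0 + 0.5 * NormalDist().inv_cdf(p)) for p in probs]
mu, sigma = fit_lognormal_from_quantiles(probs, values)
```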
46,296
Lognormal distribution from world bank quintiles PPP data
I'm giving another answer, since more details about the data were given. From the initial question it seemed that some quantiles are observed but that is not the case. The data is calculated in the following form. Calculate the total income of the whole population Divide the population into income groups Calculate the total income ...
Lognormal distribution from world bank quintiles PPP data
I'm giving another answer, since more details about the data were given. From the initial question it seemed that some quantiles are observed but that is not the case. The data is calculated in the follow
Lognormal distribution from world bank quintiles PPP data I'm giving another answer, since more details about the data were given. From the initial question it seemed that some quantiles are observed but that is not the case. The data is calculated in the following form. Calculate the total income of the whole population Divide...
Lognormal distribution from world bank quintiles PPP data I'm giving another answer, since more details about the data were given. From the initial question it seemed that some quantiles are observed but that is not the case. The data is calculated in the follow
46,297
Lognormal distribution from world bank quintiles PPP data
A lognormal distribution is determined by two parameters, the mean and the variance of the related normal distribution. If you have raw data you could fit a lognormal distribution by maximum likelihood. If not you can use a fit criterion such as least squares or minimum sum of absolute errors to fit the given percenti...
Lognormal distribution from world bank quintiles PPP data
A lognormal distribution is determined by two parameters, the mean and the variance of the related normal distribution. If you have raw data you could fit a lognormal distribution by maximum likelihoo
Lognormal distribution from world bank quintiles PPP data A lognormal distribution is determined by two parameters, the mean and the variance of the related normal distribution. If you have raw data you could fit a lognormal distribution by maximum likelihood. If not you can use a fit criterion such as least squares o...
Lognormal distribution from world bank quintiles PPP data A lognormal distribution is determined by two parameters, the mean and the variance of the related normal distribution. If you have raw data you could fit a lognormal distribution by maximum likelihoo
46,298
Lognormal distribution from world bank quintiles PPP data
A log-normal distribution is fully defined by the pair of parameters $\mu$ and $\sigma$. Since you want to fit this distribution to your data, it's sufficient to estimate these two values. Normally, you would have access to the raw data, and would apply the standard maximum likelihood estimators (MLEs) for $\mu$ an...
Lognormal distribution from world bank quintiles PPP data
A log-normal distribution is fully defined by the pair of parameters $\mu$ and $\sigma$. Since you want to fit this distribution to your data, it's sufficient to estimate these two values. Normally, y
Lognormal distribution from world bank quintiles PPP data A log-normal distribution is fully defined by the pair of parameters $\mu$ and $\sigma$. Since you want to fit this distribution to your data, it's sufficient to estimate these two values. Normally, you would have access to the raw data, and would apply the stan...
Lognormal distribution from world bank quintiles PPP data A log-normal distribution is fully defined by the pair of parameters $\mu$ and $\sigma$. Since you want to fit this distribution to your data, it's sufficient to estimate these two values. Normally, y
46,299
Performing multiple linear regressions, in Excel, that have a common x-intercept?
There are several straightforward ways to do this in Excel. Perhaps the simplest uses LINEST to fit the lines conditional on a trial value of the x-intercept. One of the outputs of this function is the mean squared residual. Use Solver to find the x-intercept minimizing the mean squared residual. If you take some car...
Performing multiple linear regressions, in Excel, that have a common x-intercept?
There are several straightforward ways to do this in Excel. Perhaps the simplest uses LINEST to fit the lines conditional on a trial value of the x-intercept. One of the outputs of this function is t
Performing multiple linear regressions, in Excel, that have a common x-intercept? There are several straightforward ways to do this in Excel. Perhaps the simplest uses LINEST to fit the lines conditional on a trial value of the x-intercept. One of the outputs of this function is the mean squared residual. Use Solver t...
Performing multiple linear regressions, in Excel, that have a common x-intercept? There are several straightforward ways to do this in Excel. Perhaps the simplest uses LINEST to fit the lines conditional on a trial value of the x-intercept. One of the outputs of this function is t
46,300
Performing multiple linear regressions, in Excel, that have a common x-intercept?
It is very unlikely that Excel would be able to do this easily or reliably (you should really not use Excel for any but the simplest stats, and sometimes not even then). If you know (or think you know) what the common x intercept is (not just estimate it from the data) then you can subtract that value from all the x va...
Performing multiple linear regressions, in Excel, that have a common x-intercept?
It is very unlikely that Excel would be able to do this easily or reliably (you should really not use Excel for any but the simplest stats, and sometimes not even then). If you know (or think you know
Performing multiple linear regressions, in Excel, that have a common x-intercept? It is very unlikely that Excel would be able to do this easily or reliably (you should really not use Excel for any but the simplest stats, and sometimes not even then). If you know (or think you know) what the common x intercept is (not ...
Performing multiple linear regressions, in Excel, that have a common x-intercept? It is very unlikely that Excel would be able to do this easily or reliably (you should really not use Excel for any but the simplest stats, and sometimes not even then). If you know (or think you know
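Editor's sketch (not from the answers above) of the LINEST-plus-Solver idea in Python: for a trial x-intercept a, the best slope for each data set is the no-intercept least-squares slope of y on (x - a); a crude grid search over a then minimises the pooled squared error, standing in for Excel's Solver. Function names and the toy data are illustrative.

```python
def sse_for_intercept(a, datasets):
    """Total squared error when every line is forced through (a, 0)."""
    total = 0.0
    for xs, ys in datasets:
        sxx = sum((x - a) ** 2 for x in xs)
        sxy = sum((x - a) * y for x, y in zip(xs, ys))
        b = sxy / sxx  # least-squares slope of the line through (a, 0)
        total += sum((y - b * (x - a)) ** 2 for x, y in zip(xs, ys))
    return total

def common_x_intercept(datasets, lo, hi, steps=10000):
    """Grid-search the shared x-intercept minimising the pooled SSE."""
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(grid, key=lambda a: sse_for_intercept(a, datasets))

# Two noiseless lines sharing the x-intercept a = 2:
d1 = ([3, 4, 5], [1, 2, 3])   # y = 1*(x - 2)
d2 = ([3, 4, 5], [2, 4, 6])   # y = 2*(x - 2)
a_hat = common_x_intercept([d1, d2], 0, 4)
```

In practice one would replace the grid search with a proper one-dimensional minimiser, and take care near trial intercepts that coincide with all x values of a data set (where the slope is undefined).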