KKT versus unconstrained formulation of lasso regression
The two formulations are equivalent in the sense that for every value of $t$ in the first formulation, there exists a value of $\lambda$ for the second formulation such that the two formulations have the same minimizer $\beta$.
Here's the justification:
Consider the lasso formulation:
$$f(\beta)=\frac{1}{2}||Y - X\beta||_2^2 + \lambda ||\beta||_1$$
Let the minimizer be $\beta^*$ and let $b=||\beta^*||_1$. My claim is that if you set $t=b$ in the first formulation, then the solution of the first formulation will also be $\beta^*$. Here's the proof:
Consider the first formulation
$$\min_\beta \frac{1}{2}||Y - X\beta||_2^2 \quad \text{s.t. } ||\beta||_1\leq b$$
Suppose, for contradiction, that this constrained formulation has a solution $\hat{\beta}$ such that $||\hat{\beta}||_1<||\beta^*||_1=b$ (note the strictly-less-than sign). Then, since $\hat{\beta}$ attains a residual sum of squares no larger than $\beta^*$'s and has a strictly smaller $\ell_1$ norm, $f(\hat{\beta})<f(\beta^*)$, contradicting the fact that $\beta^*$ is a solution for the lasso. Thus, the solution to the first formulation is also $\beta^*$.
Since $t=b$, the complementary slackness condition is satisfied at the solution point $\beta^*$.
So, given a lasso formulation with $\lambda$, you construct a constrained formulation using a $t$ equal to the value of the $l_1$ norm of the lasso solution. Conversely, given a constrained formulation with $t$, you find a $\lambda$ such that the solution to the lasso will be equal to the solution of the constrained formulation.
(If you know about subgradients, you can find this $\lambda$ by solving the equation $X^T(y-X\beta^*)=\lambda z^*$, where $z^* \in \partial ||\beta^*||_1$.)
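For a concrete illustration, here is a small numerical check of that subgradient equation. This is a sketch, not part of the original argument: it uses scikit-learn's Lasso, which minimizes $\frac{1}{2n}||y-X\beta||_2^2 + \alpha||\beta||_1$, so the stationarity condition carries an extra $1/n$ factor, and it assumes no intercept.

```python
# Numerical check of the subgradient (stationarity) condition for the lasso.
# sklearn's Lasso minimizes (1/(2n))||y - Xb||^2 + alpha*||b||_1, so
# stationarity reads X^T (y - Xb) / n = alpha * z with z in the
# subdifferential of ||b||_1.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.0, 0.5]) + 0.1 * rng.normal(size=100)

alpha = 0.1
b = Lasso(alpha=alpha, fit_intercept=False, tol=1e-12,
          max_iter=100_000).fit(X, y).coef_

z = X.T @ (y - X @ b) / len(y) / alpha   # implied subgradient of ||b||_1
active = b != 0
print(np.allclose(z[active], np.sign(b[active]), atol=1e-4))  # z = sign(b) on the active set
print(np.all(np.abs(z[~active]) <= 1 + 1e-4))                 # |z| <= 1 off the active set
```

At the computed solution, the implied subgradient entries equal $\pm 1$ on the active set and lie in $[-1, 1]$ elsewhere, exactly as the equation requires.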
KKT versus unconstrained formulation of lasso regression
I think that elexhobby's idea for this proof is a good one, but I don't think it's completely correct.
Showing that the existence of a solution $\hat{\beta}$ of the first formulation with $\|\hat{\beta}\| < \|\beta^*\|$ leads to a contradiction only establishes that necessarily $\|\hat{\beta}\| = \|\beta^*\|$, not that $\hat{\beta} = \beta^*$.
I suggest, instead, that we proceed as follows:
For convenience, let's denote by $P_1$ and $P_2$ the first and second formulation respectively. Assume that $P_2$ has a unique solution, $\beta^*$, with $\|\beta^*\|=b$. Let $P_1$ have a solution $\hat{\beta} \neq \beta^*$. Then we have $\|\hat{\beta}\| \leq \|\beta^*\|$ (it cannot be greater because of the constraint) and therefore $f(\hat{\beta}) \leq f(\beta^*)$. If $f(\hat{\beta}) < f(\beta^*)$, then $\beta^*$ is not the solution to $P_2$, which contradicts our assumption. If $f(\hat{\beta}) = f(\beta^*)$, then $\hat{\beta} = \beta^*$, since we assumed the solution to be unique.
However, it may be the case that the lasso has multiple solutions. By Lemma 1 of arxiv.org/pdf/1206.0313.pdf we know that all of these solutions have the same $\ell_1$-norm (and the same minimum value, of course). We set that norm as the constraint for $P_1$ and proceed.
Let's denote by $S$ the set of solutions to $P_2$, with $\|\beta\|=b \; \forall \beta \in S$. Let $P_1$ have a solution $\hat{\beta} \notin S$. Then we have $\|\hat{\beta}\| \leq \|\beta\| \; \forall \beta \in S$ and therefore $f(\hat{\beta}) \leq f(\beta) \; \forall \beta \in S$. If $f(\hat{\beta}) = f(\beta)$ for some $\beta \in S$ (and hence for all of them), then $\hat{\beta} \in S$, which contradicts our assumptions. If $f(\hat{\beta}) < f(\beta)$ for some $\beta \in S$, then $S$ is not the set of solutions to $P_2$. Therefore, every solution to $P_1$ is in $S$, i.e. any solution to $P_1$ is also a solution to $P_2$. It would remain to prove that the converse holds too.
Test for linear separability
Well, support vector machines (SVM) are probably what you are looking for. For example, an SVM with an RBF kernel maps the features into a higher-dimensional space and tries to separate the classes there by a linear hyperplane; with a plain linear kernel it looks for a separating hyperplane in the original feature space. This is a nice short SVM video illustrating the idea.
You may wrap the SVM with a search method for feature selection (wrapper model) and try to see whether any subset of your features can linearly separate the classes you have.
There are many interesting tools for using SVM including LIBSVM, MSVMPack and Scikit-learn SVM.
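As a sketch of this idea using scikit-learn (mentioned above): the very large C, which pushes the soft-margin SVM toward a hard margin, and the synthetic two-cluster data are my assumptions, not part of the answer.

```python
# Probe linear separability with a linear-kernel SVM (scikit-learn).
# Assumption: with a very large C the soft margin approximates a hard
# margin, so perfect training accuracy indicates separable classes.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
A = rng.normal(loc=-5.0, size=(50, 2))   # class 0, centered at (-5, -5)
B = rng.normal(loc=+5.0, size=(50, 2))   # class 1, centered at (+5, +5)
X = np.vstack([A, B])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1e6).fit(X, y)
separable = clf.score(X, y) == 1.0       # zero training error?
print("linearly separable:", separable)
```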
Test for linear separability
Computationally, the most effective way to decide whether two sets of points are linearly separable is by applying linear programming. GLPK is perfect for that purpose, and pretty much every high-level language offers an interface for it - R, Python, Octave, Julia, etc.
With respect to the answer suggesting the usage of SVMs:
Using SVMs is a sub-optimal way to verify linear separability, for two reasons:
SVMs are soft-margin classifiers. That means a linear-kernel SVM might settle for a separating plane which does not separate perfectly even though perfect separation is actually possible. If you then check the error rate, it will not be 0, and you will falsely conclude that the two sets are not linearly separable. This issue can be attenuated by choosing a very high cost coefficient C - but that itself comes at a high computational cost.
SVMs are maximum-margin classifiers. That means the algorithm tries to find a separating plane that not only separates the two classes but also stays as far away from both as possible. Again, this is a feature that increases the computational effort unnecessarily, as it calculates something that is not relevant to answering the question of linear separability.
Let's say you have a set of points $A$ and a set of points $B$ in $\mathbb{R}^d$. Then you minimize the constant objective $0$ subject to the conditions
$$w^\top a + \theta \leq -1 \quad \forall a \in A, \qquad w^\top b + \theta \geq 1 \quad \forall b \in B$$
(stacked together, these inequalities form the constraint matrix of the linear program). "Minimizing 0" effectively means that you don't need to actually optimize an objective function, because feasibility alone tells you whether the sets are linearly separable.
In the end, $(w, \theta)$ defines the separating plane $w^\top x + \theta = 0$.
In case you are interested in a working example in R or the math details, then check this out.
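Here is one way the LP feasibility check could look - a sketch using scipy.optimize.linprog rather than GLPK; the variable names and the margin of 1 in the constraints are my choices:

```python
# Linear-separability test as an LP feasibility problem. We look for
# (w, theta) with  w.a + theta <= -1 for every a in A  and
# w.b + theta >= +1 for every b in B, under a constant objective of 0:
# feasibility alone answers the question.
import numpy as np
from scipy.optimize import linprog

def linearly_separable(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = A.shape[1]
    # Variables x = [w_1, ..., w_d, theta]; linprog wants A_ub @ x <= b_ub.
    A_ub = np.vstack([
        np.hstack([A, np.ones((len(A), 1))]),     #  w.a + theta <= -1
        np.hstack([-B, -np.ones((len(B), 1))]),   # -w.b - theta <= -1
    ])
    b_ub = -np.ones(len(A) + len(B))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success  # feasible iff the sets are strictly separable

print(linearly_separable([[0, 0], [1, 0]], [[3, 3], [4, 3]]))  # True
print(linearly_separable([[0, 0], [2, 2]], [[1, 1], [3, 3]]))  # False
```

Note the explicit `bounds=(None, None)`: linprog's default lower bound of 0 would wrongly restrict $w$ and $\theta$ to be non-negative.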
Test for linear separability
A linear perceptron is guaranteed to find a separating hyperplane if one exists.
This approach is not efficient for large dimensions; computationally, the most effective way to decide whether two sets of points are linearly separable is by applying linear programming, as mentioned by @Raffael.
A quick solution, then, is to train a perceptron. A code example solving this with a perceptron in Matlab is here
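For readers without Matlab, a minimal perceptron check might look like this. It is a sketch; the epoch cap is a heuristic of mine, since the perceptron is only guaranteed to terminate on separable data:

```python
# Perceptron separability probe. If the data are linearly separable, the
# perceptron converges in finitely many updates; we cap the epochs so the
# loop also ends (inconclusively) on non-separable data.
import numpy as np

def perceptron_separable(X, y, max_epochs=1000):
    """y in {-1, +1}. Returns True if a perfect separator was found."""
    X = np.hstack([X, np.ones((len(X), 1))])  # absorb the bias term
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:     # misclassified (or on the boundary)
                w += yi * xi           # perceptron update
                errors += 1
        if errors == 0:
            return True                # converged: data are separable
    return False                       # inconclusive after max_epochs

X = np.array([[0., 0.], [1., 0.], [3., 3.], [4., 3.]])
y = np.array([-1, -1, 1, 1])
print(perceptron_separable(X, y))  # True
```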
Why do Lars and Glmnet give different solutions for the Lasso problem?
Finally, we were able to produce the same solution with both methods! The first issue is that glmnet solves the lasso problem as stated in the question, while lars uses a slightly different normalization in the objective function: it replaces $\frac{1}{2N}$ by $\frac{1}{2}$. Second, the two methods normalize the data differently, so the normalization must be switched off when calling them.
To reproduce that, and see that the same solutions for the lasso problem can be computed using lars and glmnet, the following lines in the code above must be changed:
la <- lars(X,Y,intercept=TRUE, max.steps=1000, use.Gram=FALSE)
to
la <- lars(X,Y,intercept=TRUE, normalize=FALSE, max.steps=1000, use.Gram=FALSE)
and
glm2 <- glmnet(X,Y,family="gaussian",lambda=0.5*la$lambda,thresh=1e-16)
to
glm2 <- glmnet(X,Y,family="gaussian",lambda=1/nbSamples*la$lambda,standardize=FALSE,thresh=1e-16)
Why do Lars and Glmnet give different solutions for the Lasso problem?
Obviously, if the methods fit different models you will get different answers. Subtracting off the intercept term does not yield the no-intercept model, because the best-fitting coefficients will change, and you do not change them the way you are approaching it. You need to fit the same model with both methods if you want the same, or nearly the same, answers.
Why do Lars and Glmnet give different solutions for the Lasso problem?
The results have to be the same. The lars package uses type="lar" by default; change this value to type="lasso". Also, lower the parameter thresh (e.g. thresh=1e-16) for glmnet, since its coordinate descent stops when a convergence threshold is reached.
How does it make sense to do OLS after LASSO variable selection?
There was a similar question a few days ago which had the relevant reference:
Belloni, A., Chernozhukov, V., and Hansen, C. (2014) "Inference on Treatment Effects after Selection among High-Dimensional Controls", Review of Economic Studies, 81(2), pp. 608-50 (link)
At least for me the paper is a pretty tough read, because the proofs behind this relatively simple procedure are fairly elaborate. Suppose you are interested in estimating a model like
$$y_i = \alpha T_i + X_i'\beta + \epsilon_i$$
where $y_i$ is your outcome, $T_i$ is some treatment effect of interest, and $X_i$ is a vector of potential controls. The target parameter is $\alpha$. Assuming that most of the variation in your outcome is explained by the treatment and a sparse set of controls, Belloni et al. (2014) develop a double-robust selection method which provides correct point estimates and valid confidence intervals. This sparsity assumption is important though.
If $X_i$ includes a few important predictors of $y_i$ but you don't know which they are (either single variables, their higher order polynomials, or interactions with other variables), you can perform a three step selection procedure:
1. regress $y_i$ on $X_i$, their squares, and interactions, and select important predictors using LASSO
2. regress $T_i$ on $X_i$, their squares, and interactions, and select important predictors using LASSO
3. regress $y_i$ on $T_i$ and all the variables which were selected in either of the first two steps
They provide proofs of why this works and why you get correct confidence intervals etc. out of this method. They also show that if you only perform a LASSO selection on the above regression and then regress the outcome on the treatment and the selected variables, you get wrong point estimates and false confidence intervals, as Björn already said.
The purpose of doing this is twofold. First, comparing your initial model, where variable selection was guided by intuition or theory, to the double-robust selection model gives you an idea of how good your first model was. Perhaps your first model forgot some important squared or interaction terms and thus suffers from a misspecified functional form or omitted variables. Secondly, the Belloni et al. (2014) method can improve inference on your target parameter because redundant regressors are penalized away in their procedure.
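A toy sketch of the three-step procedure, assuming scikit-learn's LassoCV for the selection steps and plain OLS for the final fit; the simulated data and variable names are illustrative, not from the paper:

```python
# Toy run of the double-selection procedure. The data are simulated so
# that the true treatment effect is alpha = 2 and the treatment is
# confounded by the first control.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(42)
n, p = 500, 30
X = rng.normal(size=(n, p))                       # potential controls
T = X[:, 0] + rng.normal(size=n)                  # treatment depends on X[:, 0]
y = 2.0 * T + X[:, 0] + 0.1 * rng.normal(size=n)  # outcome, true alpha = 2

def lasso_support(target, X):
    # Indices of the controls with nonzero LASSO coefficients.
    return set(np.flatnonzero(LassoCV(cv=5).fit(X, target).coef_))

# Steps 1 and 2: controls predictive of y, union of controls predictive of T.
selected = sorted(lasso_support(y, X) | lasso_support(T, X))

# Step 3: OLS of y on T plus the union of selected controls.
Z = np.column_stack([T, X[:, selected]])
alpha_hat = LinearRegression().fit(Z, y).coef_[0]
print(alpha_hat)  # close to the true effect of 2
```

Regressing $y$ on $T$ alone here would give a badly biased estimate, because the confounder $X_0$ drives both treatment and outcome; the union of the two selection steps recovers it.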
How does it make sense to do OLS after LASSO variable selection?
Performing variable selection and then re-running the analysis, as if no variable selection had happened and the selected model had been intended from the start, typically leads to exaggerated effect sizes, invalid p-values, and confidence intervals with below-nominal coverage. Perhaps if the sample size is very large and there are a few huge effects and a lot of null effects, LASSO+OLS might not be too badly affected by this, but other than that I cannot see any reasonable justification, and in that case the LASSO estimates ought to be just fine, too.
How does it make sense to do OLS after LASSO variable selection?
It may be an excellent idea to run an OLS regression after LASSO, simply to double-check that your LASSO variable selection made sense. Very often, when you rerun the model using OLS regression, you uncover that many of the variables selected by LASSO are nowhere near statistically significant and/or have the wrong sign. That may invite you to use another variable selection method that, given your data set, may be much more robust than LASSO.
LASSO does not always work as intended. This is due to its fitting algorithm, which includes a penalty factor that penalizes the model against higher regression coefficients. It seems like a good idea, as people think it always reduces model overfitting and improves predictions (on new data). In reality it very often does the opposite: it increases model under-fitting and weakens prediction accuracy. You can see many examples of that by searching the Internet for Images, specifically for "LASSO MSE graph." Whenever such a graph shows the lowest MSE at the very beginning of the X-axis, it shows a LASSO that has failed (increased model under-fitting).
The above unintended consequences are due to the penalty algorithm. Because of it, LASSO has no way of distinguishing between a strong causal variable with predictive information and an associated high regression coefficient, and a weak variable with no explanatory or predictive information value and a low regression coefficient. Often, LASSO will prefer the weak variable over the strong causal one. It may at times even flip the directional sign of a variable (from a direction that makes sense to an opposite one that does not). You can see many examples of that by searching the Internet for Images, specifically for "LASSO coefficient path".
11,712
|
Safely determining sample size for A/B testing
|
The most common method for doing this kind of testing is with binomial proportion confidence intervals (see http://bit.ly/fa2K7B$^\dagger$)
You will never be able to know the "true" conversion rate of the two paths, but this gives you the ability to say something to the effect of "With 99% confidence, A is more effective at converting than B".
For example: Let's assume that you have run 1000 trials down path A. Of these 1000 trials, 121 were successful conversions (a conversion rate of 0.121), and we would like a 99% confidence interval around this result. The z-score for a 99% confidence interval is 2.576 (you can look this up in a table), so according to the formula:
$$
\begin{aligned}
\hat p &\pm 2.576\left(\sqrt{\frac{0.121 \times (1 - 0.121)}{1000}}\right) \\
\hat p &\pm 0.027
\end{aligned}
$$
So with 99% confidence we can say that $0.094 \le \hat p \le 0.148$, where $\hat p$ is the "true" conversion rate of process A.
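This calculation is easy to script. An illustrative Python sketch (`proportion_ci` is just a helper written for this example, using the same normal approximation as above):

```python
from statistics import NormalDist

def proportion_ci(successes, trials, confidence=0.99):
    """Normal-approximation (Wald) CI for a binomial proportion."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)   # 2.576 for 99%
    p_hat = successes / trials
    half = z * (p_hat * (1 - p_hat) / trials) ** 0.5
    return p_hat - half, p_hat + half

lo, hi = proportion_ci(121, 1000)   # the path-A example from the text
print(round(lo, 3), round(hi, 3))   # -> 0.094 0.148
```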
If we construct a similar interval for process B, we can compare the intervals. If the intervals don't overlap, then we can say with 98% confidence that one is better than the other. (Remember, we're only 99% confident about each interval, so our overall confidence about the comparison is 0.99 * 0.99)
If the intervals do overlap, then we have to run more trials, or decide that they are too similar in performance to distinguish, which brings us to the tricky part - determining $N$, the number of trials. I'm not familiar with other methods, but with this method, you aren't going to be able to determine $N$ up front unless you have an accurate estimate of the performance of both A and B up front. Otherwise, you are just going to have to run trials until you get samples so that the intervals separate.
Best of luck to you. (I'm rooting for process B, by the way).
$^\dagger$ The link doesn't work.
|
11,713
|
Safely determining sample size for A/B testing
|
IMHO, as far as it goes, the post is heading in the right direction. However:
The proposed method implicitly relies on two assumptions: the baseline conversion rate and the expected amount of change. The required sample size depends very much on how well these assumptions are met. I recommend that you calculate required sample sizes for several combinations of p1 and p2 that you think are realistic. That will give you a feeling for how reliable the sample size calculation actually is.
> power.prop.test (p1=0.1, p2 = 0.1*1.1, sig.level=0.05, power=0.8)
Two-sample comparison of proportions power calculation
n = 14750.79
p1 = 0.1
p2 = 0.11
sig.level = 0.05
power = 0.8
alternative = two.sided
NOTE: n is number in *each* group
> power.prop.test (p1=0.09, p2 = 0.09*1.1, sig.level=0.05, power=0.8)
Two-sample comparison of proportions power calculation
n = 16582.2
p1 = 0.09
p2 = 0.099
sig.level = 0.05
power = 0.8
alternative = two.sided
NOTE: n is number in *each* group
So if the actual conversion rate is 9% instead of 10%, you need another 2000 cases for each scenario to detect the 10%-more-than-baseline conversion rate of the new form.
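For reference, the power.prop.test numbers above can be reproduced without R, e.g. with a short Python sketch of the normal-approximation formula that power.prop.test solves (`n_per_group` is a hypothetical helper written for this example):

```python
from statistics import NormalDist

def n_per_group(p1, p2, sig_level=0.05, power=0.8):
    """Per-group n for a two-sided two-proportion test, using the normal
    approximation with pooled variance under H0 (as power.prop.test does)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - sig_level / 2)
    z_b = nd.inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return (num / (p1 - p2)) ** 2

print(round(n_per_group(0.10, 0.11), 1))    # matches the first R run above
print(round(n_per_group(0.09, 0.099), 1))   # matches the second R run above
```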
After the test is done, you can calculate confidence intervals for the proportions based on your actual observations.
The last conclusion under 3. (about testing multiple scenarios) is not quite correct. To adjust for multiple testing (in the example, two tests), it is not enough to simply add another $n$ cases for each new scenario:
If neither B nor C is better than the original version A, and the two tests A ./. B and A ./. C are done as proposed there with $n$ cases for each scenario, then the probability of falsely changing away from A is 1 − (1 − α)² ≈ 10% (α: the accepted probability of a type I error; sig.level above). In other words, it is almost twice as large as specified initially. The second problem with that approach is: can you really do without comparing B ./. C? What are you going to do if you find both B and C better than A?
|
11,714
|
Safely determining sample size for A/B testing
|
Instead of checking whether confidence intervals overlap, you can calculate a Z-score directly. This is easier to implement algorithmically, and statistical libraries will help.
Take a look here.
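A sketch of that approach (illustrative Python; the pooled two-proportion z-test is a standard choice for this comparison, and the counts below are made up):

```python
from statistics import NormalDist

def two_proportion_z(s1, n1, s2, n2):
    """Pooled z-test for the difference between two conversion rates."""
    p1, p2 = s1 / n1, s2 / n2
    p_pool = (s1 + s2) / (n1 + n2)                       # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))         # two-sided
    return z, p_value

z, p = two_proportion_z(150, 1000, 121, 1000)  # hypothetical A/B counts
```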
|
11,715
|
How to interpret confidence interval of the difference in means in one sample T-test?
|
This is not an easy thing, even for respected statisticians. Look at one recent attempt by Nate Silver:
... if I asked you to tell me how often your commute takes 10 minutes longer than average — something that requires some version of a confidence interval — you’d have to think about that a little bit, ...
(from the FiveThirtyEight blog in the New York Times, 9/29/10.) This is not a confidence interval. Depending on how you interpret it, it's either a tolerance interval or a prediction interval. (Otherwise there's nothing the matter with Mr. Silver's excellent discussion of estimating probabilities; it's a good read.) Many other web sites (particularly those with an investment focus) similarly confuse confidence intervals with other kinds of intervals.
The New York Times has made efforts to clarify the meaning of the statistical results it produces and reports on. The fine print beneath many polls includes something like this:
In theory, in 19 cases out of 20, results based on such samples of all adults will differ by no more than three percentage points in either direction from what would have been obtained by seeking to interview all American adults.
(e.g., How the Poll Was Conducted, 5/2/2011.)
A little wordy, perhaps, but clear and accurate: this statement characterizes the variability of the sampling distribution of the poll results. That's getting close to the idea of confidence interval, but it is not quite there. One might consider using such wording in place of confidence intervals in many cases, however.
When there is so much potential confusion on the internet, it is useful to turn to authoritative sources. One of my favorites is Freedman, Pisani, & Purves' time-honored text, Statistics. Now in its fourth edition, it has been used at universities for over 30 years and is notable for its clear, plain explanations and focus on classical "frequentist" methods. Let's see what it says about interpreting confidence intervals:
The confidence level of 95% says something about the sampling procedure...
[at p. 384; all quotations are from the third edition (1998)]. It continues,
If the sample had come out differently, the confidence interval would have been different. ... For about 95% of all samples, the interval ... covers the population percentage, and for the other 5% it does not.
[p. 384]. The text says much more about confidence intervals, but this is enough to help: its approach is to move the focus of discussion onto the sample, at once bringing rigor and clarity to the statements. We might therefore try the same thing in our own reporting. For instance, let's apply this approach to describing a confidence interval of [34%, 40%] around a reported percentage difference in a hypothetical experiment:
"This experiment used a randomly selected sample of subjects and a random selection of controls. We report a confidence interval from 34% to 40% for the difference. This quantifies the reliability of the experiment: if the selections of subjects and controls had been different, this confidence interval would change to reflect the results for the chosen subjects and controls. In 95% of such cases the confidence interval would include the true difference (between all subjects and all controls) and in the other 5% of cases it would not. Therefore it is likely--but not certain--that this confidence interval includes the true difference: that is, we believe the true difference is between 34% and 40%."
(This is my text, which surely can be improved: I invite editors to work on it.)
A long statement like this is somewhat unwieldy. In actual reports most of the context--random sampling, subjects and controls, possibility of variability--will already have been established, making half of the preceding statement unnecessary. When the report establishes that there is sampling variability and exhibits a probability model for the sample results, it is usually not difficult to explain a confidence interval (or other random interval) as clearly and rigorously as the audience needs.
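The coverage claim quoted from Freedman, Pisani, & Purves is easy to check by simulation. An illustrative Python sketch (not part of the original answer): repeatedly draw samples, build a 95% Wald interval for a proportion, and count how often the interval covers the truth.

```python
import random

random.seed(1)
p_true, n, reps, z = 0.5, 100, 2000, 1.96
covered = 0
for _ in range(reps):
    # one simulated sample of n Bernoulli(p_true) trials
    successes = sum(random.random() < p_true for _ in range(n))
    p_hat = successes / n
    half = z * (p_hat * (1 - p_hat) / n) ** 0.5
    covered += (p_hat - half) <= p_true <= (p_hat + half)
print(covered / reps)   # close to 0.95 (slightly below, as the Wald interval is approximate)
```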
|
11,716
|
How to interpret confidence interval of the difference in means in one sample T-test?
|
From a pedantic technical viewpoint, I personally don't think there is a "clear wording" of the interpretation of confidence intervals.
I would interpret a confidence interval as: there is a 95% probability that the 95% confidence interval covers the true mean difference
An interpretation of this is that if we were to repeat the whole experiment $N$ times, under the same conditions, then we would have $N$ different confidence intervals. The confidence level is the proportion of these intervals which contain the true mean difference.
My own personal quibble with the logic of such reasoning is that this explanation of confidence intervals requires us to ignore the other $N-1$ samples when calculating our confidence interval. For instance if you had a sample size of 100, would you then go and calculate 100 "1-sample" 95% confidence intervals?
But note that this is all in the philosophy. Confidence intervals are best left vague in the explanation I think. They give good results when used properly.
|
11,717
|
How to interpret confidence interval of the difference in means in one sample T-test?
|
The rough answer to the question is that a 95% confidence interval allows you to be 95% confident that the true parameter value lies within the interval. However, that rough answer is both incomplete and inaccurate.
The incompleteness lies in the fact that it is not clear that "95% confident" means anything concrete, or if it does, then that concrete meaning would not be universally agreed upon by even a small sample of statisticians. The meaning of confidence depends on what method was used to obtain the interval and on what model of inference is being used (which I hope will become clearer below).
The inaccuracy lies in the fact that many confidence intervals are not designed to tell you anything about the location of the true parameter value for the particular experimental case that yielded the confidence interval! That will be surprising to many, but it follows directly from the Neyman-Pearson philosophy that is clearly stated in this quote from their 1933 paper "On the Problem of the Most Efficient Tests of Statistical Hypotheses":
We are inclined to think that as far as a particular hypothesis is concerned, no test based upon the theory of probability can by itself provide any valuable evidence of the truth or falsehood of that hypothesis. But we may look at the purpose of tests from another view-point. Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong.
Intervals that are based on the 'inversion' of N-P hypothesis tests will therefore inherit from that test the nature of having known long-run error properties without allowing inference about the properties of the experiment that yielded them! My understanding is that this protects against inductive inference, which Neyman apparently considered to be an abomination.
Neyman explicitly lays claim to the term ‘confidence interval’ and to the origin of the theory of confidence intervals in his 1941 Biometrika paper “Fiducial argument and the theory of confidence intervals”. In a sense, then, anything that is properly a confidence interval plays by his rules and so the meaning of an individual interval can only be expressed in terms of the long run rate at which intervals calculated by that method contain (cover) the relevant true parameter value.
We now need to fork the discussion. One strand follows the notion of ‘coverage’, and the other follows non-Neymanian intervals that are like confidence intervals. I will defer the former so that I can complete this post before it becomes too long.
There are many different approaches that yield intervals that could be called non-Neymanian confidence intervals. The first of these is Fisher’s fiducial intervals. (The word ‘fiducial’ may scare many and elicit derisive smirks from others, but I will leave that aside...) For some types of data (e.g. normal with unknown population variance) the intervals calculated by Fisher’s method are numerically identical to the intervals that would be calculated by Neyman’s method. However, they invite interpretations that are diametrically opposed. Neymanian intervals reflect only long run coverage properties of the method, whereas Fisher’s intervals are intended to support inductive inference concerning the true parameter values for the particular experiment that was performed.
The fact that one set of interval bounds can come from methods based on either of two philosophically distinct paradigms leads to a really confusing situation--the results can be interpreted in two contradictory ways. From the fiducial argument there is a 95% likelihood that a particular 95% fiducial interval will contain the true parameter value. From Neyman’s method we know only that 95% of intervals calculated in that manner will contain the true parameter value, and have to say confusing things about the probability of the interval containing the true parameter value being unknown but either 1 or 0.
To a large extent, Neyman’s approach has held sway over Fisher’s. That is most unfortunate, in my opinion, because it does not lead to a natural interpretation of the intervals. (Re-read the quote above from Neyman and Pearson and see if it matches your natural interpretation of experimental results. Most likely it does not.)
If an interval can be correctly interpreted in terms of global error rates but also correctly in local inferential terms, I don’t see a good reason to bar interval users from the more natural interpretation afforded by the latter. Thus my suggestion is that the proper interpretation of a confidence interval is BOTH of the following:
Neymanian: This 95% interval was constructed by a method that yields intervals that cover the true parameter value on 95% of occasions in the long run (...of our statistical experience).
Fisherian: This 95% interval has a 95% probability of covering the true parameter value.
(Bayesian and likelihood methods will also yield intervals with desirable frequentist properties. Such intervals invite slightly different interpretations that will both probably feel more natural than the Neymanian.)
|
How to interpret confidence interval of the difference in means in one sample T-test?
|
The rough answer to the question is that a 95% confidence interval allows you to be 95% confident that the true parameter value lies within the interval. However, that rough answer is both incomplete
|
How to interpret confidence interval of the difference in means in one sample T-test?
The rough answer to the question is that a 95% confidence interval allows you to be 95% confident that the true parameter value lies within the interval. However, that rough answer is both incomplete and inaccurate.
The incompleteness lies in the fact that it is not clear that "95% confident" means anything concrete, or if it does, then that concrete meaning would not be universally agreed upon by even a small sample of statisticians. The meaning of confidence depends on what method was used to obtain the interval and on what model of inference is being used (which I hope will become clearer below).
The inaccuracy lies in the fact that many confidence intervals are not designed to tell you anything about the location of the true parameter value for the particular experimental case that yielded the confidence interval! That will be surprising to many, but it follows directly from the Neyman-Pearson philosophy that is clearly stated in this quote from their 1933 paper "On the Problem of the Most Efficient Tests of Statistical Hypotheses":
We are inclined to think that as far
as a particular hypothesis is
concerned, no test based upon the
theory of probability can by itself
provide any valuable evidence of the
truth or falsehood of that hypothesis.
But we may look at the purpose of
tests from another view-point. Without
hoping to know whether each separate
hypothesis is true or false, we may
search for rules to govern our
behaviour with regard to them, in
following which we insure that, in the
long run of experience, we shall not
be too often wrong.
Intervals that are based on the 'inversion' of N-P hypothesis tests will therefore inherit from that test the nature of having known long-run error properties without allowing inference about the properties of the experiment that yielded them! My understanding is that this protects against inductive inference, which Neyman apparently considered to be an abomination.
Neyman explicitly lays claim to the term ‘confidence interval’ and to the origin of the theory of confidence intervals in his 1941 Biometrika paper “Fiducial argument and the theory of confidence intervals”. In a sense, then, anything that is properly a confidence interval plays by his rules and so the meaning of an individual interval can only be expressed in terms of the long run rate at which intervals calculated by that method contain (cover) the relevant true parameter value.
We now need to fork the discussion. One strand follows the notion of ‘coverage’, and the other follows non-Neymanian intervals that are like confidence intervals. I will defer the former so that I can complete this post before it becomes too long.
There are many different approaches that yield intervals that could be called non-Neymanian confidence intervals. The first of these is Fisher’s fiducial intervals. (The word ‘fiducial’ may scare many and elicit derisive smirks from others, but I will leave that aside...) For some types of data (e.g. normal with unknown population variance) the intervals calculated by Fisher’s method are numerically identical to the intervals that would be calculated by Neyman’s method. However, they invite interpretations that are diametrically opposed. Neymanian intervals reflect only long run coverage properties of the method, whereas Fisher’s intervals are intended to support inductive inference concerning the true parameter values for the particular experiment that was performed.
The fact that one set of interval bounds can come from methods based on either of two philosophically distinct paradigms leads to a really confusing situation--the results can be interpreted in two contradictory ways. From the fiducial argument there is a 95% likelihood that a particular 95% fiducial interval will contain the true parameter value. From Neyman’s method we know only that 95% of intervals calculated in that manner will contain the true parameter value, and have to say confusing things about the probability of the interval containing the true parameter value being unknown but either 1 or 0.
To a large extent, Neyman’s approach has held sway over Fisher’s. That is most unfortunate, in my opinion, because it does not lead to a natural interpretation of the intervals. (Re-read the quote above from Neyman and Pearson and see if it matches your natural interpretation of experimental results. Most likely it does not.)
If an interval can be correctly interpreted in terms of global error rates but also correctly in local inferential terms, I don’t see a good reason to bar interval users from the more natural interpretation afforded by the latter. Thus my suggestion is that the proper interpretation of a confidence interval is BOTH of the following:
Neymanian: This 95% interval was constructed by a method that yields intervals that cover the true parameter value on 95% of occasions in the long run (...of our statistical experience).
Fisherian: This 95% interval has a 95% probability of covering the true parameter value.
(Bayesian and likelihood methods will also yield intervals with desirable frequentist properties. Such intervals invite slightly different interpretations that will both probably feel more natural than the Neymanian.)
|
11,718
|
How to interpret confidence interval of the difference in means in one sample T-test?
|
The meaning of a confidence interval is: if you were to repeat your experiment in the exact same way (i.e.: the same number of observations, drawing from the same population, etc.), and if your assumptions are correct, and you would calculate that interval again in each repetition, then this confidence interval would contain the true prevalence in 95% of the repetitions (on average).
So, you could say you are 95% certain (if your assumptions are correct etc.) that you have now constructed an interval that contains the true prevalence.
This is typically stated as: with 95% confidence, between 4.5 and 8.3% of children of mothers who smoked throughout pregnancy become obese.
Note that this is typically not interesting in itself: you probably want to compare this to prevalence in children of mothers who didn't smoke (odds ratio, relative risk, etc.)
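The repeated-sampling interpretation above can be checked with a small simulation. This is only a sketch with made-up numbers (normal data, true mean 5, z-based intervals), not anything from the original answer:

```python
import random
import statistics

random.seed(0)
# Hypothetical simulation settings, chosen only for illustration
TRUE_MEAN, SIGMA, N, REPS = 5.0, 2.0, 50, 2000
Z = 1.96  # normal critical value; close to the t value for n = 50

covered = 0
for _ in range(REPS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    # Does this experiment's 95% interval contain the true mean?
    if m - Z * se <= TRUE_MEAN <= m + Z * se:
        covered += 1

coverage = covered / REPS
print(coverage)  # close to 0.95, as the repeated-sampling reading predicts
```

Running this, the fraction of intervals that cover the true mean comes out near 95%, which is exactly the "95% of the repetitions (on average)" statement above.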
|
11,719
|
How to interpret confidence interval of the difference in means in one sample T-test?
|
If the true mean difference is outside of this interval, then there is only a 5% chance that the mean difference from our experiment would be so far away from the true mean difference.
|
11,720
|
How to interpret confidence interval of the difference in means in one sample T-test?
|
My interpretation: if you conduct the experiment N times (where N tends to infinity), then out of that large number of experiments, 95% of the intervals constructed this way will contain the true mean difference. More clearly, if one experiment's limits come out as "a" and "b", then in 95 out of 100 such repetitions the interval constructed in the same way will cover the true value. I assume that you understand that different experiments can draw different samples from the whole population.
|
11,721
|
How to interpret confidence interval of the difference in means in one sample T-test?
|
"95 times out of 100, your value will fall within one standard deviation of the mean"
|
11,722
|
Textbooks on Matrix Calculus?
|
For most matrix questions I always first refer to "The Matrix Cookbook" (see here).
It is regularly updated due to feedback from various sources. There are proofs contained within; however, it is mostly intended as a handbook.
|
11,723
|
Textbooks on Matrix Calculus?
|
If you found too much theory in the book of Magnus and Neudecker, I recommend this one, also authored by Magnus:
Abadir, K.M. and Magnus, J.R.
Matrix Algebra
Cambridge University Press, 2005
that has more emphasis on the applications of matrix calculus.
|
11,724
|
Textbooks on Matrix Calculus?
|
I would highly recommend this 26-page paper from Stanford University:
"Linear Algebra Review and Reference" by Zico Kolter
It really focuses on typical sum calculations, with a lot of i and j everywhere, and tells you the corresponding matrix calculation (i.e. using their "vectorized" implementation).
It helps you recognize right away what type of matrix formula you should write to do your calculations.
|
11,725
|
Textbooks on Matrix Calculus?
|
A user self-deleted the following helpful answer, which I here reproduce in full so that its information is not lost:
You don't really need a lot of results on vector and matrix derivatives for ML, and Tom Minka's paper covers most of it, but the definitive treatment is Magnus & Neudecker's Matrix Differential Calculus with Applications in Statistics and Econometrics.
Indeed, Magnus & Neudecker has excellent reviews on Amazon and Tom Minka's paper (Old and New Matrix Algebra Useful for Statistics, 2000) contains many useful formulas, although he warns "this is advanced material."
|
11,726
|
How do you "control" for a factor/variable?
|
As already said, controlling usually means including a variable in a regression (as pointed out by @EMS, this doesn't guarantee any success in achieving this; he links to this). There already exist some highly voted questions and answers on this topic, such as:
How exactly does one “control for other variables”?
Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression?
Explain model adjustment, in plain English
The accepted answers on these questions are all very good treatments of the question you are asking within an observational (I would say correlational) framework; more such questions can be found here.
However, you are asking your question specifically within an experimental or ANOVA framework, so some more thoughts on this topic can be given.
Within an experimental framework you control for a variable by randomizing individuals (or other units of observation) over the different experimental conditions. The underlying assumption is that, as a consequence, the only difference between the conditions is the experimental treatment. When correctly randomizing (i.e., each individual has the same chance to be in each condition) this is a reasonable assumption. Furthermore, only randomization allows you to draw causal inferences from your observations, as this is the only way to make sure that no other factors are responsible for your results.
However, it can also be necessary to control for variables within an experimental framework, namely when there is another known factor that also affects the dependent variable. To enhance statistical power, it can then be a good idea to control for this variable. The usual statistical procedure used for this is analysis of covariance (ANCOVA), which basically also just adds the variable to the model.
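The mechanics of "adding the variable to the model" can be sketched as an ordinary least-squares regression. The data below are simulated with made-up effect sizes purely for illustration (they are not from any study discussed here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical randomised experiment: treatment is independent of the covariate
group = rng.integers(0, 2, n)         # randomised treatment assignment (0/1)
covariate = rng.normal(0.0, 1.0, n)   # known factor that also affects y
# Made-up true effects: intercept 1, treatment effect 2, covariate effect 3
y = 1.0 + 2.0 * group + 3.0 * covariate + rng.normal(0.0, 1.0, n)

# ANCOVA as a linear model: y ~ intercept + group + covariate
X = np.column_stack([np.ones(n), group, covariate])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 1))  # roughly [1.0, 2.0, 3.0]
```

Including the covariate soaks up part of the residual variance, so the treatment effect is estimated with more power than in a model with `group` alone; this only behaves well when, as stressed below, assignment is random and the covariate is uncorrelated with the grouping variable.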
Now comes the crux: For ANCOVA to be reasonable, it is absolutely crucial that the assignment to the groups is random and that the covariate for which it is controlled is not correlated with the grouping variable.
This is unfortunately often ignored leading to uninterpretable results. A really readable introduction to this exact issue (i.e., when to use ANCOVA or not) is given by Miller & Chapman (2001):
Despite numerous technical treatments in many venues, analysis of
covariance (ANCOVA) remains a widely misused approach to dealing with
substantive group differences on potential covariates, particularly in
psychopathology research. Published articles reach unfounded
conclusions, and some statistics texts neglect the issue. The problem
with ANCOVA in such cases is reviewed. In many cases, there is no
means of achieving the superficially appealing goal of "correcting" or
"controlling for" real group differences on a potential covariate. In
hopes of curtailing misuse of ANCOVA and promoting appropriate use, a
nontechnical discussion is provided, emphasizing a substantive
confound rarely articulated in textbooks and other general
presentations, to complement the mathematical critiques already
available. Some alternatives are discussed for contexts in which
ANCOVA is inappropriate or questionable.
Miller, G. A., & Chapman, J. P. (2001). Misunderstanding analysis of covariance. Journal of Abnormal Psychology, 110(1), 40–48. doi:10.1037/0021-843X.110.1.40
|
11,727
|
How do you "control" for a factor/variable?
|
To control for a variable, one can equalize two groups on a relevant trait and then compare the difference on the issue you're researching.
I can only explain this with an example, not formally; B-school is years in the past, so there.
If you would say:
Brazil is richer than Switzerland because Brazil has a national income of 3524 billion $ and Switzerland just 551 billion
you would be correct in absolute terms, but anyone over 12 with a passing knowledge of the world would suspect that there's something wrong with that statement, too.
It would be better to scale Switzerland's population up to that of Brazil and then compare incomes again.
So, if Switzerland's population were the size of Brazil's, their income would be:
(210 million / 8.5 million) * 551 billion dollars = 13612 billion dollars
This makes them about 4 times as rich as Brazil, with its 3524 billion dollars.
And yes, you can also take the per capita approach, where you compare average incomes. But the above approach, you can apply that several times.
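The arithmetic of that scaling step, in a few lines (the figures are copied from the example above, rounded where the example rounds):

```python
# Figures from the example above (billion USD, population in people)
brazil_income, swiss_income = 3524.0, 551.0
brazil_pop, swiss_pop = 210e6, 8.5e6

# Scale Switzerland's income up to Brazil's population size
scaled = (brazil_pop / swiss_pop) * swiss_income
print(round(scaled))                     # about 13600 billion USD
print(round(scaled / brazil_income, 1))  # about 3.9 times Brazil's income
```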
|
11,728
|
What MCMC algorithms/techniques are used for discrete parameters?
|
So the simple answer is yes: Metropolis-Hastings and its special case Gibbs sampling :) General and powerful; whether or not it scales depends on the problem at hand.
I'm not sure why you think sampling an arbitrary discrete distribution is more difficult than an arbitrary continuous distribution. If you can calculate the discrete distribution and the sample space isn't huge then it's much, much easier (unless the continuous distribution is standard, perhaps). Calculate the likelihood $f(k)$ for each category, then normalise to get the probabilities $P(\tilde k = k) = f(k)/\sum f(k)$ and use inverse transform sampling (imposing an arbitrary order on $k$).
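As a concrete sketch of that recipe, here is inverse transform sampling from a discrete distribution with hypothetical unnormalised weights (the weights are made up, not from the original answer):

```python
import bisect
import collections
import itertools
import random

random.seed(1)

# Hypothetical unnormalised likelihoods f(k) for three categories
f = {0: 1.0, 1: 3.0, 2: 6.0}
total = sum(f.values())
keys = sorted(f)                        # impose an arbitrary order on k
probs = [f[k] / total for k in keys]    # P(k) = f(k) / sum_j f(j)
cdf = list(itertools.accumulate(probs))

def draw():
    # Inverse transform: return the first k whose cumulative probability >= u
    u = random.random()
    return keys[min(bisect.bisect_left(cdf, u), len(keys) - 1)]

counts = collections.Counter(draw() for _ in range(10000))
freqs = {k: counts[k] / 10000 for k in keys}
print(freqs)  # roughly {0: 0.1, 1: 0.3, 2: 0.6}
```

The `min(..., len(keys) - 1)` guard just protects against floating-point rounding at the top of the CDF.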
Have you got a particular model in mind? There are all sorts of MCMC approaches to fitting mixture models, for example, where the latent component assignments are discrete parameters. These range from very simple (Gibbs) to quite complex.
How big is the parameter space? Is it potentially enormous (eg in the mixture model case, it's N by the number of mixture components)? You might not need anything more than a Gibbs sampler, since conjugacy is no longer an issue (you can get the normalizing constant directly so you can compute the full conditionals). In fact griddy Gibbs used to be popular for these cases, where a continuous prior is discretized to ease computation.
I don't think there is a particular "best" for all problems having a discrete parameter space any more than there is for the continuous case. But if you tell us more about the models you're interested in perhaps we can make some recommendations.
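To make "Metropolis-Hastings on a discrete parameter" concrete, here is a minimal random-walk Metropolis sampler on the integers. The target (an unnormalised Poisson(4) pmf) is a hypothetical toy example chosen for illustration, not a model from the question:

```python
import math
import random

random.seed(3)

def log_f(k):
    # Unnormalised log target: Poisson(4) pmf up to its normalising constant
    # (a toy target; any pmf known up to a constant would do)
    return k * math.log(4.0) - math.lgamma(k + 1) if k >= 0 else -math.inf

k = 0
draws = []
for i in range(50000):
    prop = k + random.choice((-1, 1))   # symmetric random-walk proposal
    delta = log_f(prop) - log_f(k)
    if delta >= 0 or random.random() < math.exp(delta):
        k = prop                        # accept; otherwise keep current k
    if i >= 2000:                       # discard burn-in
        draws.append(k)

mean_k = sum(draws) / len(draws)
print(round(mean_k, 1))  # the Poisson(4) target has mean 4
```

Proposals to negative k get log-density minus infinity and are always rejected, so the chain stays on the support; the sample mean settles near 4, the mean of the target.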
Edit: OK, I can give a little more information in re: your examples.
Your first example has a pretty long history, as you might imagine. A recent-ish review is in [1]; see also [2]. I'll try to give some details here: A relevant example is stochastic search variable selection. The initial formulation was to use absolutely continuous priors like $p(\beta)\sim \pi N(\beta; 0, \tau) + (1-\pi) N(\beta; 0, 1000\tau)$. That actually turns out to work poorly compared to priors like $p(\beta)\sim \pi \delta_0 (\beta) + (1-\pi) N(\beta; 0, \tau)$ where $\delta_0$ is a point mass at 0. Note that both fit into your original formulation; an MCMC approach would usually proceed by augmenting $\beta$ with a (discrete) model indicator (say $Z$). This is equivalent to a model index; if you have $Z_1,\dots, Z_p$ then obviously you can remap the $2^p$ possible configurations to numbers in $1:2^p$.
So how can you improve the MCMC? In a lot of these models you can sample from $p(Z, \beta|y)$ by composition, i.e. using that $p(Z, \beta|y) = p(\beta | Y, Z)p(Z|Y)$. Block updates like this can tremendously improve mixing, since the correlation between $Z$ and $\beta$ is now irrelevant to the sampler.
SSVS embeds the whole model space in one big model. Often this is easy to implement but it works poorly. Reversible jump MCMC is a different kind of approach which lets the dimension of the parameter space vary explicitly; see [3] for a review and some practical notes. You can find more detailed notes on implementation in different models in the literature, I'm sure.
Oftentimes a complete MCMC approach is infeasible; say you have a linear regression with $p=1000$ variables and you're using an approach like SSVS. You can't hope for your sampler to converge; there's not enough time or computing power to visit all those model configurations, and you're especially hosed if some of your variables are even moderately correlated. You should be especially skeptical of people trying to estimate things like variable inclusion probabilities in this way. Various stochastic search algorithms used in conjunction with MCMC have been proposed for such cases. One example is BAS [4], another is in [5] (Sylvia Richardson has other relevant work too); most of the others I'm aware of are geared toward a particular model.
A different approach which is gaining in popularity is to use absolutely continuous shrinkage priors that mimic model averaged results. Typically these are formulated as scale mixtures of normals. The Bayesian lasso is one example, which is a special case of normal-gamma priors and a limiting case of normal-exponential-gamma priors. Other choices include the horseshoe and the general class of normal distributions with inverted beta priors on their variance. For more on these, I'd suggest starting with [6] and walking back through the references (too many for me to replicate here :) )
I'll add more about outlier models later if I get a chance; the classic reference is [7]. They're very similar in spirit to shrinkage priors. Usually they're pretty easy to do with Gibbs sampling.
Perhaps this is not as practical as you were hoping for; model selection in particular is a hard problem, and the more elaborate the model the worse it gets. Block updating wherever possible is the only piece of general advice I have. When sampling from a mixture of distributions, you will often find that membership indicators and component parameters are highly correlated. I also haven't touched on label-switching issues (or the lack of label switching); there is quite a bit of literature there, but it's a little out of my wheelhouse.
Anyway, I think it's useful to start with some of the references here, to get a feeling for the different ways that others are approaching similar problems.
[1] Merlise Clyde and E. I. George. Model Uncertainty Statistical Science 19 (2004): 81--94.
http://www.isds.duke.edu/~clyde/papers/statsci.pdf
[2] http://www-personal.umich.edu/~bnyhan/montgomery-nyhan-bma.pdf
[3] Green & Hastie Reversible jump MCMC (2009)
http://www.stats.bris.ac.uk/~mapjg/papers/rjmcmc_20090613.pdf
[4] http://www.stat.duke.edu/~clyde/BAS/
[5] http://ba.stat.cmu.edu/journal/2010/vol05/issue03/bottolo.pdf
[6] http://www.uv.es/bernardo/Polson.pdf
[7] Mike West Outlier models and prior distributions in Bayesian linear regression (1984) JRSS-B
|
What MCMC algorithms/techniques are used for discrete parameters?
|
So the simple answer is yes: Metropolis-Hastings and its special case Gibbs sampling :) General and powerful; whether or not it scales depends on the problem at hand.
I'm not sure why you think sampl
|
What MCMC algorithms/techniques are used for discrete parameters?
So the simple answer is yes: Metropolis-Hastings and its special case Gibbs sampling :) General and powerful; whether or not it scales depends on the problem at hand.
I'm not sure why you think sampling an arbitrary discrete distribution is more difficult than an arbitrary continuous distribution. If you can calculate the discrete distribution and the sample space isn't huge then it's much, much easier (unless the continuous distribution is standard, perhaps). Calculate the likelihood $f(k)$ for each category, then normalise to get the probabilities $P(\tilde k = k) = f(k)/\sum f(k)$ and use inverse transform sampling (imposing an arbitrary order on $k$).
Have you got a particular model in mind? There are all sorts of MCMC approaches to fitting mixture models, for example, where the latent component assignments are discrete parameters. These range from very simple (Gibbs) to quite complex.
How big is the parameter space? Is it potentially enormous (eg in the mixture model case, it's N by the number of mixture components)? You might not need anything more than a Gibbs sampler, since conjugacy is no longer an issue (you can get the normalizing constant directly so you can compute the full conditionals). In fact griddy Gibbs used to be popular for these cases, where a continuous prior is discretized to ease computation.
I don't think there is a particular "best" for all problems having a discrete parameter space any more than there is for the continuous case. But if you tell us more about the models you're interested in perhaps we can make some recommendations.
Edit: OK, I can give a little more information in re: your examples.
Your first example has pretty long history, as you might imagine. A recent-ish review is in [1], see also [2]. I'll try to give some details here: A relevant example is stochastic search variable selection. The initial formulation was to use absolutely continuous priors like $p(\beta)\sim \pi N(\beta; 0, \tau) + (1-\pi) N(\beta, 0, 1000\tau)$. That actually turns out to work poorly compared to priors like $p(\beta)\sim \pi \delta_0 (\beta) + (1-\pi) N(\beta, 0, \tau)$ where $\delta_0$ is a point mass at 0. Note that both fit into your original formulation; an MCMC approach would usually proceed by augmenting $\beta$ with a (discrete) model indicator (say $Z$). This is equivalent to a model index; if you have $Z_1\dots, Z_p$ then obviously you can remap the $2^p$ possible configurations to numbers in $1:2^p$.
So how can you improve the MCMC? In a lot of these models you can sample from $p(Z, \beta|y)$ by composition, ie using that $p(Z, \beta|y) = p(\beta | Y, Z)p(Z|Y)$. Block updates like this can tremendously improve mixing since the correlation between $Z$ and $\beta$ is now irrelevant to the sampler
SSVS embeds the whole model space in one big model. Often this is easy to implement but gives works poorly. Reversible jump MCMC is a different kind of approach which lets the dimension of the parameter space vary explicitly; see [3] for a review and some practical notes. You can find more detailed notes on implementation in different models in the literature, I'm sure.
Oftentimes a complete MCMC approach is infeasible; say you have a linear regression with $p=1000$ variables and you're using an approach like SSVS. You can't hope for your sampler to converge; there's not enough time or computing power to visit all those model configurations, and you're especially hosed if some of your variables are even moderately correlated. You should be especially skeptical of people trying to estimate things like variable inclusion probabilities in this way. Various stochastic search algorithms used in conjunction with MCMC have been proposed for such cases. One example is BAS [4], another is in [5] (Sylvia Richardson has other relevant work too); most of the others I'm aware of are geared toward a particular model.
A different approach which is gaining in popularity is to use absolutely continuous shrinkage priors that mimic model averaged results. Typically these are formulated as scale mixtures of normals. The Bayesian lasso is one example, which is a special case of normal-gamma priors and a limiting case of normal-exponential-gamma priors. Other choices include the horseshoe and the general class of normal distributions with inverted beta priors on their variance. For more on these, I'd suggest starting with [6] and walking back through the references (too many for me to replicate here :) )
I'll add more about outlier models later if I get a chance; the classic reference is [7]. They're very similar in spirit to shrinkage priors. Usually they're pretty easy to do with Gibbs sampling.
Perhaps not as practical as you were hoping for; model selection in particular is a hard problem and the more elaborate the model the worse it gets. Block update wherever possible is the only piece of general advice I have. Sampling from a mixture of distributions you will often have the problem that membership indicators and component parameters are highly correlated. I also haven't touched on label switching issues (or lack of label switching); there is quite a bit of literature there but it's a little out of my wheelhouse.
Anyway, I think it's useful to start with some of the references here, to get a feeling for the different ways that others are approaching similar problems.
[1] Merlise Clyde and E. I. George. Model Uncertainty Statistical Science 19 (2004): 81--94.
http://www.isds.duke.edu/~clyde/papers/statsci.pdf
[2]http://www-personal.umich.edu/~bnyhan/montgomery-nyhan-bma.pdf
[3] Green & Hastie Reversible jump MCMC (2009)
http://www.stats.bris.ac.uk/~mapjg/papers/rjmcmc_20090613.pdf
[4] http://www.stat.duke.edu/~clyde/BAS/
[5] http://ba.stat.cmu.edu/journal/2010/vol05/issue03/bottolo.pdf
[6] http://www.uv.es/bernardo/Polson.pdf
[7] Mike West Outlier models and prior distributions in Bayesian linear regression (1984) JRSS-B
|
11,729
|
Gibbs sampling versus general MH-MCMC
|
the main rationale behind using the Metropolis-algorithm lies in the fact that you can use it even when the resulting posterior is unknown. For Gibbs-sampling you have to know the posterior-distributions which you draw variates from.
|
11,730
|
Gibbs sampling versus general MH-MCMC
|
Gibbs sampling breaks the curse of dimensionality in sampling since you've broken down the (possibly high dimensional) parameter space into several low dimensional steps. Metropolis-Hastings alleviates some of the dimensionality problems of generic rejection sampling techniques, but you are still sampling from a full multivariate distribution (and deciding to accept/reject the sample), which causes the algorithm to suffer from the curse of dimensionality.
Think of it in this simplified way: it is much easier to propose an update for one variable at a time (Gibbs) than all variables simultaneously (Metropolis Hastings).
With that being said, the dimensionality of the parameter space will still affect convergence in both Gibbs and Metropolis-Hastings, since there are more parameters that could potentially not converge.
Gibbs is also nice because each step of the Gibbs loop may be in closed form. This is often the case in hierarchical models where each parameter is conditioned on only a few others. It is often pretty simple to construct your model so that each Gibbs step is in closed form (when each step is conjugate it's sometimes called "semi-conjugate"). This is nice because you're sampling from known distributions, which can often be very fast.
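As a concrete sketch of the "one variable at a time" idea (my own illustration, not from the answer), here is a minimal Gibbs sampler in Python for a bivariate standard normal with correlation rho. Each full conditional is a known univariate normal, so every step draws from a closed-form distribution:

```python
# Gibbs sampling for a bivariate standard normal with correlation rho.
# For this target, x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2),
# so each update is a draw from a known univariate normal.
import math
import random

def gibbs_bivariate_normal(rho, n_samples, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    samples = []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, sd)   # draw x | y
        y = rng.gauss(rho * x, sd)   # draw y | x
        samples.append((x, y))
    return samples

draws = gibbs_bivariate_normal(rho=0.9, n_samples=20000)
kept = draws[1000:]  # discard burn-in
mean_x = sum(x for x, _ in kept) / len(kept)            # should be near 0
mean_xy = sum(x * y for x, y in kept) / len(kept)       # estimates Cov(x, y) = rho
```

Note that the higher the correlation between the two coordinates, the slower this sampler mixes, which is exactly the motivation for the block updates discussed in the other answer.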
|
11,731
|
Is there a reason to prefer a specific measure of multicollinearity?
|
Back in the late 1990s, I did my dissertation on collinearity.
My conclusion was that condition indexes were best.
The main reason was that, rather than look at individual variables, it lets you look at sets of variables. Since collinearity is a function of sets of variables, this is a good thing.
Also, the results of my Monte Carlo study showed better sensitivity to problematic collinearity, but I have long ago forgotten the details.
On the other hand, it is probably the hardest to explain. Lots of people know what $R^2$ is. Only a small subset of those people have heard of eigenvalues. However, when I have used condition indexes as a diagnostic tool, I have never been asked for an explanation.
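To make the eigenvalue connection concrete, here is a small stdlib-only Python sketch (my own illustration, not from the dissertation) for the two-variable case: after scaling each column to unit length, X'X has the form [[1, b], [b, 1]] with eigenvalues 1 +/- b, and the condition index is the square root of the ratio of the largest to the smallest eigenvalue.

```python
# Condition index for a two-column design matrix: scale each column to unit
# length, form X'X = [[1, b], [b, 1]] where b is the inner product of the
# scaled columns, and take sqrt(largest eigenvalue / smallest eigenvalue).
import math

def condition_index_2col(x1, x2):
    n1 = math.sqrt(sum(v * v for v in x1))
    n2 = math.sqrt(sum(v * v for v in x2))
    u1 = [v / n1 for v in x1]
    u2 = [v / n2 for v in x2]
    b = sum(a * c for a, c in zip(u1, u2))  # off-diagonal of X'X
    lam_max = 1 + abs(b)                    # eigenvalues of [[1, b], [b, 1]]
    lam_min = 1 - abs(b)
    return math.sqrt(lam_max / lam_min)

# Orthogonal columns give index 1; nearly collinear columns give a large index.
ortho = condition_index_2col([1.0, 0.0], [0.0, 1.0])
collinear = condition_index_2col([1, 2, 3, 4], [1, 2, 3, 4.1])
```

With more than two columns the same recipe applies, but the eigenvalues come from a general symmetric eigendecomposition rather than a closed form.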
For much more on this, check out books by David Belsley. Or, if you really want to, you can get my dissertation Multicollinearity diagnostics for multiple regression: A Monte Carlo study
|
11,732
|
How to handle the difference between the distribution of the test set and the training set?
|
If the difference lies only in the relative class frequencies in the training and test sets, then I would recommend the EM procedure introduced in this paper:
Marco Saerens, Patrice Latinne, Christine Decaestecker: Adjusting the Outputs of a Classifier to New a Priori Probabilities: A Simple Procedure. Neural Computation 14(1): 21-41 (2002) (www)
I've used it myself and found it worked very well (you need a classifier that outputs a probability of class membership though).
If the distribution of patterns within each class changes, then the problem is known as "covariate shift" and there is an excellent book by Sugiyama and Kawanabe. Many of the papers by this group are available on-line, but I would strongly recommend reading the book as well if you can get hold of a copy. The basic idea is to weight the training data according to the difference in density between the training set and the test set (for which labels are not required). A simple way to get the weighting is by using logistic regression to predict whether a pattern is drawn from the training set or the test set. The difficult part is in choosing how much weighting to apply.
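A minimal stdlib-only sketch of the weighting idea described above (my own illustration; a real application would use a proper library classifier): fit a logistic regression to predict "came from the test set", then use w(x) = p(test | x) / p(train | x) as an importance weight for each training point.

```python
# Importance weighting for covariate shift via a train-vs-test classifier.
# A 1-D logistic regression is fit by plain gradient descent; the odds
# p(test|x) / p(train|x) serve as the importance weight.
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=1500):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

rng = random.Random(0)
train_x = [rng.gauss(0.0, 1.0) for _ in range(500)]  # training density
test_x = [rng.gauss(1.0, 1.0) for _ in range(500)]   # shifted test density
xs = train_x + test_x
ys = [0] * len(train_x) + [1] * len(test_x)          # label 1 = "test set"
w, b = fit_logistic(xs, ys)

def importance_weight(x):
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))
    return p / (1.0 - p)
```

Training points that look like test points get larger weights; as the answer notes, deciding how strongly to apply these weights is the hard part.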
See also the nice blog post by Alex Smola here.
|
11,733
|
How to handle the difference between the distribution of the test set and the training set?
|
I found an excellent tutorial about domain adaptation that might help explain this in more detail:
http://sifaka.cs.uiuc.edu/jiang4/domain_adaptation/survey/da_survey.html
The one solution that hasn't been mentioned here is based on ADABOOST.
Here is the link to the original article:
http://ftp.cse.ust.hk/~qyang/Docs/2007/tradaboost.pdf
The basic idea is to use some of the new test data to update learning from the train data. This article is the tip of the iceberg about transfer learning -- where you take what you know from one task and apply it to another one.
|
11,734
|
Why do the R functions 'princomp' and 'prcomp' give different eigenvalues?
|
As pointed out in the comments, it's because princomp uses $N$ for the divisor, but prcomp and the direct calculation using cov both use $N-1$ instead of $N$.
This is mentioned in both the Details section of help(princomp):
Note that the default calculation uses divisor 'N' for the covariance matrix.
and the Details section of help(prcomp):
Unlike princomp, variances are computed with the usual divisor N - 1.
You can also see this in the source. For example, the snippet of princomp source below shows that $N$ (n.obs) is used as the denominator when calculating cv.
else if (is.null(covmat)) {
    dn <- dim(z)
    if (dn[1L] < dn[2L])
        stop("'princomp' can only be used with more units than variables")
    covmat <- cov.wt(z)
    n.obs <- covmat$n.obs
    cv <- covmat$cov * (1 - 1/n.obs)
    cen <- covmat$center
}
You can avoid this multiplication by specifying the covmat argument instead of the x argument.
princomp(covmat = cov(iris[,1:4]))$sd^2
Update regarding PCA scores:
You can set cor = TRUE in your call to princomp in order to perform PCA on the correlation matrix (instead of the covariance matrix). This will cause princomp to $z$-score the data, but it will still use $N$ for the denominator.
As a result, princomp(scale(data))$scores and princomp(data, cor = TRUE)$scores will differ by the factor $\sqrt{(N-1)/N}$.
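The divisor effect is easy to check directly. Here is a small Python illustration (mirroring the R behaviour described above, with made-up data): variances computed with divisor N versus N - 1 differ exactly by the factor (N - 1)/N, which is why the two functions report eigenvalues that differ by that same factor.

```python
# Sample variance with divisor N (princomp-style) versus N - 1 (prcomp-style);
# the two differ exactly by the factor (N - 1) / N.
data = [4.1, 5.0, 6.2, 5.5, 4.8, 6.0]
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)
var_n = ss / n          # divisor N, as in princomp
var_n1 = ss / (n - 1)   # divisor N - 1, as in prcomp
ratio = var_n / var_n1  # equals (n - 1) / n
```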
|
11,735
|
Why is bias affected when a clinical trial is terminated at an early stage?
|
First of all, you have to note the context: this only applies when the trial was stopped early due to interim monitoring showing efficacy/futility, not for some random outside reason. In that case the estimate of the effect size will be biased in a completely statistical sense. If you stopped for efficacy, the estimated effect will be too high (assuming it is positive); if you stopped for futility, it will be too low.
Piantadosi does give an intuitive explanation as well (Sec 10.5.4 in my edition). Suppose the true difference in two means is 1 unit. When you run a lot of trials and look at them at your interim analysis time, some of them will have observed effect sizes much above 1, some much below 1, and most around 1 - the distribution will be wide, but symmetric. The estimated effect size at this point would not be very accurate, but it would be unbiased. However, you only stop and report an effect size if the difference is significant (adjusted for multiple testing), that is, if the estimate is on the high side. In all other cases you keep going and don't report an estimate. That means that, conditional on having stopped early, the distribution of the effect size is not symmetric, and its expected value is above the true value.
The fact that this effect is more severe early on comes from the larger hurdle for stopping the trial, thus a larger part of the distribution being thrown away during the conditioning.
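The conditioning argument above is easy to see in a simulation (my own sketch; the effect size, noise level, and efficacy boundary are made-up numbers): among the trials that cross a high interim threshold and stop, the average reported effect overshoots the truth.

```python
# Simulate the selection effect of stopping for efficacy at an interim look.
# True mean difference is 1; a trial "stops early" when the interim estimate
# clears a (hypothetical) boundary, and only stopped trials report estimates.
import random

rng = random.Random(42)
true_effect = 1.0
threshold = 1.5          # hypothetical efficacy boundary at the interim look
stopped_estimates = []
for _ in range(20000):
    interim = [rng.gauss(true_effect, 2.0) for _ in range(25)]
    est = sum(interim) / len(interim)
    if est > threshold:  # stop early and report this (inflated) estimate
        stopped_estimates.append(est)

bias = sum(stopped_estimates) / len(stopped_estimates) - true_effect
```

Raising the threshold (a harder hurdle, as at very early looks) throws away more of the distribution in the conditioning and makes the overshoot worse.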
|
11,736
|
Why is bias affected when a clinical trial is terminated at an early stage?
|
Here is an illustration of how bias might arise in conclusions, and why it may not be the full story. Suppose you have a sequential trial of a drug which is expected to have a positive (+1) effect but may have a negative effect (-1). Five guinea pigs are tested one after the other. The unknown probability of a positive outcome in a single case is in fact $\frac{3}{4}$ and a negative outcome $\frac{1}{4}$.
So after five trials the probabilities of the different outcomes are
Outcome Probability
+5-0 = +5 243/1024
+4-1 = +3 405/1024
+3-2 = +1 270/1024
+2-3 = -1 90/1024
+1-4 = -3 15/1024
+0-5 = -5 1/1024
so the probability of a positive outcome overall is 918/1024 = 0.896, and the mean outcome is +2.5. Dividing by the 5 trials, this is an average of a +0.5 outcome per trial.
It is the unbiased figure, as it is also $+1\times\frac{3}{4}-1\times\frac{1}{4}$.
Suppose that in order to protect guinea pigs, the study will be terminated if at any stage the cumulative outcome is negative. Then the probabilities become
Outcome Probability
+5-0 = +5 243/1024
+4-1 = +3 324/1024
+3-2 = +1 135/1024
+2-3 = -1 18/1024
+1-2 = -1 48/1024
+0-1 = -1 256/1024
so the probability of a positive outcome overall is 702/1024 = 0.6855, and the mean outcome is +1.953. If we looked at the mean value of the outcome per trial, as in the previous calculation, i.e. using $\frac{+5}{5}$, $\frac{+3}{5}$, $\frac{+1}{5}$, $\frac{-1}{5}$, $\frac{-1}{3}$ and $\frac{-1}{1}$, then we would get +0.184.
These are the senses in which there is bias by stopping early in the second scheme, and the bias is in the predicted direction. But it is not the full story.
Why do whuber and probabilityislogic think stopping early should produce unbiased results? We know the expected outcome of the trials in the second scheme is +1.953. The expected number of trials turns out to be 3.906. So dividing one by the other we get +0.5, exactly as before and what was described as unbiased.
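The tables above can be verified by exact enumeration. The following Python check (my own, using exact rational arithmetic) walks through all $2^5$ outcome sequences, applies the stopping rule "terminate when the cumulative outcome goes negative", and recovers the figures quoted: an expected outcome of 2000/1024 = +1.953, an expected number of trials of 4000/1024 = 3.906, and their ratio of exactly +0.5.

```python
# Enumerate all 2^5 sequences of +1/-1 outcomes with success probability 3/4,
# applying the rule "stop as soon as the cumulative outcome is negative".
# Each full 5-step path carries its full-path probability; paths sharing a
# stopped prefix then sum to that prefix's probability.
from itertools import product
from fractions import Fraction

p_pos = Fraction(3, 4)
expected_outcome = Fraction(0)
expected_trials = Fraction(0)
for seq in product([+1, -1], repeat=5):
    prob = Fraction(1)
    for step in seq:                 # probability of the full 5-step path
        prob *= p_pos if step == +1 else (1 - p_pos)
    total = 0
    trials = 0
    for step in seq:                 # apply the early-stopping rule
        total += step
        trials += 1
        if total < 0:
            break
    expected_outcome += prob * total
    expected_trials += prob * trials

per_trial = expected_outcome / expected_trials  # exactly 1/2
```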
|
11,737
|
Why is bias affected when a clinical trial is terminated at an early stage?
|
Well, my knowledge on this comes from the Harveian oration in 2008 http://bookshop.rcplondon.ac.uk/details.aspx?e=262
Essentially, to the best of my recollection, the results will be biased because stopping early usually means that the treatment was more or less effective than one hoped, and if this is positive, then you may be capitalising on chance.
I believe that p values are calculated on the basis of the planned sample size (but I could be wrong on this), and also if you are constantly checking your results to see if any effects have been shown, you need to correct for multiple comparisons in order to ensure that you are not merely finding a chance effect.
For example, if you check 20 times for p values below .05, then (treating the checks as independent tests) you have roughly a 64% chance of finding at least one significant result purely by chance.
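The arithmetic behind the "20 looks" example is a one-liner (note this treats the looks as independent tests, which overstates the case for correlated sequential looks at accumulating data):

```python
# Probability of at least one p < .05 among 20 independent null tests:
# 1 - (probability that all 20 tests are non-significant).
p_at_least_one = 1 - 0.95 ** 20  # about 0.64
```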
|
11,738
|
Why is bias affected when a clinical trial is terminated at an early stage?
|
I would disagree with that claim, unless by "bias" Piantadosi means that part of the accuracy which is commonly called bias. The inference won't be "biased" because you chose to stop per se: it will be "biased" because you have less data. The so-called "likelihood principle" states that inference should only depend on data that was observed, and not on data that might have been observed but was not. The LP says
$$P(H|D,S,I)=P(H|D,I)$$
Where $H$ stands for the hypothesis you are testing (in the form of a proposition, such as "the treatment was effective"), $D$ stands for the data you actually observed, and $S$ stands for the proposition "the experiment was stopped early", and $I$ stands for the prior information (such as a model). Now suppose your stopping rule depends on the data $D$ and on the prior information $I$, so you can write $S=g(D,I)$. Now an elementary rule of logic is $AA=A$ - saying that A is true twice is the same thing as saying it once.
Because $S=g(D,I)$, the proposition $S$ is true whenever $D$ and $I$ are true. So in "boolean algebra" we have $D,S,I = D,g(D,I),I = D,I$. This proves the above equation of the likelihood principle. It is only if your stopping rule depends on something other than the data $D$ or the prior information $I$ that it matters.
|
11,739
|
Why is bias affected when a clinical trial is terminated at an early stage?
|
There will be bias (in the "statistical sense") if termination of studies is not random.
In a set of experiments run to conclusion, the "early on" results of (a) some experiments that ultimately find "no effect" will show some effect (as a result of chance) and (b) some experiments that ultimately do find an effect will show "no effect" (likely as a result of lack of power). In a world in which you terminate trials, if you stop (a) more often than (b), you'll end up, across runs of studies, with bias in favor of finding an effect. (The same logic applies for effect sizes; terminating studies that show "bigger than expected" effects early on more often than ones that show "as expected or lower" will inflate the count of "big effect" findings.)
If in fact medical trials are terminated when early results show a positive effect -- in order to make treatment available to subjects in the placebo arm or others -- but not when early results are inconclusive, then there will be more type 1 error in such testing than there would be if all experiments were run to conclusion. But that doesn't mean the practice is wrong; the cost of type 1 error, morally speaking, might be lower than the cost of denying treatment as quickly as one otherwise would for treatments that really would be shown to work at the end of a full trial.
|
Why is bias affected when a clinical trial is terminated at an early stage?
|
there will be bias (in "statistical sense") if termination of studies is not random.
In a set of experiments run to conclusion, the "early on" results of (a) some experiments that ultimately find "no
|
Why is bias affected when a clinical trial is terminated at an early stage?
there will be bias (in the "statistical sense") if termination of studies is not random.
In a set of experiments run to conclusion, the "early on" results of (a) some experiments that ultimately find "no effect" will show some effect (as a result of chance) and (b) some experiments that ultimately do find an effect will show "no effect" (likely as a result of lack of power). In a world in which you terminate trials, if you stop (a) more often than (b), you'll end up, across a run of studies, with bias in favor of finding an effect. (The same logic applies for effect sizes; terminating studies that show "bigger than expected" effects early on more often than ones that show "as expected or lower" will inflate the count of "big effect" findings.)
If in fact medical trials are terminated when early results show a positive effect -- in order to make treatment available to subjects in the placebo arm or other groups -- but not when early results are inconclusive, then there will be more type 1 error in such testing than there would be if all experiments were run to conclusion. But that doesn't mean the practice is wrong; the cost of type 1 error, morally speaking, might be lower than denying treatment as quickly as one otherwise would for treatments that really would be shown to work at the end of a full trial.
|
Why is bias affected when a clinical trial is terminated at an early stage?
there will be bias (in "statistical sense") if termination of studies is not random.
In a set of experiments run to conclusion, the "early on" results of (a) some experiments that ultimately find "no
|
11,740
|
Plain language meaning of "dependent" and "independent" tests in the multiple comparisons literature?
|
"Multiple comparisons" is the name attached to the general problem of making decisions based on the results of more than one test. The nature of the problem is made clear by the famous XKCD "Green jelly bean" cartoon in which investigators performed hypothesis tests of associations between consumption of jelly beans (of 20 different colors) and acne. One test reported a p-value less than $1/20$, leading to the conclusion that "green jelly beans cause acne." The joke is that p-values, by design, have a $1/20$ chance of being less than $1/20$, so intuitively we would expect to see a p-value that low among $20$ different tests.
What the cartoon does not say is whether the $20$ tests were based on separate datasets or one dataset.
With separate datasets, each of the $20$ results has a $1/20$ chance of being "significant." Basic properties of probabilities (of independent events) then imply that the chance all $20$ results are "insignificant" is $(1-0.05)^{20}\approx 0.36$. The remaining chance of $1-0.36 = 0.64$ is large enough to corroborate our intuition that a single "significant" result in this large group of results is no surprise; no cause can validly be assigned to such a result except the operation of chance.
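The quoted numbers are easy to verify (assuming, as the cartoon setup does, 20 independent tests each with a 1/20 chance of a "significant" result under the null):

```python
# Chance of at least one "significant" result among 20 independent tests,
# each with probability 0.05 of significance under the null hypothesis.
p_each = 0.05
n_tests = 20

p_all_insignificant = (1 - p_each) ** n_tests   # no test is "significant"
p_at_least_one = 1 - p_all_insignificant        # at least one is

print(round(p_all_insignificant, 2))  # 0.36
print(round(p_at_least_one, 2))       # 0.64
```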
If the $20$ results were based on a common dataset, however, the preceding calculation would be erroneous: it assumes all $20$ outcomes were statistically independent. But why wouldn't they be? Analysis of Variance provides a standard example: when comparing two or more treatment groups against a control group, each comparison involves the same control results. The comparisons are not independent. Now, for instance, "significant" differences could arise due to chance variation in the controls. Such variation could simultaneously change the comparisons with every group.
(ANOVA handles this problem by means of its overall F-test. It is sort of a comparison "to rule them all": we will not trust group-to-group comparison unless first this F-test is significant.)
We can abstract the essence of this situation with the following framework. Multiple comparisons concerns making a decision from the p-values $(p_1, p_2, \ldots, p_n)$ of $n$ distinct tests. Those p-values are random variables. Assuming all the corresponding null hypotheses are logically consistent, each should have a uniform distribution. When we know their joint distribution, we can construct reasonable ways to combine all $n$ of them into a single decision. Otherwise, the best we can usually do is rely on approximate bounds (which is the basis of the Bonferroni correction, for instance).
Joint distributions of independent random variables are easy to compute. The literature therefore distinguishes between this situation and the case of non-independence.
Accordingly, the correct meaning of "independent" in the quotations is in the usual statistical sense of independent random variables.
Note that an assumption was needed to arrive at this conclusion: namely, that all $n$ of the null hypotheses are logically consistent. As an example of what is being avoided, consider conducting two tests with a batch of univariate data $(x_1, \ldots, x_m)$ assumed to be a random sample from a Normal distribution of unknown mean $\mu$. The first is a t-test of $\mu=0$, with p-value $p_1$, and the second is a t-test of $\mu=1$, with p-value $p_2$. Since both cannot logically hold simultaneously, it would be problematic to talk about "the null distribution" of $(p_1, p_2)$. In this case there can be no such thing at all! Thus the very concept of statistical independence sometimes cannot even apply.
|
Plain language meaning of "dependent" and "independent" tests in the multiple comparisons literature
|
"Multiple comparisons" is the name attached to the general problem of making decisions based on the results of more than one test. The nature of the problem is made clear by the famous XKCD "Green je
|
Plain language meaning of "dependent" and "independent" tests in the multiple comparisons literature?
"Multiple comparisons" is the name attached to the general problem of making decisions based on the results of more than one test. The nature of the problem is made clear by the famous XKCD "Green jelly bean" cartoon in which investigators performed hypothesis tests of associations between consumption of jelly beans (of 20 different colors) and acne. One test reported a p-value less than $1/20$, leading to the conclusion that "green jelly beans cause acne." The joke is that p-values, by design, have a $1/20$ chance of being less than $1/20$, so intuitively we would expect to see a p-value that low among $20$ different tests.
What the cartoon does not say is whether the $20$ tests were based on separate datasets or one dataset.
With separate datasets, each of the $20$ results has a $1/20$ chance of being "significant." Basic properties of probabilities (of independent events) then imply that the chance all $20$ results are "insignificant" is $(1-0.05)^{20}\approx 0.36$. The remaining chance of $1-0.36 = 0.64$ is large enough to corroborate our intuition that a single "significant" result in this large group of results is no surprise; no cause can validly be assigned to such a result except the operation of chance.
If the $20$ results were based on a common dataset, however, the preceding calculation would be erroneous: it assumes all $20$ outcomes were statistically independent. But why wouldn't they be? Analysis of Variance provides a standard example: when comparing two or more treatment groups against a control group, each comparison involves the same control results. The comparisons are not independent. Now, for instance, "significant" differences could arise due to chance variation in the controls. Such variation could simultaneously change the comparisons with every group.
(ANOVA handles this problem by means of its overall F-test. It is sort of a comparison "to rule them all": we will not trust group-to-group comparison unless first this F-test is significant.)
We can abstract the essence of this situation with the following framework. Multiple comparisons concerns making a decision from the p-values $(p_1, p_2, \ldots, p_n)$ of $n$ distinct tests. Those p-values are random variables. Assuming all the corresponding null hypotheses are logically consistent, each should have a uniform distribution. When we know their joint distribution, we can construct reasonable ways to combine all $n$ of them into a single decision. Otherwise, the best we can usually do is rely on approximate bounds (which is the basis of the Bonferroni correction, for instance).
Joint distributions of independent random variables are easy to compute. The literature therefore distinguishes between this situation and the case of non-independence.
Accordingly, the correct meaning of "independent" in the quotations is in the usual statistical sense of independent random variables.
Note that an assumption was needed to arrive at this conclusion: namely, that all $n$ of the null hypotheses are logically consistent. As an example of what is being avoided, consider conducting two tests with a batch of univariate data $(x_1, \ldots, x_m)$ assumed to be a random sample from a Normal distribution of unknown mean $\mu$. The first is a t-test of $\mu=0$, with p-value $p_1$, and the second is a t-test of $\mu=1$, with p-value $p_2$. Since both cannot logically hold simultaneously, it would be problematic to talk about "the null distribution" of $(p_1, p_2)$. In this case there can be no such thing at all! Thus the very concept of statistical independence sometimes cannot even apply.
|
Plain language meaning of "dependent" and "independent" tests in the multiple comparisons literature
"Multiple comparisons" is the name attached to the general problem of making decisions based on the results of more than one test. The nature of the problem is made clear by the famous XKCD "Green je
|
11,741
|
Minimal number of points for a linear regression
|
Peter's rule of thumb of 10 per covariate is a reasonable rule. A straight line can be fit perfectly with any two points regardless of the amount of noise in the response values, and a quadratic can be fit perfectly with just 3 points. So clearly in almost any circumstance, it would be proper to say that 4 points are insufficient. However, like most rules of thumb, it does not cover every situation. Cases where the noise term in the model has a large variance will require more samples than a similar case where the error variance is small.
The required number of sample points does depend on the objectives. If you are doing exploratory analysis just to see if one model (say linear in a covariate) looks better than another (say a quadratic function of the covariate) less than 10 points may be enough. But if you want very accurate estimates of the correlation and regression coefficients for the covariates you could need more than 10 per covariate. A criterion for prediction accuracy could require even more samples than accurate parameter estimates. Note that the variance of the estimates and prediction all involve the variance of the model's error term.
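The last point can be made concrete with the textbook formula for the variance of the OLS slope, $\operatorname{Var}(\hat\beta_1)=\sigma^2/\sum_i(x_i-\bar x)^2$. A small sketch (the design points and noise levels are illustrative, not from the answer):

```python
# The variance of the OLS slope estimate is sigma^2 / sum((x - xbar)^2),
# so a larger error variance directly demands more points (more spread in x)
# for the same precision.
import numpy as np

x = np.arange(10, dtype=float)           # 10 hypothetical design points
sxx = np.sum((x - x.mean()) ** 2)        # spread of the design

var_slope_low_noise = 0.5 ** 2 / sxx     # sigma = 0.5
var_slope_high_noise = 2.0 ** 2 / sxx    # sigma = 2.0

# Quadrupling sigma multiplies the slope variance by 16: to recover the same
# precision you need sxx (roughly, the sample size) 16 times larger.
assert np.isclose(var_slope_high_noise / var_slope_low_noise, 16.0)
```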
|
Minimal number of points for a linear regression
|
Peter's rule of thumb of 10 per covariate is a reasonable rule. A straight line can be fit perfectly with any two points regardless of the amount of noise in the response values and a quadratic can b
|
Minimal number of points for a linear regression
Peter's rule of thumb of 10 per covariate is a reasonable rule. A straight line can be fit perfectly with any two points regardless of the amount of noise in the response values and a quadratic can be fit perfectly with just 3 points. So clearly in almost any circumstance, it would be proper to say that 4 points are insufficient. However, like most rules of thumb, it does not cover every situation. Cases, where the noise term in the model has a large variance, will require more samples than a similar case where the error variance is small.
The required number of sample points does depend on the objectives. If you are doing exploratory analysis just to see if one model (say linear in a covariate) looks better than another (say a quadratic function of the covariate) less than 10 points may be enough. But if you want very accurate estimates of the correlation and regression coefficients for the covariates you could need more than 10 per covariate. A criterion for prediction accuracy could require even more samples than accurate parameter estimates. Note that the variance of the estimates and prediction all involve the variance of the model's error term.
|
Minimal number of points for a linear regression
Peter's rule of thumb of 10 per covariate is a reasonable rule. A straight line can be fit perfectly with any two points regardless of the amount of noise in the response values and a quadratic can b
|
11,742
|
Minimal number of points for a linear regression
|
As mentioned by Michael, a good rule of thumb is 10 per covariate; you can also check it out on wiki: https://en.wikipedia.org/wiki/One_in_ten_rule
|
Minimal number of points for a linear regression
|
As mentioned by Michael, a good rule of thumb is 10 per covariate; you can also check it out on wiki: https://en.wikipedia.org/wiki/One_in_ten_rule
|
Minimal number of points for a linear regression
As mentioned by Michael, a good rule of thumb is 10 per covariate; you can also check it out on wiki: https://en.wikipedia.org/wiki/One_in_ten_rule
|
Minimal number of points for a linear regression
As mentioned by Michael, a good rule of thumb is 10 per covariate; you can also check it out on wiki: https://en.wikipedia.org/wiki/One_in_ten_rule
|
11,743
|
Cross-validation vs empirical Bayes for estimating hyperparameters
|
I doubt there will be a theoretical link that says that CV and evidence maximisation are asymptotically equivalent, as the evidence tells us the probability of the data given the assumptions of the model. Thus if the model is mis-specified, then the evidence may be unreliable. Cross-validation, on the other hand, gives an estimate of the probability of the data, whether the modelling assumptions are correct or not. This means that the evidence may be a better guide if the modelling assumptions are correct, using less data, but cross-validation will be robust against model mis-specification. CV is asymptotically unbiased, but I would assume that the evidence isn't unless the model assumptions happen to be exactly correct.
This is essentially my intuition/experience; I would also be interested to hear about research on this.
Note that for many models (e.g. ridge regression, Gaussian processes, kernel ridge regression/LS-SVM etc) leave-one-out cross-validation can be performed at least as efficiently as estimating the evidence, so there isn't necessarily a computational advantage there.
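The efficiency claim can be made concrete for ridge regression via the standard hat-matrix shortcut (a well-known identity, not spelled out in the answer): with $H = X(X^TX+\lambda I)^{-1}X^T$, the leave-one-out residual is exactly $e_i/(1-H_{ii})$, so a single fit yields all $n$ LOO residuals. A numpy sketch with made-up data:

```python
# Exact leave-one-out residuals for ridge regression from one fit,
# verified against brute-force refitting with each point held out.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 30, 4, 0.7
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.3 * rng.standard_normal(n)

A_inv = np.linalg.inv(X.T @ X + lam * np.eye(p))
H = X @ A_inv @ X.T                      # ridge hat ("smoother") matrix
resid = y - H @ y
loo_shortcut = resid / (1 - np.diag(H))  # e_i / (1 - H_ii), exact for ridge

# Brute force: refit with each point held out (same lambda) and predict it.
loo_brute = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    Xi, yi = X[mask], y[mask]
    beta = np.linalg.solve(Xi.T @ Xi + lam * np.eye(p), Xi.T @ yi)
    loo_brute[i] = y[i] - X[i] @ beta

assert np.allclose(loo_shortcut, loo_brute)
```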
Addendum: Both the marginal likelihood and cross-validation performance estimates are evaluated over a finite sample of data, and hence there is always a possibility of over-fitting if a model is tuned by optimising either criterion. For small samples, the difference in the variance of the two criteria may decide which works best. See my paper
Gavin C. Cawley, Nicola L. C. Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", Journal of Machine Learning Research, 11(Jul):2079−2107, 2010. (pdf)
|
Cross-validation vs empirical Bayes for estimating hyperparameters
|
I doubt there will be a theoretical link that says that CV and evidence maximisation are asymptotically equivalent as the evidence tells us the probability of the data given the assumptions of the mod
|
Cross-validation vs empirical Bayes for estimating hyperparameters
I doubt there will be a theoretical link that says that CV and evidence maximisation are asymptotically equivalent as the evidence tells us the probability of the data given the assumptions of the model. Thus if the model is mis-specified, then the evidence may be unreliable. Cross-validation on the other hand gives an estimate of the probability of the data, whether the modelling assumptions are correct or not. This means that the evidence may be a better guide if the modelling assumptions are correct using less data, but cross-validation will be robust against model mis-specification. CV is assymptotically unbiased, but I would assume that the evidence isn't unless the model assumptions happen to be exactly correct.
This is essentially my intuition/experience; I would also be interested to hear about research on this.
Note that for many models (e.g. ridge regression, Gaussian processes, kernel ridge regression/LS-SVM etc) leave-one-out cross-validation can be performed at least as efficiently as estimating the evidence, so there isn't necessarily a computational advantage there.
Addendum: Both the marginal likelihood and cross-validation performance estimates are evaluated over a finite sample of data, and hence there is always a possibility of over-fitting if a model is tuned by optimising either criterion. For small samples, the difference in the variance of the two criteria may decide which works best. See my paper
Gavin C. Cawley, Nicola L. C. Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", Journal of Machine Learning Research, 11(Jul):2079−2107, 2010. (pdf)
|
Cross-validation vs empirical Bayes for estimating hyperparameters
I doubt there will be a theoretical link that says that CV and evidence maximisation are asymptotically equivalent as the evidence tells us the probability of the data given the assumptions of the mod
|
11,744
|
Cross-validation vs empirical Bayes for estimating hyperparameters
|
There is actually a paper that connects CV and EB:
E Fong, C C Holmes, On the marginal likelihood and cross-validation, Biometrika, Volume 107, Issue 2, June 2020, Pages 489–496
If I understand correctly, the paper claims that the marginal likelihood is similar to a very exhaustive cross-validation procedure, where you consider all possible training and test set splits (so also training sets of 1 sample) and where you use the posterior predictive density as a loss function/scoring rule. This could be seen as a disadvantage of marginal likelihood maximization, because you would also take "weird" cross-validation partitions, such as 1 training sample and N-1 test samples, into account to estimate the hyperparameters.
|
Cross-validation vs empirical Bayes for estimating hyperparameters
|
There is actually a paper that connects CV and EB:
E Fong, C C Holmes, On the marginal likelihood and cross-validation, Biometrika, Volume 107, Issue 2, June 2020, Pages 489–496
If I understand correc
|
Cross-validation vs empirical Bayes for estimating hyperparameters
There is actually a paper that connects CV and EB:
E Fong, C C Holmes, On the marginal likelihood and cross-validation, Biometrika, Volume 107, Issue 2, June 2020, Pages 489–496
If I understand correctly, the paper claims that the marginal likelihood is similar to a very exhaustive cross-validation procedure, where you consider all possible training and test set splits (so also training sets of 1 sample) and where you use the posterior predictive density as a loss function/scoring rule. This could be seen as a disadvantage of marginal likelihood maximization, because you would also take "weird" cross-validation partitions, such as 1 training sample and N-1 test samples, into account to estimate the hyperparameters.
|
Cross-validation vs empirical Bayes for estimating hyperparameters
There is actually a paper that connects CV and EB:
E Fong, C C Holmes, On the marginal likelihood and cross-validation, Biometrika, Volume 107, Issue 2, June 2020, Pages 489–496
If I understand correc
|
11,745
|
Cross-validation vs empirical Bayes for estimating hyperparameters
|
If you didn't have the other parameters $k$, then EB is identical to CV except that you don't have to search. You say that you are integrating out $k$ in both CV and EB. In that case, they are identical.
|
Cross-validation vs empirical Bayes for estimating hyperparameters
|
If you didn't have the other parameters $k$, then EB is identical to CV except that you don't have to search. You say that you are integrating out $k$ in both CV and EB. In that case, they are ident
|
Cross-validation vs empirical Bayes for estimating hyperparameters
If you didn't have the other parameters $k$, then EB is identical to CV except that you don't have to search. You say that you are integrating out $k$ in both CV and EB. In that case, they are identical.
|
Cross-validation vs empirical Bayes for estimating hyperparameters
If you didn't have the other parameters $k$, then EB is identical to CV except that you don't have to search. You say that you are integrating out $k$ in both CV and EB. In that case, they are ident
|
11,746
|
"Normalizing" variables for SVD / PCA
|
The three common normalizations are centering, scaling, and standardizing.
Let $X$ be a random variable.
Centering is $$x_i^* = x_i-\bar{x}.$$
The resultant $x^*$ will have $\bar{x^*}=0$.
Scaling is $$x_i^* = \frac{x_i}{\sqrt{(\sum_{i}{x_i^2})}}.$$
The resultant $x^*$ will have $\sum_{i}{{{x_i^*}}^2} = 1$.
Standardizing is centering-then-scaling. The resultant $x^*$ will have $\bar{x^*}=0$ and $\sum_{i}{{{x_i^*}}^2} = 1$.
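A quick numeric check of the three definitions (any example vector would do; numpy is used here only for convenience):

```python
# Verify the stated properties of centering, scaling, and standardizing.
import numpy as np

x = np.array([2.0, 4.0, 4.0, 6.0, 9.0])   # an arbitrary example vector

centered = x - x.mean()
scaled = x / np.sqrt(np.sum(x ** 2))
standardized = centered / np.sqrt(np.sum(centered ** 2))  # center, then scale

assert np.isclose(centered.mean(), 0.0)          # centering: mean 0
assert np.isclose(np.sum(scaled ** 2), 1.0)      # scaling: sum of squares 1
assert np.isclose(standardized.mean(), 0.0)      # standardizing: both hold
assert np.isclose(np.sum(standardized ** 2), 1.0)
```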
|
"Normalizing" variables for SVD / PCA
|
The three common normalizations are centering, scaling, and standardizing.
Let $X$ be a random variable.
Centering is $$x_i^* = x_i-\bar{x}.$$
The resultant $x^*$ will have $\bar{x^*}=0$.
Scaling is $
|
"Normalizing" variables for SVD / PCA
The three common normalizations are centering, scaling, and standardizing.
Let $X$ be a random variable.
Centering is $$x_i^* = x_i-\bar{x}.$$
The resultant $x^*$ will have $\bar{x^*}=0$.
Scaling is $$x_i^* = \frac{x_i}{\sqrt{(\sum_{i}{x_i^2})}}.$$
The resultant $x^*$ will have $\sum_{i}{{{x_i^*}}^2} = 1$.
Standardizing is centering-then-scaling. The resultant $x^*$ will have $\bar{x^*}=0$ and $\sum_{i}{{{x_i^*}}^2} = 1$.
|
"Normalizing" variables for SVD / PCA
The three common normalizations are centering, scaling, and standardizing.
Let $X$ be a random variable.
Centering is $$x_i^* = x_i-\bar{x}.$$
The resultant $x^*$ will have $\bar{x^*}=0$.
Scaling is $
|
11,747
|
"Normalizing" variables for SVD / PCA
|
You are absolutely right that having individual variables with very different variances can be problematic for PCA, especially if this difference is due to different units or different physical dimensions. For that reason, unless the variables are all comparable (same physical quantity, same units), it is recommended to perform PCA on the correlation matrix instead of covariance matrix. See here:
PCA on correlation or covariance?
Doing PCA on correlation matrix is equivalent to standardizing all the variables prior to the analysis (and then doing PCA on covariance matrix). Standardizing means centering and then dividing each variable by its standard deviation, so that all of them become of unit variance. This can be seen as a convenient "change of units", to make all the units comparable.
One can ask if there might sometimes be a better way of "normalizing" variables; e.g. one can choose to divide by some robust estimate of variance, instead of by the raw variance. This was asked in the following thread, and see the ensuing discussion (even though no definite answer was given there):
Why do we divide by the standard deviation and not some other standardizing factor before doing PCA?
Finally, you were worried that normalizing by standard deviation (or something similar) is not rotation invariant. Well, yes, it is not. But, as @whuber remarked in the comment above, there is no rotation invariant way of doing it: changing units of individual variables is not a rotation invariant operation! There is nothing to worry about here.
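The equivalence mentioned above is easy to verify numerically. A sketch (with made-up data on wildly different scales):

```python
# PCA on the correlation matrix has the same eigenvalues as PCA on the
# covariance matrix of the standardized variables.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3)) * np.array([10.0, 1.0, 0.1])  # different "units"

corr = np.corrcoef(X, rowvar=False)            # correlation matrix of raw data

Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each column
cov_of_z = np.cov(Z, rowvar=False, ddof=0)     # covariance of standardized data

eig_corr = np.sort(np.linalg.eigvalsh(corr))
eig_covz = np.sort(np.linalg.eigvalsh(cov_of_z))
assert np.allclose(eig_corr, eig_covz)         # identical spectra
```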
|
"Normalizing" variables for SVD / PCA
|
You are absolutely right that having individual variables with very different variances can be problematic for PCA, especially if this difference is due to different units or different physical dimens
|
"Normalizing" variables for SVD / PCA
You are absolutely right that having individual variables with very different variances can be problematic for PCA, especially if this difference is due to different units or different physical dimensions. For that reason, unless the variables are all comparable (same physical quantity, same units), it is recommended to perform PCA on the correlation matrix instead of covariance matrix. See here:
PCA on correlation or covariance?
Doing PCA on correlation matrix is equivalent to standardizing all the variables prior to the analysis (and then doing PCA on covariance matrix). Standardizing means centering and then dividing each variable by its standard deviation, so that all of them become of unit variance. This can be seen as a convenient "change of units", to make all the units comparable.
One can ask if there might sometimes be a better way of "normalizing" variables; e.g. one can choose to divide by some robust estimate of variance, instead of by the raw variance. This was asked in the following thread, and see the ensuing discussion (even though no definite answer was given there):
Why do we divide by the standard deviation and not some other standardizing factor before doing PCA?
Finally, you were worried that normalizing by standard deviation (or something similar) is not rotation invariant. Well, yes, it is not. But, as @whuber remarked in the comment above, there is no rotation invariant way of doing it: changing units of individual variables is not a rotation invariant operation! There is nothing to worry about here.
|
"Normalizing" variables for SVD / PCA
You are absolutely right that having individual variables with very different variances can be problematic for PCA, especially if this difference is due to different units or different physical dimens
|
11,748
|
"Normalizing" variables for SVD / PCA
|
A common technique before applying PCA is to subtract the mean from the samples. If you don't do it, the first eigenvector will point along the mean of the data. I'm not sure whether you have done it, but let me talk about it. In MATLAB code:
clear, clf
clc
%% Let us draw a line
scale = 1;
x = scale .* (1:0.25:5);
y = 1/2*x + 1;
%% and add some noise
y = y + rand(size(y));
%% plot and see
subplot(1,2,1), plot(x, y, '*k')
axis equal
%% Put the data in columns and see what SVD gives
A = [x;y];
[U, S, V] = svd(A);
hold on
plot([mean(x)-U(1,1)*S(1,1) mean(x)+U(1,1)*S(1,1)], ...
[mean(y)-U(2,1)*S(1,1) mean(y)+U(2,1)*S(1,1)], ...
':k');
plot([mean(x)-U(1,2)*S(2,2) mean(x)+U(1,2)*S(2,2)], ...
[mean(y)-U(2,2)*S(2,2) mean(y)+U(2,2)*S(2,2)], ...
'-.k');
title('The left singular vectors found directly')
%% Now, subtract the mean and see its effect
A(1,:) = A(1,:) - mean(A(1,:));
A(2,:) = A(2,:) - mean(A(2,:));
[U, S, V] = svd(A);
subplot(1,2,2)
plot(x, y, '*k')
axis equal
hold on
plot([mean(x)-U(1,1)*S(1,1) mean(x)+U(1,1)*S(1,1)], ...
[mean(y)-U(2,1)*S(1,1) mean(y)+U(2,1)*S(1,1)], ...
':k');
plot([mean(x)-U(1,2)*S(2,2) mean(x)+U(1,2)*S(2,2)], ...
[mean(y)-U(2,2)*S(2,2) mean(y)+U(2,2)*S(2,2)], ...
'-.k');
title('The left singular vectors found after subtracting mean')
As can be seen from the figure, I think you should subtract the mean from the data if you want to analyze the (co)variance better. Then the values will not be between 10-100 and 0.1-1, but their means will all be zero. The variances will be found as the eigenvalues (or the squares of the singular values). When we subtract the mean, the eigenvectors found are less affected by the scale of a dimension than when we do not. For instance, I have tested and observed the following, which suggests that subtracting the mean might matter in your case. So the problem may result not from the variance but from the translation difference.
% scale = 0.5, without subtracting mean
U =
-0.5504 -0.8349
-0.8349 0.5504
% scale = 0.5, with subtracting mean
U =
-0.8311 -0.5561
-0.5561 0.8311
% scale = 1, without subtracting mean
U =
-0.7327 -0.6806
-0.6806 0.7327
% scale = 1, with subtracting mean
U =
-0.8464 -0.5325
-0.5325 0.8464
% scale = 100, without subtracting mean
U =
-0.8930 -0.4501
-0.4501 0.8930
% scale = 100, with subtracting mean
U =
-0.8943 -0.4474
-0.4474 0.8943
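For readers without MATLAB, here is a rough Python/numpy re-check of the opening claim (data constructed like the MATLAB snippet, with a fixed seed; exact numbers will differ from the listing above):

```python
# Without mean subtraction, the first left singular vector of the data matrix
# points roughly along the mean of the samples rather than the direction of
# maximum variance.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 5, 17)                 # like 1:0.25:5 in the MATLAB code
y = 0.5 * x + 1 + rng.random(x.size)      # line plus uniform noise
A = np.vstack([x, y])                     # columns are samples, as in MATLAB

U, S, Vt = np.linalg.svd(A)               # SVD of the *uncentered* data
u1 = U[:, 0]                              # first left singular vector

mean_dir = A.mean(axis=1)
mean_dir /= np.linalg.norm(mean_dir)      # unit vector toward the data mean

cos_sim = abs(u1 @ mean_dir)
assert cos_sim > 0.95                     # nearly the mean direction
```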
|
"Normalizing" variables for SVD / PCA
|
A common technique before applying PCA is to subtract the mean from the samples. If you don't do it, the first eigenvector will be the mean. I'm not sure whether you have done it but let me talk about
|
"Normalizing" variables for SVD / PCA
A common technique before applying PCA is to subtract the mean from the samples. If you don't do it, the first eigenvector will point along the mean of the data. I'm not sure whether you have done it, but let me talk about it. In MATLAB code:
clear, clf
clc
%% Let us draw a line
scale = 1;
x = scale .* (1:0.25:5);
y = 1/2*x + 1;
%% and add some noise
y = y + rand(size(y));
%% plot and see
subplot(1,2,1), plot(x, y, '*k')
axis equal
%% Put the data in columns and see what SVD gives
A = [x;y];
[U, S, V] = svd(A);
hold on
plot([mean(x)-U(1,1)*S(1,1) mean(x)+U(1,1)*S(1,1)], ...
[mean(y)-U(2,1)*S(1,1) mean(y)+U(2,1)*S(1,1)], ...
':k');
plot([mean(x)-U(1,2)*S(2,2) mean(x)+U(1,2)*S(2,2)], ...
[mean(y)-U(2,2)*S(2,2) mean(y)+U(2,2)*S(2,2)], ...
'-.k');
title('The left singular vectors found directly')
%% Now, subtract the mean and see its effect
A(1,:) = A(1,:) - mean(A(1,:));
A(2,:) = A(2,:) - mean(A(2,:));
[U, S, V] = svd(A);
subplot(1,2,2)
plot(x, y, '*k')
axis equal
hold on
plot([mean(x)-U(1,1)*S(1,1) mean(x)+U(1,1)*S(1,1)], ...
[mean(y)-U(2,1)*S(1,1) mean(y)+U(2,1)*S(1,1)], ...
':k');
plot([mean(x)-U(1,2)*S(2,2) mean(x)+U(1,2)*S(2,2)], ...
[mean(y)-U(2,2)*S(2,2) mean(y)+U(2,2)*S(2,2)], ...
'-.k');
title('The left singular vectors found after subtracting mean')
As can be seen from the figure, I think you should subtract the mean from the data if you want to analyze the (co)variance better. Then the values will not be between 10-100 and 0.1-1, but their means will all be zero. The variances will be found as the eigenvalues (or the squares of the singular values). When we subtract the mean, the eigenvectors found are less affected by the scale of a dimension than when we do not. For instance, I have tested and observed the following, which suggests that subtracting the mean might matter in your case. So the problem may result not from the variance but from the translation difference.
% scale = 0.5, without subtracting mean
U =
-0.5504 -0.8349
-0.8349 0.5504
% scale = 0.5, with subtracting mean
U =
-0.8311 -0.5561
-0.5561 0.8311
% scale = 1, without subtracting mean
U =
-0.7327 -0.6806
-0.6806 0.7327
% scale = 1, with subtracting mean
U =
-0.8464 -0.5325
-0.5325 0.8464
% scale = 100, without subtracting mean
U =
-0.8930 -0.4501
-0.4501 0.8930
% scale = 100, with subtracting mean
U =
-0.8943 -0.4474
-0.4474 0.8943
|
"Normalizing" variables for SVD / PCA
A common technique before applying PCA is to subtract the mean from the samples. If you don't do it, the first eigenvector will be the mean. I'm not sure whether you have done it but let me talk about
|
11,749
|
"Normalizing" variables for SVD / PCA
|
To normalize the data for PCA, the following formula is also used:
$\text{SC}=100\frac{X-\min(X)}{\max(X)-\min(X)}$
where $X$ is the raw value for that indicator for country $c$ in year $t$, and $\min(X)$ and $\max(X)$ are taken over all raw values across all countries for that indicator across all years.
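A short check (numpy for convenience; the data values are hypothetical) that the rescaled values span exactly 0 to 100:

```python
# Min-max scaling of a raw indicator onto the range [0, 100].
import numpy as np

X = np.array([12.0, 45.0, 7.0, 88.0, 30.0])   # hypothetical raw indicator values
SC = 100 * (X - X.min()) / (X.max() - X.min())

assert np.isclose(SC.min(), 0.0)    # the smallest raw value maps to 0
assert np.isclose(SC.max(), 100.0)  # the largest raw value maps to 100
```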
|
"Normalizing" variables for SVD / PCA
|
To normalizing the data for PCA, following formula also used
$\text{SC}=100\frac{X-\min(X)}{\max(X)-\min(X)}$
where $X$ is the raw value for that indicator for country $c$ in year $t$, and $X$
describ
|
"Normalizing" variables for SVD / PCA
To normalize the data for PCA, the following formula is also used:
$\text{SC}=100\frac{X-\min(X)}{\max(X)-\min(X)}$
where $X$ is the raw value for that indicator for country $c$ in year $t$, and $\min(X)$ and $\max(X)$ are taken over all raw values across all countries for that indicator across all years.
|
"Normalizing" variables for SVD / PCA
To normalizing the data for PCA, following formula also used
$\text{SC}=100\frac{X-\min(X)}{\max(X)-\min(X)}$
where $X$ is the raw value for that indicator for country $c$ in year $t$, and $X$
describ
|
11,750
|
Statistical forensics: Benford and beyond
|
Great Question!
In the scientific context there are various kinds of problematic reporting and problematic behaviour:
Fraud: I'd define fraud as a deliberate intention on the part of the author or analyst to misrepresent the results and where the misrepresentation is of a sufficiently grave nature. The main example being complete fabrication of raw data or summary statistics.
Error: Data analysts can make errors at many phases of data analysis from data entry, to data manipulation, to analyses, to reporting, to interpretation.
Inappropriate behaviour: There are many forms of inappropriate behaviour. In general, it can be summarised by an orientation which seeks to confirm a particular position rather than search for the truth.
Common examples of inappropriate behaviour include:
Examining a series of possible dependent variables and only reporting the one that is statistically significant
Not mentioning important violations of assumptions
Performing data manipulations and outlier removal procedures without mentioning it, particularly where these procedures are both inappropriate and chosen purely to make the results look better
Presenting a model as confirmatory which is actually exploratory
Omitting important results that go against the desired argument
Choosing a statistical test solely on the basis that it makes the results look better
Running a series of five or ten under-powered studies where only one is statistically significant (perhaps at p = .04) and then reporting the study without mention of the other studies
In general, I'd hypothesise that incompetence is related to all three forms of problematic behaviour. A researcher who does not understand how to do good science but otherwise wants to be successful will have a greater incentive to misrepresent their results, and is less likely to respect the principles of ethical data analysis.
The above distinctions have implications for detection of problematic behaviour.
For example, if you manage to discern that a set of reported results are wrong, it still needs to be ascertained as to whether the results arose from fraud, error or inappropriate behaviour. Also, I'd assume that various forms of inappropriate behaviour are far more common than fraud.
With regards to detecting problematic behaviour, I think it is largely a skill that comes from experience working with data, working with a topic, and working with researchers. All of these experiences strengthen your expectations about what data should look like. Thus, major deviations from expectations start the process of searching for an explanation. Experience with researchers gives you a sense of the kinds of inappropriate behaviour which are more or less common. In combination this leads to the generation of hypotheses. For example, if I read a journal article and I am surprised with the results, the study is underpowered, and the nature of the writing suggests that the author is set on making a point, I generate the hypothesis that the results perhaps should not be trusted.
Other Resources
Robert P. Abelson Statistics as a Principled Argument has a chapter titled "On Suspecting Fishiness"
|
11,751
|
Statistical forensics: Benford and beyond
|
Actually, Benford's Law is an incredibly powerful method. This is because the Benford frequency distribution of the first digit is applicable to all sorts of data sets that occur in the real or natural world.
You are right that you can use Benford's Law only in certain circumstances. You say that the data has to have a uniform log distribution. Technically, this is absolutely correct. But you could describe the requirement in a much simpler and more lenient way. All you need is that the data set's range crosses at least one order of magnitude, say from 1 to 9, 10 to 99, or 100 to 999. If it crosses two orders of magnitude, you are in business, and Benford's Law should be pretty helpful.
The beauty of Benford's Law is that it helps you narrow your investigation really quickly onto the needle(s) within the haystack of data. You look for anomalies whereby the frequency of the first digit is much different from the Benford frequencies. Once you notice that there are too many 6s, you then use Benford's Law to focus on just the 6s, but now taking it to the first two digits (60, 61, 62, 63, etc.). Now, maybe you find out there are a lot more 63s than Benford suggests (you would check that by calculating Benford's frequency: log(1+1/63), which gives a value close to 0%). So you apply Benford to the first three digits. By the time you find out there are way too many 632s than expected (again by calculating Benford's frequency: log(1+1/632)), you are probably on to something. Not all anomalies are frauds. But most frauds are anomalies.
If the data set that Marc Hauser manipulated consists of natural, unconstrained data with a range that was wide enough, then Benford's Law would be a pretty good diagnostic tool. I am sure there are other good diagnostic tools that also detect unlikely patterns, and by combining them with Benford's Law you could most probably have investigated the Marc Hauser affair effectively (taking into consideration the mentioned data requirement of Benford's Law).
I explain Benford's Law a bit more in this short presentation that you can see here:
http://www.slideshare.net/gaetanlion/benfords-law-4669483
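The digit frequencies used above come straight from $\log_{10}(1 + 1/d)$, where $d$ is the leading digit string (6, 63, 632, ...); a minimal sketch:

```python
import math

def benford_freq(d):
    """Expected Benford frequency of the leading digit string d (e.g. 6, 63, 632)."""
    return math.log10(1 + 1 / d)

# First-digit distribution for d = 1..9; the frequencies sum to 1.
first_digits = {d: benford_freq(d) for d in range(1, 10)}
print(round(first_digits[1], 3))           # 0.301 -- about 30% of values lead with 1
print(round(benford_freq(63) * 100, 2))    # 0.68  -- expected % starting with "63"
print(round(benford_freq(632) * 100, 3))   # 0.069 -- expected % starting with "632"
```

Note how quickly the expected frequency shrinks as you extend the digit string, which is why drilling from 6 to 63 to 632 isolates anomalies so effectively.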
|
11,752
|
What is the relationship between the GINI score and the log-likelihood ratio
|
I will use the same notation I used here: Mathematics behind classification and regression trees
Gini Gain and Information Gain ($IG$) are both impurity based splitting criteria. The only difference is in the impurity function $I$:
$\textit{Gini}: \mathit{Gini}(E) = 1 - \sum_{j=1}^{c}p_j^2$
$\textit{Entropy}: H(E) = -\sum_{j=1}^{c}p_j\log p_j$
They actually are particular values of a more general entropy measure (Tsallis' Entropy) parametrized in $\beta$:
$$H_\beta (E) = \frac{1}{\beta-1} \left( 1 - \sum_{j=1}^{c}p_j^\beta \right)
$$
$\textit{Gini}$ is obtained with $\beta = 2$ and $H$ with $\beta \rightarrow 1$.
The log-likelihood, also called $G$-statistic, is a linear transformation of Information Gain:
$$G\text{-statistic} = 2 \cdot |E| \cdot IG$$
Depending on the community (statistics/data mining) people prefer one measure or the other (related question here). They might be pretty much equivalent in the decision tree induction process. Log-likelihood might give higher scores to balanced partitions when there are many classes, though [Technical Note: Some Properties of Splitting Criteria. Breiman 1996].
Gini Gain can be nicer because it doesn't have logarithms and you can find the closed form for its expected value and variance under random split assumption [Alin Dobra, Johannes Gehrke: Bias Correction in Classification Tree Construction. ICML 2001: 90-97]. It is not as easy for Information Gain (If you are interested, see here).
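A quick numerical check of the relationships above (a sketch with made-up class proportions): Tsallis entropy with $\beta = 2$ reproduces the Gini impurity exactly, and $\beta \to 1$ approaches the Shannon entropy (natural log); the last lines show the $G$-statistic as a rescaled information gain, with an invented sample size and IG value.

```python
import math

def tsallis(p, beta):
    # H_beta(E) = (1 - sum_j p_j^beta) / (beta - 1)
    return (1 - sum(pj ** beta for pj in p)) / (beta - 1)

def gini_impurity(p):
    return 1 - sum(pj ** 2 for pj in p)

def shannon(p):
    return -sum(pj * math.log(pj) for pj in p if pj > 0)

p = [0.5, 0.3, 0.2]                       # made-up class proportions
assert abs(tsallis(p, 2.0) - gini_impurity(p)) < 1e-12  # beta = 2 -> Gini
assert abs(tsallis(p, 1.0 + 1e-8) - shannon(p)) < 1e-4  # beta -> 1 -> entropy

# G-statistic as a linear transformation of information gain, G = 2 * |E| * IG:
n_samples, info_gain = 200, 0.12          # invented numbers for illustration
g_statistic = 2 * n_samples * info_gain
```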
|
11,753
|
What is the relationship between the GINI score and the log-likelihood ratio
|
Good question. Unfortunately I don't have enough reputation yet to upvote or comment, so answering instead!
I'm not very familiar with the ratio test, but it strikes me that it is a formalism used to compare the likelihood of data arising from two (or more) different distributions, whereas the Gini coefficient is a summary statistic of a single distribution.
A useful way to think of the Gini coefficient (IMO) is in terms of the area between the Lorenz curve and the line of equality (the Lorenz curve itself is related to the cdf).
It may be possible to equate Shannon's entropy with Gini using the definition given in the OP for entropy:
$H = -\Sigma_{i} P\left(x_{i} \right)\log_{b} P\left(x_{i} \right)$
and the definition of Gini:
$G = 1 - \frac{1}{\mu}\Sigma_i P(x_i)(S_{i-1} + S_i)$,
where
$S_i = \Sigma_{j=1}^i P(x_j)x_j$ (i.e. the cumulative mean up to $x_i$).
It doesn't look like an easy task though!
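The Gini formula above can at least be checked numerically (a minimal sketch; it assumes the values are sorted ascending and the $P(x_i)$ sum to one):

```python
def gini_coefficient(x, p):
    """Gini via G = 1 - (1/mu) * sum_i P(x_i) * (S_{i-1} + S_i)."""
    pairs = sorted(zip(x, p))             # formula assumes ascending order
    s, s_prev, total = 0.0, 0.0, 0.0
    for xi, pi in pairs:
        s_prev, s = s, s + pi * xi        # S_i = cumulative sum of P(x_j) * x_j
        total += pi * (s_prev + s)
    mu = s                                # S_n equals the mean when p sums to 1
    return 1 - total / mu

# Perfect equality gives 0; concentrating mass on one of n units approaches 1 - 1/n.
equal = gini_coefficient([5, 5, 5, 5], [0.25] * 4)
skewed = gini_coefficient([0.0001, 0.0001, 0.0001, 100.0], [0.25] * 4)
print(round(equal, 6), round(skewed, 4))  # 0.0 0.75
```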
|
11,754
|
Recovering raw coefficients and variances from orthogonal polynomial regression
|
Yes, it's possible.
Let $z_1, z_2, z_3$ be the non-constant parts of the orthogonal polynomials computed from the $x_i$. (Each is a column vector.) Regressing these against the $x_i$ must give a perfect fit. You can perform this with the software even when it does not document its procedures to compute orthogonal polynomials. The regression of $z_j$ yields coefficients $\gamma_{ij}$ for which
$$z_{ij} = \gamma_{j0} + x_i\gamma_{j1} + x_i^2\gamma_{j2} + x_i^3\gamma_{j3}.$$
The result is a $4\times 4$ matrix $\Gamma$ that, upon right multiplication, converts the design matrix $X=\pmatrix{1;&x;&x^2;&x^3}$ into $$Z=\pmatrix{1;&z_1;&z_2;&z_3} = X\Gamma.\tag{1}$$
After fitting the model
$$\mathbb{E}(Y) = Z\beta$$
and obtaining estimated coefficients $\hat\beta$ (a four-element column vector), you may substitute $(1)$ to obtain
$$\hat Y = Z\hat\beta = (X\Gamma)\hat\beta = X(\Gamma\hat\beta).$$
Therefore $\Gamma\hat\beta$ is the estimated coefficient vector for the model in terms of the original (raw, un-orthogonalized) powers of $x$.
The following R code illustrates these procedures and tests them with synthetic data.
n <- 10 # Number of observations
d <- 3 # Degree
#
# Synthesize a regressor, its powers, and orthogonal polynomials thereof.
#
x <- rnorm(n)
x.p <- outer(x, 0:d, `^`); colnames(x.p) <- c("Intercept", paste0("x.", 1:d))
z <- poly(x, d)
#
# Compute the orthogonal polynomials in terms of the powers via OLS.
#
xform <- lm(cbind(1, z) ~ x.p-1)
gamma <- coef(xform)
#
# Verify the transformation: all components should be tiny, certainly
# infinitesimal compared to 1.
#
if (!all.equal(as.vector(1 + crossprod(x.p %*% gamma - cbind(1,z)) - 1),
rep(0, (d+1)^2)))
warning("Transformation is inaccurate.")
#
# Fit the model with orthogonal polynomials.
#
y <- x + rnorm(n)
fit <- lm(y ~ z)
#summary(fit)
#
# As a check, fit the model with raw powers.
#
fit.p <- lm(y ~ .-1, data.frame(x.p))
#summary(fit.p)
#
# Compare the results.
#
(rbind(Computed=as.vector(gamma %*% coef(fit)), Fit=coef(fit.p)))
if (!all.equal(as.vector(gamma %*% coef(fit)), as.vector(coef(fit.p))))
warning("Results were not the same.")
|
11,755
|
Recovering raw coefficients and variances from orthogonal polynomial regression
|
Just a potentially useful addition to whuber's answer. Looking at the code for poly, you can deduce the linear map yourself. Let $\vec h_{m:n} =(h_m, h_{m + 1}, \dots, h_n)^\top$, let negative indices be zero by definition, and let undefined $\Gamma$ entries be zero. Then we can find that, if we disregard the scaling here, the map to the orthogonal polynomials is given by
$$\begin{align*}
z_0 &= 1 = \gamma_{0,0:0}\cdot 1\\
z_1 &= x - \alpha_1 \\
&= \underbrace{(\vec\gamma_{0,-1:0} - \alpha_1
\vec\gamma_{0,0:1})^\top}_{\vec\gamma_{1,0:1}^\top}(1 , x) \\
z_2 &= (x - \alpha_2)z_1 - \frac{\sigma_2}{\sigma_1} z_0 \\
&= x^2 + (\gamma_{10} -\alpha_2)x
- \alpha_2\gamma_{10} - \frac{\sigma_2}{\sigma_1} \\
&= (1, x, x^2)\underbrace{
(\vec\gamma_{1,-1:1}-\alpha_2\vec\gamma_{1,0:2}
-\frac{\sigma_2}{\sigma_1}\vec\gamma_{0,0:2})}_{
\vec\gamma_{2,0:2}^\top}\\
z_3 &= (x - \alpha_3)z_2 - \frac{\sigma_3}{\sigma_2} z_1 \\
&= x^3 + (\gamma_{21}-\alpha_3)x^2
+ (\gamma_{20}-\alpha_3\gamma_{21})x
-\alpha_3\gamma_{20}
-\frac{\sigma_3}{\sigma_2} x
- \frac{\sigma_3}{\sigma_2}\gamma_{10} \\
&= (1, x, x^2, x^3)\underbrace{(\vec\gamma_{2,-1:2}-\alpha_3\vec\gamma_{2,0:3}
-\frac{\sigma_3}{\sigma_2}\vec\gamma_{1,0:3})}_{
\vec\gamma_{3,0:3}}\\
\vdots\, &= \,\vdots
\end{align*}$$
Thus, we can compute the $\Gamma$ matrix with this code
get_poly_orth_map <- function(object){
stopifnot(inherits(object, "poly"))
sigs <- attr(object, "coefs")$norm2
alpha <- attr(object, "coefs")$alpha
nc <- length(alpha) + 1L
Gamma <- matrix(0., nc, nc)
Gamma[1, 1] <- 1
if(nc > 1){
Gamma[ , 2] <- -alpha[1] * Gamma[, 1]
Gamma[2, 2] <- 1
}
if(nc > 2)
for(i in 3:nc){
i_m1 <- i - 1L
Gamma[, i] <- c(0, Gamma[-nc, i_m1]) - alpha[i_m1] * Gamma[, i_m1] -
sigs[i] / sigs[i_m1] * Gamma[, i - 2L]
}
tmp <- sigs[-1]
tmp[1] <- 1
Gamma / rep(sqrt(tmp), each = nc)
}
and confirm that this gives the right matrix
# from whuber's answer
set.seed(1)
lm_method <- function(d, n = d * 4){
x <- rnorm(n, mean = 2)
x_p <- outer(x, 1:d, `^`)
colnames(x_p) <- paste0("x", 1:d)
poly_obj <- poly(x, d)
list(poly_obj = poly_obj, gamma = coef(lm(cbind(1, poly_obj) ~ x_p)))
}
# check that we get the same with different degrees
for(d in 1:10){
dat <- lm_method(d)
stopifnot(all.equal(
dat$gamma, get_poly_orth_map(dat$poly_obj), check.attributes = FALSE))
}
Is reconstructing raw coefficients (and obtaining their variances) from coefficients fitted to an orthogonal polynomial...
impossible to do and I'm wasting my time.
As whuber shows, it is not. As another addition, here is an example to get the standard errors of the estimates as mentioned in the comments
# from `help(cars)`
fm <- lm(dist ~ speed + I(speed^2) + I(speed^3), data = cars)
summary(fm)
#R> Call:
#R> lm(formula = dist ~ speed + I(speed^2) + I(speed^3), data = cars)
#R>
#R> Residuals:
#R> Min 1Q Median 3Q Max
#R> -26.67 -9.60 -2.23 7.08 44.69
#R>
#R> Coefficients:
#R> Estimate Std. Error t value Pr(>|t|)
#R> (Intercept) -19.5050 28.4053 -0.69 0.50
#R> speed 6.8011 6.8011 1.00 0.32
#R> I(speed^2) -0.3497 0.4999 -0.70 0.49
#R> I(speed^3) 0.0103 0.0113 0.91 0.37
#R>
#R> Residual standard error: 15.2 on 46 degrees of freedom
#R> Multiple R-squared: 0.673, Adjusted R-squared: 0.652
#R> F-statistic: 31.6 on 3 and 46 DF, p-value: 3.07e-11
fp <- lm(dist ~ poly(speed, 3), data = cars)
gamma <- get_poly_orth_map(poly(cars$speed, 3))
drop(gamma %*% coef(fp))
#R> [1] -19.5050 6.8011 -0.3497 0.0103
sqrt(diag(tcrossprod(gamma %*% vcov(fp), gamma)))
#R> [1] 28.4053 6.8011 0.4999 0.0113
|
11,756
|
What are the effects of depth and width in deep neural networks?
|
The "Wide Residual Networks" paper linked makes a nice summary at the bottom of p8:
Widening consistently improves performance across residual networks of different depth;
Increasing both depth and width helps until the number of parameters becomes too high and stronger regularization is needed;
There doesn't seem to be a regularization effect from very high depth in residual networks, as wide networks with the same number of parameters as thin ones can learn the same or better representations. Furthermore, wide networks can successfully learn with a 2 or more times larger number of parameters than thin ones, which would require doubling the depth of thin networks, making them infeasibly expensive to train.
The paper focused on an experimental comparison between the two methods. Nonetheless, I believe theoretically (and the paper also states) that one of the main reasons why wide residual networks produce faster and more accurate results than previous works is because:
it is more computationally effective to widen the layers than have
thousands of small kernels as GPU is much more efficient in parallel computations on large tensors.
I.e. wider residual networks allow many multiplications to be computed in parallel, whilst deeper residual networks require more sequential computation (since each layer's computation depends on the previous layer).
Also regarding my third bullet point above:
the residual block with identity mapping that allows to train very deep networks is at the same time a weakness of residual networks. As gradient flows through the network there is nothing to force it to go through residual block weights and it can avoid learning anything during training, so it is possible that there is either only a few blocks that learn useful representations, or many blocks share very little information with small contribution to the final goal.
There are also some useful comments at the Reddit page regarding this paper.
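The width-versus-depth parameter tradeoff can be sketched with a back-of-the-envelope count (an illustrative calculation, not from the paper; `conv_params` is a hypothetical helper): for a stack of 3x3 convolutions with a constant channel count, each layer holds roughly 3*3*c*c weights, so doubling every layer's width quadruples the parameter count, while doubling the depth only doubles it.

```python
def conv_params(depth, width, k=3):
    # Rough parameter count for a stack of `depth` k x k conv layers,
    # each mapping `width` channels to `width` channels (biases ignored).
    return depth * k * k * width * width

base = conv_params(depth=16, width=64)
wider = conv_params(depth=16, width=128)   # double the width
deeper = conv_params(depth=32, width=64)   # double the depth

print(wider // base)   # 4: widening scales quadratically in channel count
print(deeper // base)  # 2: deepening scales linearly in layer count
```

This is why "2 or more times larger number of parameters" is cheap to reach by widening but expensive to reach by stacking thin layers.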
|
What are the effects of depth and width in deep neural networks?
|
The "Wide Residual Networks" paper linked makes a nice summary at the bottom of p8:
Widening consistently improves performance across residual networks of different depth;
Increasing both depth and
|
What are the effects of depth and width in deep neural networks?
The "Wide Residual Networks" paper linked makes a nice summary at the bottom of p8:
Widening consistently improves performance across residual networks of different depth;
Increasing both depth and width helps until the number of parameters becomes too high and stronger regularization is needed;
There doesn’t seem to be a regularization effect from very high depth in residual networks as wide networks with the same number of parameters as thin ones can learn same or better representations. Furthermore, wide networks can successfully learn with a 2 or more times larger number of parameters than thin ones, which would require doubling the depth of thin networks, making them infeasibly expensive to train.
The paper focused on an experimental comparison between the two methods. Nonetheless, I believe theoretically (and the paper also states this) one of the main reasons why wide residual networks produce faster and more accurate results than previous works is:
it is more computationally effective to widen the layers than have
thousands of small kernels as GPU is much more efficient in parallel computations on large tensors.
I.e. wider residual networks allow many multiplications to be computed in parallel, whilst deeper residual networks require more sequential computation (since each layer's computation depends on the previous layer).
Also regarding my third bullet point above:
the residual block with identity mapping that allows to train very deep networks is at the same time a weakness of residual networks. As gradient flows through the network there is nothing to force it to go through residual block weights and it can avoid learning anything during training, so it is possible that there is either only a few blocks that learn useful representations, or many blocks share very little information with small contribution to the final goal.
There are also some useful comments at the Reddit page regarding this paper.
|
What are the effects of depth and width in deep neural networks?
The "Wide Residual Networks" paper linked makes a nice summary at the bottom of p8:
Widening consistently improves performance across residual networks of different depth;
Increasing both depth and
|
11,757
|
How to set up and estimate a multinomial logit model in R?
|
I'm sure you've already found your solution as this post is very old, but for those of us who are still looking for solutions: I have found Multinomial Probit and Logit Models in R is a great source of instructions on how to run a multinomial logistic regression model in R using the mlogit package. If you go to the econometrics academy website she has all the scripts and data for R, SAS, and Stata (or SPSS, I think).
It also explains how, why, and what to do about transforming your data from the "wide" format into the "long" format. Most likely you have a wide format, which requires transformation.
Multinomial Probit and Logit Models
|
How to set up and estimate a multinomial logit model in R?
|
I'm sure you've already found your solution as this post is very old, but for those of us who are still looking for solutions: I have found Multinomial Probit and Logit Models in R is a great source
|
How to set up and estimate a multinomial logit model in R?
I'm sure you've already found your solution as this post is very old, but for those of us who are still looking for solutions: I have found Multinomial Probit and Logit Models in R is a great source of instructions on how to run a multinomial logistic regression model in R using the mlogit package. If you go to the econometrics academy website she has all the scripts and data for R, SAS, and Stata (or SPSS, I think).
It also explains how, why, and what to do about transforming your data from the "wide" format into the "long" format. Most likely you have a wide format, which requires transformation.
Multinomial Probit and Logit Models
|
How to set up and estimate a multinomial logit model in R?
I'm sure you've already found your solution as this post is very old, but for those of us who are still looking for solutions: I have found Multinomial Probit and Logit Models in R is a great source
|
11,758
|
How to set up and estimate a multinomial logit model in R?
|
In general, differences in AIC values between two different pieces of software are not entirely surprising. Calculating the likelihoods often involves a constant that is the same across different models of the same data, and different developers can make different choices about what to leave in or out of that constant. You should worry when the AIC difference between two models disagrees across the two programs. Actually, I just noticed that an argument to multinom() allows you to change how rows with identical X values are collapsed, and that this affects the baseline of the deviance, and hence the AIC. You could try different values of the summ argument and see if that makes the deviances agree. We don't know what JMP is doing! :)
If the estimated coefficients and standard errors are the same, then you're good. If the coefficients are not the same, don't forget that JMP might choose a different baseline outcome to calculate the coefficients for. multinom() makes different choices from mlogit(), for example.
Getting p-values from the summary() result of multinom() is pretty easy. I can't reproduce your models, so here's the example from the help page on multinom():
library("nnet")
data("Fishing", package = "mlogit")
fishing.mu <- multinom(mode ~ income, data = Fishing)
sum.fishing <- summary(fishing.mu) # gives a table of outcomes by covariates for coef and SE
str(sum.fishing)
# now get the p values by first getting the t values
pt(abs(sum.fishing$coefficients / sum.fishing$standard.errors),
df=nrow(Fishing)-6,lower=FALSE)
I agree that figuring out the mlogit package is a bit of a challenge! Read the vignettes, carefully. They do help.
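The p-value step above is just a Wald test on each coefficient. As a language-neutral sketch of the same arithmetic (Python here for illustration; the coefficient and standard-error values are hypothetical, and a normal approximation is used in place of the t distribution):

```python
from math import erf, sqrt

def wald_p_value(coef, se):
    # Two-sided p-value for H0: coefficient = 0, normal approximation:
    # p = 2 * P(Z > |coef / se|)
    z = abs(coef / se)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# hypothetical estimates, e.g. pulled from a multinomial logit summary table
print(round(wald_p_value(0.8, 0.4), 4))  # z = 2    -> p ≈ 0.0455
print(round(wald_p_value(0.1, 0.4), 4))  # z = 0.25 -> p ≈ 0.8026
```

With large samples the t and normal versions agree closely; the R snippet above does the same division of coefficients by standard errors before looking up tail probabilities.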
|
How to set up and estimate a multinomial logit model in R?
|
In general, differences in AIC values between two different pieces of software are not entirely surprising. Calculating the likelihoods often involves a constant that is the same between different mod
|
How to set up and estimate a multinomial logit model in R?
In general, differences in AIC values between two different pieces of software are not entirely surprising. Calculating the likelihoods often involves a constant that is the same across different models of the same data, and different developers can make different choices about what to leave in or out of that constant. You should worry when the AIC difference between two models disagrees across the two programs. Actually, I just noticed that an argument to multinom() allows you to change how rows with identical X values are collapsed, and that this affects the baseline of the deviance, and hence the AIC. You could try different values of the summ argument and see if that makes the deviances agree. We don't know what JMP is doing! :)
If the estimated coefficients and standard errors are the same, then you're good. If the coefficients are not the same, don't forget that JMP might choose a different baseline outcome to calculate the coefficients for. multinom() makes different choices from mlogit(), for example.
Getting p-values from the summary() result of multinom() is pretty easy. I can't reproduce your models, so here's the example from the help page on multinom():
library("nnet")
data("Fishing", package = "mlogit")
fishing.mu <- multinom(mode ~ income, data = Fishing)
sum.fishing <- summary(fishing.mu) # gives a table of outcomes by covariates for coef and SE
str(sum.fishing)
# now get the p values by first getting the t values
pt(abs(sum.fishing$coefficients / sum.fishing$standard.errors),
df=nrow(Fishing)-6,lower=FALSE)
I agree that figuring out the mlogit package is a bit of a challenge! Read the vignettes, carefully. They do help.
|
How to set up and estimate a multinomial logit model in R?
In general, differences in AIC values between two different pieces of software are not entirely surprising. Calculating the likelihoods often involves a constant that is the same between different mod
|
11,759
|
How to set up and estimate a multinomial logit model in R?
|
You could also try running a multinomial logit using the glmnet package. I'm not sure how to force it to keep all variables, but I'm sure it's possible.
|
How to set up and estimate a multinomial logit model in R?
|
You could also try running a multinomial logit using the glmnet package. I'm not sure how to force it to keep all variables, but I'm sure it's possible.
|
How to set up and estimate a multinomial logit model in R?
You could also try running a multinomial logit using the glmnet package. I'm not sure how to force it to keep all variables, but I'm sure it's possible.
|
How to set up and estimate a multinomial logit model in R?
You could also try running a multinomial logit using the glmnet package. I'm not sure how to force it to keep all variables, but I'm sure it's possible.
|
11,760
|
Why are mixed data a problem for euclidean-based clustering algorithms?
|
It's not about not being able to compute something.
Distances must be used to measure something meaningful. This will fail much earlier with categorical data. If it ever works with more than one variable, that is...
If you have the attributes shoe size and body mass, Euclidean distance doesn't make much sense either. It's good when x,y,z are distances. Then Euclidean distance is the line of sight distance between the points.
Now if you dummy-encode variables, what meaning does this yield?
Plus, Euclidean distance doesn't make sense when your data is discrete.
If there only exist integer x and y values, Euclidean distance will still yield non-integer distances. They don't map back to the data. Similarly, for dummy-encoded variables, the distance will not map back to a quantity of dummy variables...
When you then plan to use e.g. k-means clustering, it isn't just about distances, but about computing the mean. But there is no reasonable mean on dummy-encoded variables, is there?
Finally, there is the curse of dimensionality. Euclidean distance is known to degrade when you increase the number of variables. Adding dummy-encoded variables means you lose distance contrast quite fast. Everything is as similar as everything else, because a single dummy variable can make all the difference.
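The loss of distance contrast from dummy encoding is easy to see numerically (a small illustrative sketch, not part of the original answer): every pair of distinct one-hot codes sits exactly sqrt(2) apart, so a categorical variable contributes the same fixed jump to the Euclidean distance no matter which two categories differ.

```python
from math import dist  # Euclidean distance between points (Python 3.8+)

# One-hot (dummy) codes for a 3-level categorical variable
blue, orange, green = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Every distinct pair is equally far apart: no false ordering,
# but also no contrast -- each dummy block adds the same constant
print(dist(blue, orange) == dist(blue, green) == dist(orange, green))  # True
print(round(dist(blue, orange), 4))  # sqrt(2) ≈ 1.4142
```

Stack many such blocks and these constant jumps dominate the distance, which is the contrast loss described above.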
|
Why are mixed data a problem for euclidean-based clustering algorithms?
|
It's not about not being able to compute something.
Distances must be used to measure something meaningful. This will fail much earlier with categorical data. If it ever works with more than one variab
|
Why are mixed data a problem for euclidean-based clustering algorithms?
It's not about not being able to compute something.
Distances must be used to measure something meaningful. This will fail much earlier with categorical data. If it ever works with more than one variable, that is...
If you have the attributes shoe size and body mass, Euclidean distance doesn't make much sense either. It's good when x,y,z are distances. Then Euclidean distance is the line of sight distance between the points.
Now if you dummy-encode variables, what meaning does this yield?
Plus, Euclidean distance doesn't make sense when your data is discrete.
If there only exist integer x and y values, Euclidean distance will still yield non-integer distances. They don't map back to the data. Similarly, for dummy-encoded variables, the distance will not map back to a quantity of dummy variables...
When you then plan to use e.g. k-means clustering, it isn't just about distances, but about computing the mean. But there is no reasonable mean on dummy-encoded variables, is there?
Finally, there is the curse of dimensionality. Euclidean distance is known to degrade when you increase the number of variables. Adding dummy-encoded variables means you lose distance contrast quite fast. Everything is as similar as everything else, because a single dummy variable can make all the difference.
|
Why are mixed data a problem for euclidean-based clustering algorithms?
It's not about not being able to compute something.
Distances must be used to measure something meaningful. This will fail much earlier with categorical data. If it ever works with more than one variab
|
11,761
|
Why are mixed data a problem for euclidean-based clustering algorithms?
|
At the heart of these metric based clustering problems is the idea of interpolation.
Take whatever method you just cited, and let us consider a continuous variable such as weight. You have 100kg and you have 10kg in your data. When you see a new 99kg, the metric will enable you to approach 100kg --- even though you have never seen it. Unfortunately, no such interpolation exists for discrete data.
Another argument is that there is no natural way to do the encoding. Say you want to assign 3 values on the real line and make every pair of them equidistant; this is impossible. And if you assign them to different categories and run, let's say, PCA, then you lose the information that they in fact reflect the same category.
|
Why are mixed data a problem for euclidean-based clustering algorithms?
|
At the heart of these metric based clustering problems is the idea of interpolation.
Take whatever method you just cited, and let us consider a continuous variable such as weight. You have 100kg and
|
Why are mixed data a problem for euclidean-based clustering algorithms?
At the heart of these metric based clustering problems is the idea of interpolation.
Take whatever method you just cited, and let us consider a continuous variable such as weight. You have 100kg and you have 10kg in your data. When you see a new 99kg, the metric will enable you to approach 100kg --- even though you have never seen it. Unfortunately, no such interpolation exists for discrete data.
Another argument is that there is no natural way to do the encoding. Say you want to assign 3 values on the real line and make every pair of them equidistant; this is impossible. And if you assign them to different categories and run, let's say, PCA, then you lose the information that they in fact reflect the same category.
|
Why are mixed data a problem for euclidean-based clustering algorithms?
At the heart of these metric based clustering problems is the idea of interpolation.
Take whatever method you just cited, and let us consider a continuous variable such as weight. You have 100kg and
|
11,762
|
Why are mixed data a problem for euclidean-based clustering algorithms?
|
A problem with unordered categorical values is that if you encode them as integers you force an ordering and thus a new meaning onto the variables. E.g. if you encode blue as 1, orange as 2, and green as 3, then you imply that a data pattern with the orange value is closer to a pattern with the green value than to one with the blue value.
One way to handle this is to make them new features (columns). For each distinct value you create a new binary feature and set it to true/false (in other words, binary-encode the values and make each bit a column). For each data pattern from this new set of features, only one feature will have the value 1 and all the others 0. But this usually doesn't stop the training algorithm from assigning centroid values close to 1 to more than one feature. This of course might cause interpretation issues, because it doesn't make sense in the data domain.
You don't have the same problem with "capacity classes", namely ordered categories, since in that case the numerical value assignment makes sense.
And of course, if you use features of a different nature, measurement unit, or range of values, then you should always normalize the values.
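The implied ordering from the first paragraph can be sketched in a couple of lines (illustrative integer codes, not from the original answer): with integer labels, "blue" ends up twice as far from "green" as "orange" is, even though all three categories should be interchangeable.

```python
# Integer (label) encoding forces an artificial ordering on the categories
codes = {"blue": 1, "orange": 2, "green": 3}

print(abs(codes["orange"] - codes["green"]))  # 1
print(abs(codes["blue"] - codes["green"]))    # 2: blue looks "farther" from green
```

The one-binary-feature-per-value scheme described above avoids exactly this artifact, at the cost of the centroid-interpretation issues mentioned.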
https://stackoverflow.com/questions/19507928/growing-self-organizing-map-for-mixed-type-data/19511894#19511894
https://stackoverflow.com/questions/13687256/is-it-right-to-normalize-data-and-or-weight-vectors-in-a-som/13693409#13693409
|
Why are mixed data a problem for euclidean-based clustering algorithms?
|
A problem with unordered categorical values is that if you encode them as integers you force an ordering and thus a new meaning onto the variables. E.g. if you encode blue as 1, orange as 2, and green as 3, then y
|
Why are mixed data a problem for euclidean-based clustering algorithms?
A problem with unordered categorical values is that if you encode them as integers you force an ordering and thus a new meaning onto the variables. E.g. if you encode blue as 1, orange as 2, and green as 3, then you imply that a data pattern with the orange value is closer to a pattern with the green value than to one with the blue value.
One way to handle this is to make them new features (columns). For each distinct value you create a new binary feature and set it to true/false (in other words, binary-encode the values and make each bit a column). For each data pattern from this new set of features, only one feature will have the value 1 and all the others 0. But this usually doesn't stop the training algorithm from assigning centroid values close to 1 to more than one feature. This of course might cause interpretation issues, because it doesn't make sense in the data domain.
You don't have the same problem with "capacity classes", namely ordered categories, since in that case the numerical value assignment makes sense.
And of course, if you use features of a different nature, measurement unit, or range of values, then you should always normalize the values.
https://stackoverflow.com/questions/19507928/growing-self-organizing-map-for-mixed-type-data/19511894#19511894
https://stackoverflow.com/questions/13687256/is-it-right-to-normalize-data-and-or-weight-vectors-in-a-som/13693409#13693409
|
Why are mixed data a problem for euclidean-based clustering algorithms?
A problem with unordered categorical values is that if you encode them as integers you force an ordering and thus a new meaning onto the variables. E.g. if you encode blue as 1, orange as 2, and green as 3, then y
|
11,763
|
Why are mixed data a problem for euclidean-based clustering algorithms?
|
The answer is actually quite simple; we just need to understand what the information in a dummy variable really is. A dummy variable denotes the presence or absence of factor levels (discrete values of a categorical variable). It is meant to represent something non-measurable, non-quantifiable, by storing the information of whether it's there or not. This is why a dummy variable is expressed in binary digits, one for each discrete value of the categorical variable it represents (or one fewer).
Representing factor levels as 0/1 values makes sense only in an analytical equation, such as a linear model (this is an easy concept for those who can interpret the coefficients of statistical models). In a dummy variable, the information of the underlying categorical variable is stored in the order of bits. When using those bits as the dimensions to map an input sample to a feature space (as in the case of a similarity/distance matrix), the information in the order of bits is completely lost.
|
Why are mixed data a problem for euclidean-based clustering algorithms?
|
The answer is actually quite simple, we just need to understand what the information in a dummy variable really is. The idea of a dummy variable denotes the presence or absence of factor levels (discr
|
Why are mixed data a problem for euclidean-based clustering algorithms?
The answer is actually quite simple; we just need to understand what the information in a dummy variable really is. A dummy variable denotes the presence or absence of factor levels (discrete values of a categorical variable). It is meant to represent something non-measurable, non-quantifiable, by storing the information of whether it's there or not. This is why a dummy variable is expressed in binary digits, one for each discrete value of the categorical variable it represents (or one fewer).
Representing factor levels as 0/1 values makes sense only in an analytical equation, such as a linear model (this is an easy concept for those who can interpret the coefficients of statistical models). In a dummy variable, the information of the underlying categorical variable is stored in the order of bits. When using those bits as the dimensions to map an input sample to a feature space (as in the case of a similarity/distance matrix), the information in the order of bits is completely lost.
|
Why are mixed data a problem for euclidean-based clustering algorithms?
The answer is actually quite simple, we just need to understand what the information in a dummy variable really is. The idea of a dummy variable denotes the presence or absence of factor levels (discr
|
11,764
|
Difference between the assumptions underlying a correlation and a regression slope tests of significance
|
Introduction
This reply addresses the underlying motivation for this set of questions:
What are the assumptions underlying a correlation test and a regression slope test?
In light of the background provided in the question, though, I would like to suggest expanding this question a little: let us explore the different purposes and conceptions of correlation and regression.
Correlation typically is invoked in situations where
Data are bivariate: exactly two distinct values of interest are associated with each "subject" or "observation".
The data are observational: neither of the values was set by the experimenter. Both were observed or measured.
Interest lies in identifying, quantifying, and testing some kind of relationship between the variables.
Regression is used where
Data are bivariate or multivariate: there may be more than two distinct values of interest.
Interest focuses on understanding what can be said about a subset of the variables--the "dependent" variables or "responses"--based on what might be known about the other subset--the "independent" variables or "regressors."
Specific values of the regressors may have been set by the experimenter.
These differing aims and situations lead to distinct approaches. Because this thread is concerned about their similarities, let's focus on the case where they are most similar: bivariate data. In either case those data will typically be modeled as realizations of a random variable $(X,Y)$. Very generally, both forms of analysis seek relatively simple characterizations of this variable.
Correlation
I believe "correlation analysis" has never been generally defined. Should it be limited to computing correlation coefficients, or could it be considered more extensively as comprising PCA, cluster analysis, and other forms of analysis that relate two variables? Whether your point of view is narrowly circumscribed or broad, perhaps you would agree that the following description applies:
Correlation is an analysis that makes assumptions about the distribution of $(X,Y)$, without privileging either variable, and uses the data to draw more specific conclusions about that distribution.
For instance, you might begin by assuming $(X,Y)$ has a bivariate Normal distribution and use the Pearson correlation coefficient of the data to estimate one of the parameters of that distribution. This is one of the narrowest (and oldest) conceptions of correlation.
As another example, you might begin by assuming $(X,Y)$ could have any distribution and use a cluster analysis to identify $k$ "centers." One might construe that as the beginnings of a resolution of the distribution of $(X,Y)$ into a mixture of unimodal bivariate distributions, one for each cluster.
One thing common to all these approaches is a symmetric treatment of $X$ and $Y$: neither is privileged over the other. Both play equivalent roles.
Regression
Regression enjoys a clear, universally understood definition:
Regression characterizes the conditional distribution of $Y$ (the response) given $X$ (the regressor).
Historically, regression traces its roots to Galton's discovery (c. 1885) that bivariate Normal data $(X,Y)$ enjoy a linear regression: the conditional expectation of $Y$ is a linear function of $X$. At one pole of the special-general spectrum is Ordinary Least Squares (OLS) regression where the conditional distribution of $Y$ is assumed to be Normal$(\beta_0+\beta_1 X, \sigma^2)$ for fixed parameters $\beta_0, \beta_1,$ and $\sigma$ to be estimated from the data.
At the extremely general end of this spectrum are generalized linear models, generalized additive models, and others of their ilk that relax all aspects of OLS: the expectation, variance, and even the shape of the conditional distribution of $Y$ may be allowed to vary nonlinearly with $X$. The concept that survives all this generalization is that interest remains focused on understanding how $Y$ depends on $X$. That fundamental asymmetry is still there.
Correlation and Regression
One very special situation is common to both approaches and is frequently encountered: the bivariate Normal model. In this model, a scatterplot of data will assume a classic "football," oval, or cigar shape: the data are spread elliptically around an orthogonal pair of axes.
A correlation analysis focuses on the "strength" of this relationship, in the sense that a relatively small spread around the major axis is "strong."
As remarked above, the regression of $Y$ on $X$ (and, equally, the regression of $X$ on $Y$) is linear: the conditional expectation of the response is a linear function of the regressor.
(It is worthwhile pondering the clear geometric differences between these two descriptions: they illuminate the underlying statistical differences.)
Of the five bivariate Normal parameters (two means, two spreads, and one more that measures the dependence between the two variables), one is of common interest: the fifth parameter, $\rho$. It is directly (and simply) related to
The coefficient of $X$ in the regression of $Y$ on $X$.
The coefficient of $Y$ in the regression of $X$ on $Y$.
The conditional variances in either of the regressions $(1)$ and $(2)$.
The spreads of $(X,Y)$ around the axes of an ellipse (measured as variances).
A correlation analysis focuses on $(4)$, without distinguishing the roles of $X$ and $Y$.
A regression analysis focuses on the versions of $(1)$ through $(3)$ appropriate to the choice of regressor and response variables.
In both cases, the hypothesis $H_0: \rho=0$ enjoys a special role: it indicates no correlation as well as no variation of $Y$ with respect to $X$. Because (in this simplest situation) both the probability model and the null hypothesis are common to correlation and regression, it should be no surprise that both methods share an interest in the same statistics (whether called "$r$" or "$\hat\beta$"); that the null sampling distributions of those statistics are the same; and (therefore) that hypothesis tests can produce identical p-values.
This common application, which is the first one anybody learns, can make it difficult to recognize just how different correlation and regression are in their concepts and aims. It is only when we learn about their generalizations that the underlying differences are exposed. It would be difficult to construe a GAM as giving much information about "correlation," just as it would be hard to frame a cluster analysis as a form of "regression." The two are different families of procedures with different objectives, each useful in its own right when applied appropriately.
I hope that this rather general and somewhat vague review has illuminated some of the ways in which "these issues go deeper than simply whether $r$ and $\hat\beta$ should be numerically equal." An appreciation of these differences has helped me understand what various techniques are attempting to accomplish, as well as to make better use of them in solving statistical problems.
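The claim that the correlation test and the slope test share the same statistic (and hence the same p-value) can be checked numerically. A small pure-Python sketch on made-up data (the data values are illustrative only): the t statistic for testing $r = 0$, $r\sqrt{n-2}/\sqrt{1-r^2}$, coincides with the OLS slope's t statistic $\hat\beta/\operatorname{se}(\hat\beta)$.

```python
from math import sqrt

# made-up bivariate data
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.2, 1.9, 3.4, 3.1, 5.3, 5.9]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

# Pearson correlation and its t statistic
r = sxy / sqrt(sxx * syy)
t_r = r * sqrt(n - 2) / sqrt(1 - r ** 2)

# OLS slope, residual sum of squares, and the slope's t statistic
beta = sxy / sxx
resid_ss = sum((yi - (my + beta * (xi - mx))) ** 2 for xi, yi in zip(x, y))
se_beta = sqrt(resid_ss / (n - 2) / sxx)
t_beta = beta / se_beta

print(abs(t_r - t_beta) < 1e-9)  # True: identical test statistics
```

Since both statistics follow the same $t_{n-2}$ null distribution, the two tests return identical p-values, exactly as argued above for the bivariate Normal case.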
|
Difference between the assumptions underlying a correlation and a regression slope tests of signific
|
Introduction
This reply addresses the underlying motivation for this set of questions:
What are the assumptions underlying a correlation test and a regression slope test?
In light of the background
|
Difference between the assumptions underlying a correlation and a regression slope tests of significance
Introduction
This reply addresses the underlying motivation for this set of questions:
What are the assumptions underlying a correlation test and a regression slope test?
In light of the background provided in the question, though, I would like to suggest expanding this question a little: let us explore the different purposes and conceptions of correlation and regression.
Correlation typically is invoked in situations where
Data are bivariate: exactly two distinct values of interest are associated with each "subject" or "observation".
The data are observational: neither of the values was set by the experimenter. Both were observed or measured.
Interest lies in identifying, quantifying, and testing some kind of relationship between the variables.
Regression is used where
Data are bivariate or multivariate: there may be more than two distinct values of interest.
Interest focuses on understanding what can be said about a subset of the variables--the "dependent" variables or "responses"--based on what might be known about the other subset--the "independent" variables or "regressors."
Specific values of the regressors may have been set by the experimenter.
These differing aims and situations lead to distinct approaches. Because this thread is concerned about their similarities, let's focus on the case where they are most similar: bivariate data. In either case those data will typically be modeled as realizations of a random variable $(X,Y)$. Very generally, both forms of analysis seek relatively simple characterizations of this variable.
Correlation
I believe "correlation analysis" has never been generally defined. Should it be limited to computing correlation coefficients, or could it be considered more extensively as comprising PCA, cluster analysis, and other forms of analysis that relate two variables? Whether your point of view is narrowly circumscribed or broad, perhaps you would agree that the following description applies:
Correlation is an analysis that makes assumptions about the distribution of $(X,Y)$, without privileging either variable, and uses the data to draw more specific conclusions about that distribution.
For instance, you might begin by assuming $(X,Y)$ has a bivariate Normal distribution and use the Pearson correlation coefficient of the data to estimate one of the parameters of that distribution. This is one of the narrowest (and oldest) conceptions of correlation.
As another example, you might begin by assuming $(X,Y)$ could have any distribution and use a cluster analysis to identify $k$ "centers." One might construe that as the beginnings of a resolution of the distribution of $(X,Y)$ into a mixture of unimodal bivariate distributions, one for each cluster.
One thing common to all these approaches is a symmetric treatment of $X$ and $Y$: neither is privileged over the other. Both play equivalent roles.
Regression
Regression enjoys a clear, universally understood definition:
Regression characterizes the conditional distribution of $Y$ (the response) given $X$ (the regressor).
Historically, regression traces its roots to Galton's discovery (c. 1885) that bivariate Normal data $(X,Y)$ enjoy a linear regression: the conditional expectation of $Y$ is a linear function of $X$. At one pole of the special-general spectrum is Ordinary Least Squares (OLS) regression where the conditional distribution of $Y$ is assumed to be Normal$(\beta_0+\beta_1 X, \sigma^2)$ for fixed parameters $\beta_0, \beta_1,$ and $\sigma$ to be estimated from the data.
At the extremely general end of this spectrum are generalized linear models, generalized additive models, and others of their ilk that relax all aspects of OLS: the expectation, variance, and even the shape of the conditional distribution of $Y$ may be allowed to vary nonlinearly with $X$. The concept that survives all this generalization is that interest remains focused on understanding how $Y$ depends on $X$. That fundamental asymmetry is still there.
Correlation and Regression
One very special situation is common to both approaches and is frequently encountered: the bivariate Normal model. In this model, a scatterplot of data will assume a classic "football," oval, or cigar shape: the data are spread elliptically around an orthogonal pair of axes.
A correlation analysis focuses on the "strength" of this relationship, in the sense that a relatively small spread around the major axis is "strong."
As remarked above, the regression of $Y$ on $X$ (and, equally, the regression of $X$ on $Y$) is linear: the conditional expectation of the response is a linear function of the regressor.
(It is worthwhile pondering the clear geometric differences between these two descriptions: they illuminate the underlying statistical differences.)
Of the five bivariate Normal parameters (two means, two spreads, and one more that measures the dependence between the two variables), one is of common interest: the fifth parameter, $\rho$. It is directly (and simply) related to
The coefficient of $X$ in the regression of $Y$ on $X$.
The coefficient of $Y$ in the regression of $X$ on $Y$.
The conditional variances in either of the regressions $(1)$ and $(2)$.
The spreads of $(X,Y)$ around the axes of an ellipse (measured as variances).
A correlation analysis focuses on $(4)$, without distinguishing the roles of $X$ and $Y$.
A regression analysis focuses on the versions of $(1)$ through $(3)$ appropriate to the choice of regressor and response variables.
In both cases, the hypothesis $H_0: \rho=0$ enjoys a special role: it indicates no correlation as well as no variation of $Y$ with respect to $X$. Because (in this simplest situation) both the probability model and the null hypothesis are common to correlation and regression, it should be no surprise that both methods share an interest in the same statistics (whether called "$r$" or "$\hat\beta$"); that the null sampling distributions of those statistics are the same; and (therefore) that hypothesis tests can produce identical p-values.
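These relationships are easy to verify numerically. Here is a minimal sketch (in Python rather than the R used elsewhere in this thread; numpy is the only assumed dependency) showing that, in any sample, the product of the slopes from $(1)$ and $(2)$ equals the squared correlation coefficient — an algebraic identity, not an approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw a sample from a bivariate Normal with correlation rho = 0.6
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=1000).T

# Sample correlation and the two regression slopes
r = np.corrcoef(x, y)[0, 1]
slope_y_on_x = np.cov(x, y)[0, 1] / np.var(x, ddof=1)  # coefficient of X in the regression of Y on X
slope_x_on_y = np.cov(x, y)[0, 1] / np.var(y, ddof=1)  # coefficient of Y in the regression of X on Y

# The product of the two slopes equals r^2 exactly
assert np.isclose(slope_y_on_x * slope_x_on_y, r**2)
```

Because both slopes share the sample covariance in their numerators, the identity $b_{y \cdot x}\, b_{x \cdot y} = r^2$ holds exactly in every sample, not only for Normal data.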
This common application, which is the first one anybody learns, can make it difficult to recognize just how different correlation and regression are in their concepts and aims. It is only when we learn about their generalizations that the underlying differences are exposed. It would be difficult to construe a GAM as giving much information about "correlation," just as it would be hard to frame a cluster analysis as a form of "regression." The two are different families of procedures with different objectives, each useful in its own right when applied appropriately.
I hope that this rather general and somewhat vague review has illuminated some of the ways in which "these issues go deeper than simply whether $r$ and $\hat\beta$ should be numerically equal." An appreciation of these differences has helped me understand what various techniques are attempting to accomplish, as well as to make better use of them in solving statistical problems.
|
11,765
|
Difference between the assumptions underlying a correlation and a regression slope tests of significance
|
As @whuber's answer suggests there are a number of models and techniques that may fall under the correlation umbrella that do not have clear analogues in a regression world and vice versa. However, by and large when people think about, compare, and contrast regression and correlation they are in fact considering two sides of the same mathematical coin (typically a linear regression and a Pearson correlation). Whether they should take a broader view of both families of analyses is something of a separate debate, and one that researchers should wrestle with at least minimally.
Ultimately, when evaluating correlation and regression in their most common applications, there are conceptual distinctions to be made between the two, but not mathematical ones, aside from a linear transformation of $x$ and $y$ to specify certain distributional properties of $(x,y)$.
In this narrow view of both regression and correlation the following explanations should help elucidate how and why their estimates, standard errors and p values are essentially variants of one another.
With the dataframe dat being the longley data set referenced above we get the following for the cor.test. (There is nothing new here unless you skipped over the question above and went straight to reading the answers):
> cor.test(dat$Employed, dat$Population)
Pearson's product-moment correlation
data: dat$Employed and dat$Population
t = 12.896, df = 14, p-value = 3.693e-09
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.8869236 0.9864676
sample estimates:
cor
0.9603906
And the following for the linear model (also same as above):
> summary(lm(Employed~Population, data=dat))
Call:
lm(formula = Employed ~ Population, data = dat)
Residuals:
Min 1Q Median 3Q Max
-1.4362 -0.9740 0.2021 0.5531 1.9048
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 8.3807 4.4224 1.895 0.0789 .
Population 0.4849 0.0376 12.896 3.69e-09 ***
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.013 on 14 degrees of freedom
Multiple R-squared: 0.9224, Adjusted R-squared: 0.9168
F-statistic: 166.3 on 1 and 14 DF, p-value: 3.693e-09
Now for the new component to this answer. First, create two new standardized versions of the Employed and Population variables:
> dat$zEmployed<-scale(dat$Employed)
> dat$zPopulation<-scale(dat$Population)
Second re-run the regression:
> summary(lm(zEmployed~zPopulation, data=dat))
Call:
lm(formula = zEmployed ~ zPopulation, data = dat)
Residuals:
Min 1Q Median 3Q Max
-0.40894 -0.27733 0.05755 0.15748 0.54238
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.956e-15 7.211e-02 0.0 1
zPopulation 9.604e-01 7.447e-02 12.9 3.69e-09 ***
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.2884 on 14 degrees of freedom
Multiple R-squared: 0.9224, Adjusted R-squared: 0.9168
F-statistic: 166.3 on 1 and 14 DF, p-value: 3.693e-09
Voila! The regression slope equals the correlation coefficient from above. The answer to Question 1 then is that the assumptions for both tests are essentially the same:
Independence of observations
A linear relation between $x$ and $y$
Normally distributed residuals with a mean of zero, $e \sim N(0,\sigma_e^2)$
Error terms are similarly distributed at each predicted value of the regression line (i.e., homogeneity of error variance)
Should any of these assumptions not be met, a researcher should interpret with caution results from either a correlation or a simple linear regression. After all, the only difference between a simple linear regression and a correlation (specifically Pearson's) is the linear transformation of both the $x$ and $y$ variables in which both variables are mean-centered and assigned a variance of 1 (sometimes called z-scoring or standardizing).
For Question 2, let's start with the formula for the regression slope $b$ used above (implied in the R code, but stated outright below):
$$
b=\frac{\sum(X_i-\bar{X})(Y_i-\bar{Y})}{\sum(X_i-\bar{X})^2}
$$
Therefore if we want to know the standard error of $b$ we need to be able to calculate its variance (or $Var(b)$). To make the notation simpler we can say $\mathbf{X_i}=(X_i-\bar{X})$ and $\mathbf{Y_i}=(Y_i-\bar{Y})$, which means that...
$$
Var(b)=Var(\frac{\sum(\mathbf{X_i}\mathbf{Y_i})}{\sum(\mathbf{X_i}^2)})
$$
From that formula you can get to the following, condensed and more useful expression (see this link for step-by-step):
$$
Var(b)=\frac{\sigma_e^2}{\sum(X_i-\bar{X})^2}
$$
$$
SE(b) =\sqrt{Var(b)}=\sqrt{\frac{\sigma_e^2}{\sum(X_i-\bar{X})^2}}
$$
where $\sigma_e^2$ represents the variance of the residuals.
I think you'll find if you solve this equation for the unstandardized and standardized (i.e., correlation) linear models you'll get the same p and t values for your slopes. Both tests rely on ordinary least squares estimation and make the same assumptions. In practice, many researchers skip assumption checking for both simple linear regression models and correlations, though I think it is even more prevalent to do so for correlations, as many people do not recognize them as special cases of simple linear regressions. (Note: this is not a good practice to adopt.)
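The same equivalence can be checked outside R. Below is a minimal Python sketch (numpy only; synthetic data stand in for the longley set): standardizing both variables makes the OLS slope equal Pearson's $r$, and the slope's $t$ statistic, built from the $SE(b)$ formula above, is unchanged by the standardization.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 16
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.3, size=n)

def ols_slope_and_t(x, y):
    """OLS slope of y on x (with intercept) and its t statistic."""
    xc, yc = x - x.mean(), y - y.mean()
    b = (xc @ yc) / (xc @ xc)
    resid = yc - b * xc
    sigma2 = (resid @ resid) / (len(x) - 2)   # residual variance
    se_b = np.sqrt(sigma2 / (xc @ xc))        # SE(b) from the formula above
    return b, b / se_b

r = np.corrcoef(x, y)[0, 1]
z = lambda v: (v - v.mean()) / v.std(ddof=1)  # standardize (z-score)

b_raw, t_raw = ols_slope_and_t(x, y)
b_std, t_std = ols_slope_and_t(z(x), z(y))

assert np.isclose(b_std, r)      # standardized slope equals Pearson's r
assert np.isclose(t_raw, t_std)  # t statistic is invariant to standardization
```

Standardizing is just a linear rescaling of each variable, so the test statistic (and hence the p value) cannot change; only the slope's units do.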
|
11,766
|
Difference between the assumptions underlying a correlation and a regression slope tests of significance
|
Here is an explanation of the equivalence of the test, also showing how r and b are related.
http://www.real-statistics.com/regression/hypothesis-testing-significance-regression-line-slope/
In order to perform OLS, you have to make the assumptions listed at https://en.wikipedia.org/wiki/Ordinary_least_squares#Assumptions
Additionally, both OLS and correlation tests require the assumption of random sampling.
Construction of a correlation test assumes:
We have a "random and large enough sample" from the population of $(x,y)$.
|
11,767
|
Difference between the assumptions underlying a correlation and a regression slope tests of significance
|
Regarding question 2
how to calculate the same t-value using r instead of β1
I do not think it is possible to calculate the $t$ statistic from the $r$ value alone; however, the same statistical inference can be derived from the $F$ statistic, where the alternative hypothesis is that the model does not explain the data, and this can be calculated from $r$.
$$ F = \frac{r^2/(k-1)}{(1-r^2)/(n-k)} $$
with $k=2$ parameters in the model (intercept and slope) and $n$ the number of data points.
With the restriction that
...the F ratio cannot be used when the model does not have intercept
Source: Hypothesis testing in the multiple regression model
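As a numerical check, here is a Python sketch (numpy only; synthetic data). It uses the standard form $F=\frac{r^2/(k-1)}{(1-r^2)/(n-k)}$, where $k$ counts the estimated parameters including the intercept, and confirms that for a simple regression this $F$ equals the square of the slope's $t$ statistic — the equivalence the regression output exploits.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.4, size=n)

r = np.corrcoef(x, y)[0, 1]
k = 2  # parameters in the simple regression: intercept and slope

# F statistic computed from the correlation coefficient alone
F_from_r = (r**2 / (k - 1)) / ((1 - r**2) / (n - k))

# t statistic of the slope, computed directly from the OLS fit
xc, yc = x - x.mean(), y - y.mean()
b = (xc @ yc) / (xc @ xc)
resid = yc - b * xc
se_b = np.sqrt((resid @ resid) / (n - 2) / (xc @ xc))
t = b / se_b

# For a simple regression with an intercept, F = t^2 exactly
assert np.isclose(F_from_r, t**2)
```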
|
11,768
|
Can there be multiple local optimum solutions when we solve a linear regression?
|
This question is interesting insofar as it exposes some connections among optimization theory, optimization methods, and statistical methods that any capable user of statistics needs to understand. Although these connections are simple and easily learned, they are subtle and often overlooked.
To summarize some ideas from the comments to other replies, I would like to point out there are at least two ways that "linear regression" can produce non-unique solutions--not just theoretically, but in practice.
Lack of identifiability
The first is when the model is not identifiable. This creates a convex but not strictly convex objective function which has multiple solutions.
Consider, for instance, regressing $z$ against $x$ and $y$ (with an intercept) for the $(x,y,z)$ data $(1,-1,0),(2,-2,-1),(3,-3,-2)$. One solution is $\hat z = 1 + y$. Another is $\hat z = 1-x$. To see that there must be multiple solutions, parameterize the model with three real parameters $(\lambda,\mu,\nu)$ and an error term $\varepsilon$ in the form
$$z = 1+\mu + (\lambda + \nu - 1)x + (\lambda -\nu)y + \varepsilon.$$
The sum of squares of residuals simplifies to
$$\operatorname{SSR} = 3\mu^2 + 24 \mu\nu + 56 \nu^2.$$
(This is a limiting case of objective functions that arise in practice, such as the one discussed at Can the empirical hessian of an M-estimator be indefinite?, where you can read detailed analyses and view plots of the function.)
Because the coefficients of the squares ($3$ and $56$) are positive and the determinant $3\times 56 - (24/2)^2 = 24$ is positive, this is a positive-semidefinite quadratic form in $(\mu,\nu,\lambda)$. It is minimized when $\mu=\nu=0$, but $\lambda$ can have any value whatsoever. Since the objective function $\operatorname{SSR}$ does not depend on $\lambda$, neither does its gradient (or any other derivatives). Therefore, any gradient descent algorithm--if it does not make some arbitrary changes of direction--will set the solution's value of $\lambda$ to whatever the starting value was.
Even when gradient descent is not used, the solution can vary. In R, for instance, there are two easy, equivalent ways to specify this model: as z ~ x + y or z ~ y + x. The first yields $\hat z = 1 - x$ but the second gives $\hat z = 1 + y$.
> x <- 1:3
> y <- -x
> z <- y+1
> lm(z ~ x + y)
Coefficients:
(Intercept) x y
1 -1 NA
> lm(z ~ y + x)
Coefficients:
(Intercept) y x
1 1 NA
(The NA values should be interpreted as zeros, but with a warning that multiple solutions exist. The warning was possible because of preliminary analyses performed in R that are independent of its solution method. A gradient descent method would likely not detect the possibility of multiple solutions, although a good one would warn you of some uncertainty that it had arrived at the optimum.)
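The non-uniqueness can be verified directly; the following Python sketch (numpy assumed) uses the same $(x,y,z)$ data and shows that distinct coefficient vectors — different members of the $\lambda$ family above — all attain exactly zero sum of squared residuals:

```python
import numpy as np

# Design matrix with intercept for the data (1,-1,0), (2,-2,-1), (3,-3,-2)
X = np.array([[1.0, 1.0, -1.0],
              [1.0, 2.0, -2.0],
              [1.0, 3.0, -3.0]])  # columns: intercept, x, y (note y = -x)
z = np.array([0.0, -1.0, -2.0])

def ssr(beta):
    resid = z - X @ beta
    return resid @ resid

# Two of the infinitely many exact solutions
beta_a = np.array([1.0, -1.0, 0.0])  # z-hat = 1 - x  (lambda = 0)
beta_b = np.array([1.0, 0.0, 1.0])   # z-hat = 1 + y  (lambda = 1)

assert np.isclose(ssr(beta_a), 0.0)
assert np.isclose(ssr(beta_b), 0.0)

# lstsq returns the minimum-norm solution: yet another point on the same flat
beta_mn, *_ = np.linalg.lstsq(X, z, rcond=None)
assert np.isclose(ssr(beta_mn), 0.0)
```

Unlike R's `lm`, which pivots one redundant column out (the `NA`), `numpy.linalg.lstsq` resolves the rank deficiency by returning the solution of smallest norm — a third, equally valid answer.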
Parameter constraints
Strict convexity guarantees a unique global optimum, provided the domain of the parameters is convex. Parameter restrictions can create non-convex domains, leading to multiple global solutions.
A very simple example is afforded by the problem of estimating a "mean" $\mu$ for the data $-1, 1$ subject to the restriction $|\mu| \ge 1/2$. This models a situation that is kind of the opposite of regularization methods like Ridge Regression, the Lasso, or the Elastic Net: it is insisting that a model parameter not become too small. (Various questions have appeared on this site asking how to solve regression problems with such parameter constraints, showing that they do arise in practice.)
There are two least-squares solutions to this example, both equally good. They are found by minimizing $(1-\mu)^2 + (-1-\mu)^2$ subject to the constraint $|\mu| \ge 1/2$. The two solutions are $\mu=\pm 1/2$. More than one solution can arise because the parameter restriction makes the domain $\mu \in (-\infty, -1/2]\cup [1/2, \infty)$ nonconvex:
The parabola is the graph of a (strictly) convex function. The thick red part is the portion restricted to the domain of $\mu$: it has two lowest points at $\mu=\pm 1/2$, where the sum of squares is $5/2$. The rest of the parabola (shown dotted) is removed by the constraint, thereby eliminating its unique minimum from consideration.
A gradient descent method, unless it were willing to take large jumps, would likely find the "unique" solution $\mu=1/2$ when starting with a positive value and otherwise it would find the "unique" solution $\mu=-1/2$ when starting with a negative value.
The same situation can occur with larger datasets and in higher dimensions (that is, with more regression parameters to fit).
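A short projected-gradient sketch in plain Python reproduces this behavior (the projection rule and step size are illustrative choices, not part of the original argument):

```python
def project(mu):
    """Project onto the nonconvex feasible set |mu| >= 1/2 (nearest point; ties at 0 go right)."""
    if abs(mu) >= 0.5:
        return mu
    return 0.5 if mu >= 0 else -0.5

def projected_gd(mu0, lr=0.1, steps=200):
    """Minimize (1 - mu)^2 + (-1 - mu)^2 = 2 + 2*mu^2, whose gradient is 4*mu."""
    mu = project(mu0)
    for _ in range(steps):
        mu = project(mu - lr * 4 * mu)
    return mu

# Different starting values land in different "unique" solutions
assert projected_gd(2.0) == 0.5
assert projected_gd(-2.0) == -0.5

# Both solutions achieve the same sum of squares, 5/2
f = lambda mu: (1 - mu)**2 + (-1 - mu)**2
assert f(0.5) == f(-0.5) == 2.5
```

Starting anywhere positive lands at $\mu=1/2$; starting anywhere negative lands at $\mu=-1/2$; both attain the same minimum sum of squares $5/2$.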
|
11,769
|
Can there be multiple local optimum solutions when we solve a linear regression?
|
I'm afraid there is no binary answer to your question. If the linear regression objective is strictly convex (no constraints on coefficients, no regularizer, etc.), then gradient descent will have a unique solution and it will be the global optimum. Gradient descent can and will return multiple solutions if you have a non-convex problem.
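The first claim can be checked numerically. The following Python sketch (numpy assumed, synthetic full-rank data) runs gradient descent from two very different starting points and recovers the same closed-form least-squares solution both times:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=50)

def gradient_descent(beta0, steps=5000):
    # Step size 1/L, with L the largest eigenvalue of X'X, guarantees convergence
    lr = 1.0 / np.linalg.eigvalsh(X.T @ X).max()
    beta = beta0.astype(float)
    for _ in range(steps):
        beta -= lr * X.T @ (X @ beta - y)  # gradient of 0.5 * ||y - X beta||^2
    return beta

beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)  # closed-form OLS solution
b1 = gradient_descent(np.zeros(3))
b2 = gradient_descent(np.full(3, 100.0))

# Strict convexity: both starting points reach the same unique minimizer
assert np.allclose(b1, beta_ls, atol=1e-8)
assert np.allclose(b2, beta_ls, atol=1e-8)
```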
Although the OP asks about linear regression, the example below shows that least-squares minimization, when the model is nonlinear (unlike the linear regression the OP asks about), can have multiple solutions, and gradient descent can then return different solutions.
I can show empirically using a simple example that
The sum of squared errors can sometimes be non-convex and therefore have multiple solutions
The gradient descent method can then provide multiple solutions.
Consider the example where you are trying to minimize least squares for the following problem:
$$\min_{w}\ \sum_{i \neq j}\left(a_{ij} - \frac{w_i}{w_j}\right)^2$$
where you are trying to solve for $w$ by minimizing the objective function. The above function, although differentiable, is non-convex and can have multiple solutions. Substituting actual values for $a$, see below.
$a_{12}=9,\ a_{13}=1/9,\ a_{23}=9,\ a_{31}=1/9$
$$\text{minimize}\quad \left(9-\frac{w_1}{w_2}\right)^2+\left(\frac{1}{9}-\frac{w_1}{w_3}\right)^2+\left(\frac{1}{9}-\frac{w_2}{w_1}\right)^2+\left(9-\frac{w_2}{w_3}\right)^2+\left(9-\frac{w_3}{w_1}\right)^2+\left(\frac{1}{9}-\frac{w_3}{w_2}\right)^2$$
The above problem has 3 different solutions, as follows:
$w = (0.670,0.242,0.080),obj = 165.2$
$w = (0.080,0.242,0.670),obj = 165.2$
$w = (0.242,0.670,0.080),obj = 165.2$
As shown above, the least squares problem can be nonconvex and can have multiple solutions. The above problem can be solved using a gradient descent method (for example, Microsoft Excel's Solver), and every time we run it we may end up with a different solution. Since gradient descent is a local optimizer and can get stuck in a local solution, we need to use different starting values to find the true global optimum. A problem like this is dependent on starting values.
|
Can there be multiple local optimum solutions when we solve a linear regression?
|
11,770
|
Can there be multiple local optimum solutions when we solve a linear regression?
|
This is because the objective function you are minimizing is convex: there is only one minimum, so the local optimum is also the global optimum. Gradient descent will eventually find that solution.
Why is this objective function convex? This is the beauty of using the squared error for minimization. Taking the derivative and setting it equal to zero shows nicely why this is the case. It is a fairly standard textbook problem and is covered almost everywhere.
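A minimal sketch of this (toy data of my own choosing): gradient descent on a least-squares objective reaches the same answer from very different starting points, and that answer matches the normal-equations solution.

```python
# Gradient descent on an ordinary least-squares objective: because the
# objective is convex, different starting points reach the same solution,
# which also matches the closed-form normal-equations answer.
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # intercept + slope
y = np.array([1.0, 3.0, 5.0, 7.0])                              # exactly y = 1 + 2x

def gd(beta, lr=0.05, steps=5000):
    for _ in range(steps):
        beta = beta - lr * X.T @ (X @ beta - y) / len(y)  # gradient of 0.5*MSE
    return beta

b1 = gd(np.zeros(2))                    # start at the origin
b2 = gd(np.array([100.0, -50.0]))       # start far away
b_exact = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations

print(b1, b2, b_exact)  # all three agree
```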
|
11,771
|
How is the confusion matrix reported from K-fold cross-validation?
|
If you are testing the performance of a model (i.e. not optimizing parameters), generally you will sum the confusion matrices. Think of it like this: you have split your data into 10 different folds, or 'test' sets. You train your model on 9/10 of the data and test on the first fold, obtaining a confusion matrix. This confusion matrix represents the classification of 1/10 of the data. You repeat the analysis with the next 'test' set and get another confusion matrix representing another 1/10 of the data. Adding this new confusion matrix to the first now covers 20% of your data. You continue until you have run all your folds, sum all the confusion matrices, and the final confusion matrix represents the model's performance on all of the data. You could average the confusion matrices instead, but that doesn't really provide any information beyond the cumulative matrix, and it may be biased if your folds are not all the same size.
Note -- this assumes non-repeated sampling of your data. I'm not completely certain if this would be different for repeated sampling. Will update if I learn something or someone recommends a method.
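The summing described above can be sketched as follows (binary labels and made-up per-fold predictions, purely for illustration):

```python
# Sum per-fold confusion matrices into one cumulative matrix (binary labels 0/1).

def confusion(y_true, y_pred):
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1  # rows = true class, columns = predicted class
    return m

folds = [
    ([0, 0, 1, 1], [0, 1, 1, 1]),   # (truth, predictions) for fold 1
    ([0, 1, 1, 0], [0, 1, 0, 0]),   # fold 2
    ([1, 1, 0, 0], [1, 1, 0, 1]),   # fold 3
]

total = [[0, 0], [0, 0]]
for y_true, y_pred in folds:
    cm = confusion(y_true, y_pred)
    for i in range(2):
        for j in range(2):
            total[i][j] += cm[i][j]

print(total)  # cumulative matrix: every observation counted exactly once
```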
|
11,772
|
How to visualize an enormous sparse contingency table?
|
What you could do is use the residual shading ideas from vcd here in combination with sparse matrix visualisation as for example on page 49 of this book chapter. Imagine the latter plot with residual shadings and you get the idea.
The sparse matrix/contingency table would normally contain the number of occurrences of each drug with each adverse effect. With the residual shading idea, however, you can set up a baseline log-linear model (e.g. an independence model or whatever else you like) and use the color scheme to find out which drug/effect combinations occur more or less often than the model would predict. Since you have many observations, you could use very fine color thresholding and get a map that looks similar to how microarrays are often visualised in cluster analysis, e.g. here (but probably with stronger color "gradients"). Or you could set the thresholds such that a cell is colored only if the difference between observation and prediction exceeds the threshold, and the rest remains white. How exactly you would do this (e.g. which model to use or which thresholds) depends on your questions.
Edit
So here's how I would do it (given I'd have enough RAM available...)
Create a sparse matrix of the desired dimensions (drug names x effects)
Calculate the residuals from the independence loglinear model
Use a color gradient in fine resolution from the min to the maximum of the residual (e.g. with a hsv colorspace)
Insert the according color value of the residuals magnitude at the according position in the sparse matrix
Plot the matrix with an image plot.
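For step 2, the residuals of the independence model are straightforward to compute; here is a small sketch (in Python rather than R, with a made-up table): expected counts under independence are row total times column total over the grand total, and the Pearson residuals are $(O-E)/\sqrt{E}$.

```python
# Pearson residuals from the independence model for a toy contingency table.
import numpy as np

obs = np.array([[30.0, 10.0], [10.0, 30.0]])  # toy drug-by-effect counts

row = obs.sum(axis=1, keepdims=True)   # row totals, shape (2, 1)
col = obs.sum(axis=0, keepdims=True)   # column totals, shape (1, 2)
n = obs.sum()                          # grand total

expected = row @ col / n                       # independence-model fit
resid = (obs - expected) / np.sqrt(expected)   # Pearson residuals to shade

print(resid)  # positive = more frequent than expected, negative = less
```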
You then end up with something like this (of course your picture will be much larger, with much smaller pixels, but you should get the idea; with clever use of color you can visualize the associations/departures from independence you are most interested in).
A quick and dirty example with a 100x100 matrix. This is just a toy example with residuals ranging from -10 to 10, as you can see in the legend. White is zero, blue is less frequent than expected, red is more frequent than expected. You should be able to get the idea and take it from there. Edit: I fixed the plot's setup and used non-violent colors.
This was done using the image function and cm.colors() in the following function:
ImagePlot <- function(x, ...){
min <- min(x)
max <- max(x)
layout(matrix(data=c(1,2), nrow=1, ncol=2), widths=c(1,7), heights=c(1,1))
ColorLevels <- cm.colors(255)
# Color Scale
par(mar = c(1,2.2,1,1))
image(1, seq(min,max,length=255),
matrix(data=seq(min,max,length=255), ncol=length(ColorLevels),nrow=1),
col=ColorLevels,
xlab="",ylab="",
xaxt="n")
# Data Map
par(mar = c(0.5,1,1,1))
image(1:dim(x)[1], 1:dim(x)[2], t(x), col=ColorLevels, xlab="",
ylab="", axes=FALSE, zlim=c(min,max))
layout(1)
}
#100x100 example
x <- c(seq(-10,10,length=255),rep(0,600))
mat <- matrix(sample(x,10000,replace=TRUE),nrow=100,ncol=100)
ImagePlot(mat)
using ideas from here: http://www.phaget4.org/R/image_matrix.html. If your matrix is so big that the image function gets slow, use the useRaster=TRUE argument (you might also want to use sparse Matrix objects; note that there needs to be an image method for them if you want to use the code from above; see the SparseM package).
If you do this, some clever ordering of the rows/columns might become handy, which you can calculate with the arules package (check page 17 and 18 or so). I would generally recommend the arules utilities for this type of data and problem (not only visualisation but also to find patterns). There you will also find measures of association between the levels that you could use instead of the residual shading.
You might also want to look at tableplots if you want to investigate only a couple of adverse effects later.
|
11,773
|
Why do we assume that the error is normally distributed?
|
I think you've basically hit the nail on the head in the question, but I'll see if I can add something anyway. I'm going to answer this in a bit of a roundabout way ...
The field of Robust Statistics examines the question of what to do when the Gaussian assumption fails (in the sense that there are outliers):
it is often assumed that the data errors are normally distributed, at least approximately, or that the central limit theorem can be relied on to produce normally distributed estimates. Unfortunately, when there are outliers in the data, classical methods often have very poor performance
These have been applied in ML too; for example, Mika et al. (2001), A Mathematical Programming Approach to the Kernel Fisher Algorithm, describe how Huber's Robust Loss can be used with KFDA (along with other loss functions). Of course this is a classification loss, but KFDA is closely related to the Relevance Vector Machine (see section 4 of the Mika paper).
As implied in the question, there is a close connection between loss functions and Bayesian error models (see here for a discussion).
However it tends to be the case that as soon as you start incorporating "funky" loss functions, optimisation becomes tough (note that this happens in the Bayesian world too). So in many cases people resort to standard loss functions that are easy to optimise, and instead do extra pre-processing to ensure that the data conforms to the model.
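As a tiny illustration of the robustness point (made-up data; a Huber location estimate rather than any of the kernel methods above): squared error gives the mean, which a single outlier drags away, while the Huber loss, minimized here by iteratively reweighted least squares, stays near the bulk of the data.

```python
# Huber location estimate via iteratively reweighted least squares (IRLS).
data = [0.1, -0.2, 0.05, 0.15, -0.1, 10.0]  # one gross outlier
delta = 1.0                                  # Huber threshold

mu = sum(data) / len(data)  # start from the (non-robust) mean
for _ in range(100):
    # Huber weights: 1 inside the delta band, delta/|r| outside it
    w = [1.0 if abs(x - mu) <= delta else delta / abs(x - mu) for x in data]
    mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)

print(sum(data) / len(data), mu)  # mean is pulled toward the outlier; Huber is not
```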
The other point that you mention is that the CLT only applies to samples that are IID. This is true, but then the assumptions (and the accompanying analysis) of most algorithms is the same. When you start looking at non-IID data, things get a lot more tricky. One example is if there is temporal dependence, in which case typically the approach is to assume that the dependence only spans a certain window, and samples can therefore be considered approximately IID outside of this window (see for example this brilliant but tough paper Chromatic PAC-Bayes Bounds for Non-IID Data: Applications to Ranking and Stationary β-Mixing Processes), after which normal analysis can be applied.
So, yes, it comes down in part to convenience, and in part because in the real world, most errors do look (roughly) Gaussian. One should of course always be careful when looking at a new problem to make sure that the assumptions aren't violated.
|
11,774
|
Bootstrapping - do I need to remove outliers first?
|
Before addressing this, it's important to acknowledge that the statistical malpractice of "removing outliers" has been wrongly promulgated in much of applied statistical pedagogy. Traditionally, outliers are defined as high leverage, high influence observations. One can and should identify such observations in the analysis of data, but those conditions alone do not warrant removing them. A "true outlier" is a high leverage/high influence observation that is inconsistent with replications of the experimental design. Deeming an observation as such requires specialized knowledge of the population and the science behind the "data generating mechanism". The most important point is that you should be able to identify potential outliers a priori.
As for the bootstrapping aspect of things, the bootstrap is meant to simulate independent, repeated draws from the sampling population. If you prespecify exclusion criteria in your analysis plan, you should still leave excluded values in the referent bootstrap sampling distribution. This is because you will account for the loss of power due to applying exclusions after sampling your data. However, if there are no prespecified exclusion criteria and outliers are removed using post hoc adjudication, as I'm obviously rallying against, removing these values will propagate the same errors in inference that are caused by removing outliers.
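A sketch of the mechanics (toy data and an assumed, prespecified exclusion rule; none of this is from the answer): resample from the full data first, then apply the exclusion inside each bootstrap replicate, so the replicate-to-replicate variation in how many points are excluded is carried into the interval.

```python
# Bootstrap with a prespecified exclusion rule applied INSIDE each replicate.
import random

random.seed(0)
data = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3, 25.0]  # last value fails the
# hypothetical prespecified rule: analyze only values below 10
B = 2000
means = []
for _ in range(B):
    resample = [random.choice(data) for _ in data]   # draw from the FULL data
    kept = [x for x in resample if x < 10]           # then apply the rule
    if kept:                                         # guard against empty keeps
        means.append(sum(kept) / len(kept))

means.sort()
lo, hi = means[int(0.025 * len(means))], means[int(0.975 * len(means))]
print(lo, hi)  # percentile interval that accounts for the exclusion step
```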
Consider a study on wealth and happiness in an unstratified simple random sample of 100 people. If we took the statement, "1% of the population holds 90% of the world's wealth" literally, then we would observe, on average, one very highly influential value. Suppose further that, beyond affording a basic quality of life, there was no excess happiness attributable to larger income (nonconstant linear trend). So this individual is also high leverage.
The least squares regression coefficient fit on unadulterated data estimates a population averaged first order trend in these data. It is heavily attenuated by our 1 individual in the sample whose happiness is consistent with those near median income levels. If we remove this individual, the least squares regression slope is much larger, but the variance of the regressor is reduced, hence inference about the association is approximately the same. The difficulty with doing this is that I did not prespecify conditions in which individuals would be excluded. If another researcher replicated this study design, they would sample an average of one high income, moderately happy individual, and obtain results that were inconsistent with my "trimmed" results.
If we were a priori interested in the moderate-income happiness association, then we should have prespecified that we would, e.g., "compare individuals earning less than $100,000 annual household income". Removing the outlier instead causes us to estimate an association we cannot describe, hence the p-values are meaningless.
On the other hand, miscalibrated medical equipment and facetious self-reported survey lies can be removed. The more accurately that exclusion criteria can be described before the actual analysis takes place, the more valid and consistent the results that such an analysis will produce.
|
11,775
|
Bootstrapping - do I need to remove outliers first?
|
Looking at this as an outlier problem seems wrong to me. If "< 10% of users spend at all", you need to model that aspect. Tobit or Heckman regression would be two possibilities.
|
11,776
|
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
|
This issue has been appreciated for some time. See Harrell on page 210 of Regression Modeling Strategies, 2nd edition:
For a categorical predictor having $c$ levels, users of ridge regression often do not recognize that the amount of shrinkage and the predicted values from the
fitted model depend on how the design matrix is coded. For example, one will
get different predictions depending on which cell is chosen as the reference
cell when constructing dummy variables.
He then cites the approach used in 1994 by Verweij and Van Houwelingen, Penalized Likelihood in Cox Regression, Statistics in Medicine 13, 2427-2436. Their approach was to use a penalty function applied to all levels of an unordered categorical predictor. With $l(\beta)$ the partial log-likelihood at a vector of coefficient values $\beta$, they defined the penalized partial log-likelihood at a weight factor $\lambda$ as:
$$l^{\lambda}(\beta) = l(\beta) - \frac{1}{2} \lambda p(\beta)$$
where $p(\beta)$ is a penalty function. At a given value of $\lambda$, coefficient estimates $b^{\lambda}$ are chosen to maximize this penalized partial likelihood.
They define the penalty function for an unordered categorical covariate having $c$ levels as:
$$p_0(\beta) = \sum_{j=1}^c \left( \beta_j - \bar \beta \right)^2$$
where $\bar \beta$ is the average of the individual regression coefficients for the categories. "This function penalizes $\beta_j$'s that are further from the mean." It also removes the special treatment of a reference category $1$ taken to have $\beta_1=0$. In illustrating their penalized partial likelihood approach in a model with a 49-level categorical covariate, they constrained the coefficients to sum to zero, and then optimized to choose $\lambda$ and generate the penalized coefficient estimates.
Penalization must involve all levels of a multi-level categorical covariate in some way, as the OP and another answer indicate. One-hot encoding is one way to do that. This alternative shows a way to do so with dummy coding, in a way that seems to keep more emphasis on deviations of individual coefficient values from the mean of coefficients within the same covariate, rather than on their differences from coefficients of unrelated covariates.
|
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
|
This issue has been appreciated for some time. See Harrell on page 210 of Regression Modeling Strategies, 2nd edition:
For a categorical predictor having $c$ levels, users of ridge regression often d
|
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
This issue has been appreciated for some time. See Harrell on page 210 of Regression Modeling Strategies, 2nd edition:
For a categorical predictor having $c$ levels, users of ridge regression often do not recognize that the amount of shrinkage and the predicted values from the
fitted model depend on how the design matrix is coded. For example, one will
get different predictions depending on which cell is chosen as the reference
cell when constructing dummy variables.
He then cites the approach used in 1994 by Verweij and Van Houwelingen, Penalized Likelihood in Cox Regression, Statistics in Medicine 13, 2427-2436. Their approach was to use a penalty function applied to all levels of an unordered categorical predictor. With $l(\beta)$ the partial log-likelihood at a vector of coefficient values $\beta$, they defined the penalized partial log-likelihood at a weight factor $\lambda$ as:
$$l^{\lambda}(\beta) = l(\beta) - \frac{1}{2} \lambda p(\beta)$$
where $p(\beta)$ is a penalty function. At a given value of $\lambda$, coefficient estimates $b^{\lambda}$ are chosen to maximize this penalized partial likelihood.
They define the penalty function for an unordered categorical covariate having $c$ levels as:
$$p_0(\beta) = \sum_{j=1}^c \left( \beta_j - \bar \beta \right)^2$$
where $\bar \beta$ is the average of the individual regression coefficients for the categories. "This function penalizes $\beta_j$'s that are further from the mean." It also removes the special treatment of a reference category $1$ taken to have $\beta_1=0$. In illustrating their penalized partial likelihood approach in a model with a 49-level categorical covariate, they constrained the coefficients to sum to zero, and then optimized to choose $\lambda$ and generate the penalized coefficient estimates.
Penalization must involve all levels of a multi-level categorical covariate in some way, as the OP and another answer indicate. One-hot encoding is one way to do that. This alternative shows a way to do so with dummy coding, in a way that seems to keep more emphasis on deviations of individual coefficient values from the mean of coefficients within the same covariate, rather than on their differences from coefficients of unrelated covariates.
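As a rough numerical sketch (my own illustration, not from the paper), the penalty $p_0(\beta)$ is a few lines of numpy, and one can check that it is unchanged when every coefficient is shifted by the same constant, which is why the choice of reference cell stops mattering:

```python
import numpy as np

def categorical_penalty(beta):
    """Verweij & Van Houwelingen-style penalty for one unordered
    categorical covariate: p0(beta) = sum_j (beta_j - beta_bar)^2,
    the sum of squared deviations from the mean coefficient."""
    beta = np.asarray(beta, dtype=float)
    return float(np.sum((beta - beta.mean()) ** 2))

b = np.array([0.0, 0.5, -0.3, 1.2])
p_original = categorical_penalty(b)
# Shifting every coefficient by a constant (e.g. changing which cell is
# the "reference") leaves the penalty unchanged:
p_shifted = categorical_penalty(b + 10.0)
```

Because only deviations from the within-covariate mean are penalized, the answer's point about reference-cell dependence goes away.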
|
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
This issue has been appreciated for some time. See Harrell on page 210 of Regression Modeling Strategies, 2nd edition:
For a categorical predictor having $c$ levels, users of ridge regression often d
|
11,777
|
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
|
From The Elements of Statistical Learning (2nd Edition; pages 63-64):
The ridge solutions are not equivariant under scaling of the inputs, and so one normally standardizes the inputs before solving (3.41). In addition, notice that the intercept $\beta_0$ has been left out of the penalty term. Penalization of the intercept would make the procedure depend on the origin chosen for $Y$; that is, adding a constant $c$ to each of the targets $y_i$ would not simply result in a shift of the predictions by the same amount $c$. ... The solution adds a positive constant to the diagonal of $\mathbf{X}^T\mathbf{X}$ before inversion. This makes the problem nonsingular, even if $\mathbf{X}^T\mathbf{X}$ is not of full rank, and was the main motivation for ridge regression when it was first introduced in statistics (Hoerl and Kennard, 1970).
Hastie et al. go on to write:
Ridge regression can also be derived as the mean or mode of a posterior distribution, with a suitably chosen prior distribution. In detail, suppose $y_i \sim \mathcal{N}(\beta_0 + x_i^T\beta, \sigma^2)$, and the parameters $\beta_j$ are each distributed as $\mathcal{N}(0, \tau^2)$, independently of one another. ... Thus the ridge regression estimate is the mode of the posterior distribution; since the distribution is Gaussian, it is also the posterior mean.
Since the intercept doesn't have the same rules applied to it as the other coefficients, $\mathbf{X}$ doesn't need to be full-rank. The Bayesian perspective also reminds me of varying-intercept hierarchical regression models, where the intercepts $\alpha_{j[i]}$ are adjustments from a global intercept, so it sounds to me like one-hot encoding would be the way to go here.
|
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
|
From The Elements of Statistical Learning (2nd Edition; pages 63-64):
The ridge solutions are not equivariant under scaling of the inputs, and so one normally standardizes the inputs before solving (
|
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
From The Elements of Statistical Learning (2nd Edition; pages 63-64):
The ridge solutions are not equivariant under scaling of the inputs, and so one normally standardizes the inputs before solving (3.41). In addition, notice that the intercept $\beta_0$ has been left out of the penalty term. Penalization of the intercept would make the procedure depend on the origin chosen for $Y$; that is, adding a constant $c$ to each of the targets $y_i$ would not simply result in a shift of the predictions by the same amount $c$. ... The solution adds a positive constant to the diagonal of $\mathbf{X}^T\mathbf{X}$ before inversion. This makes the problem nonsingular, even if $\mathbf{X}^T\mathbf{X}$ is not of full rank, and was the main motivation for ridge regression when it was first introduced in statistics (Hoerl and Kennard, 1970).
Hastie et al. go on to write:
Ridge regression can also be derived as the mean or mode of a posterior distribution, with a suitably chosen prior distribution. In detail, suppose $y_i \sim \mathcal{N}(\beta_0 + x_i^T\beta, \sigma^2)$, and the parameters $\beta_j$ are each distributed as $\mathcal{N}(0, \tau^2)$, independently of one another. ... Thus the ridge regression estimate is the mode of the posterior distribution; since the distribution is Gaussian, it is also the posterior mean.
Since the intercept doesn't have the same rules applied to it as the other coefficients, $\mathbf{X}$ doesn't need to be full-rank. The Bayesian perspective also reminds me of varying-intercept hierarchical regression models, where the intercepts $\alpha_{j[i]}$ are adjustments from a global intercept, so it sounds to me like one-hot encoding would be the way to go here.
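A minimal numpy sketch (assumed toy data, not from the book) of the Hoerl-Kennard point quoted above: adding a positive constant to the diagonal of $\mathbf{X}^T\mathbf{X}$ makes the ridge system solvable even when the design is rank-deficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
z = rng.normal(size=n)
# Deliberately rank-deficient design: the second column duplicates the first.
X = np.column_stack([z, z])
y = 2.0 * z + rng.normal(0.0, 0.1, size=n)

XtX = X.T @ X                  # singular: rank 1, so OLS normal equations fail
lam = 1.0
# Ridge adds a positive constant to the diagonal, making the system nonsingular:
beta_ridge = np.linalg.solve(XtX + lam * np.eye(2), X.T @ y)
```

The two exchangeable columns receive equal coefficients that together recover roughly the underlying slope of 2.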
|
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
From The Elements of Statistical Learning (2nd Edition; pages 63-64):
The ridge solutions are not equivariant under scaling of the inputs, and so one normally standardizes the inputs before solving (
|
11,778
|
What does it mean to regress a variable against another
|
It typically means finding a surface, parametrised by known X, such that Y lies close to that surface. This gives you a recipe for finding unknown Y when you know X.
As an example, the data is X = 1,...,100. The value of Y is plotted on the Y axis. The red line is the linear regression surface.
Personally, I don't find the independent/dependent variable language to be that helpful. Those words connote causality, but regression can work the other way round too (use Y to predict X).
|
What does it mean to regress a variable against another
|
It typically means finding a surface parametrised by known X such that Y typically lies close to that surface. This gives you a recipe for finding unknown Y when you know X.
As an example, the data is
|
What does it mean to regress a variable against another
It typically means finding a surface, parametrised by known X, such that Y lies close to that surface. This gives you a recipe for finding unknown Y when you know X.
As an example, the data is X = 1,...,100. The value of Y is plotted on the Y axis. The red line is the linear regression surface.
Personally, I don't find the independent/dependent variable language to be that helpful. Those words connote causality, but regression can work the other way round too (use Y to predict X).
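A hedged toy version of the example (the numbers are invented) showing what "regress Y on X" computes, here with numpy's least-squares polynomial fit:

```python
import numpy as np

# Toy version of the example: X = 1,...,100 with Y roughly linear in X.
rng = np.random.default_rng(1)
x = np.arange(1, 101, dtype=float)
y = 3.0 + 0.5 * x + rng.normal(0.0, 2.0, size=x.size)

# "Regressing Y on X": choose intercept a and slope b so that the surface
# (here a line) a + b*x lies close to the observed y, by least squares.
b, a = np.polyfit(x, y, deg=1)   # polyfit returns highest degree first
y_hat = a + b * x                # the recipe: predict unknown Y from known X
```

The fitted `(a, b)` land near the true `(3.0, 0.5)`, and `y_hat` is the "red line" of the answer.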
|
What does it mean to regress a variable against another
It typically means finding a surface parametrised by known X such that Y typically lies close to that surface. This gives you a recipe for finding unknown Y when you know X.
As an example, the data is
|
11,779
|
What does it mean to regress a variable against another
|
Probably, yes. Many times we need to regress a variable (say Y) on another variable (say X). In regression, this can be written as $Y = a+bX$; regress Y on X: for example, regress true breeding value on genomic breeding value.
bias <- lm(TBV ~ GBV)  # regress true breeding value (TBV) on genomic breeding value (GBV)
|
What does it mean to regress a variable against another
|
Probably, Yes. Many times we need to regress a variable (say Y) on another variable (say X). In Regression, it can therefore be written as $Y = a+bX$; regress Y on X: regress true breeding value on ge
|
What does it mean to regress a variable against another
Probably, yes. Many times we need to regress a variable (say Y) on another variable (say X). In regression, this can be written as $Y = a+bX$; regress Y on X: for example, regress true breeding value on genomic breeding value.
bias <- lm(TBV ~ GBV)  # regress true breeding value (TBV) on genomic breeding value (GBV)
|
What does it mean to regress a variable against another
Probably, Yes. Many times we need to regress a variable (say Y) on another variable (say X). In Regression, it can therefore be written as $Y = a+bX$; regress Y on X: regress true breeding value on ge
|
11,780
|
Hypothesis testing and significance for time series
|
I would suggest identifying an ARIMA model for each mouse separately and then reviewing them for similarities and generalization. For example, if the first mouse has an AR(1) and the second one has an AR(2), the most general (largest) model would be an AR(2). Estimate this model globally, i.e., for the combined time series. Compare the error sum of squares for the combined set with the sum of the two individual error sums of squares to generate an F value to test the hypothesis of constant parameters across groups. If you wish, you can post your data and I will illustrate this test precisely.
ADDITIONAL COMMENTS:
Since the data set is auto-correlated, normality does not apply. If the observations are independent over time, then one might apply some of the well-known non-time-series methods. In terms of your request for an easy-to-read book about time series, I suggest the Wei text by Addison-Wesley. Social scientists will find the non-mathematical approach of McCleary and Hay (1980) more intuitive, but lacking rigor.
|
Hypothesis testing and significance for time series
|
I would suggest identifying an ARIMA model for each mice separately and then review them for similarities and generalization. For example if the first mice has an AR(1) and the second one has an AR(2)
|
Hypothesis testing and significance for time series
I would suggest identifying an ARIMA model for each mouse separately and then reviewing them for similarities and generalization. For example, if the first mouse has an AR(1) and the second one has an AR(2), the most general (largest) model would be an AR(2). Estimate this model globally, i.e., for the combined time series. Compare the error sum of squares for the combined set with the sum of the two individual error sums of squares to generate an F value to test the hypothesis of constant parameters across groups. If you wish, you can post your data and I will illustrate this test precisely.
ADDITIONAL COMMENTS:
Since the data set is auto-correlated, normality does not apply. If the observations are independent over time, then one might apply some of the well-known non-time-series methods. In terms of your request for an easy-to-read book about time series, I suggest the Wei text by Addison-Wesley. Social scientists will find the non-mathematical approach of McCleary and Hay (1980) more intuitive, but lacking rigor.
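A sketch of the suggested F test, with simulated series standing in for the two groups of mice (the AR(1) model, sample sizes, and seed are my own assumptions): fit each series separately, fit the stacked regression rows with common parameters, and compare error sums of squares:

```python
import numpy as np
from scipy import stats

def design(x):
    """Regression rows for an AR(1) fit: x_t on (1, x_{t-1})."""
    return x[1:], np.column_stack([np.ones(len(x) - 1), x[:-1]])

def sse(y, Z):
    """Residual sum of squares from a least-squares fit of y on Z."""
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r = y - Z @ beta
    return float(r @ r)

def sim_ar1(phi, n, rng):
    x = np.zeros(n)
    for t in range(n - 1):
        x[t + 1] = phi * x[t] + rng.normal()
    return x

rng = np.random.default_rng(2)
x1, x2 = sim_ar1(0.5, 200, rng), sim_ar1(0.5, 200, rng)  # two hypothetical groups

y1, Z1 = design(x1)
y2, Z2 = design(x2)
sse_sep = sse(y1, Z1) + sse(y2, Z2)                            # separate parameters
sse_pool = sse(np.concatenate([y1, y2]), np.vstack([Z1, Z2]))  # common parameters

k = Z1.shape[1]                        # parameters per group (here 2)
df2 = len(y1) + len(y2) - 2 * k
F = ((sse_pool - sse_sep) / k) / (sse_sep / df2)
p_value = float(stats.f.sf(F, k, df2))
```

A small p-value would reject the hypothesis of constant parameters across groups; here the two series share the same AR(1) coefficient, so the test should usually not reject.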
|
Hypothesis testing and significance for time series
I would suggest identifying an ARIMA model for each mice separately and then review them for similarities and generalization. For example if the first mice has an AR(1) and the second one has an AR(2)
|
11,781
|
Hypothesis testing and significance for time series
|
There are many ways to do it if you think of the weight variations as a dynamical process.
For example, it can be modeled as an integrator
$\dot x(t) = \theta x(t) + v(t)$
where $x(t)$ is the weight variation, $\theta$ relates to how fast the weight changes and $v(t)$ is a stochastic disturbance that may affect the weight variation. You could model $v(t)$ as $\mathcal N(0,Q)$, for a known $Q$ (you can also estimate it).
From here, you can try to identify the parameter $\theta$ for the two populations (and the covariance of the estimates), using, e.g., a prediction error method. If the Gaussian assumption holds, prediction error methods give that the estimate of $\theta$ is also Gaussian (asymptotically), and you can therefore build a hypothesis test to determine whether the estimate of $\theta_1$ is statistically close to that of $\theta_2$.
For a reference, I can suggest this book.
|
Hypothesis testing and significance for time series
|
There are many ways to do it if you think of the weight variations as a dynamical process.
For example, it can be modeled as an integrator
$\dot x(t) = \theta x(t) + v(t)$
where $x(t)$ is the weight
|
Hypothesis testing and significance for time series
There are many ways to do it if you think of the weight variations as a dynamical process.
For example, it can be modeled as an integrator
$\dot x(t) = \theta x(t) + v(t)$
where $x(t)$ is the weight variation, $\theta$ relates to how fast the weight changes and $v(t)$ is a stochastic disturbance that may affect the weight variation. You could model $v(t)$ as $\mathcal N(0,Q)$, for a known $Q$ (you can also estimate it).
From here, you can try to identify the parameter $\theta$ for the two populations (and the covariance of the estimates), using, e.g., a prediction error method. If the Gaussian assumption holds, prediction error methods give that the estimate of $\theta$ is also Gaussian (asymptotically), and you can therefore build a hypothesis test to determine whether the estimate of $\theta_1$ is statistically close to that of $\theta_2$.
For a reference, I can suggest this book.
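A rough discrete-time sketch (my own simplification; $x_{t+1} = a\,x_t + v_t$ stands in for the continuous integrator) of estimating the parameter for each population by a least-squares prediction-error fit:

```python
import numpy as np

def fit_theta(a_true, n, rng, q=0.1):
    """Simulate x_{t+1} = a x_t + v_t, v_t ~ N(0, q), then estimate a
    by least squares -- a simple prediction-error fit."""
    x = np.zeros(n)
    for t in range(n - 1):
        x[t + 1] = a_true * x[t] + rng.normal(0.0, np.sqrt(q))
    return float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))

rng = np.random.default_rng(5)
a_group1 = fit_theta(0.9, 2000, rng)   # e.g. the first population
a_group2 = fit_theta(0.9, 2000, rng)   # e.g. the second population
# Asymptotic normality of these estimates is what justifies a z/t-style
# test of whether theta_1 and theta_2 differ significantly.
```

With the true parameters equal, the two estimates land close together, and their asymptotic standard errors give the scale for the comparison.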
|
Hypothesis testing and significance for time series
There are many ways to do it if you think of the weight variations as a dynamical process.
For example, it can be modeled as an integrator
$\dot x(t) = \theta x(t) + v(t)$
where $x(t)$ is the weight
|
11,782
|
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution?
|
The negative binomial distribution is very similar to the binomial probability model. It is applicable when the following assumptions (conditions) hold:
1) Each experiment is performed under the same conditions until a fixed number of successes, say C, is achieved.
2) The result of each experiment can be classified into one of two categories, success or failure.
3) The probability P of success is the same for each experiment.
4) Each experiment is independent of all the others.
The first condition is the only key differentiating factor between the binomial and the negative binomial.
|
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution
|
The negative binomial distribution is very much similar to the binomial probability model. it is applicable when the following assumptions(conditions) hold good
1)Any experiment is performed under the
|
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution?
The negative binomial distribution is very similar to the binomial probability model. It is applicable when the following assumptions (conditions) hold:
1) Each experiment is performed under the same conditions until a fixed number of successes, say C, is achieved.
2) The result of each experiment can be classified into one of two categories, success or failure.
3) The probability P of success is the same for each experiment.
4) Each experiment is independent of all the others.
The first condition is the only key differentiating factor between the binomial and the negative binomial.
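Under these assumptions, the count of failures before the $C$-th success is negative binomial; a small check with scipy (the values of $C$ and $P$ are arbitrary):

```python
from scipy import stats

# scipy's nbinom counts the failures observed before the C-th success,
# given independent, identically distributed Bernoulli trials.
C, p = 5, 0.3
mean_failures = stats.nbinom.mean(C, p)   # equals C * (1 - p) / p
prob_no_failures = stats.nbinom.pmf(0, C, p)   # equals p**C: C successes straight off
```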
|
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution
The negative binomial distribution is very much similar to the binomial probability model. it is applicable when the following assumptions(conditions) hold good
1)Any experiment is performed under the
|
11,783
|
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution?
|
The Poisson distribution can be a reasonable approximation of the binomial under certain conditions:
1) The probability of success for each trial is very small: $p \to 0$.
2) $np = m$ (say) is finite.
The rule most often used by statisticians is that the Poisson is a good approximation of the binomial when n is greater than or equal to 20 and p is less than or equal to 5%.
|
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution
|
The poisson distribution can be a reasonable approximation of the binomial under certain conditions like
1)The probability of success for each trial is very small. P-->0
2)np=m(say) is finete
The rule
|
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution?
The Poisson distribution can be a reasonable approximation of the binomial under certain conditions:
1) The probability of success for each trial is very small: $p \to 0$.
2) $np = m$ (say) is finite.
The rule most often used by statisticians is that the Poisson is a good approximation of the binomial when n is greater than or equal to 20 and p is less than or equal to 5%.
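A quick numerical check of the rule of thumb (the particular $n$ and $p$ are arbitrary choices inside the stated regime), comparing the two pmfs with scipy:

```python
import numpy as np
from scipy import stats

# Rule-of-thumb regime: n >= 20 and p <= 5%, with m = n*p moderate.
n, p = 100, 0.03
m = n * p
k = np.arange(0, 15)
gap = np.abs(stats.binom.pmf(k, n, p) - stats.poisson.pmf(k, m))
max_gap = float(gap.max())   # the two pmfs agree closely in this regime
```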
|
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution
The poisson distribution can be a reasonable approximation of the binomial under certain conditions like
1)The probability of success for each trial is very small. P-->0
2)np=m(say) is finete
The rule
|
11,784
|
What is the difference between the vertical bar and semi-colon notations?
|
I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be).
Let's say in a regression setting, you would have a distribution:
$$
p(Y | x, \beta)
$$
Which means: the distribution of $Y$ if you know (conditional on) the $x$ and $\beta$ values.
If you want to estimate the betas, you want to maximize the likelihood:
$$
L(\beta; y,x) = p(Y | x, \beta)
$$
Essentially, you are now looking at the expression $p(Y | x, \beta)$ as a function of the betas, but apart from that, there is no difference (for mathematically correct expressions that you can properly derive, this is a necessity --- although in practice no one bothers).
Then, in Bayesian settings, the difference between parameters and other variables soon fades, so people started to use both notations interchangeably.
So, in essence: there is no actual difference: they both indicate the conditional distribution of the thing on the left, conditional on the thing(s) on the right.
|
What is the difference between the vertical bar and semi-colon notations?
|
I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be).
Let's say
|
What is the difference between the vertical bar and semi-colon notations?
I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be).
Let's say in a regression setting, you would have a distribution:
$$
p(Y | x, \beta)
$$
Which means: the distribution of $Y$ if you know (conditional on) the $x$ and $\beta$ values.
If you want to estimate the betas, you want to maximize the likelihood:
$$
L(\beta; y,x) = p(Y | x, \beta)
$$
Essentially, you are now looking at the expression $p(Y | x, \beta)$ as a function of the betas, but apart from that, there is no difference (for mathematically correct expressions that you can properly derive, this is a necessity --- although in practice no one bothers).
Then, in Bayesian settings, the difference between parameters and other variables soon fades, so people started to use both notations interchangeably.
So, in essence: there is no actual difference: they both indicate the conditional distribution of the thing on the left, conditional on the thing(s) on the right.
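A small sketch (invented normal example) of the distinction in practice: the very same expression is a density in the data for fixed $\theta$, and a likelihood in $\theta$ for fixed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.normal(loc=2.0, scale=1.0, size=200)   # data from N(theta=2, 1)

def log_likelihood(theta, y):
    """The same expression as the density p(y | theta), but read as a
    function of theta with the data y held fixed."""
    return float(np.sum(stats.norm.logpdf(y, loc=theta, scale=1.0)))

# Maximize over theta on a grid; for a normal with known scale, the
# maximizer is the sample mean.
grid = np.linspace(0.0, 4.0, 401)
theta_hat = grid[np.argmax([log_likelihood(t, y) for t in grid])]
```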
|
What is the difference between the vertical bar and semi-colon notations?
I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be).
Let's say
|
11,785
|
What is the difference between the vertical bar and semi-colon notations?
|
$f(x;\theta)$ is the density of the random variable $X$ at the point $x$, with $\theta$ being the parameter of the distribution. $f(x,\theta)$ is the joint density of $X$ and $\Theta$ at the point $(x,\theta)$ and only makes sense if $\Theta$ is a random variable. $f(x|\theta)$ is the conditional distribution of $X$ given $\Theta$, and again, only makes sense if $\Theta$ is a random variable. This will become much clearer when you get further into the book and look at Bayesian analysis.
|
What is the difference between the vertical bar and semi-colon notations?
|
$f(x;\theta)$ is the density of the random variable $X$ at the point $x$, with $\theta$ being the parameter of the distribution. $f(x,\theta)$ is the joint density of $X$ and $\Theta$ at the point $(x
|
What is the difference between the vertical bar and semi-colon notations?
$f(x;\theta)$ is the density of the random variable $X$ at the point $x$, with $\theta$ being the parameter of the distribution. $f(x,\theta)$ is the joint density of $X$ and $\Theta$ at the point $(x,\theta)$ and only makes sense if $\Theta$ is a random variable. $f(x|\theta)$ is the conditional distribution of $X$ given $\Theta$, and again, only makes sense if $\Theta$ is a random variable. This will become much clearer when you get further into the book and look at Bayesian analysis.
|
What is the difference between the vertical bar and semi-colon notations?
$f(x;\theta)$ is the density of the random variable $X$ at the point $x$, with $\theta$ being the parameter of the distribution. $f(x,\theta)$ is the joint density of $X$ and $\Theta$ at the point $(x
|
11,786
|
What is the difference between the vertical bar and semi-colon notations?
|
$f(x;\theta)$ is the same as $f(x|\theta)$, simply meaning that $\theta$ is a fixed parameter and the function $f$ is a function of $x$. $f(x,\Theta)$, OTOH, is an element of a family (or set) of functions, where the elements are indexed by $\Theta$. A subtle distinction, perhaps, but an important one, esp. when it comes time to estimate an unknown parameter $\theta$ on the basis of known data $x$; at that time, $\theta$ varies and $x$ is fixed, resulting in the "likelihood function". Usage of $\mid$ is more common among statisticians, while $;$ among mathematicians.
|
What is the difference between the vertical bar and semi-colon notations?
|
$f(x;\theta)$ is the same as $f(x|\theta)$, simply meaning that $\theta$ is a fixed parameter and the function $f$ is a function of $x$. $f(x,\Theta)$, OTOH, is an element of a family (or set) of func
|
What is the difference between the vertical bar and semi-colon notations?
$f(x;\theta)$ is the same as $f(x|\theta)$, simply meaning that $\theta$ is a fixed parameter and the function $f$ is a function of $x$. $f(x,\Theta)$, OTOH, is an element of a family (or set) of functions, where the elements are indexed by $\Theta$. A subtle distinction, perhaps, but an important one, esp. when it comes time to estimate an unknown parameter $\theta$ on the basis of known data $x$; at that time, $\theta$ varies and $x$ is fixed, resulting in the "likelihood function". Usage of $\mid$ is more common among statisticians, while $;$ among mathematicians.
|
What is the difference between the vertical bar and semi-colon notations?
$f(x;\theta)$ is the same as $f(x|\theta)$, simply meaning that $\theta$ is a fixed parameter and the function $f$ is a function of $x$. $f(x,\Theta)$, OTOH, is an element of a family (or set) of func
|
11,787
|
What is the difference between the vertical bar and semi-colon notations?
|
Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates conditioning on values of $d,w$. Conditioning is an operation on random variables and as such using this notation when $d, w$ aren't random variables is confusing (and tragically common).
As @Nick Sabbe points out $p(y|X, \Theta)$ is a common notation for the sampling distribution of observed data $y$. Some frequentists will use this notation but insist that $\Theta$ isn't a random variable, which is an abuse IMO. But they have no monopoly there; I've seen Bayesians do it too, tacking fixed hyperparameters on at the end of the conditionals.
|
What is the difference between the vertical bar and semi-colon notations?
|
Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates con
|
What is the difference between the vertical bar and semi-colon notations?
Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates conditioning on values of $d,w$. Conditioning is an operation on random variables and as such using this notation when $d, w$ aren't random variables is confusing (and tragically common).
As @Nick Sabbe points out $p(y|X, \Theta)$ is a common notation for the sampling distribution of observed data $y$. Some frequentists will use this notation but insist that $\Theta$ isn't a random variable, which is an abuse IMO. But they have no monopoly there; I've seen Bayesians do it too, tacking fixed hyperparameters on at the end of the conditionals.
|
What is the difference between the vertical bar and semi-colon notations?
Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates con
|
11,788
|
Is exploratory data analysis important when doing purely predictive modeling?
|
Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit of a few hours.
I went through each of the variables in turn, graphing them, calculating summary statistics etc. I also calculated correlations between the numerical variables.
Among the things I found were:
One categorical variable almost perfectly matched the target.
Two or three variables had over half of their values missing.
A couple of variables had extreme outliers.
Two of the numerical variables were perfectly correlated.
etc.
My point is that these were things which had been put in deliberately to see whether people would notice them before trying to build a model. The company put them in because they are the sort of thing which can happen in real life, and drastically affect model performance.
So yes, EDA is important when doing machine learning!
|
Is exploratory data analysis important when doing purely predictive modeling?
|
Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit
|
Is exploratory data analysis important when doing purely predictive modeling?
Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit of a few hours.
I went through each of the variables in turn, graphing them, calculating summary statistics etc. I also calculated correlations between the numerical variables.
Among the things I found were:
One categorical variable almost perfectly matched the target.
Two or three variables had over half of their values missing.
A couple of variables had extreme outliers.
Two of the numerical variables were perfectly correlated.
etc.
My point is that these were things which had been put in deliberately to see whether people would notice them before trying to build a model. The company put them in because they are the sort of thing which can happen in real life, and drastically affect model performance.
So yes, EDA is important when doing machine learning!
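The checks described above take only a few lines with pandas; here is a sketch on invented data with the same planted problems (missingness, a perfectly correlated pair, a variable that matches the target):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 200
target = (rng.random(n) < 0.5).astype(int)
df = pd.DataFrame({"a": rng.normal(size=n), "b": rng.normal(size=n)})
df["b_twin"] = 2.0 * df["b"]       # perfectly correlated pair
df.loc[: n // 2, "a"] = np.nan     # over half the values missing
df["leak"] = target                # variable that matches the target

# A few cheap checks that surface all of the planted problems:
missing_frac = df.isna().mean()                   # fraction missing per column
pair_corr = df[["b", "b_twin"]].corr().iloc[0, 1]  # near-duplicate detector
leak_match = (df["leak"] == target).mean()         # suspiciously perfect predictor
```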
|
Is exploratory data analysis important when doing purely predictive modeling?
Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit
|
11,789
|
Is exploratory data analysis important when doing purely predictive modeling?
|
Obviously, yes.
Exploring the data can reveal many issues that would hurt your predictive model:
Incomplete data
Assuming we are talking about quantitative data, you'll have to decide whether you want to ignore the column (if there's too much data missing) or figure out what will be your "default" value (Mean, Mode, Etc). You can't do this without exploring your data first.
Abnormal data
Say you have data that is pretty strongly correlated, but 2% of your data is way off this correlation. You might want to remove this data altogether to help your predictive model.
Remove columns with too much correlation
Ok, this contradicts my previous point a little, but English isn't my main language, so I hope you'll understand.
I'll take a simple example: say you analyse a football stadium dataset and you have Width, Length and Area as parameters. We can easily imagine that these three parameters will be strongly correlated. Having too much correlation between your columns leads the predictive model in a wrong direction. You might decide to drop one or more of the parameters.
Find new features
I'll take the example of the small Titanic Kaggle "Competition". When looking at the folks' names, you could figure out that you can extract a feature that is the Title of the person. This feature turns out to be pretty important when it comes to modeling, but you would have missed it if you didn't analyse your data first.
You might decide to bin your continuous data because it feels more appropriate or turn a continuous feature into a categorical one.
Find what kind of algorithm to use
I can't draw plots right now, but let's make this a simple example.
Imagine that you have a small dataset with one feature column and one binary (0 or 1 only) "result" column, and you want to create a predictive classifying model for it.
If, once again as an example, you were to plot it (so, analyse your data), you might realise that the points form a perfect circle around your 1 values. In such a scenario, it would be pretty obvious that you could use a polynomial classifier to get a great model instead of jumping straight to a DNN. (Obviously, with only two columns in my example, it doesn't make for an excellent example, but you get the point.)
Overall, you can't expect a predictive model to perform well if you don't look at the data first.
|
Is exploratory data analysis important when doing purely predictive modeling?
|
Obviously, yes.
The data analysis could lead you to many points that would hurt your predictive model :
Incomplete data
Assuming we are talking about quantitative data, you'll have to decide whether y
|
Is exploratory data analysis important when doing purely predictive modeling?
Obviously, yes.
Exploring the data can reveal many issues that would hurt your predictive model:
Incomplete data
Assuming we are talking about quantitative data, you'll have to decide whether you want to ignore the column (if there's too much data missing) or figure out what will be your "default" value (Mean, Mode, Etc). You can't do this without exploring your data first.
Abnormal data
Say you have data that is pretty strongly correlated, but 2% of your data is way off this correlation. You might want to remove this data altogether to help your predictive model.
Remove columns with too much correlation
Ok, this contradicts my previous point a little, but English isn't my main language, so I hope you'll understand.
I'll take a simple example: say you analyse a football stadium dataset and you have Width, Length and Area as parameters. We can easily imagine that these three parameters will be strongly correlated. Having too much correlation between your columns leads the predictive model in a wrong direction. You might decide to drop one or more of the parameters.
Find new features
I'll take the example of the small Titanic Kaggle "Competition". When looking at the folks' names, you could figure out that you can extract a feature that is the Title of the person. This feature turns out to be pretty important when it comes to modeling, but you would have missed it if you didn't analyse your data first.
You might decide to bin your continuous data because it feels more appropriate or turn a continuous feature into a categorical one.
Find what kind of algorithm to use
I can't draw plots right now, but let's make this a simple example.
Imagine that you have a small model with one feature column and one binary (0 or 1 only) "result" column. You want to create a predictive classifying model for this dataset.
If, once again as an example, you were to plot it (so, analyse your data), you might realise that the plot forms a perfect circle around your 1 values. In such a scenario, it would be pretty obvious that you could use a polynomial classifier to get a great model instead of jumping straight to a DNN. (Obviously, with only two columns, it doesn't make for an excellent example, but you get the point.)
Overall, you can't expect a predictive model to perform well if you don't look at the data first.
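Several of the checks above (incomplete data, strongly correlated columns) can be sketched in plain Python. The dataset, column names, and numbers below are made up purely for illustration, loosely echoing the stadium example:

```python
import math

# Hypothetical toy dataset: rows with possibly missing numeric fields.
rows = [
    {"width": 68.0, "length": 105.0, "area": 7140.0},
    {"width": 70.0, "length": 100.0, "area": 7000.0},
    {"width": None, "length": 110.0, "area": None},
    {"width": 64.0, "length": 100.0, "area": 6400.0},
]

def missing_fraction(rows, col):
    """Fraction of rows where `col` is missing."""
    return sum(r[col] is None for r in rows) / len(rows)

def pearson(xs, ys):
    """Plain Pearson sample correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# 1) Incomplete data: how much is missing per column?
for col in ("width", "length", "area"):
    print(col, missing_fraction(rows, col))

# 2) Redundant features: correlation of width and area on complete rows.
complete = [r for r in rows if None not in r.values()]
r_wa = pearson([r["width"] for r in complete],
               [r["area"] for r in complete])
print("corr(width, area) =", round(r_wa, 3))
```

None of this replaces looking at plots; it just shows that the first pass over the data can be a few lines of code.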
|
11,790
|
Is exploratory data analysis important when doing purely predictive modeling?
|
One important thing done by EDA is finding data entry errors and other anomalous points.
Another is that the distribution of variables can influence the models you try to fit.
|
11,791
|
Is exploratory data analysis important when doing purely predictive modeling?
|
We used to have a phrase in chemistry:
"Two weeks spent in the lab can save you two hours on Scifinder".
I'm sure the same applies to machine learning:
"Two weeks spent training a neural net can save you two hours looking at the input data".
These are the things I'd go through before starting any ML process.
Plot out the density of every (continuous) variable. How are the numbers skewed? Do I need a log transform to make the data make sense? How far away are the outliers? Are there any values that do not make physical or logical sense?
Keep an eye out for NAs. Usually, you can just discard them, but if there are a lot of them, or if they represent a crucial aspect to the behaviour of the system, you might have to find a way of recreating the data. This could be a project in and of itself.
Plot every variable against the response variable. How much sense can you make out of it just by eyeballing it? Are there obvious curves that can be fitted with functions?
Assess whether or not you need a complicated ML model in the first place. Sometimes linear regression is all you really need. Even if it isn't, it provides a good baseline fit for your ML model to improve upon.
Beyond those basic steps, I wouldn't spend much additional time looking at the data before applying ML processes to it. If you already have a large number of variables, complicated nonlinear combinations of them get increasingly difficult not only to find, but to plot and understand. This is the sort of stuff best handled by the computer.
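The density/skew check from the first step can be sketched without plots. The simulated variable and the skew > 1 cutoff below are illustrative assumptions, not a fixed rule:

```python
import math
import random

random.seed(42)

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((v - m) ** 2 for v in xs) / n)
    return sum(((v - m) / s) ** 3 for v in xs) / n

# Hypothetical right-skewed variable (roughly log-normal).
x = [math.exp(random.gauss(0, 1)) for _ in range(5000)]
sk_before = skewness(x)
print("skew before:", round(sk_before, 2))

# Crude rule of thumb: strong right skew -> try a log transform.
if sk_before > 1:
    sk_after = skewness([math.log(v) for v in x])
    print("skew after log:", round(sk_after, 2))
```

A histogram tells you the same thing at a glance; the numeric version is just easier to automate across many variables.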
|
11,792
|
Is exploratory data analysis important when doing purely predictive modeling?
|
Statistical perspective:
Leaving aside errors in the modelling stage, there are three likely outcomes from attempting prediction without first doing EDA:
Prediction gives obvious nonsense results, because your input data violated the assumptions of your prediction method. You now have to go back and check your inputs to find out where the problem lies, then fix the issue and redo the analysis. Depending on the nature of the issue, you may even need to change your prediction methods. (What do you mean, this is a categorical variable?)
Prediction gives results that are bad but not obviously bad, because your data violated assumptions in a slightly less obvious way. Either you go back and check those assumptions anyway (in which case, see #1 above) or you accept bad results.
By good fortune, your input data is exactly what you expected it to be (I understand this does occasionally happen) and the prediction gives good results... which would be great, except that you can't tell the difference between this and #2 above.
Project-management perspective:
Resolving data issues can take a significant amount of time and effort. For instance:
The data is dirty and you need to spend time developing processes to clean it. (For example: the time I had to code an autocorrect for all the people who keep writing the wrong year in January, and the people who enter the date in the year field, and the system that was parsing dates as MM/DD/YYYY instead of DD/MM/YYYY.)
You need to ask questions about what the data means, and only Joan can answer them. Joan is going on a six-month holiday, starting two weeks after your project begins.
Data limitations prevent you from delivering everything you had intended to deliver (cf. Bernhard's example of being unable to produce analysis by sex/gender because the data set only had one woman) and you/your clients need to figure out what to do about that.
The earlier you can identify such issues, the better your chances of keeping your project on the rails, finishing on time, and making your clients happy.
|
11,793
|
Why does independence imply zero correlation?
|
By the definition of the correlation coefficient, if two variables are independent their correlation is zero, so they cannot happen to have any correlation by accident!
$$\rho_{X,Y}=\frac{\operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y]}{\sqrt{\operatorname{E}[X^2]-[\operatorname{E}[X]]^2}~\sqrt{\operatorname{E}[Y^2]- [\operatorname{E}[Y]]^2}}$$
If $X$ and $Y$ are independent, then $\operatorname{E}[XY]= \operatorname{E}[X]\operatorname{E}[Y]$. Hence, the numerator of $\rho_{X,Y}$ is zero in this case.
So, unless you change the meaning of correlation from the definition above, independent variables with nonzero correlation are not possible.
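That identity is easy to check numerically. A minimal sketch (the sample size and seed are arbitrary choices) in which the sample estimate of E[XY] - E[X]E[Y] lands near zero for independently generated data:

```python
import random

random.seed(0)
n = 100_000
x = [random.random() for _ in range(n)]
y = [random.random() for _ in range(n)]  # generated independently of x

mean = lambda v: sum(v) / len(v)
ex, ey = mean(x), mean(y)
exy = mean([a * b for a, b in zip(x, y)])

# The population value of E[XY] - E[X]E[Y] is exactly 0 under independence;
# the sample estimate is only near 0, at roughly the 1/sqrt(n) scale.
print(exy - ex * ey)
```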
|
11,794
|
Why does independence imply zero correlation?
|
Comment on sample correlation. In comparing two small independent samples of the same size, the sample correlation is often noticeably different from $r = 0.$ [Nothing here contradicts @OmG's Answer (+1) on the population correlation $\rho$.]
Consider correlations between a million pairs of independent
samples of size $n = 5$ from the exponential distribution with rate $1.$
set.seed(616)
r = replicate( 10^6, cor(rexp(5), rexp(5)) )
mean(abs(r) > .5)
[1] 0.386212
mean(r)
[1] -0.0005904455
hist(r, prob=T, br=40, col="skyblue2")
abline(v=c(-.5,.5), col="red", lwd=2)
For example, here is the scatterplot of the first of the million pairs of samples of size $5,$ for which $r = -0.5716.$
There is nothing special about the exponential distribution in this regard.
Changing the parent distribution to standard uniform gave the following results.
set.seed(2019)
...
mean(abs(r) > .5)
[1] 0.391061
mean(r)
[1] 1.43269e-05
By contrast, here is the corresponding histogram of correlations for
pairs of normal samples of size $n = 20.$
Note: Other pages on this site discuss the distribution of $r$ in more detail; one of them is this Q & A.
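The same experiment can be sketched outside R. This plain-Python version (with a smaller replication count and an arbitrary seed) should land near the roughly 39% figure reported above for exponential samples of size 5:

```python
import math
import random

random.seed(616)  # seed chosen arbitrarily, echoing the R example

def pearson(xs, ys):
    """Plain Pearson sample correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

reps, n = 20_000, 5
rs = [pearson([random.expovariate(1) for _ in range(n)],
              [random.expovariate(1) for _ in range(n)])
      for _ in range(reps)]

mean_r = sum(rs) / reps
frac_big = sum(abs(r) > 0.5 for r in rs) / reps
print("mean r:", round(mean_r, 4))  # near 0
print("P(|r| > 0.5):", frac_big)    # near 0.39
```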
|
11,795
|
Why does independence imply zero correlation?
|
Simple answer: if 2 variables are independent, then the population correlation is zero, whereas the sample correlation will typically be small, but non-zero.
That is because the sample is not a perfect representation of the population.
The larger the sample, the better it represents the population, so the smaller the correlation you'll have. For an infinite sample, the correlation would be zero.
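A minimal simulation of that last point (the sample sizes, replication counts, and uniform parent distribution are arbitrary choices):

```python
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson sample correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / math.sqrt(sum((a - mx) ** 2 for a in xs) *
                           sum((b - my) ** 2 for b in ys))

def mean_abs_r(n, reps=300):
    """Average |sample correlation| of two independent samples of size n."""
    return sum(abs(pearson([random.random() for _ in range(n)],
                           [random.random() for _ in range(n)]))
               for _ in range(reps)) / reps

r_small, r_large = mean_abs_r(10), mean_abs_r(1000)
print("n=10:  ", round(r_small, 3))   # noticeably above 0
print("n=1000:", round(r_large, 3))   # much closer to 0
```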
|
11,796
|
Why does independence imply zero correlation?
|
Maybe this is helpful for some people sharing the same intuitive understanding. We've all seen something like this:
These data are presumably independent but clearly exhibit correlation ($r = 0.66$). "I thought independence implies zero correlation!" the student says.
As others have already pointed out, the sample values are correlated, but that does not mean the population has nonzero correlation.
Of course, these two should be independent—given Nicolas Cage appeared in a record-setting 10 films this year, we shouldn't be closing the local pool for the summer for safety purposes.
But when we check how many people drown this year, there is a small chance that a record-setting 1000 people drown this year.
Getting such correlation is unlikely. Maybe one in a thousand. But it's possible, even though the two are independent. But this is just one case. Consider that there are millions of possible events to measure out there, and you can see that the odds of some two happening to give a high correlation are quite high (hence the existence of graphs such as the one above).
Another way to look at it is that guaranteeing that two independent events will always give uncorrelated values is itself restrictive. Given two independent dice, and the results of the first, there is a certain (sizable) set of results for the second die which will give some nonzero correlation. To restrict the second die's results to give zero correlation with the first is a clear violation of independence, as the first die's rolls would now be affecting the distribution of the second's results.
|
11,797
|
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression?
|
Flagging outliers is not a judgement call (or in any case need not be one). Given a statistical model, outliers have a precise, objective definition: they are observations that do not follow the pattern of the majority of the data. Such observations need to be set apart at the outset of any analysis simply because their distance from the bulk of the data ensures that they will exert a disproportionate pull on any multivariable model fitted by maximum likelihood (or indeed any other convex loss function).
It is important to point out that multivariable outliers simply cannot be reliably detected using residuals from a least-squares fit (or any other model estimated by ML, or any other convex loss function). Simply put, multivariable outliers can only be reliably detected using their residuals from a model fitted using an estimation procedure that is not susceptible to being swayed by them.
The belief that outliers will necessarily stand out in the residuals of a classical fit ranks somewhere up there with other hard-to-debunk statistical no-no's, such as interpreting p-values as measures of evidence or drawing inference on a population from a biased sample. Except perhaps that this one may well be much older: Gauss himself recommended the use of robust estimators such as the median and the MAD (instead of the classical mean and standard deviation) to estimate the parameters of a normal distribution from noisy observations (even going so far as deriving the consistency factor of the MAD (1)).
To give a simple visual example based on real data, consider the infamous CYG star data. The red line here depicts the least square fit, the blue line the fit obtained using a robust linear regression fit. The robust fit here is namely the FastLTS (2) fit, an alternative to the LS fit which can be used to detect outliers (because it uses an estimation procedure that ensures that the influence of any observation on the estimated coefficient is bounded). The R code to reproduce it is:
library(robustbase)
data(starsCYG)
plot(starsCYG)
lm.stars <- lm(log.light ~ log.Te, data = starsCYG)
abline(lm.stars$coef,col="red",lwd=2)
lts.stars <- ltsReg(log.light ~ log.Te, data = starsCYG)
abline(lts.stars$coef,col="blue",lwd=2)
Interestingly, the 4 outlying observations on the left do not even have the largest residuals with respect to the LS fit, and the QQ plot of the residuals of the LS fit (or any of the diagnostic tools derived from them, such as Cook's distance or the dfbetas) fails to show any of them as problematic. This is actually the norm: no more than two outliers are needed (regardless of the sample size) to pull the LS estimates in such a way that the outliers do not stand out in a residual plot. This is called the masking effect, and it is well documented. Perhaps the only thing remarkable about the starsCYG dataset is that it is bivariate (hence we can use visual inspection to confirm the result of the robust fit) and that there actually is a good explanation for why these four observations on the left are so abnormal.
This is, btw, the exception more than the rule: except in small pilot studies involving small samples and few variables and where the person doing the statistical analysis was also involved in the data collection process, I have never experienced a case where prior beliefs about the identity of the outliers were actually true.
This is, by the way, quite easy to verify. Regardless of whether outliers have been identified using an outlier detection algorithm or the researcher's gut feeling, outliers are by definition observations that have an abnormal leverage (or "pull") over the coefficients obtained from an LS fit. In other words, outliers are observations whose removal from the sample should severely impact the LS fit.
Although I have never personally experienced this either, there are some well-documented cases in the literature where observations flagged as outliers by an outlier detection algorithm were later found to have been gross errors or generated by a different process. In any case, it is neither scientifically warranted nor wise to only remove outliers if they can somehow be understood or explained. If a small cabal of observations is so far removed from the main body of the data that it can single-handedly pull the results of a statistical procedure all by itself, it is wise (and, I might add, natural) to treat it apart, regardless of whether or not these data points happen to also be suspect on other grounds.
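That "pull" is easy to demonstrate with made-up numbers (this is not the CYG data, and plain OLS via the closed-form simple-regression slope is used rather than LTS):

```python
def ols_slope(xs, ys):
    """Closed-form slope of a simple least-squares regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Clean data lying exactly on y = 2x.
x = list(range(10))
y = [2 * xi for xi in x]
slope_clean = ols_slope(x, y)        # exactly 2.0

# Adding just two high-leverage outliers wrecks the LS fit.
x_bad = x + [20, 20]
y_bad = y + [0, 0]
slope_bad = ols_slope(x_bad, y_bad)  # the slope even changes sign

print(slope_clean, round(slope_bad, 3))
```

Two points out of twelve are enough to flip the sign of the estimated slope, which is exactly the abnormal leverage described above.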
(1): see Stephen M. Stigler, The History of Statistics: The Measurement of Uncertainty before 1900.
(2): Computing LTS Regression for Large Data Sets (2006)
P. J. Rousseeuw, K. van Driessen.
(3): High-Breakdown Robust Multivariate Methods (2008).
Hubert M., Rousseeuw P. J. and Van Aelst S.
Source: Statist. Sci. Volume 23, 92-119.
|
11,798
|
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression?
|
In general, I am wary of removing "outliers." Regression analysis can be correctly applied in the presence of non-normally distributed errors, errors that exhibit heteroskedasticity, or values of the predictors/independent variables that are "far" from the rest. The true problem with outliers is that they don't follow the linear model that every other data point follows. How do you know whether this is the case? You don't.
If anything, you don't want to look for values of your variables that are outliers; instead, you want to look for values of your residuals that are outliers. Look at these data points. Are their variables recorded correctly? Is there any reason that they wouldn't follow the same model as the rest of your data?
Of course, the reason why these observations may appear as outliers (according to the residual diagnostic) could be that your model is wrong. I had a professor who liked to say that, if we threw away outliers, we'd still believe that the planets revolve around the sun in perfect circles. Kepler could have thrown away Mars and the circular-orbit story would have looked pretty good. Mars provided the key insight that this model was incorrect, and he would have missed this result had he ignored that planet.
You mentioned that removing the outliers doesn't change your results very much. Either this is because you only have a very small number of observations that you removed relative to your sample or they are reasonably consistent with your model. This might suggest that, while the variables themselves may look different from the rest, that their residuals are not that outstanding. I would leave them in and not try to justify my decision to remove some points to my critics.
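A minimal R sketch of the "look at the residuals, not the variables" idea (using the built-in mtcars data purely as a stand-in for your own):

```r
fit <- lm(mpg ~ wt, data = mtcars)   # any fitted linear model
r <- rstandard(fit)                  # standardized residuals
which(abs(r) > 2)                    # cases whose *residuals*, not variables, stand out
```

A case with an extreme predictor value but a small standardized residual is consistent with the model and is not, by this criterion, a problem.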
|
11,799
|
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression?
|
+1 to @Charlie and @PeterFlom; you're getting good information there. Perhaps I can make a small contribution here by challenging the premise of the question. A boxplot will typically (software can vary, and I don't know for sure what SPSS is doing) label points more than 1.5 times the inter-quartile range above (below) the third (first) quartile as 'outliers'. However, we can ask how often we should expect to find at least one such point when we know for a fact that all points come from the same distribution. A simple simulation can help us answer this question:
set.seed(999)                    # this makes the sim reproducible
outVector = vector(length=10000) # to store the results
N = 100                          # amount of data per sample
for(i in 1:10000){               # repeating 10k times
  X = rnorm(N)                   # draw a normal sample
  bp = boxplot(X, plot=FALSE)    # make the boxplot
  outVector[i] = ifelse(length(bp$out)!=0, 1, 0)  # 1 if any 'outliers', else 0
}
mean(outVector)                  # proportion of samples w/ >0 'outliers'
[1] 0.5209
What this demonstrates is that such points can be expected to occur commonly (>50% of the time) with samples of size 100, even when nothing is amiss. As that last sentence hints, the probability of finding a faux 'outlier' via the boxplot strategy will depend on the sample size:
N probability
10 [1] 0.2030
50 [1] 0.3639
100 [1] 0.5209
500 [1] 0.9526
1000 [1] 0.9974
There are other strategies for automatically identifying outliers, but any such method will sometimes misidentify valid points as 'outliers', and sometimes misidentify true outliers as 'valid points'. (You can think of these as type I and type II errors.) My thinking on this issue (for what it's worth) is to focus on the effects of including / excluding the points in question. If your goal is prediction, you can use cross validation to determine whether / how much including the points in question increases the root mean squared error of prediction. If your goal is explanation, you can look at dfBeta (i.e., look at how much the beta estimates of your model change depending on whether the points in question are included or not). Another perspective (arguably the best) is to avoid having to choose whether aberrant points should be thrown out, and just use robust analyses instead.
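To make the dfBeta and robust-analysis suggestions concrete, here is a hedged R sketch (mtcars is used only as a placeholder dataset; substitute your own model):

```r
fit <- lm(mpg ~ wt + hp, data = mtcars)
head(dfbeta(fit))                 # per-case change in each coefficient if that case is dropped
summary(influence.measures(fit))  # flags cases on several influence criteria at once

# Robust alternative: M-estimation downweights aberrant points automatically
library(MASS)                     # shipped with standard R distributions
rfit <- rlm(mpg ~ wt + hp, data = mtcars)
cbind(OLS = coef(fit), robust = coef(rfit))  # compare the two fits
```

If the OLS and robust coefficients are close, the flagged points were not doing much damage; if they diverge, the robust fit sidesteps the delete-or-keep decision entirely.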
|
11,800
|
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression?
|
You should first look at plots of the residuals: Do they follow (roughly) a normal distribution? Do they show signs of heteroskedasticity? Look at other plots as well (I do not use SPSS, so cannot say exactly how to do this in that program, nor what boxplots you are looking at; however, it's hard to imagine that asterisks mean "not that bad" — they probably mean that these are highly unusual points by some criterion).
Then, if you have outliers, look at them and try to figure out why.
Then you can try the regression with and without the outliers. If the results are similar, life is good. Report the full results with a footnote. If not similar, then you should explain both regressions.
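In R terms, the suggested checks might look like the following sketch (my own assumptions about model and data, not the answerer's exact workflow):

```r
fit <- lm(mpg ~ wt, data = mtcars)       # illustrative model
par(mfrow = c(1, 2))
qqnorm(resid(fit)); qqline(resid(fit))   # roughly normal residuals?
plot(fitted(fit), resid(fit))            # a fanning pattern suggests heteroskedasticity
abline(h = 0, lty = 2)

# Refit without flagged cases and compare, as the answer advises
keep <- abs(rstandard(fit)) <= 2
rbind(full = coef(fit),
      trimmed = coef(lm(mpg ~ wt, data = mtcars[keep, ])))
```

If the two coefficient rows are similar, report the full fit with a footnote; if not, report and discuss both regressions.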
|