11,701
KKT versus unconstrained formulation of lasso regression
The two formulations are equivalent in the sense that for every value of $t$ in the first formulation, there exists a value of $\lambda$ for the second formulation such that the two formulations have the same minimizer $\beta$. Here's the justification: Consider the lasso formulation: $$f(\beta)=\frac{1}{2}||Y - X\beta...
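For reference, the two formulations being compared can be written out as follows (a sketch in the excerpt's notation; the objective above is truncated):

```latex
% Constrained (KKT) formulation, with tuning parameter t:
\min_{\beta}\; \frac{1}{2}\|Y - X\beta\|_2^2
\quad \text{subject to} \quad \|\beta\|_1 \le t
% Unconstrained (Lagrangian) formulation, with penalty parameter \lambda:
\min_{\beta}\; \frac{1}{2}\|Y - X\beta\|_2^2 + \lambda \|\beta\|_1
```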
11,702
KKT versus unconstrained formulation of lasso regression
I think that elexhobby's idea for this proof is a good one, but I don't think it's completely correct. In showing that the existence of a solution for the first formulation, $\hat{\beta}$, such that $\|\hat{\beta}\| < \|\beta^*\|$ leads to a contradiction, we can only assume the necessity of $\|\hat{\beta}\| = \|\beta^...
11,703
Test for linear separability
Well, support vector machines (SVM) are probably what you are looking for. For example, an SVM with an RBF kernel maps the features to a higher-dimensional space and tries to separate the classes by a linear hyperplane. This is a nice short SVM video illustrating the idea. You may wrap SVM with a search method for featur...
11,704
Test for linear separability
Computationally, the most effective way to decide whether two sets of points are linearly separable is by applying linear programming. GLPK is perfect for that purpose, and pretty much every high-level language offers an interface for it - R, Python, Octave, Julia, etc. With respect to the answer suggesting the usage of S...
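The LP approach can be sketched as follows. This is a hypothetical illustration using `scipy.optimize.linprog` rather than GLPK (the function name and test points are mine, not from the answer). Two sets are separable iff there exist `w`, `c` with `w·a + c ≥ 1` on one set and `w·b + c ≤ -1` on the other, i.e. iff the constraint system below is feasible:

```python
# Linear-programming separability test (a sketch, not the answer's code).
import numpy as np
from scipy.optimize import linprog

def linearly_separable(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = A.shape[1]
    # Variables: w (d entries) and intercept c, stacked as [w, c].
    # Constraints written as G @ [w, c] <= h:
    #   -(w.a + c) <= -1  for every a in A
    #    (w.b + c) <= -1  for every b in B
    G = np.vstack([np.hstack([-A, -np.ones((len(A), 1))]),
                   np.hstack([ B,  np.ones((len(B), 1))])])
    h = -np.ones(len(A) + len(B))
    # Zero objective: we only care about feasibility. Bounds must be opened
    # up, since linprog defaults to x >= 0.
    res = linprog(c=np.zeros(d + 1), A_ub=G, b_ub=h,
                  bounds=[(None, None)] * (d + 1))
    return res.status == 0  # 0 = feasible (separable), 2 = infeasible

print(linearly_separable([[0, 0], [1, 0]], [[3, 3], [4, 4]]))  # True
print(linearly_separable([[0, 0], [2, 2]], [[1, 1]]))          # False
```

The margin of 1 on each side is arbitrary; any positive margin works, since `w` and `c` can be rescaled.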
11,705
Test for linear separability
The linear perceptron is guaranteed to find a solution if one exists, but this approach is not efficient for large dimensions. Computationally, the most effective way to decide whether two sets of points are linearly separable is by applying linear programming, as mentioned by @Raffael. A quick solution would be to solve a perce...
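The capped-perceptron idea can be sketched as follows (a minimal illustration; the function name, labels, and iteration cap are my own assumptions). Note the cap is essential: on non-separable data the perceptron never converges, so hitting the cap is only suggestive, not a proof of non-separability.

```python
# Perceptron-based separability check (a hypothetical sketch).
import numpy as np

def perceptron_separable(X, y, max_epochs=1000):
    """X: (n, d) points; y: labels in {-1, +1}."""
    X = np.hstack([np.asarray(X, float), np.ones((len(X), 1))])  # absorb bias
    y = np.asarray(y, float)
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi         # standard perceptron update
                errors += 1
        if errors == 0:              # one clean pass => separating hyperplane found
            return True, w
    return False, w                  # cap reached: possibly not separable

ok, w = perceptron_separable([[0, 0], [1, 0], [3, 3], [4, 4]],
                             [-1, -1, 1, 1])
print(ok)  # True
```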
11,706
Why do Lars and Glmnet give different solutions for the Lasso problem?
Finally we were able to produce the same solution with both methods! The first issue is that glmnet solves the lasso problem as stated in the question, but lars has a slightly different normalization in the objective function: it replaces $\frac{1}{2N}$ by $\frac{1}{2}$. Second, both methods normalize the data differently,...
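A sketch of the normalization difference described above (assuming the standard objectives; check each package's documentation for the exact form):

```latex
% glmnet objective:
\min_{\beta}\; \frac{1}{2N}\|y - X\beta\|_2^2 + \lambda_{\text{glmnet}}\,\|\beta\|_1
% lars objective:
\min_{\beta}\; \frac{1}{2}\|y - X\beta\|_2^2 + \lambda_{\text{lars}}\,\|\beta\|_1
% so the two solution paths agree under the rescaling
\lambda_{\text{lars}} = N\,\lambda_{\text{glmnet}}
```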
11,707
Why do Lars and Glmnet give different solutions for the Lasso problem?
Obviously, if the methods use different models you will get different answers. Subtracting off the intercept terms does not lead to the model without the intercept, because the best-fitting coefficients will change, and you do not change them in the way you are approaching it. You need to fit the same model with both methods...
11,708
Why do Lars and Glmnet give different solutions for the Lasso problem?
The results have to be the same. The lars package uses type="lar" by default; change this value to type="lasso". Also lower the parameter thresh (e.g. thresh=1e-16) for glmnet, since its coordinate descent only iterates until that convergence threshold is reached.
11,709
How does it make sense to do OLS after LASSO variable selection?
There was a similar question a few days ago which had the relevant reference: Belloni, A., Chernozhukov, V., and Hansen, C. (2014) "Inference on Treatment Effects after Selection among High-Dimensional Controls", Review of Economic Studies, 81(2), pp. 608-50 (link) At least for me the paper is a pretty tough read bec...
11,710
How does it make sense to do OLS after LASSO variable selection?
To perform a variable selection and then re-run an analysis, as if no variable selection had happened and the selected model had been intended from the start, typically leads to exaggerated effect sizes, invalid p-values and confidence intervals with below-nominal coverage. Perhaps if the sample size is very large and th...
11,711
How does it make sense to do OLS after LASSO variable selection?
It may be an excellent idea to run an OLS regression after LASSO. This is simply to double check that your LASSO variable selection made sense. Very often when you rerun the model using OLS regression you uncover that many of the variables selected by LASSO are nowhere near being statistically significant and/or have...
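A hypothetical sketch of this LASSO-then-OLS workflow, using scikit-learn on simulated data (all names and the penalty value are illustrative; and, per the caveats in the other answers, the refit p-values would still be overoptimistic because the selection step is ignored):

```python
# LASSO variable selection followed by an OLS refit (illustrative sketch).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
# The true model uses only the first two predictors.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)           # indices LASSO kept
print("selected:", selected)

ols = LinearRegression().fit(X[:, selected], y)  # OLS refit on the subset
print("OLS coefficients:", ols.coef_.round(2))   # less shrunken than lasso.coef_
```

The OLS refit undoes the LASSO shrinkage on the retained coefficients, which is exactly why comparing the two fits is a useful sanity check.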
11,712
Safely determining sample size for A/B testing
The most common method for doing this kind of testing is with binomial proportion confidence intervals (see http://bit.ly/fa2K7B$^\dagger$). You won't ever be able to know the "true" conversion rate of the two paths, but this will give you the ability to say something to the effect "With 99% confidence, A is more effect...
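A minimal sketch of the interval comparison described above, using the normal-approximation (Wald) binomial interval; the counts and the 99% z-value are illustrative assumptions, not numbers from the answer:

```python
# Normal-approximation binomial proportion confidence interval (sketch).
import math

def proportion_ci(successes, n, z=2.576):   # z = 2.576 for ~99% confidence
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

lo_a, hi_a = proportion_ci(120, 1000)   # path A: 12% observed conversion
lo_b, hi_b = proportion_ci(200, 1000)   # path B: 20% observed conversion
print((round(lo_a, 3), round(hi_a, 3)))
print((round(lo_b, 3), round(hi_b, 3)))
# Non-overlapping intervals -> B is more effective at this confidence level.
print(hi_a < lo_b)  # True
```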
11,713
Safely determining sample size for A/B testing
IMHO, as far as it goes, the post goes in the right direction. However: the proposed method implicitly makes assumptions about two quantities, the baseline conversion rate and the expected amount of change, and the required sample size depends very much on how well you meet these assumptions. I recommend that you calculate required sample sizes ...
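The dependence on those two assumptions can be made concrete with the standard two-proportion sample-size formula. A stdlib sketch (the z-values below correspond to a two-sided 5% test with 80% power; they are my own example choices, not values from the answer):

```python
# Required sample size per arm for detecting a change in conversion rate.
import math

def sample_size_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """z_alpha=1.96: two-sided alpha=0.05; z_beta=0.84: 80% power."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Assumed baseline of 10% conversion, hoping to detect a lift to 12%:
print(sample_size_per_arm(0.10, 0.12))
# Halving the assumed lift roughly quadruples the required n:
print(sample_size_per_arm(0.10, 0.11))
```

This is why it pays to compute the required n for a range of plausible baselines and effect sizes, as the answer recommends.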
11,714
Safely determining sample size for A/B testing
Instead of checking for overlapping intervals, you can calculate the Z-score. This is algorithmically easier to implement, and there are statistical libraries to help. Take a look here.
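A sketch of the two-proportion z-score computation being suggested, using the pooled standard error as in the standard two-proportion z-test (the counts are illustrative):

```python
# Pooled two-proportion z-score (illustrative sketch).
import math

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                     # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(200, 1000, 120, 1000)
print(round(z, 2))   # |z| > 2.576 -> significant at the 1% level
```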
11,715
How to interpret confidence interval of the difference in means in one sample T-test?
This is not an easy thing, even for respected statisticians. Look at one recent attempt by Nate Silver: ... if I asked you to tell me how often your commute takes 10 minutes longer than average — something that requires some version of a confidence interval — you’d have to think about that a little bit, ... (from th...
11,716
How to interpret confidence interval of the difference in means in one sample T-test?
From a pedantic technical viewpoint, I personally don't think there is a "clear wording" of the interpretation of confidence intervals. I would interpret a confidence interval as: there is a 95% probability that the 95% confidence interval covers the true mean difference. An interpretation of this is that if we were to ...
11,717
How to interpret confidence interval of the difference in means in one sample T-test?
The rough answer to the question is that a 95% confidence interval allows you to be 95% confident that the true parameter value lies within the interval. However, that rough answer is both incomplete and inaccurate. The incompleteness lies in the fact that it is not clear that "95% confident" means anything concrete, o...
11,718
How to interpret confidence interval of the difference in means in one sample T-test?
The meaning of a confidence interval is: if you were to repeat your experiment in the exact same way (i.e.: the same number of observations, drawing from the same population, etc.), and if your assumptions are correct, and you would calculate that interval again in each repetition, then this confidence interval would c...
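The repeated-experiment reading above can be checked with a small simulation. This sketch assumes normal data and a t-based interval; all numbers (true mean, sample size, repetition count) are illustrative:

```python
# Coverage simulation: repeat the experiment, build a 95% t-interval each
# time, and count how often it covers the true mean (roughly 95%).
import numpy as np

rng = np.random.default_rng(42)
true_mean, n, reps = 5.0, 30, 10_000
t_crit = 2.045   # two-sided 95% t quantile for df = 29

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, 2.0, size=n)
    m, s = sample.mean(), sample.std(ddof=1)
    half = t_crit * s / np.sqrt(n)
    covered += (m - half <= true_mean <= m + half)

print(covered / reps)   # close to 0.95
```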
11,719
How to interpret confidence interval of the difference in means in one sample T-test?
If the true mean difference is outside of this interval, then there is only a 5% chance that the mean difference from our experiment would be so far away from the true mean difference.
11,720
How to interpret confidence interval of the difference in means in one sample T-test?
My interpretation: if you conduct the experiment N times (where N tends to infinity), then out of this large number of experiments, 95% of the experiments will have confidence intervals which lie within these 95% limits. More clearly, let's say those limits are "a" and "b"; then 95 out of 100 times your sample mean diffe...
11,721
How to interpret confidence interval of the difference in means in one sample T-test?
"95 times out of 100, your value will fall within one standard deviation of the mean"
11,722
Textbooks on Matrix Calculus?
For most matrix questions I always first refer to "The Matrix Cookbook" (see here). It is regularly updated due to feedback from various sources. There are proofs contained within, however it is mostly intended as a handbook.
11,723
Textbooks on Matrix Calculus?
If you found too much theory in the book of Magnus and Neudecker, I recommend this one, also authored by Magnus: Abadir, K.M. and Magnus, J.R., Matrix Algebra, Cambridge University Press, 2005, which has more emphasis on the applications of matrix calculus.
11,724
Textbooks on Matrix Calculus?
I would highly recommend this 26-page paper from Stanford University: "Linear Algebra Review and Reference" by Zico Kolter. It really focuses on typical sum calculations with a lot of i and j everywhere and tells you the corresponding matrix calculation (i.e. using their "vectorized" implementation). It helps you recogn...
11,725
Textbooks on Matrix Calculus?
A user self-deleted the following helpful answer, which I here reproduce in full so that its information is not lost: You don't really need a lot of results on vector and matrix derivatives for ML, and Tom Minka's paper covers most of it, but the definitive treatment is Magnus & Neudecker's Matrix Differential Calcul...
11,726
How do you "control" for a factor/variable?
As already said, controlling usually means including a variable in a regression (as pointed out by @EMS, this doesn't guarantee success in achieving it; he links to this). There already exist some highly voted questions and answers on this topic, such as: How exactly does one “control for other variables”? Is th...
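The idea of controlling by including the variable in the regression can be sketched on simulated data (all variable names and coefficients are illustrative): the naive slope on x is confounded by z, and adding z to the regression removes it.

```python
# Controlling for a confounder by including it in the regression (sketch).
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
z = rng.normal(size=n)              # confounder
x = z + rng.normal(size=n)          # x depends on z
y = 2 * z + rng.normal(size=n)      # y depends only on z, not on x

def ols(y, *cols):
    """Least-squares fit with intercept; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(round(ols(y, x)[1], 2))       # naive slope on x: confounded, near 1.0
print(round(ols(y, x, z)[1], 2))    # controlling for z: near 0.0
```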
11,727
How do you "control" for a factor/variable?
To control for a variable, one can equalize two groups on a relevant trait and then compare the difference on the issue you're researching. I can only explain this with an example, not formally; B-school is years in the past, so there. If you would say: Brazil is richer than Switzerland because Brazil has a national...
11,728
What MCMC algorithms/techniques are used for discrete parameters?
So the simple answer is yes: Metropolis-Hastings and its special case Gibbs sampling :) General and powerful; whether or not it scales depends on the problem at hand. I'm not sure why you think sampling an arbitrary discrete distribution is more difficult than an arbitrary continuous distribution. If you can calculate...
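A minimal sketch of Metropolis-Hastings on a discrete parameter, illustrating the point that an unnormalized probability function is all you need (the target and the random-walk proposal here are my own toy choices):

```python
# Metropolis-Hastings on a discrete support {0, ..., 9}; target is
# proportional to (k+1)^2 and never needs to be normalized.
import random

random.seed(1)
K = 10

def weight(k):                      # unnormalized target probability
    return (k + 1) ** 2

state, counts = 0, [0] * K
for _ in range(200_000):
    proposal = (state + random.choice([-1, 1])) % K   # symmetric random walk
    if random.random() < min(1.0, weight(proposal) / weight(state)):
        state = proposal
    counts[state] += 1              # count the current state every iteration

freqs = [c / sum(counts) for c in counts]
total = sum(weight(k) for k in range(K))
print([round(f, 3) for f in freqs])
print([round(weight(k) / total, 3) for k in range(K)])  # target, for comparison
```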
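As a minimal sketch of the point that Metropolis-Hastings handles discrete parameters directly, here is a random-walk Metropolis sampler over a made-up unnormalised target on five states; only the ratio of target weights is ever needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unnormalised target over the states {0, 1, 2, 3, 4}; MH only ever needs ratios.
w = np.array([1.0, 2.0, 4.0, 2.0, 1.0])

n_steps = 100_000
steps = rng.choice([-1, 1], size=n_steps)   # symmetric random-walk proposal
us = rng.random(n_steps)

state = 0
counts = np.zeros(len(w))
for step, u in zip(steps, us):
    prop = state + step
    # Proposals outside the support have target mass 0 and are rejected;
    # with a symmetric proposal the acceptance ratio is just w[prop]/w[state].
    if 0 <= prop < len(w) and u < min(1.0, w[prop] / w[state]):
        state = prop
    counts[state] += 1

est = counts / counts.sum()
print(est)   # close to w / w.sum() = [0.1, 0.2, 0.4, 0.2, 0.1]
```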
11,729
Gibbs sampling versus general MH-MCMC
the main rationale behind using the Metropolis-algorithm lies in the fact that you can use it even when the resulting posterior is unknown. For Gibbs-sampling you have to know the posterior-distributions which you draw variates from.
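To illustrate the requirement that the full conditionals be known, here is a sketch of Gibbs sampling for a standard bivariate normal with correlation rho = 0.8, a textbook case where both conditionals are available in closed form (the correlation value and chain length are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.8   # target: standard bivariate normal with correlation rho

# Full conditionals are known in closed form:
#   x | y ~ N(rho*y, 1 - rho^2),   y | x ~ N(rho*x, 1 - rho^2)
x, y = 0.0, 0.0
sd = np.sqrt(1 - rho**2)
xs, ys = [], []
for _ in range(100_000):
    x = rng.normal(rho * y, sd)
    y = rng.normal(rho * x, sd)
    xs.append(x)
    ys.append(y)

xs, ys = np.array(xs), np.array(ys)
print(np.corrcoef(xs, ys)[0, 1])   # close to 0.8
```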
11,730
Gibbs sampling versus general MH-MCMC
Gibbs sampling breaks the curse of dimensionality in sampling since you've broken down the (possibly high dimensional) parameter space into several low dimensional steps. Metropolis-Hastings alleviates some of the dimensionality problems of generic rejection sampling techniques, but you are still sampling from a ful...
11,731
Is there a reason to prefer a specific measure of multicollinearity?
Back in the late 1990s, I did my dissertation on collinearity. My conclusion was that condition indexes were best. The main reason was that, rather than look at individual variables, it lets you look at sets of variables. Since collinearity is a function of sets of variables, this is a good thing. Also, the results o...
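A rough NumPy sketch of Belsley-style condition indexes (the data here is made up, and the ">30 flags serious collinearity" threshold is the usual rule of thumb, not a hard cutoff): scale the columns of the design matrix, including the intercept, to unit length, take the singular values, and divide the largest by each.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)

# Belsley-style condition indexes: unit-length columns (incl. intercept),
# singular values, then s_max / s_j for each j.
X = np.column_stack([np.ones(n), x1, x2, x3])
Xs = X / np.linalg.norm(X, axis=0)
s = np.linalg.svd(Xs, compute_uv=False)
cond_idx = s[0] / s
print(cond_idx)   # largest index is large (>30 flags serious collinearity)
```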
11,732
How to handle the difference between the distribution of the test set and the training set?
If the difference lies only in the relative class frequencies in the training and test sets, then I would recommend the EM procedure introduced in this paper: Marco Saerens, Patrice Latinne, Christine Decaestecker: Adjusting the Outputs of a Classifier to New a Priori Probabilities: A Simple Procedure. Neural Computati...
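A sketch of the Saerens et al. EM procedure under simplifying assumptions (two classes, and posteriors assumed to be perfectly calibrated under the training priors; `adjust_priors` is a made-up helper name, not from the paper): iteratively reweight the classifier's posteriors by the ratio of new to training priors, then re-estimate the priors as the average adjusted posterior.

```python
import numpy as np

def adjust_priors(post_train, prior_train, n_iter=200):
    """EM re-estimation of test-set class priors from classifier posteriors."""
    prior = prior_train.copy()
    for _ in range(n_iter):
        # E-step: reweight posteriors by the ratio of new to training priors.
        post = post_train * (prior / prior_train)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: new prior = average adjusted posterior.
        prior = post.mean(axis=0)
    return prior, post

# Toy check: classes are N(-1,1) and N(1,1), trained under 50/50 priors,
# but the test set is 80% class 1.
rng = np.random.default_rng(4)
prior_train = np.array([0.5, 0.5])
n = 20_000
labels = rng.random(n) < 0.8
x = rng.normal(np.where(labels, 1.0, -1.0), 1.0)
p1 = 1.0 / (1.0 + np.exp(-2.0 * x))   # calibrated posterior under 50/50 priors
post_train = np.column_stack([1 - p1, p1])

est_prior, _ = adjust_priors(post_train, prior_train)
print(est_prior)   # roughly (0.2, 0.8)
```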
11,733
How to handle the difference between the distribution of the test set and the training set?
I found an excellent tutorial about domain adaptation that might help explain this in more detail: http://sifaka.cs.uiuc.edu/jiang4/domain_adaptation/survey/da_survey.html The one solution that hasn't been mentioned here is based on ADABOOST. Here is the link to the original article: http://ftp.cse.ust.hk/~qyang/Docs/...
11,734
Why do the R functions 'princomp' and 'prcomp' give different eigenvalues?
As pointed out in the comments, it's because princomp uses $N$ for the divisor, but prcomp and the direct calculation using cov both use $N-1$ instead of $N$. This is mentioned in both the Details section of help(princomp): Note that the default calculation uses divisor 'N' for the covariance matrix. and the Details ...
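The divisor difference can be checked directly (a NumPy sketch; the two scalings of the centered scatter matrix stand in for prcomp and princomp respectively):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(10, 3))
Xc = X - X.mean(axis=0)
n = X.shape[0]

# prcomp / cov use divisor N-1; princomp uses N.
eig_n1 = np.linalg.eigvalsh(Xc.T @ Xc / (n - 1))   # like prcomp
eig_n = np.linalg.eigvalsh(Xc.T @ Xc / n)          # like princomp

# Same eigenvalues up to the constant factor (n-1)/n:
print(eig_n / eig_n1)   # every ratio equals (n-1)/n = 0.9 here
```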
11,735
Why is bias affected when a clinical trial is terminated at an early stage?
First of all, you have to note the context: this only applies when the trial was stopped early due to interim monitoring showing efficacy/futility, not for some random outside reason. In that case the estimate of the effect size will be biased in a completely statistical sense. If you stopped for efficacy, the estim...
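A toy simulation of this selection effect (all numbers, the true effect, the efficacy boundary, and the monitoring schedule, are made up for illustration): each "trial" checks the running mean after every observation and stops the first time it crosses the boundary, reporting the mean at the stopping point. On average, the reported effect exceeds the true one.

```python
import numpy as np

rng = np.random.default_rng(6)

true_effect, n_max, boundary, reps = 0.3, 100, 0.6, 20_000
estimates = []
for _ in range(reps):
    x = rng.normal(true_effect, 1.0, n_max)
    means = np.cumsum(x) / np.arange(1, n_max + 1)
    crossed = np.nonzero(means[9:] > boundary)[0]   # monitor from n=10 on
    stop = crossed[0] + 9 if crossed.size else n_max - 1
    estimates.append(means[stop])

print(np.mean(estimates))   # noticeably above the true effect of 0.3
```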
11,736
Why is bias affected when a clinical trial is terminated at an early stage?
Here is an illustration of how bias might arise in conclusions, and why it may not be the full story. Suppose you have a sequential trial of a drug which is expected to have a positive (+1) effect but may have a negative effect (-1). Five guinea pigs are tested one after the other. The unknown probability of a posit...
11,737
Why is bias affected when a clinical trial is terminated at an early stage?
Well, my knowledge on this comes from the Harveian oration in 2008 http://bookshop.rcplondon.ac.uk/details.aspx?e=262 Essentially, to the best of my recollection the results will be biased as 1) stopping early usually means that either the treatment was more or less effective than one hoped, and if this is positive, th...
11,738
Why is bias affected when a clinical trial is terminated at an early stage?
I would disagree with that claim, unless by "bias" Piantadosi means that part of the accuracy which is commonly called bias. The inference won't be "biased" because you chose to stop per se: it will be "biased" because you have less data. The so called "likelihood principle" states that inference should only depend o...
11,739
Why is bias affected when a clinical trial is terminated at an early stage?
There will be bias (in the "statistical sense") if termination of studies is not random. In a set of experiments run to conclusion, the "early on" results of (a) some experiments that ultimately find "no effect" will show some effect (as a result of chance) and (b) some experiments that ultimately do find an effect will s...
11,740
Plain language meaning of "dependent" and "independent" tests in the multiple comparisons literature?
"Multiple comparisons" is the name attached to the general problem of making decisions based on the results of more than one test. The nature of the problem is made clear by the famous XKCD "Green jelly bean" cartoon in which investigators performed hypothesis tests of associations between consumption of jelly beans (...
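The jelly-bean arithmetic is easy to reproduce: with 20 independent tests at level alpha = 0.05 (as in the cartoon) and every null true, at least one false positive is more likely than not, and a Bonferroni correction restores the familywise rate.

```python
# With m independent tests at level alpha, the chance of at least one
# false positive when every null hypothesis is true:
alpha, m = 0.05, 20
fwer = 1 - (1 - alpha) ** m
print(round(fwer, 3))        # 0.642: one "green jelly bean" is likely

# Bonferroni (test each at alpha/m) keeps the familywise rate below alpha:
fwer_bonf = 1 - (1 - alpha / m) ** m
print(round(fwer_bonf, 3))   # 0.049
```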
11,741
Minimal number of points for a linear regression
Peter's rule of thumb of 10 per covariate is a reasonable rule. A straight line can be fit perfectly with any two points regardless of the amount of noise in the response values and a quadratic can be fit perfectly with just 3 points. So clearly in almost any circumstance, it would be proper to say that 4 points are ...
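The remark that a straight line fits any two points perfectly, regardless of noise, can be checked directly (a NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two points always give a zero-residual line, however noisy they are:
x = np.array([0.0, 1.0])
y = rng.normal(size=2) * 10.0   # pure noise, no real relationship
coef = np.polyfit(x, y, deg=1)
resid = y - np.polyval(coef, x)
print(np.abs(resid).max())   # ~0: a "perfect" fit that means nothing
```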
11,742
Minimal number of points for a linear regression
As mentioned by Michael, a good rule of thumb is 10; you can also check out the one-in-ten rule on Wikipedia: https://en.wikipedia.org/wiki/One_in_ten_rule
11,743
Cross-validation vs empirical Bayes for estimating hyperparameters
I doubt there will be a theoretical link that says that CV and evidence maximisation are asymptotically equivalent as the evidence tells us the probability of the data given the assumptions of the model. Thus if the model is mis-specified, then the evidence may be unreliable. Cross-validation on the other hand gives ...
11,744
Cross-validation vs empirical Bayes for estimating hyperparameters
There is actually a paper that connects CV and EB: E Fong, C C Holmes, On the marginal likelihood and cross-validation, Biometrika, Volume 107, Issue 2, June 2020, Pages 489–496 If I understand correctly, the paper claims that the marginal likelihood is similar to a very exhaustive cross-validation procedure, where you...
11,745
Cross-validation vs empirical Bayes for estimating hyperparameters
If you didn't have the other parameters $k$, then EB is identical to CV except that you don't have to search. You say that you are integrating out $k$ in both CV and EB. In that case, they are identical.
11,746
"Normalizing" variables for SVD / PCA
The three common normalizations are centering, scaling, and standardizing. Let $X$ be a random variable. Centering is $$x_i^* = x_i-\bar{x}.$$ The resultant $x^*$ will have $\bar{x^*}=0$. Scaling is $$x_i^* = \frac{x_i}{\sqrt{(\sum_{i}{x_i^2})}}.$$ The resultant $x^*$ will have $\sum_{i}{{{x_i^*}}^2} = 1$. Standardizi...
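The three normalizations in NumPy (here using the sample standard deviation, ddof=1, for standardizing, which is one common convention):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

centered = x - x.mean()                        # mean becomes 0
scaled = x / np.sqrt(np.sum(x**2))             # sum of squares becomes 1
standardized = (x - x.mean()) / x.std(ddof=1)  # mean 0, sample sd 1

print(centered.mean(), np.sum(scaled**2), standardized.std(ddof=1))
```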
11,747
"Normalizing" variables for SVD / PCA
You are absolutely right that having individual variables with very different variances can be problematic for PCA, especially if this difference is due to different units or different physical dimensions. For that reason, unless the variables are all comparable (same physical quantity, same units), it is recommended t...
11,748
"Normalizing" variables for SVD / PCA
A common technique before applying PCA is to subtract the mean from the samples. If you don't do it, the first eigenvector will be the mean. I'm not sure whether you have done it but let me talk about it. If we speak in MATLAB code:
clear, clf, clc
%% Let us draw a line
scale = 1;
x = scale .* (1:0.25:5);
y = 1/...
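A NumPy analogue of that demonstration (with illustrative data that has a large mean offset relative to its spread): without centering, the first right singular vector lines up with the mean direction rather than the direction of greatest spread.

```python
import numpy as np

rng = np.random.default_rng(8)
# Data with a large mean offset and comparatively small spread.
mean = np.array([10.0, 5.0])
X = mean + 0.5 * rng.normal(size=(500, 2))

# Without centering, the first right singular vector points along the mean.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
v1 = Vt[0] * np.sign(Vt[0] @ mean)      # fix the sign for comparison
print(v1, mean / np.linalg.norm(mean))  # nearly identical directions
```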
11,749
"Normalizing" variables for SVD / PCA
To normalize the data for PCA, the following formula is also used: $\text{SC}=100\frac{X-\min(X)}{\max(X)-\min(X)}$, where $X$ is the raw value for that indicator for country $c$ in year $t$, and the minimum and maximum are taken over all raw values for that indicator across all countries and all years.
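For a concrete instance of this min-max rescaling to the range [0, 100] (made-up indicator values):

```python
import numpy as np

x = np.array([3.0, 8.0, 10.0, 15.0, 23.0])
sc = 100 * (x - x.min()) / (x.max() - x.min())
print(sc)   # [0, 25, 35, 60, 100]
```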
11,750
Statistical forensics: Benford and beyond
Great Question! In the scientific context there are various kinds of problematic reporting and problematic behaviour: Fraud: I'd define fraud as a deliberate intention on the part of the author or analyst to misrepresent the results and where the misrepresentation is of a sufficiently grave nature. The main example be...
11,751
Statistical forensics: Benford and beyond
Actually, Benford's Law is an incredibly powerful method. This is because Benford's frequency distribution of first digits is applicable to all sorts of data sets that occur in the real or natural world. You are right that you can use Benford's Law only in certain circumstances. You say that the data has to have a ...
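Benford's expected first-digit frequencies are $\log_{10}(1 + 1/d)$, and powers of 2 are a classic data set that follows them; a stdlib-only sketch comparing expected and observed frequencies:

```python
import math
from collections import Counter

# Benford's expected first-digit frequencies: log10(1 + 1/d)
expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Powers of 2 are a classic example of Benford-distributed first digits.
digits = [int(str(2**k)[0]) for k in range(1, 3001)]
counts = Counter(digits)
observed = {d: counts[d] / len(digits) for d in range(1, 10)}

for d in range(1, 10):
    print(d, round(expected[d], 3), round(observed[d], 3))
```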
11,752
What is the relationship between the GINI score and the log-likelihood ratio
I will use the same notation I used here: Mathematics behind classification and regression trees Gini Gain and Information Gain ($IG$) are both impurity based splitting criteria. The only difference is in the impurity function $I$: $\textit{Gini}: \mathit{Gini}(E) = 1 - \sum_{j=1}^{c}p_j^2$ $\textit{Entropy}: H(E) = -...
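The two impurity functions side by side (entropy in bits; a minimal sketch): both are maximal for an evenly split node and zero for a pure one.

```python
import math

def gini(p):
    # Gini impurity: 1 - sum_j p_j^2
    return 1.0 - sum(pj**2 for pj in p)

def entropy(p):
    # Entropy in bits: -sum_j p_j * log2(p_j), with 0*log(0) treated as 0
    return sum(-pj * math.log2(pj) for pj in p if pj > 0)

print(gini([0.5, 0.5]), entropy([0.5, 0.5]))   # 0.5 1.0 (maximal for 2 classes)
print(gini([1.0, 0.0]), entropy([1.0, 0.0]))   # 0.0 0.0 (pure node)
```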
11,753
What is the relationship between the GINI score and the log-likelihood ratio
Good question. Unfortunately I don't have enough reputation yet to upvote or comment, so answering instead! I'm not very familiar with the ratio test, but it strikes me that it is a formalism used to compare the likelihood of data arising from two (or more) different distributions, whereas the Gini coefficient is a su...
11,754
Recovering raw coefficients and variances from orthogonal polynomial regression
Yes, it's possible. Let $z_1, z_2, z_3$ be the non-constant parts of the orthogonal polynomials computed from the $x_i$. (Each is a column vector.) Regressing these against the $x_i$ must give a perfect fit. You can perform this with the software even when it does not document its procedures to compute orthogonal pol...
11,755
Recovering raw coefficients and variances from orthogonal polynomial regression
Just a potentially useful addition to whuber's answer. Looking at the code for poly, you can deduce the linear map yourself. Let $\vec h_{m:n} =(h_m, h_{m + 1}, \dots, h_n)^\top$, let negative indices be zero by definition, and undefined $\Gamma$ entries be zero. Then we can find that if we disregard the scaling here ...
11,756
What are the effects of depth and width in deep neural networks?
The "Wide Residual Networks" paper linked makes a nice summary at the bottom of p. 8: Widening consistently improves performance across residual networks of different depth; Increasing both depth and width helps until the number of parameters becomes too high and stronger regularization is needed; There doesn’t seem t...
11,757
How to set up and estimate a multinomial logit model in R?
I'm sure you've already found your solutions as this post is very old, but for those of us who are still looking for solutions - I have found Multinomial Probit and Logit Models in R is a great source for instructions on how to run a multinomial logistic regression model in R using the mlogit package. If you go to the econo...
11,758
How to set up and estimate a multinomial logit model in R?
In general, differences in AIC values between two different pieces of software are not entirely surprising. Calculating the likelihoods often involves a constant that is the same between different models of the same data. Different developers can make different choices about what to leave in or out of that constant. Wh...
11,759
How to set up and estimate a multinomial logit model in R?
You could also try running a multinomial logit using the glmnet package. I'm not sure how to force it to keep all variables, but I'm sure it's possible.
11,760
Why are mixed data a problem for euclidean-based clustering algorithms?
It's not about not being able to compute something. Distances must be used to measure something meaningful. This will fail much earlier with categorical data. If it ever works with more than one variable, that is... If you have the attributes shoe size and body mass, Euclidean distance doesn't make much sense either. It...
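To make the shoe-size/body-mass point concrete, here is a small Python sketch with invented numbers: a raw Euclidean distance silently treats one shoe size as equivalent to one kilogram.

```python
import math

def euclid(a, b):
    """Plain Euclidean distance between two numeric tuples."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

# hypothetical people as (EU shoe size, body mass in kg)
anna = (36.0, 55.0)
ben  = (46.0, 56.0)   # very different feet, similar mass
carl = (36.0, 65.0)   # same feet, 10 kg heavier

print(euclid(anna, ben))   # ~10.05, driven almost entirely by shoe size
print(euclid(anna, carl))  # 10.0, driven entirely by mass
# the raw metric treats 10 shoe sizes and 10 kg as the same "amount" of difference
```

Standardizing each variable first removes the unit artifact, but it does not by itself make the resulting distance meaningful; that is a modeling judgment.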
11,761
Why are mixed data a problem for euclidean-based clustering algorithms?
At the heart of these metric based clustering problems is the idea of interpolation. Take whatever method you just cited, and let us consider a continuous variable such as weight. You have 100kg and you have 10kg in your data. When you see a new 99kg, the metric will enable you to approach 100kg --- even though you ha...
11,762
Why are mixed data a problem for euclidean-based clustering algorithms?
A problem with unordered categorical values is that if you dummy encode them you force an ordering and thus a new meaning onto the variables. E.g. if you encode blue as 1, orange as 2, and green as 3, then you imply that a data pattern with an orange value is closer to a pattern with a green value than to one with a blue value...
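A tiny Python sketch of exactly this blue/orange/green example (illustrative only): integer coding manufactures an ordering, while one-hot coding makes every pair of categories equidistant.

```python
import math

# integer coding imposes an order: blue=1, orange=2, green=3
print(abs(3 - 1))   # 2: blue looks twice as far from green...
print(abs(2 - 1))   # 1: ...as orange does, a pure coding artifact

# one-hot coding makes every pair of categories equidistant
blue, orange, green = (1, 0, 0), (0, 1, 0), (0, 0, 1)
dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
print(dist(blue, green), dist(blue, orange), dist(orange, green))
# all three distances equal sqrt(2)
```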
11,763
Why are mixed data a problem for euclidean-based clustering algorithms?
The answer is actually quite simple, we just need to understand what the information in a dummy variable really is. The idea of a dummy variable denotes the presence or absence of factor levels (discrete values of a categorical variable). It is meant to represent something non-measurable, non-quantifiable, by storing t...
11,764
Difference between the assumptions underlying a correlation and a regression slope tests of significance
Introduction This reply addresses the underlying motivation for this set of questions: What are the assumptions underlying a correlation test and a regression slope test? In light of the background provided in the question, though, I would like to suggest expanding this question a little: let us explore the different...
11,765
Difference between the assumptions underlying a correlation and a regression slope tests of significance
As @whuber's answer suggests there are a number of models and techniques that may fall under the correlation umbrella that do not have clear analogues in a regression world and vice versa. However, by and large when people think about, compare, and contrast regression and correlation they are in fact considering two si...
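The "two sides of the same coin" point can be checked directly. Here is a pure-Python sketch on simulated data (simple linear regression with intercept): the t statistic for the OLS slope and the t statistic computed from the Pearson correlation via $t = r\sqrt{(n-2)/(1-r^2)}$ agree to machine precision.

```python
import math, random

random.seed(1)
n = 30
x = [random.gauss(0, 1) for _ in range(n)]
y = [2 * xi + random.gauss(0, 1) for xi in x]

mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
syy = sum((yi - my) ** 2 for yi in y)

b = sxy / sxx                        # OLS slope
resid_ss = syy - b * sxy             # residual sum of squares
se_b = math.sqrt(resid_ss / (n - 2) / sxx)
t_slope = b / se_b                   # t statistic for H0: slope = 0

r = sxy / math.sqrt(sxx * syy)       # Pearson correlation
t_corr = r * math.sqrt((n - 2) / (1 - r**2))  # t statistic for H0: rho = 0

print(t_slope, t_corr)               # identical up to floating-point error
```

Algebraically, both expressions reduce to $s_{xy}\sqrt{n-2}/\sqrt{s_{xx}\,\mathrm{RSS}}$, which is why the two tests are equivalent.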
11,766
Difference between the assumptions underlying a correlation and a regression slope tests of significance
Here is an explanation of the equivalence of the tests, also showing how r and b are related: http://www.real-statistics.com/regression/hypothesis-testing-significance-regression-line-slope/ In order to perform OLS, you have to make the assumptions listed at https://en.wikipedia.org/wiki/Ordinary_least_squares#Assumptions Additionally, OLS and ...
11,767
Difference between the assumptions underlying a correlation and a regression slope tests of significance
Regarding question 2 how to calculate the same t-value using r instead of β1 I do not think it is possible to calculate the $t$ statistic from the $r$ value, however the same statistical inference can be derived from the $F$ statistic, where the alternative hypothesis is that the model does not explain the data, and ...
11,768
Can there be multiple local optimum solutions when we solve a linear regression?
This question is interesting insofar as it exposes some connections among optimization theory, optimization methods, and statistical methods that any capable user of statistics needs to understand. Although these connections are simple and easily learned, they are subtle and often overlooked. To summarize some ideas f...
11,769
Can there be multiple local optimum solutions when we solve a linear regression?
I'm afraid there is no binary answer to your question. If linear regression is strictly convex (no constraints on coefficients, no regularizer, etc.), then gradient descent will have a unique solution and it will be the global optimum. Gradient descent can and will return multiple solutions if you have a non-convex problem...
11,770
Can there be multiple local optimum solutions when we solve a linear regression?
This is because the objective function you are minimizing is convex, so there is only one minimum. Therefore, the local optimum is also a global optimum. Gradient descent will find the solution eventually. Why is this objective function convex? This is the beauty of using the squared error for minimization. The deri...
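As a toy illustration of the convexity claim (a pure-Python sketch, data invented so the fit is exact): gradient descent on the squared-error objective reaches the same minimizer from very different starting points.

```python
# gradient descent on the mean squared error for y = b0 + b1*x
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly y = 1 + 2x
n = len(xs)

def descend(b0, b1, lr=0.02, steps=20000):
    """Run plain gradient descent from the given starting point."""
    for _ in range(steps):
        g0 = sum(2 * (b0 + b1 * x - y) for x, y in zip(xs, ys)) / n
        g1 = sum(2 * (b0 + b1 * x - y) * x for x, y in zip(xs, ys)) / n
        b0, b1 = b0 - lr * g0, b1 - lr * g1
    return b0, b1

print(descend(0.0, 0.0))     # converges to about (1, 2)
print(descend(-5.0, 5.0))    # same minimizer from a very different start
```

Because the objective is strictly convex here, any stationary point is the unique global minimum, so the starting point only affects how long convergence takes.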
11,771
How is the confusion matrix reported from K-fold cross-validation?
If you are testing the performance of a model (i.e. not optimizing parameters), generally you will sum the confusion matrices. Think of it like this: you have split your data into 10 different folds or 'test' sets. You train your model on 9/10 of the folds and test the first fold and get a confusion matrix. This con...
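The summing step can be sketched in a few lines of pure Python (fold data invented for illustration): each fold contributes one confusion matrix, and the element-wise sum counts every observation exactly once.

```python
def confusion(y_true, y_pred):
    """2x2 confusion matrix: rows are true labels, columns predicted."""
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

# three hypothetical cross-validation folds: (true labels, predictions)
folds = [
    ([0, 1, 1, 0], [0, 1, 0, 0]),
    ([1, 1, 0, 0], [1, 0, 0, 1]),
    ([0, 0, 1, 1], [0, 1, 1, 1]),
]

total = [[0, 0], [0, 0]]
for y_true, y_pred in folds:
    cm = confusion(y_true, y_pred)
    total = [[total[i][j] + cm[i][j] for j in range(2)] for i in range(2)]

print(total)   # overall matrix; its entries sum to the 12 observations
```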
11,772
How to visualize an enormous sparse contingency table?
What you could do is use the residual shading ideas from vcd here in combination with sparse matrix visualisation as for example on page 49 of this book chapter. Imagine the latter plot with residual shadings and you get the idea. The sparse matrix/contingency table would normally contain the number of occurrences of ea...
11,773
Why do we assume that the error is normally distributed?
I think you've basically hit the nail on the head in the question, but I'll see if I can add something anyway. I'm going to answer this in a bit of a roundabout way ... The field of Robust Statistics examines the question of what to do when the Gaussian assumption fails (in the sense that there are outliers): it is of...
11,774
Bootstrapping - do I need to remove outliers first?
Before addressing this, it's important to acknowledge that the statistical malpractice of "removing outliers" has been wrongly promulgated in much of the applied statistical pedagogy. Traditionally, outliers are defined as high leverage, high influence observations. One can and should identify such observations in the ...
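To make the mechanics concrete, here is a hedged pure-Python sketch with invented spend data: bootstrapping the mean with the extreme observations left in, since they are part of the data-generating process. This illustrates the resampling step only; whether the mean is even the right summary for such zero-inflated data is a separate modeling question.

```python
import random

random.seed(0)
# heavily skewed hypothetical "spend" data: most users spend nothing
spend = [0] * 90 + [5, 8, 12, 20, 35, 60, 90, 150, 400, 900]
n = len(spend)
obs_mean = sum(spend) / n

# percentile bootstrap of the mean, outliers included
boots = []
for _ in range(5000):
    sample = [random.choice(spend) for _ in range(n)]
    boots.append(sum(sample) / n)
boots.sort()
lo, hi = boots[int(0.025 * 5000)], boots[int(0.975 * 5000)]
print(obs_mean, (lo, hi))   # a wide interval reflecting the heavy tail
```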
11,775
Bootstrapping - do I need to remove outliers first?
Looking at this as an outlier problem seems wrong to me. If "< 10% of users spend at all", you need to model that aspect. Tobit or Heckman regression would be two possibilities.
11,776
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
This issue has been appreciated for some time. See Harrell on page 210 of Regression Modeling Strategies, 2nd edition: For a categorical predictor having $c$ levels, users of ridge regression often do not recognize that the amount of shrinkage and the predicted values from the fitted model depend on how the design mat...
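Harrell's point can be demonstrated with a deliberately tiny pure-Python sketch (data and penalty invented for illustration): the same three-level factor is coded two ways, the intercept is left unpenalized via centering, and the two ridge fits disagree even though the two codings span the same column space.

```python
# ridge fit of y on a 3-level factor under two different dummy codings;
# the intercept is handled by centering, so it is not penalized
y = [0.0, 1.0, 10.0]          # one observation per level A, B, C
lam = 1.0

def ridge_pred(cols):
    """Ridge fit with two predictor columns, solved via normal equations."""
    n = len(y)
    ybar = sum(y) / n
    yc = [v - ybar for v in y]
    Xc = [[c[i] - sum(c) / n for c in cols] for i in range(n)]
    a = sum(r[0] * r[0] for r in Xc) + lam
    b = sum(r[0] * r[1] for r in Xc)
    d = sum(r[1] * r[1] for r in Xc) + lam
    g0 = sum(r[0] * v for r, v in zip(Xc, yc))
    g1 = sum(r[1] * v for r, v in zip(Xc, yc))
    det = a * d - b * b
    b0, b1 = (d * g0 - b * g1) / det, (a * g1 - b * g0) / det
    return [ybar + r[0] * b0 + r[1] * b1 for r in Xc]

drop_A = ridge_pred([[0, 1, 0], [0, 0, 1]])   # dummies for B, C
drop_C = ridge_pred([[1, 0, 0], [0, 1, 0]])   # dummies for A, B
print(drop_A)
print(drop_C)   # same data, same penalty, different fitted values
```

Ordinary least squares would give identical fits for the two codings; it is the penalty on the coefficients that makes the choice of reference cell matter.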
11,777
Should One Hot Encoding or Dummy Variables Be Used With Ridge Regression?
From The Elements of Statistical Learning (2nd Edition; pages 63-64): The ridge solutions are not equivariant under scaling of the inputs, and so one normally standardizes the inputs before solving (3.41). In addition, notice that the intercept $\beta_0$ has been left out of the penalty term. Penalization of the inter...
11,778
What does it mean to regress a variable against another
It typically means finding a surface parametrised by known X such that Y usually lies close to that surface. This gives you a recipe for finding unknown Y when you know X. As an example, the data is X = 1,...,100. The value of Y is plotted on the Y axis. The red line is the linear regression surface. Personally, I d...
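A minimal sketch of that "recipe" in Python (toy numbers): fit the line from known (X, Y) pairs, then read off a predicted Y for a new X.

```python
# least-squares "recipe": learn a line from known (x, y), predict new y
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
n = len(xs)

mx, my = sum(xs) / n, sum(ys) / n
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))   # slope
a = my - b * mx                          # intercept
predict = lambda x: a + b * x

print(b, a)           # fitted slope and intercept
print(predict(5.0))   # predicted Y for an unseen X
```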
11,779
What does it mean to regress a variable against another
Yes. Many times we need to regress a variable (say Y) on another variable (say X). In regression this can be written as $Y = a+bX$; regress Y on X: for example, regress true breeding value on genomic breeding value. bias=lm(TBV~GBV)
11,780
Hypothesis testing and significance for time series
I would suggest identifying an ARIMA model for each mouse separately and then reviewing them for similarities and generalization. For example, if the first mouse has an AR(1) and the second one has an AR(2), the most general (largest) model would be an AR(2). Estimate this model globally, i.e. for the combined time series. Co...
Hypothesis testing and significance for time series
I would suggest identifying an ARIMA model for each mice separately and then review them for similarities and generalization. For example if the first mice has an AR(1) and the second one has an AR(2)
Hypothesis testing and significance for time series I would suggest identifying an ARIMA model for each mice separately and then review them for similarities and generalization. For example if the first mice has an AR(1) and the second one has an AR(2), the most general (largest) model would be an AR(2). Estimate this ...
Hypothesis testing and significance for time series I would suggest identifying an ARIMA model for each mouse separately and then reviewing them for similarities and generalization. For example, if the first mouse has an AR(1) and the second one has an AR(2)
11,781
Hypothesis testing and significance for time series
There are many ways to do it if you think of the weight variations as a dynamical process. For example, it can be modeled as an integrator $\dot x(t) = \theta x(t) + v(t)$ where $x(t)$ is the weight variation, $\theta$ relates to how fast the weight changes and $v(t)$ is a stochastic disturbance that may affect the we...
Hypothesis testing and significance for time series
There are many ways to do it if you think of the weight variations as a dynamical process. For example, it can be modeled as an integrator $\dot x(t) = \theta x(t) + v(t)$ where $x(t)$ is the weight
Hypothesis testing and significance for time series There are many ways to do it if you think of the weight variations as a dynamical process. For example, it can be modeled as an integrator $\dot x(t) = \theta x(t) + v(t)$ where $x(t)$ is the weight variation, $\theta$ relates to how fast the weight changes and $v(t)...
Hypothesis testing and significance for time series There are many ways to do it if you think of the weight variations as a dynamical process. For example, it can be modeled as an integrator $\dot x(t) = \theta x(t) + v(t)$ where $x(t)$ is the weight
11,782
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution?
The negative binomial distribution is very similar to the binomial probability model. It is applicable when the following assumptions (conditions) hold: 1) Any experiment is performed under the same conditions until a fixed number of successes, say C, is achieved 2) The result of each experiment can be classified ...
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution
The negative binomial distribution is very similar to the binomial probability model. It is applicable when the following assumptions (conditions) hold: 1) Any experiment is performed under the
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution? The negative binomial distribution is very similar to the binomial probability model. It is applicable when the following assumptions (conditions) hold: 1) Any experiment is performed under the same conditions t...
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution The negative binomial distribution is very similar to the binomial probability model. It is applicable when the following assumptions (conditions) hold: 1) Any experiment is performed under the
11,783
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution?
The Poisson distribution can be a reasonable approximation of the binomial under certain conditions, like 1) The probability of success for each trial is very small, P --> 0 2) np = m (say) is finite. The rule most often used by statisticians is that the Poisson is a good approximation of the binomial when n is equal to or grea...
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution
The Poisson distribution can be a reasonable approximation of the binomial under certain conditions, like 1) The probability of success for each trial is very small, P --> 0 2) np = m (say) is finite. The rule
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution? The Poisson distribution can be a reasonable approximation of the binomial under certain conditions, like 1) The probability of success for each trial is very small, P --> 0 2) np = m (say) is finite. The rule most often used b...
Switch from Modelling a Process using a Poisson Distribution to use a Negative Binomial Distribution The Poisson distribution can be a reasonable approximation of the binomial under certain conditions, like 1) The probability of success for each trial is very small, P --> 0 2) np = m (say) is finite. The rule
11,784
What is the difference between the vertical bar and semi-colon notations?
I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be). Let's say in a regression setting, you would have a distribution: $$ p(Y | x, \beta) $$ Which means: the distribution of $Y$ if y...
What is the difference between the vertical bar and semi-colon notations?
I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be). Let's say
What is the difference between the vertical bar and semi-colon notations? I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be). Let's say in a regression setting, you would have a dis...
What is the difference between the vertical bar and semi-colon notations? I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be). Let's say
11,785
What is the difference between the vertical bar and semi-colon notations?
$f(x;\theta)$ is the density of the random variable $X$ at the point $x$, with $\theta$ being the parameter of the distribution. $f(x,\theta)$ is the joint density of $X$ and $\Theta$ at the point $(x,\theta)$ and only makes sense if $\Theta$ is a random variable. $f(x|\theta)$ is the conditional distribution of $X$ gi...
What is the difference between the vertical bar and semi-colon notations?
$f(x;\theta)$ is the density of the random variable $X$ at the point $x$, with $\theta$ being the parameter of the distribution. $f(x,\theta)$ is the joint density of $X$ and $\Theta$ at the point $(x
What is the difference between the vertical bar and semi-colon notations? $f(x;\theta)$ is the density of the random variable $X$ at the point $x$, with $\theta$ being the parameter of the distribution. $f(x,\theta)$ is the joint density of $X$ and $\Theta$ at the point $(x,\theta)$ and only makes sense if $\Theta$ is ...
What is the difference between the vertical bar and semi-colon notations? $f(x;\theta)$ is the density of the random variable $X$ at the point $x$, with $\theta$ being the parameter of the distribution. $f(x,\theta)$ is the joint density of $X$ and $\Theta$ at the point $(x
11,786
What is the difference between the vertical bar and semi-colon notations?
$f(x;\theta)$ is the same as $f(x|\theta)$, simply meaning that $\theta$ is a fixed parameter and the function $f$ is a function of $x$. $f(x,\Theta)$, OTOH, is an element of a family (or set) of functions, where the elements are indexed by $\Theta$. A subtle distinction, perhaps, but an important one, esp. when it com...
What is the difference between the vertical bar and semi-colon notations?
$f(x;\theta)$ is the same as $f(x|\theta)$, simply meaning that $\theta$ is a fixed parameter and the function $f$ is a function of $x$. $f(x,\Theta)$, OTOH, is an element of a family (or set) of func
What is the difference between the vertical bar and semi-colon notations? $f(x;\theta)$ is the same as $f(x|\theta)$, simply meaning that $\theta$ is a fixed parameter and the function $f$ is a function of $x$. $f(x,\Theta)$, OTOH, is an element of a family (or set) of functions, where the elements are indexed by $\The...
What is the difference between the vertical bar and semi-colon notations? $f(x;\theta)$ is the same as $f(x|\theta)$, simply meaning that $\theta$ is a fixed parameter and the function $f$ is a function of $x$. $f(x,\Theta)$, OTOH, is an element of a family (or set) of func
11,787
What is the difference between the vertical bar and semi-colon notations?
Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates conditioning on values of $d,w$. Conditioning is an operation on random variables and as such using this notation when $d, ...
What is the difference between the vertical bar and semi-colon notations?
Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates con
What is the difference between the vertical bar and semi-colon notations? Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates conditioning on values of $d,w$. Conditioning is ...
What is the difference between the vertical bar and semi-colon notations? Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates con
11,788
Is exploratory data analysis important when doing purely predictive modeling?
Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit of a few hours. I went through each of the variables in turn, graphing them, calculating summary statistics etc. I also...
Is exploratory data analysis important when doing purely predictive modeling?
Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit
Is exploratory data analysis important when doing purely predictive modeling? Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit of a few hours. I went through each of th...
Is exploratory data analysis important when doing purely predictive modeling? Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit
11,789
Is exploratory data analysis important when doing purely predictive modeling?
Obviously, yes. The data analysis could lead you to many points that would hurt your predictive model: Incomplete data Assuming we are talking about quantitative data, you'll have to decide whether you want to ignore the column (if there's too much data missing) or figure out what will be your "default" value (Mean, M...
Is exploratory data analysis important when doing purely predictive modeling?
Obviously, yes. The data analysis could lead you to many points that would hurt your predictive model : Incomplete data Assuming we are talking about quantitative data, you'll have to decide whether y
Is exploratory data analysis important when doing purely predictive modeling? Obviously, yes. The data analysis could lead you to many points that would hurt your predictive model: Incomplete data Assuming we are talking about quantitative data, you'll have to decide whether you want to ignore the column (if there's t...
Is exploratory data analysis important when doing purely predictive modeling? Obviously, yes. The data analysis could lead you to many points that would hurt your predictive model : Incomplete data Assuming we are talking about quantitative data, you'll have to decide whether y
11,790
Is exploratory data analysis important when doing purely predictive modeling?
One important thing done by EDA is finding data entry errors and other anomalous points. Another is that the distribution of variables can influence the models you try to fit.
Is exploratory data analysis important when doing purely predictive modeling?
One important thing done by EDA is finding data entry errors and other anomalous points. Another is that the distribution of variables can influence the models you try to fit.
Is exploratory data analysis important when doing purely predictive modeling? One important thing done by EDA is finding data entry errors and other anomalous points. Another is that the distribution of variables can influence the models you try to fit.
Is exploratory data analysis important when doing purely predictive modeling? One important thing done by EDA is finding data entry errors and other anomalous points. Another is that the distribution of variables can influence the models you try to fit.
11,791
Is exploratory data analysis important when doing purely predictive modeling?
We used to have a phrase in chemistry: "Two weeks spent in the lab can save you two hours on Scifinder". I'm sure the same applies to machine learning: "Two weeks spent training a neuralnet can save you 2 hours looking at the input data". These are the things I'd go through before starting any ML process. Plot out ...
Is exploratory data analysis important when doing purely predictive modeling?
We used to have a phrase in chemistry: "Two weeks spent in the lab can save you two hours on Scifinder". I'm sure the same applies to machine learning: "Two weeks spent training a neuralnet can sav
Is exploratory data analysis important when doing purely predictive modeling? We used to have a phrase in chemistry: "Two weeks spent in the lab can save you two hours on Scifinder". I'm sure the same applies to machine learning: "Two weeks spent training a neuralnet can save you 2 hours looking at the input data". ...
Is exploratory data analysis important when doing purely predictive modeling? We used to have a phrase in chemistry: "Two weeks spent in the lab can save you two hours on Scifinder". I'm sure the same applies to machine learning: "Two weeks spent training a neuralnet can sav
11,792
Is exploratory data analysis important when doing purely predictive modeling?
Statistical perspective: Leaving aside errors in the modelling stage, there are three likely outcomes from attempting prediction without first doing EDA: Prediction gives obvious nonsense results, because your input data violated the assumptions of your prediction method. You now have to go back and check your inputs ...
Is exploratory data analysis important when doing purely predictive modeling?
Statistical perspective: Leaving aside errors in the modelling stage, there are three likely outcomes from attempting prediction without first doing EDA: Prediction gives obvious nonsense results, be
Is exploratory data analysis important when doing purely predictive modeling? Statistical perspective: Leaving aside errors in the modelling stage, there are three likely outcomes from attempting prediction without first doing EDA: Prediction gives obvious nonsense results, because your input data violated the assumpt...
Is exploratory data analysis important when doing purely predictive modeling? Statistical perspective: Leaving aside errors in the modelling stage, there are three likely outcomes from attempting prediction without first doing EDA: Prediction gives obvious nonsense results, be
11,793
Why does independence imply zero correlation?
By the definition of the correlation coefficient, if two variables are independent their correlation is zero. So, it couldn't happen to have any correlation by accident! $$\rho_{X,Y}=\frac{\operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y]}{\sqrt{\operatorname{E}[X^2]-[\operatorname{E}[X]]^2}~\sqrt{\operatorna...
Why does independence imply zero correlation?
By the definition of the correlation coefficient, if two variables are independent their correlation is zero. So, it couldn't happen to have any correlation by accident! $$\rho_{X,Y}=\frac{\operatorna
Why does independence imply zero correlation? By the definition of the correlation coefficient, if two variables are independent their correlation is zero. So, it couldn't happen to have any correlation by accident! $$\rho_{X,Y}=\frac{\operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y]}{\sqrt{\operatorname{E}[X...
Why does independence imply zero correlation? By the definition of the correlation coefficient, if two variables are independent their correlation is zero. So, it couldn't happen to have any correlation by accident! $$\rho_{X,Y}=\frac{\operatorna
11,794
Why does independence imply zero correlation?
Comment on sample correlation. In comparing two small independent samples of the same size, the sample correlation is often noticeably different from $r = 0.$ [Nothing here contradicts @OmG's Answer (+1) on the population correlation $\rho.]$ Consider correlations between a million pairs of independent samples of size ...
Why does independence imply zero correlation?
Comment on sample correlation. In comparing two small independent samples of the same size, the sample correlation is often noticeably different from $r = 0.$ [Nothing here contradicts @OmG's Answer (
Why does independence imply zero correlation? Comment on sample correlation. In comparing two small independent samples of the same size, the sample correlation is often noticeably different from $r = 0.$ [Nothing here contradicts @OmG's Answer (+1) on the population correlation $\rho.]$ Consider correlations between a...
Why does independence imply zero correlation? Comment on sample correlation. In comparing two small independent samples of the same size, the sample correlation is often noticeably different from $r = 0.$ [Nothing here contradicts @OmG's Answer (
11,795
Why does independence imply zero correlation?
Simple answer: if 2 variables are independent, then the population correlation is zero, whereas the sample correlation will typically be small, but non-zero. That is because the sample is not a perfect representation of the population. The larger the sample, the better it represents the population, so the smaller the c...
Why does independence imply zero correlation?
Simple answer: if 2 variables are independent, then the population correlation is zero, whereas the sample correlation will typically be small, but non-zero. That is because the sample is not a perfec
Why does independence imply zero correlation? Simple answer: if 2 variables are independent, then the population correlation is zero, whereas the sample correlation will typically be small, but non-zero. That is because the sample is not a perfect representation of the population. The larger the sample, the better it r...
Why does independence imply zero correlation? Simple answer: if 2 variables are independent, then the population correlation is zero, whereas the sample correlation will typically be small, but non-zero. That is because the sample is not a perfec
11,796
Why does independence imply zero correlation?
Maybe this is helpful for some people sharing the same intuitive understanding. We've all seen something like this: These data are presumably independent but clearly exhibit correlation ($r = 0.66$). "I thought independence implies zero correlation!" the student says. As others have already pointed out, the sample val...
Why does independence imply zero correlation?
Maybe this is helpful for some people sharing the same intuitive understanding. We've all seen something like this: These data are presumably independent but clearly exhibit correlation ($r = 0.66$).
Why does independence imply zero correlation? Maybe this is helpful for some people sharing the same intuitive understanding. We've all seen something like this: These data are presumably independent but clearly exhibit correlation ($r = 0.66$). "I thought independence implies zero correlation!" the student says. As o...
Why does independence imply zero correlation? Maybe this is helpful for some people sharing the same intuitive understanding. We've all seen something like this: These data are presumably independent but clearly exhibit correlation ($r = 0.66$).
11,797
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression?
Flagging outliers is not a judgement call (or in any case need not be one). Given a statistical model, outliers have a precise, objective definition: they are observations that do not follow the pattern of the majority of the data. Such observations need to be set apart at the onset of any analysis simply because their...
Whether to delete cases that are flagged as outliers by statistical software when performing multipl
Flagging outliers is not a judgement call (or in any case need not be one). Given a statistical model, outliers have a precise, objective definition: they are observations that do not follow the patte
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression? Flagging outliers is not a judgement call (or in any case need not be one). Given a statistical model, outliers have a precise, objective definition: they are observations that do not follow the pattern of ...
Whether to delete cases that are flagged as outliers by statistical software when performing multipl Flagging outliers is not a judgement call (or in any case need not be one). Given a statistical model, outliers have a precise, objective definition: they are observations that do not follow the patte
11,798
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression?
In general, I am wary of removing "outliers." Regression analysis can be correctly applied in the presence of non-normally distributed errors, errors that exhibit heteroskedasticity, or values of the predictors/independent variables that are "far" from the rest. The true problem with outliers is that they don't follow ...
Whether to delete cases that are flagged as outliers by statistical software when performing multipl
In general, I am wary of removing "outliers." Regression analysis can be correctly applied in the presence of non-normally distributed errors, errors that exhibit heteroskedasticity, or values of the
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression? In general, I am wary of removing "outliers." Regression analysis can be correctly applied in the presence of non-normally distributed errors, errors that exhibit heteroskedasticity, or values of the predic...
Whether to delete cases that are flagged as outliers by statistical software when performing multipl In general, I am wary of removing "outliers." Regression analysis can be correctly applied in the presence of non-normally distributed errors, errors that exhibit heteroskedasticity, or values of the
11,799
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression?
+1 to @Charlie and @PeterFlom; you're getting good information there. Perhaps I can make a small contribution here by challenging the premise of the question. A boxplot will typically (software can vary, and I don't know for sure what SPSS is doing) label points more than 1.5 times the Inter-Quartile Range above (bel...
Whether to delete cases that are flagged as outliers by statistical software when performing multipl
+1 to @Charlie and @PeterFlom; you're getting good information there. Perhaps I can make a small contribution here by challenging the premise of the question. A boxplot will typically (software can
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression? +1 to @Charlie and @PeterFlom; you're getting good information there. Perhaps I can make a small contribution here by challenging the premise of the question. A boxplot will typically (software can vary, ...
Whether to delete cases that are flagged as outliers by statistical software when performing multipl +1 to @Charlie and @PeterFlom; you're getting good information there. Perhaps I can make a small contribution here by challenging the premise of the question. A boxplot will typically (software can
11,800
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression?
You should first look at plots of the residuals: Do they follow (roughly) a normal distribution? Do they show signs of heteroskedasticity? Look at other plots as well (I do not use SPSS, so cannot say exactly how to do this in that program, nor what boxplots you are looking at; however, it's hard to imagine that asteri...
Whether to delete cases that are flagged as outliers by statistical software when performing multipl
You should first look at plots of the residuals: Do they follow (roughly) a normal distribution? Do they show signs of heteroskedasticity? Look at other plots as well (I do not use SPSS, so cannot say
Whether to delete cases that are flagged as outliers by statistical software when performing multiple regression? You should first look at plots of the residuals: Do they follow (roughly) a normal distribution? Do they show signs of heteroskedasticity? Look at other plots as well (I do not use SPSS, so cannot say exact...
Whether to delete cases that are flagged as outliers by statistical software when performing multipl You should first look at plots of the residuals: Do they follow (roughly) a normal distribution? Do they show signs of heteroskedasticity? Look at other plots as well (I do not use SPSS, so cannot say