Interpreting out of bag error estimate for RandomForestRegressor
In order to compare the ground-truth (i.e. correct/actual) target values with the target values estimated (i.e. predicted) by the random forest, scikit-learn doesn't use the MSE but $R^2$ (unlike e.g. MATLAB or Breiman 1996b), as you can see in the code of forest.py:

```python
self.oob_score_ = 0.0
for k in xrange(self.n_outputs_):
    self.oob_score_ += r2_score(y[:, k], predictions[:, k])
self.oob_score_ /= self.n_outputs_
```

r2_score() computes the coefficient of determination, a.k.a. $R^2$, whose best possible score is 1.0; lower values are worse.

FYI:

What is the out of bag error in Random Forests?
What is the difference between "coefficient of determination" and "mean squared error"?

Breiman, Leo. Out-of-bag estimation. Technical report, Statistics Department, University of California Berkeley, Berkeley CA 94708, 1996b.
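A quick way to see this in practice is to fit a RandomForestRegressor with oob_score=True and inspect the resulting attribute. This is a sketch with synthetic data (the exact score depends on the data and scikit-learn version):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# toy regression data (hypothetical; any regression dataset works)
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)

# oob_score_ is an R^2 (coefficient of determination), not an error:
# 1.0 is a perfect fit, and the value can even be negative for a bad model.
print(rf.oob_score_)
```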
Is two-way ANOVA appropriate?
Every mouse is sampled at seven different time points. These are repeated measurements, and the lack of independence between these repeated measurements violates the assumptions of the standard two-way ANOVA. In addition, there could be differences between the individual mice from the beginning, and taking these individual differences into account could be a good idea. If all the mice are very similar in their response, and the time itself does not much affect the blood glucose level, this could potentially be analyzed with a two-way ANOVA, but I would rather prefer a repeated measures ANOVA, or more generally a mixed model regression approach. However, most of the (good) statistical software packages offer the possibility to fit a two-way ANOVA, but not nearly all contain the functionality to fit a mixed model. You do not mention the software you have access to, but this could be a limiting factor, too.
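The mixed-model approach can be sketched in Python with statsmodels, standing in for whatever software is actually available. The data below are hypothetical (columns mouse, time, group, glucose are illustrative names for the design described above):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# hypothetical data: 10 mice, each measured at 7 time points, two groups
mouse = np.repeat(np.arange(10), 7)
time = np.tile(np.arange(7), 10)
group = np.where(mouse % 2 == 0, "control", "treated")
mouse_effect = rng.normal(0, 5, size=10)[mouse]  # baseline differences between mice
glucose = (100 + 2 * time + 5 * (group == "treated")
           + mouse_effect + rng.normal(0, 3, size=70))

df = pd.DataFrame({"mouse": mouse, "time": time, "group": group, "glucose": glucose})

# a random intercept per mouse accounts for the repeated measurements
model = smf.mixedlm("glucose ~ time * group", df, groups=df["mouse"])
result = model.fit()
print(result.summary())
```

The random intercept absorbs the between-mouse baseline differences that a plain two-way ANOVA would ignore.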
Is two-way ANOVA appropriate?
Your sample size is small, so you may have various little issues with not meeting assumptions, but try this: a two-way repeated-measures ANOVA with group as the between-subjects IV and time as the within-subjects IV. Be sure to include interaction effects. You may encounter issues with sphericity (Mauchly's test).

When did the injection take place? If it was after Day 1, an option I'd prefer would be a two-way repeated-measures ANCOVA with Day 1 included as the covariate.

Comparing each group and time individually post hoc is not going to be very practical. If the analysis is significant, I'd just plot the data using side-by-side boxplots and draw conclusions from what you see visually. Comparing each group regardless of time, however, should not be too hard.

In #3, you say it as if you are only interested in Day 14. You could drop all days between 1 and 14 and make the analysis much simpler, but I presume this isn't something you'd want to do.
Using logistic regression for a continuous dependent variable
The proportional odds ordinal logistic regression model should work fine for this problem. For an efficient implementation that can allow thousands of unique $Y$ values see the orm function in the R rms package.
Using logistic regression for a continuous dependent variable
You could also try ordered probit/logit models by assigning values 1, 2, 3, and 4 to scores in the 1st, 2nd, 3rd, and 4th quartiles respectively.
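The quartile assignment can be sketched in Python with pandas (the scores are made up for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
scores = rng.uniform(0, 100, size=200)  # hypothetical continuous scores

# assign 1..4 according to the quartile each score falls in
quartile = pd.qcut(scores, q=4, labels=False) + 1  # values 1, 2, 3, 4
print(np.bincount(quartile))  # each quartile holds a quarter of the sample
```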
Using logistic regression for a continuous dependent variable
You could dichotomise the score (convert it to a binary variable). If the score runs from 0 to 100, you could assign 0 to any score less than 50 and 1 otherwise. I've never heard before that this is a good way of dealing with outliers, though. It might just hide outliers, since it becomes impossible to distinguish very high or low scores. This doesn't make a great deal of sense to me, but you can try it.

More importantly, why are you log-transforming all your covariates and your response variable? This is going to affect your $\beta$ estimates and your $R^2$ (I think). Also, the reviewer says a small $R^2$ suggests overfitting? I thought overfitting was when your $R^2$ is high but your model performs poorly on new data (i.e. it overfits your data but doesn't generalise to new data). Overfitting tends to happen when you have few observations that you are trying to predict with a large number of parameters. This is what you are doing in your Model 2, since you have 8 observations which you are trying to explain with 7 parameters. I am not going to pretend I know a great deal about statistics, but it seems to me, based on his comments, that this reviewer might know even less.
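The dichotomisation described above is a one-liner in Python (illustrative values):

```python
import numpy as np

scores = np.array([12, 47, 50, 68, 91, 100])
# 0 for any score below 50, 1 otherwise
binary = (scores >= 50).astype(int)
print(binary)  # [0 0 1 1 1 1]
```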
Using logistic regression for a continuous dependent variable
It is possible to apply logistic regression even to a continuous dependent variable. It makes sense if you want to make sure that the predicted score is always within [0, 100] (I judge from your screenshots that it is on a 100-point scale). To accomplish this, just divide your score by 100 and run logistic regression with this [0, 1]-based target variable, like in this question. You can do it, for example, with R, using

```r
glm(y ~ x, family = "binomial", data = your.dataframe)
```

I don't know whether this approach helps with outliers; it depends on the sort of outliers you are expecting. But sometimes it improves goodness of fit (even $R^2$), if your dependent variable has natural lower and upper bounds.

As for the second question, $R^2 \approx 0.3$ may be the best you can squeeze out of your data without overfitting. If you build your model for the purpose of inference, a low $R^2$ is totally fine, as long as the coefficients important to you are significant. If you want to check whether the model is overfitted, you can check its $R^2$ on a test set, or even do cross-validation.
Why must an estimator be independent from the parameter?
You are right that any sensible estimator will be a (non-constant) function of the data (except in some special, arguably pathological, cases, such as my example here). So it is correct to say that a reasonable estimator does depend on $\theta$ through its dependence on the data. But I'm pretty sure all that is meant by the sentence

"Show that $U^{\star}$ is indeed an estimator - that it is a function of the $X_i$'s that does not depend on $\theta$"

is that the formula for an estimator cannot contain the parameter. This is to exclude things like $\hat{\theta} = \theta$, which would be a perfect estimator (even if you had no data!) but you'd need to be psychic in order to calculate it :-)

As noted in the passage you pasted, since $T$ is a sufficient statistic, the distribution of any statistic, e.g. $U$, conditional on $T$, will not depend on $\theta$. Therefore $U^{\star} = E(U \mid T)$ cannot depend on $\theta$, ensuring that it will have the property in question.
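A concrete Rao-Blackwell instance makes the point (a standard textbook computation, added here only as an illustration): let $X_1, \ldots, X_n$ be i.i.d. Bernoulli($\theta$), take the crude estimator $U = X_1$ and the sufficient statistic $T = \sum_{i=1}^n X_i$. By symmetry of the $X_i$ given their sum,

$$U^{\star} = E(X_1 \mid T = t) = P(X_1 = 1 \mid T = t) = \frac{t}{n},$$

so $U^{\star} = \bar{X}$ is a formula in the data alone: $\theta$ has dropped out, exactly as sufficiency guarantees.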
Vector calculus in statistics
One example you could look into is quasi-likelihood. The discussion of quasi-likelihoods in McCullagh & Nelder, Generalized Linear Models, uses (for the theoretical part) gradients and path integrals in an essential way! See Chapter 9 of that book.
Vector calculus in statistics
I doubt many statisticians will have to use vector calculus as it is taught for physics and engineering. But for what it's worth, here are a few topics that would use it, at least tangentially. The underlying theme is that holomorphic functions from complex analysis, which are composed of harmonic functions, are intimately linked through the Cauchy-Riemann equations to both Stokes' and Green's theorems. These functions can be studied both by examining the interior of their domain and its boundary.

Probability currents. This isn't just for quantum mechanics. In general, probability diffusions arise when studying time-varying probability distributions which change smoothly. This includes stochastic versions of classical systems, such as the heat equation, Navier-Stokes for fluid dynamics, wave equations for quantum mechanics, etc. Examples include the Fokker-Planck equation and the Kolmogorov backward/forward equations, which involve divergences; these in turn relate to heat equations, Feynman-Kac integrals, Dirichlet problems, and Green's functions.

The keywords here are complex harmonic functions, which satisfy the mean value property, which in turn is a consequence of Green's integral theorem and Stokes' theorem. A classical example is calculating the exit time of a diffusion from a closed region, which reduces to evaluating integrals on the boundary of the region and exploiting harmonicity within it. The main example is problems involving Brownian motion, and more generally the wide class of Itô diffusions. A wonderful (and eccentric!) book on this is Green, Brown and Probability by the legendary Kai Lai Chung.

The disintegration theorem for probability is implicitly Stokes' theorem, in that one disintegrates a three-dimensional probability measure onto the boundary of the surface that encloses its support.

In statistical mechanics and in Markov random fields, there is a large prevalence of conservation in the form of currents. The Ising model, especially at criticality, and its relatives can be studied from the point of view of discrete harmonic and holomorphic functions. From the Cauchy-Riemann equations, one recovers both Green's theorem and Stokes' theorem, in that currents are both divergence-free and curl-free, which together imply that the underlying field is holomorphic. A great reference on this is the work of Smirnov, Chelkak, and Duminil-Copin.
Multiple regression with repeatedly measured independent variables?
After consulting multiple people, here is some advice I received that helped me decide which approach to take. Ultimately, it goes back to the research question and the hypotheses made.

If we were interested in the unique contribution of A to B, over and above current and past wellbeing, we could run a hierarchical regression. There will be plenty of overlapping variance explained by current and past wellbeing, but entering them in separate steps can help us understand the unique contribution of either to B. In our case, we first entered wellbeing at Time 1, followed by wellbeing at Time 2. Even though Time-1 wellbeing explained a great deal of the variance in B, it was no longer a significant predictor once we entered Time-2 wellbeing. This suggests that current, rather than past, wellbeing is the more important contributing factor. We entered A in the final step, and it made a significant improvement to the model with Time-1 and Time-2 wellbeing in it, which supports our initial hypothesis.

If we were interested in how the change in wellbeing from Time 1 to Time 2 predicts B, we could compute difference scores, or use more elaborate latent change score models to account for the repeatedly measured nature of wellbeing. A couple of useful resources for this approach: McArdle's 2009 review paper, and Cambridge PowerPoint slides with examples and Mplus syntax.
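The hierarchical (sequential) entry is easy to mimic by fitting nested OLS models and watching $R^2$ grow at each step. A sketch with made-up data, where wb1, wb2, a, and b are illustrative stand-ins for Time-1 wellbeing, Time-2 wellbeing, A, and B:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200
wb1 = rng.normal(size=n)                  # wellbeing at Time 1
wb2 = 0.7 * wb1 + rng.normal(0, 0.7, n)   # wellbeing at Time 2
a = rng.normal(size=n)                    # predictor A
b = 0.8 * wb2 + 0.5 * a + rng.normal(0, 1, n)  # outcome B

df = pd.DataFrame({"wb1": wb1, "wb2": wb2, "a": a, "b": b})

# hierarchical entry: each step adds one block of predictors
m1 = smf.ols("b ~ wb1", df).fit()
m2 = smf.ols("b ~ wb1 + wb2", df).fit()
m3 = smf.ols("b ~ wb1 + wb2 + a", df).fit()

# the R^2 increment at step 3 is A's unique contribution
print(m1.rsquared, m2.rsquared, m3.rsquared)
```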
GAM cross-validation to test prediction error
I really like the package caret for things like this but unfortunately I just read that you can't specify the formula in gam exactly for it:

"When you use train with this model, you cannot (at this time) specify the gam formula. caret has an internal function that figures out a formula based on how many unique levels each predictor has etc. In other words, train currently determines which terms are smoothed and which are plain old linear main effects." (source: https://stackoverflow.com/questions/20044014/error-with-train-from-caret-package-using-method-gam)

But if you let train select the smooth terms, in this case it produces your model exactly anyway. The default performance metric in this case is RMSE, but you can change it using the summaryFunction argument of the trainControl function. I think one of the main drawbacks of LOOCV is that when the dataset is large, it takes forever. Since your dataset is small and it works quite fast, I think it is a sensible option. Hope this helps.

```r
library(mgcv)
library(caret)

set.seed(0)
dat <- gamSim(1, n = 400, dist = "normal", scale = 2)

b <- train(y ~ x0 + x1 + x2 + x3,
           data = dat,
           method = "gam",
           trControl = trainControl(method = "LOOCV", number = 1, repeats = 1),
           tuneGrid = data.frame(method = "GCV.Cp", select = FALSE))

print(b)
summary(b$finalModel)
```

Output:

```
> print(b)
Generalized Additive Model using Splines

400 samples
  9 predictors

No pre-processing
Resampling:
Summary of sample sizes: 399, 399, 399, 399, 399, 399, ...
Resampling results

  RMSE      Rsquared
  2.157964  0.7091647

Tuning parameter 'select' was held constant at a value of FALSE
Tuning parameter 'method' was held constant at a value of GCV.Cp

> summary(b$finalModel)

Family: gaussian
Link function: identity

Formula:
.outcome ~ s(x0) + s(x1) + s(x2) + s(x3)

Parametric coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   7.9150     0.1049   75.44   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
        edf Ref.df       F  p-value
s(x0) 5.173  6.287   4.564 0.000139 ***
s(x1) 2.357  2.927 103.089  < 2e-16 ***
s(x2) 8.517  8.931  84.308  < 2e-16 ***
s(x3) 1.000  1.000   0.441 0.506929
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =  0.726   Deviance explained = 73.7%
GCV = 4.611  Scale est. = 4.4029    n = 400
```
GAM cross-validation to test prediction error
I really like the package caret for things like this but unfortunately I just read that you can't specify the formula in gam exactly for it. "When you use train with this model, you cannot (at this ti
GAM cross-validation to test prediction error

I really like the package caret for things like this, but unfortunately I just read that you can't specify the formula in gam exactly for it:

"When you use train with this model, you cannot (at this time) specify the gam formula. caret has an internal function that figures out a formula based on how many unique levels each predictor has etc. In other words, train currently determines which terms are smoothed and which are plain old linear main effects."

source: https://stackoverflow.com/questions/20044014/error-with-train-from-caret-package-using-method-gam

But if you let train select the smooth terms, in this case it produces your model exactly anyway. The default performance metric in this case is RMSE, but you can change it using the summaryFunction argument of the trainControl function.

I think one of the main drawbacks of LOOCV is that when the dataset is large, it takes forever. Since your dataset is small and it works quite fast, I think it is a sensible option.

Hope this helps.

library(mgcv)
library(caret)

set.seed(0)

dat <- gamSim(1, n = 400, dist = "normal", scale = 2)

b <- train(y ~ x0 + x1 + x2 + x3,
           data = dat,
           method = "gam",
           trControl = trainControl(method = "LOOCV", number = 1, repeats = 1),
           tuneGrid = data.frame(method = "GCV.Cp", select = FALSE)
)

print(b)
summary(b$finalModel)

output:

> print(b)
Generalized Additive Model using Splines 

400 samples
  9 predictors

No pre-processing
Resampling: 
Summary of sample sizes: 399, 399, 399, 399, 399, 399, ... 

Resampling results

  RMSE      Rsquared 
  2.157964  0.7091647

Tuning parameter 'select' was held constant at a value of FALSE
Tuning parameter 'method' was held constant at a value of GCV.Cp

> summary(b$finalModel)

Family: gaussian 
Link function: identity 

Formula:
.outcome ~ s(x0) + s(x1) + s(x2) + s(x3)

Parametric coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   7.9150     0.1049   75.44   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Approximate significance of smooth terms:
        edf Ref.df       F  p-value    
s(x0) 5.173  6.287   4.564 0.000139 ***
s(x1) 2.357  2.927 103.089  < 2e-16 ***
s(x2) 8.517  8.931  84.308  < 2e-16 ***
s(x3) 1.000  1.000   0.441 0.506929    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =  0.726   Deviance explained = 73.7%
GCV = 4.611  Scale est. = 4.4029    n = 400
GAM cross-validation to test prediction error
In the mgcv library pdf it says:

"Given a model structure specified by a gam model formula, gam() attempts to find the appropriate smoothness for each applicable model term using prediction error criteria or likelihood based methods. The prediction error criteria used are Generalized (Approximate) Cross Validation (GCV or GACV) when the scale parameter is unknown or an Un-Biased Risk Estimator (UBRE) when it is known."

"gam in mgcv solves the smoothing parameter estimation problem by using the Generalized Cross Validation (GCV) criterion $nD/(n - DoF)^2$ or an Un-Biased Risk Estimator (UBRE) criterion $D/n + 2s\,DoF/n - s$."
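To make the two criteria concrete, here is a toy evaluation of them (not mgcv itself, which does this internally during fitting). $D$ is the deviance of a candidate fit, $DoF$ its effective degrees of freedom, and $s$ the known scale parameter; all numbers below are invented for illustration.

```python
# Toy illustration of the GCV and UBRE criteria quoted above, for a
# small grid of hypothetical candidate fits. All numbers are made up.

def gcv(D, n, dof):
    # Generalized Cross Validation: n*D / (n - DoF)^2
    return n * D / (n - dof) ** 2

def ubre(D, n, dof, s):
    # Un-Biased Risk Estimator: D/n + 2*s*DoF/n - s
    return D / n + 2 * s * dof / n - s

n = 400
# hypothetical (effective DoF, deviance) pairs for three candidate smooths
candidates = [(5.0, 900.0), (10.0, 700.0), (20.0, 690.0)]

scores = [(dof, gcv(D, n, dof)) for dof, D in candidates]
best = min(scores, key=lambda t: t[1])   # smoothness minimising GCV
```

The criterion trades deviance against complexity: the middle candidate wins because the extra flexibility of the third one no longer buys enough reduction in deviance.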
Feature selection using mutual information in Matlab
This is the problem of limited sampling bias. The small-sample estimates of the densities are noisy, and this variation induces spurious correlations between the variables which increase the estimated information value.

In the discrete case this is a well studied problem. There are many techniques to correct for it, from the fully Bayesian (NSB) to simple corrections. The most basic (Miller-Madow) is to subtract $(R-1)(S-1)/(2N\ln 2)$ from the value. This is the difference in degrees of freedom between the two implicit models (full joint multinomial vs. the product of independent marginals); indeed, with sufficient sampling, $2N\ln(2)I$ is the likelihood ratio test of independence (G-test), which is $\chi^2$ distributed with $(R-1)(S-1)$ d.o.f. under the null hypothesis. With limited trials it can even be hard to estimate R and S reliably; an effective correction is to use a Bayesian counting procedure to estimate these (Panzeri-Treves or PT correction).

Some packages implementing these techniques in Matlab include infotoolbox and the Spike Train Analysis Toolkit.

For the continuous case, estimators based on nearest-neighbour distances reduce the problem.
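The Miller-Madow correction is only a few lines once you have a contingency table of counts. A stdlib Python sketch (the table below is arbitrary example data, not from any real experiment):

```python
import math

# Plug-in MI estimate (in bits) from an R x S table of counts, and the
# Miller-Madow bias correction described above: subtract
# (R-1)(S-1) / (2 N ln 2) from the plug-in value.

def plugin_mi_bits(table):
    N = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    mi = 0.0
    for i, row in enumerate(table):
        for j, n_ij in enumerate(row):
            if n_ij > 0:
                p_ij = n_ij / N
                # p_ij / (p_i. * p_.j) written with counts
                mi += p_ij * math.log2(n_ij * N / (row_tot[i] * col_tot[j]))
    return mi

def miller_madow_mi_bits(table):
    N = sum(sum(row) for row in table)
    R, S = len(table), len(table[0])
    return plugin_mi_bits(table) - (R - 1) * (S - 1) / (2 * N * math.log(2))

table = [[10, 2], [3, 9]]          # invented 2x2 table, N = 24
plugin = plugin_mi_bits(table)     # ≈ 0.264 bits
corrected = miller_madow_mi_bits(table)
```

The corrected value is always smaller than the plug-in estimate, which is the point: the raw estimator is biased upwards.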
Feature selection using mutual information in Matlab
I have used KL-divergence and with appropriate sample sizes get values of 0 for loci where distributions have equal probability. I suggest you rephrase your MI in terms of KL-divergence.
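The rephrasing is exact: MI is the KL divergence between the joint distribution and the product of its marginals, and it is 0 precisely when the variables are independent. A small discrete check with a made-up joint distribution:

```python
import math

# MI as KL( P(x, y) || P(x) P(y) ), checked on a toy 2x2 joint distribution.

def kl_bits(p, q):
    # KL divergence in bits between two aligned probability vectors
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {x: sum(v for (a, _), v in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in joint.items() if b == y) for y in (0, 1)}

p = list(joint.values())
q = [px[a] * py[b] for (a, b) in joint]   # product of marginals
mi = kl_bits(p, q)   # > 0 here; exactly 0 when X and Y are independent
```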
Feature selection using mutual information in Matlab
You should use a Partial Mutual Information (PMI) algorithm for input variable (feature) selection. It is based on MI concepts and probability density estimation. For example:

Kernel based PMI:
(+) has a stopping criterion (the Akaike Information Criterion)
(-) higher complexity

kNN based PMI:
(-) does not have a stopping criterion
(+) lower complexity

I used PMI to reduce the number of neural network inputs, as they increase complexity and introduce other problems. You can find a complete overview of Input Variable Selection (IVS) algorithms in the paper "Review of Input Variable Selection Methods for Artificial Neural Networks". You can use IVS for SVMs and other models. To make things short, use PMI.
Parametrizing the Behrens–Fisher distributions
The Behrens-Fisher distribution is defined as the distribution of $t_2\cos\theta - t_1\sin\theta$, where $\theta$ is a real number and $t_2$ and $t_1$ are independent $t$-distributed variables with degrees of freedom $\nu_2$ and $\nu_1$ respectively. Behrens and Fisher's solution of the Behrens-Fisher problem involves the Behrens-Fisher distribution with $\theta$ depending on the observations because it is a pseudo-Bayesian (in fact, a fiducial) solution: this data-dependent distribution is a posterior-like distribution of $\tau$ (with $\delta$ the only random part in the definition of $\tau$, because the data are fixed).
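The definition translates directly into a Monte-Carlo sampler. A sketch using only the Python standard library (a $t_\nu$ variate is built as $Z/\sqrt{\chi^2_\nu/\nu}$, with $\chi^2_\nu$ a sum of $\nu$ squared standard normals); the parameter values below are arbitrary:

```python
import math
import random

# Draw from the Behrens-Fisher distribution t2*cos(theta) - t1*sin(theta)
# with independent Student-t variates t1 ~ t(nu1), t2 ~ t(nu2).

def t_variate(rng, nu):
    # Student-t via Z / sqrt(chi2_nu / nu); nu must be a positive integer here
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu))
    return z / math.sqrt(chi2 / nu)

def behrens_fisher_sample(rng, nu1, nu2, theta, size):
    c, s = math.cos(theta), math.sin(theta)
    return [c * t_variate(rng, nu2) - s * t_variate(rng, nu1)
            for _ in range(size)]

rng = random.Random(0)
draws = behrens_fisher_sample(rng, nu1=8, nu2=12, theta=math.pi / 6, size=20000)
mean = sum(draws) / len(draws)   # the distribution is symmetric about 0
```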
How to get prediction for a specific variable in WinBUGS?
Just add the variable h to the list of the parameters to be monitored. If you are using a package like R2WinBUGS, add h to the list passed to the parameters.to.save argument of the bugs function. Then look at the last value in h (the one set to NA): you will get a posterior distribution there.

This is the usual way to make predictions in Bayesian inference (see also this question). It is nice and simple! There is no more separation of parameter estimation and prediction; everything is done at once. The posterior distribution of the parameters is given by the actual data and propagated to the NA values (as "predictions").
Kaplan-Meier multiple group comparisons
One of the issues of inference that arises in event history models is that hazard functions and survival functions in different groups can cross each other at different points in time. For example, both the following conditions can be true:

- Those individuals in group A who experience the event (i.e. who "do not survive") may do so relatively quickly, while individuals in group B who experience the event take longer to do so.
- The overall survival in group A may be higher than in group B.

So when you ask about wanting to make comparisons among groups, what specifically do you want to compare? Median survival time? The hazard at time t? The survival at time t? The time until survival "flattens" (for some meaning of "flatten")? Something else?

Once you have a well-formulated question about what you would like to compare, multiple comparisons adjustments make sense. Some cases (comparisons at each point in time t, for example) might make the definition of family in the FWER multiple comparison adjustment methods problematic, which might incline one towards the FDR methods, since they scale and do not rely on a definition of family.
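To make "survival at time t" concrete, here is a minimal Kaplan-Meier (product-limit) estimator in Python (stdlib only), with invented data illustrating the point above: group A's events happen early, yet its survival at t = 7 is higher than group B's.

```python
# Minimal Kaplan-Meier estimator. Data are invented (time, event) pairs,
# with event = 1 for an observed event and 0 for censoring.

def km_survival(data, t):
    # Product-limit estimate S(t) = prod over event times u <= t of (1 - d_u/n_u)
    event_times = sorted({u for u, e in data if e == 1 and u <= t})
    s = 1.0
    for u in event_times:
        n_at_risk = sum(1 for v, _ in data if v >= u)
        d = sum(1 for v, e in data if v == u and e == 1)
        s *= 1.0 - d / n_at_risk
    return s

group_a = [(1, 1), (2, 1), (3, 0), (8, 0), (9, 0), (10, 0)]  # early events
group_b = [(4, 1), (5, 1), (6, 1), (7, 1), (8, 1), (9, 0)]   # later, but more

s_a = km_survival(group_a, 7)   # 2/3
s_b = km_survival(group_b, 7)   # 1/3: A "loses" early yet survives better
```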
Kaplan-Meier multiple group comparisons
Coincidentally, you can indeed think of this much like a Pearson chi-square test for homogeneity. If the groups you have defined are measured on a similar time scale, then it makes sense to actually compare survival curves. A little-known fact is that the logrank test is actually a score test for the partial likelihood equations produced by the Cox proportional hazards model. So, if your goal is a global test of hazards, you can fit a Cox model with an indicator for groups and conduct a multivariate partial likelihood ratio test. Nonproportional survival curves are a sensitivity in that they reduce power for both the Cox model and the logrank test, with all the same sensitivities and assumptions. But the obvious solution of presenting model results alongside Kaplan-Meier estimates of the survivor functions should quickly assess this and explicate all the sensitivities therein.
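For reference, the two-group logrank statistic (the one whose score-test interpretation is mentioned above) can be sketched in a few lines. The data below are invented (time, event, group) triples:

```python
# Two-group logrank chi-square statistic (1 d.o.f. under the null of
# equal hazards). event = 1 means an observed event, 0 means censoring.

def logrank_chi2(data):
    event_times = sorted({t for t, e, _ in data if e == 1})
    o1 = e1 = v = 0.0
    for t in event_times:
        at_risk = [g for time, _, g in data if time >= t]
        n = len(at_risk)
        n1 = sum(1 for g in at_risk if g == 1)
        d = sum(1 for time, e, _ in data if time == t and e == 1)
        d1 = sum(1 for time, e, g in data if time == t and e == 1 and g == 1)
        o1 += d1               # observed events in group 1
        e1 += d * n1 / n       # expected under equal hazards
        if n > 1:              # hypergeometric variance contribution
            v += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (o1 - e1) ** 2 / v

data = [(1, 1, 1), (2, 1, 1), (10, 0, 1),    # group 1: early events
        (4, 1, 0), (5, 1, 0), (10, 0, 0)]    # group 0: later events
chi2 = logrank_chi2(data)
```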
How to obtain covariance matrix for constrained regression fit?
At first I would go with a very simple bootstrap. Basically something as follows:

1. Create a new data set by resampling pairs of $(x, y)$ with replacement.
2. Run your regression on this new data set; you will get some parameter estimates $\hat\beta$.
3. Repeat 1 and 2 as many times as possible. Now you will have a large set of $\hat\beta$'s.
4. Take the sample covariance of your $\hat\beta$'s. Done.
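A stdlib-only Python sketch of those steps, using an invented toy constraint (the slope of a simple regression is clipped at zero) as a stand-in for whatever constrained fit you actually use:

```python
import random

# Pairs bootstrap for the covariance matrix of a constrained fit.
# Toy model: y = a + b*x with the constraint b >= 0.

def constrained_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = max(0.0, sxy / sxx)        # enforce the constraint b >= 0
    return my - b * mx, b          # (intercept, slope)

def bootstrap_cov(xs, ys, n_boot, seed=0):
    rng = random.Random(seed)
    n = len(xs)
    fits = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # resample (x, y) pairs
        fits.append(constrained_fit([xs[i] for i in idx],
                                    [ys[i] for i in idx]))
    ma = sum(a for a, _ in fits) / n_boot
    mb = sum(b for _, b in fits) / n_boot
    var_a = sum((a - ma) ** 2 for a, _ in fits) / (n_boot - 1)
    var_b = sum((b - mb) ** 2 for _, b in fits) / (n_boot - 1)
    cov_ab = sum((a - ma) * (b - mb) for a, b in fits) / (n_boot - 1)
    return var_a, var_b, cov_ab

gen = random.Random(1)
xs = [i / 10 for i in range(30)]
ys = [1.0 + 2.0 * x + gen.gauss(0, 0.5) for x in xs]
var_a, var_b, cov_ab = bootstrap_cov(xs, ys, n_boot=500)
```

With positive-mean x values the intercept and slope estimates are negatively correlated, which the bootstrap covariance picks up automatically; no analytic formula for the constrained estimator is needed.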
Multiple regression with missing predictor variable
+1, I think this is a really interesting and clearly stated question. However, more information will help us think through this situation.

For example, what is the relationship between $x_n$ and $y$? It's quite possible that there isn't one, in which case regression $(1)$ offers no advantage relative to regression $(2)$. (Actually, it is at a very slight disadvantage, in the sense that the standard errors will be slightly larger, and thus betas might be slightly further, on average, from their true values.) If there is a function mapping $x_n$ to $y$, then, by definition, there is real information there, and regression $(1)$ will be better in the initial situation.

Next, what is the nature of the relationship between $(x_1, \cdots, x_{n-1})$ and $x_n$? Is there one? For instance, when we conduct experiments, we (usually) try to assign equal numbers of study units to each combination of values of the explanatory variables. (This approach uses a multiple of the Cartesian product of the levels of the IVs, and is called a 'full factorial' design; there are also cases where levels are intentionally confounded to save data, called 'fractional factorial' designs.) If the explanatory variables are orthogonal, your third regression will yield absolutely, exactly 0. On the other hand, in an observational study the covariates are pretty much always correlated. The stronger that correlation, the less information exists in $x_n$. These facts will modulate the relative merits of regression $(1)$ and regression $(2)$.

However (unfortunately perhaps), it's more complicated than that. One of the important, but difficult, concepts in multiple regression is multicollinearity. Should you attempt to estimate regression $(4)$, you will find that you have perfect multicollinearity, and your software will tell you that the design matrix is not invertible. Thus, while regression $(1)$ may well offer an advantage relative to regression $(2)$, regression $(4)$ will not.

The more interesting question (and the one you're asking) is: what if you use regression $(1)$ to make predictions about $y$ using the estimated $x_n$ values output from the predictions of regression $(3)$? (That is, you're not estimating regression $(4)$; you're plugging the output from the prediction equation estimated in regression $(3)$ into prediction model $(4)$.) The thing is that you aren't actually gaining any new information here. Whatever information exists in the first $n-1$ predictor values for each observation is already being used optimally by regression $(2)$, so there is no gain. Thus, the answer to your first question is that you might as well go with regression $(2)$ for your predictions and save yourself the unnecessary work.

Note that I have been addressing this in a fairly abstract way, rather than addressing the concrete situation you describe in which someone hands you two data sets (I just can't imagine this occurring). Instead, I'm thinking of this question as trying to understand something fairly deep about the nature of regression. What does occur on occasion, though, is that some observations have values on all predictors, while other observations (within the same dataset) are missing some values on some of the predictors. This is particularly common when dealing with longitudinal data. In such a situation, you want to investigate multiple imputation.
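The "no new information" claim is easy to check numerically in the simplest case: one observed predictor $x_1$ and one missing predictor $x_2$. Predicting $y$ from $\hat x_2$ (the fitted values of $x_2 \sim x_1$) reproduces the predictions of $y \sim x_1$ exactly, because $\hat x_2$ is just an affine function of $x_1$. The numbers below are invented:

```python
# Demonstration that plugging an imputed predictor (fitted from the observed
# predictors) into a new regression adds nothing beyond regressing y on the
# observed predictors directly.

def ols(xs, ys):
    # Simple-regression OLS: returns (intercept, slope)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.2, 2.1, 2.8, 4.3, 4.9, 6.2]    # "training" set where x2 was observed
y  = [2.0, 4.1, 5.9, 8.2, 9.8, 12.1]

a2, b2 = ols(x1, y)                    # regression (2): y ~ x1
a3, b3 = ols(x1, x2)                   # regression (3): x2 ~ x1
x2_hat = [a3 + b3 * x for x in x1]     # imputed predictor
a4, b4 = ols(x2_hat, y)                # y ~ x2_hat

pred2 = [a2 + b2 * x for x in x1]
pred4 = [a4 + b4 * xh for xh in x2_hat]
# pred2 and pred4 agree to machine precision: no new information was added
```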
Best way to handle unbalanced multiclass dataset with SVM
Having different penalties for the margin slack variables for patterns of each class is a better approach than resampling the data. It is asymptotically equivalent to resampling anyway, but is easier to implement and continuous, rather than discrete, so you have more control.

However, choosing the weights is not straightforward. In principle you can work out a theoretical weighting that takes into account the misclassification costs and the differences between the training set and operational prior class probabilities, but it will not give the optimal performance. The best thing to do is to select the penalties/weights for each class by minimising the loss (taking into account the misclassification costs) via cross-validation.
How to find when a graph reaches a peak and plateaus?
If you know that this is the exact pattern to expect, then you can look for this exact pattern, but then you will miss other patterns. So: if you know that the peak will be 150, then you could look for 2 or 3 or 4 or (however many) consecutive values of 150. But you say "or so"; how big is the "or so"? Perhaps the peak is defined as "3 consecutive values over 130", or maybe it's "3 out of 5 consecutive values over 140". That's for you to decide.

On the other hand, if you are just looking for some general program to detect peaks, well, that's been looked at. There are a bunch of smoothing methods (e.g. loess, splines of various sorts, moving averages, etc.). Not a field I'm expert in, but there's lots of literature on this.
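A rule like "3 consecutive values over 130" is a few lines of code once you commit to it. A stdlib Python sketch (the rule and the series below are arbitrary examples, not a recommendation):

```python
# Flag the start of the first run of k consecutive values at or above a
# threshold, or None if no such run exists.

def first_plateau(series, threshold, k):
    run = 0
    for i, v in enumerate(series):
        run = run + 1 if v >= threshold else 0
        if run == k:
            return i - k + 1   # index where the run starts
    return None

series = [90, 110, 128, 135, 149, 151, 150, 148, 120, 95]
start = first_plateau(series, threshold=130, k=3)   # -> 3
```

Changing the rule to "3 out of 5 over 140" just means swapping the run counter for a sliding-window count; the hard part is choosing the rule, not coding it.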
How to find when a graph reaches a peak and plateaus?
Look into SiZer (SIgnificant ZERo crossings of derivatives), although arguably it is more of a cross-sectional than a time-series tool. The idea is to smooth the data at different bandwidths (varying by some three orders of magnitude) and apply local tests to see whether the slope of a local regression is significantly positive or negative (or undecided). It produces a convincing picture that would aid you in determining which features are there. (I am surprised there is no R implementation, only Matlab.)
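A crude, stdlib-only sketch of the "family of smooths" idea: smooth the series with moving averages at several window sizes and record the sign of the local slope at each one. Real SiZer uses local-linear kernel fits plus pointwise significance tests rather than raw sign comparisons; this only illustrates how small bandwidths show wiggles while large ones show one broad trend. The series is arbitrary.

```python
# Multi-bandwidth slope-sign map, loosely in the spirit of SiZer.

def moving_average(xs, w):
    # Centred moving average with half-window w (truncated at the edges)
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - w): i + w + 1]
        out.append(sum(window) / len(window))
    return out

def slope_signs(xs, w):
    sm = moving_average(xs, w)
    return ''.join('+' if b > a else '-' if b < a else '0'
                   for a, b in zip(sm, sm[1:]))

series = [0, 1, 0, 2, 1, 3, 5, 8, 9, 9, 8, 9, 9, 8]
sign_map = {w: slope_signs(series, w) for w in (1, 2, 4)}
# Small w: local wiggles survive; large w: essentially one long increase.
```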
Best practices for measuring and avoiding overfitting?
For over-fitting in model selection, a paper worth reading is C. Ambroise and G. J. McLachlan, "Selection bias in gene extraction on the basis of microarray gene-expression data", PNAS, vol. 99 no. 10, 6562-6566, May 2002. http://dx.doi.org/10.1073/pnas.102102699 For a discussion of the same sort of problem that arises in model selection, see G. C. Cawley, N. L. C. Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", Journal of Machine Learning Research, 11(Jul):2079−2107, 2010. http://jmlr.csail.mit.edu/papers/v11/cawley10a.html The way to solve the problem of the validation set becoming tainted is to use nested cross-validation, so that the method used to make choices about the model is performed independently in each fold of the cross-validation used for performance estimation. Essentially, the performance estimation must estimate the performance of the whole model-fitting procedure (fitting the model, feature selection, model selection, everything). The other approach is to be a Bayesian. The risk of over-fitting is introduced whenever you optimise a criterion based on a finite sample of data, so if you marginalise (integrate out) rather than optimise then classical over-fitting is impossible. You do, however, have the problem of specifying the priors.
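A minimal nested cross-validation sketch in Python/scikit-learn (the dataset, model and hyper-parameter grid are invented for illustration): the inner loop does the model selection, the outer loop estimates the performance of the whole procedure, so the tuning never sees the outer test folds.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

inner = KFold(n_splits=3, shuffle=True, random_state=1)   # model selection
outer = KFold(n_splits=5, shuffle=True, random_state=2)   # performance estimation

search = GridSearchCV(SVC(kernel="rbf"), {"C": [0.1, 1, 10]}, cv=inner)
# tuning happens afresh inside each outer training fold
scores = cross_val_score(search, X, y, cv=outer)
print(scores.mean())
```

The reported mean is an (approximately) unbiased estimate of the accuracy of "SVC tuned by this grid search", which is the quantity one actually cares about.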
29,327
Clustering with asymmetrical distance measures
If the M-F distance is asymmetric because the future is different from the past, then a genuine asymmetric clustering is called for. First, an asymmetric distance function must be defined. One way to do asymmetric clustering, given a distance function, is to embed the original data into a new coordinate space. See "Geometrical Structures of Some Non-Distance Models for Asymmetric MDS" by Naohito Chino and Kenichi Shiraiwa, Behaviormetrika, 1992 (pdf). This is called HCM (the Hermitian Canonical Model). Find a Hermitian matrix $H$, where $$ H_{ij} = \frac 1 2 [d(x_i, x_j) + d(x_j, x_i)] + i \frac 1 2 [d(x_i, x_j) - d(x_j, x_i)] $$ Find the eigenvalues and eigenvectors, then scale each eigenvector by the square root of its corresponding eigenvalue. This transforms the data into a space of complex numbers. Once the data is embedded, the distance between objects x and y is just x * y, where * is the conjugate transpose. At this point you can run k-means on the complex vectors. Spectral asymmetric clustering has also been done; see the thesis by Stefan Emilov Atev, "Using Asymmetry in the Spectral Clustering of Trajectories," University of Minnesota, 2011, which gives MATLAB code for a special algorithm.
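A small numpy sketch of the embedding step described above (the toy distance matrix is made up, and keeping only the positive eigenvalues is one common convention, not something the paper is quoted on here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
D = rng.random((n, n))                 # toy asymmetric distances, d(i,j) != d(j,i)
np.fill_diagonal(D, 0.0)

# H_ij = (d_ij + d_ji)/2 + i (d_ij - d_ji)/2  -- Hermitian by construction
H = 0.5 * (D + D.T) + 0.5j * (D - D.T)

w, V = np.linalg.eigh(H)               # real eigenvalues, complex eigenvectors
keep = w > 0
X = V[:, keep] * np.sqrt(w[keep])      # scale eigenvectors by sqrt(eigenvalue)

# similarity between embedded objects i, j: the Hermitian inner product
S = X @ X.conj().T
```

For k-means one can stack the real and imaginary parts, e.g. `np.column_stack([X.real, X.imag])`, and run any standard implementation on the resulting real matrix.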
29,328
Clustering with asymmetrical distance measures
You can take some sort of a mean (like an arithmetic mean or, for probability distributions, the square root of the Jensen–Shannon divergence).
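As a tiny numpy sketch (the matrix is invented), symmetrizing an asymmetric distance matrix by the arithmetic mean of the two directions:

```python
import numpy as np

D = np.array([[0., 2., 5.],
              [4., 0., 1.],
              [3., 7., 0.]])   # D[i, j] != D[j, i]

D_sym = 0.5 * (D + D.T)        # average the two directions
```

Any clustering routine that expects a symmetric dissimilarity matrix can then be fed `D_sym`; the Jensen–Shannon variant would replace the arithmetic mean with the square root of the JS divergence between the corresponding distributions.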
29,329
Clustering with asymmetrical distance measures
You should have a look at circular statistics (if you want to work "within" a running week).
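For instance (a numpy sketch; the event times are invented): times within a week live on a circle of circumference 168 hours, and the circular mean wraps correctly across the Sunday/Monday boundary where a plain average would not.

```python
import numpy as np

# event times in hours since Monday 00:00; a week is 168 hours
times = np.array([2.0, 4.0, 166.0])    # two early Monday, one late Sunday
theta = 2 * np.pi * times / 168.0      # map times onto the circle

# circular mean: average the unit vectors, take the angle back
mean_angle = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean()) % (2 * np.pi)
mean_hour = mean_angle * 168.0 / (2 * np.pi)
```

The ordinary mean of these three times would be about 57 hours (midweek); the circular mean lands a little after Monday midnight, which is what one wants for weekly-periodic events.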
29,330
Clustering with asymmetrical distance measures
If your distance function is not a valid Mercer kernel, then $X \neq X^T$, where $X$ is the Gram matrix. In this case you want co-clustering, also called bi-clustering. Algorithms of this class produce cluster indicators simultaneously for the rows and columns. The example you gave is the result of a poorly chosen distance metric; a better distance metric would be $|\text{days apart}|$. Generally your distance function should be a valid Mercer kernel. A valid Mercer kernel is any continuous, symmetric function of two observations that yields a positive semidefinite Gram matrix for every finite set of points in $D$.
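To illustrate (numpy; the days and the kernel bandwidth are invented): a wrap-around $|\text{days apart}|$ is symmetric by construction, and one can check numerically whether a kernel built from it gives a positive semidefinite Gram matrix (a Gaussian of a circular distance is not automatically a Mercer kernel, so such a check is worth doing before feeding it to a kernel method):

```python
import numpy as np

days = np.array([10, 12, 360])                 # day-of-year of three events
diff = np.abs(days[:, None] - days[None, :])
dist = np.minimum(diff, 365 - diff)            # wrap-around |days apart|

K = np.exp(-(dist / 30.0) ** 2)                # Gaussian kernel of that distance
eigvals = np.linalg.eigvalsh(K)                # PSD check: no negative eigenvalues
```

Note that days 10 and 360 come out 15 days apart rather than 350, which matches the intuition behind the answer's example.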
29,331
Online outlier detection
Have you considered something like a one-class classifier? You would need a training set of known-good images, which are used to train up a classifier that tries to distinguish between "images like your training set" and everything else. There's a thesis by David Tax that probably has more information than you actually require on the topic, but might be a good place to start. Other than requiring a training set, it seems like it would meet your requirements: Parameters are learned from the data (no ad-hockery here). Once you've got the model, there's no need to keep the data in memory. Similarly, the trained classifier could be run on as many nodes as you've got. Depending on your application, you might be able to train up a serviceable classifier once and reuse it for different types of specimens/dyes/stains/fluorophores/etc. Alternately, you might be able to get users to manually rate some of the first batch of each run--I imagine a human could check at least 5-8 examples/minute with a good interface.
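A minimal one-class classifier sketch with scikit-learn (synthetic "image feature" vectors; the kernel and the `nu` value are arbitrary illustrative choices, not a recommendation):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=(500, 4))     # features of known-good images

# nu roughly bounds the fraction of training points treated as outliers
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(good)

fresh = rng.normal(0.0, 1.0, size=(5, 4))      # more images like the training set
weird = rng.normal(8.0, 1.0, size=(5, 4))      # far outside the training cloud

pred_fresh = clf.predict(fresh)                # +1 = inlier, -1 = outlier
pred_weird = clf.predict(weird)
```

Once fitted, the model is small, needs no training data in memory, and can be shipped to as many worker nodes as required, which matches the answer's point about scaling out.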
29,332
Online outlier detection
See http://scholar.google.com/scholar?q=stream+outlier+detection A couple of established methods such as LOF have been adapted to a streaming context. There are also of course methods that update histograms in a streaming way and thus flag obvious one-dimensional outliers. That could actually be sufficient for you?
29,333
Online outlier detection
There are many possible approaches, but it is hard to know what may be best in your situation without more information. It sounds like, for each image, you receive a feature vector, which is an element of $\mathbb{R}^n$. If that's the case, here are a handful of candidate solutions: Store the feature vectors of all prior images, along with their classification, on disk. Periodically (say, once a day) train a learning algorithm on this data, and use the resulting algorithm to classify new images. Disk space is cheap; this solution might be a pragmatic and effective way to convert an offline learning algorithm into one that can be used in your online setting. Store the feature vectors of a random sample of 1,000 (or 1,000,000) prior images, along with their classification. Periodically train a learning algorithm on this subsample. Note that you can efficiently update this subsample in an online fashion using standard tricks. This is only interesting if there is some reason why it is hard to store all of the feature vectors of all prior images (which seems hard to imagine, for me, but who knows). For each of the $n$ features, keep track of the running average and standard deviation of the non-defective images seen so far. Then, when you receive a new image, if any of its features is at least $c$ standard deviations beyond the mean for that feature, classify it as defective, otherwise classify it as non-defective. You can choose $c$ based upon $n$ and the desired tradeoff between false positives and false negatives. In other words, you maintain an $n$-vector $\mu$ of means and an $n$-vector $\sigma$ of standard deviations, where $\mu_i$ is the mean of the $i$th feature and $\sigma_i$ is the standard deviation of that feature. When you receive a new feature vector $x$, you check whether $|x_i - \mu_i| \ge c \sigma_i$ for any $i$. If not, you classify it as non-defective and you update $\mu$ and $\sigma$.
This approach assumes that each parameter from a non-defective image has a Gaussian distribution, and that the parameters are independent. Those assumptions may be optimistic. There are many more sophisticated variants of this scheme which will eliminate the need for these assumptions or improve performance; this is just a simple example to give you an idea. In general, you could look at online algorithms and streaming algorithms.
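The third option can be sketched in Python (the threshold $c$, the warm-up length and the toy stream are my own choices): Welford's online update maintains the running mean and variance per feature without storing any of the past vectors.

```python
import numpy as np

class OnlineFlagger:
    """Flag a vector as defective if any feature is >= c standard
    deviations from the running mean of the non-defective stream."""
    def __init__(self, n_features, c=4.0, warmup=10):
        self.c, self.warmup = c, warmup
        self.count = 0
        self.mean = np.zeros(n_features)
        self.m2 = np.zeros(n_features)          # running sum of squared deviations

    def check_and_update(self, x):
        if self.count >= self.warmup:
            sd = np.sqrt(self.m2 / (self.count - 1))
            if np.any(np.abs(x - self.mean) >= self.c * sd):
                return True                      # defective: leave stats untouched
        self.count += 1                          # Welford's update
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)
        return False

rng = np.random.default_rng(0)
f = OnlineFlagger(n_features=3)
flags = [f.check_and_update(rng.normal(0, 1, 3)) for _ in range(500)]
outlier = f.check_and_update(np.array([0.0, 25.0, 0.0]))
```

The per-image cost is $O(n)$ and the state is two $n$-vectors and a counter, so the same state can be replicated (or periodically merged) across worker nodes.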
29,334
Online outlier detection
From what I understand from your question, you receive a sequence of vectors in $R^n$ and you'd like to flag the current vector as being an outlier given all the vectors you've seen thus far. (I am assuming that the image parameters are the elements of the vector.) If the outliers are pretty obvious, a simple trick that would work is the following. Construct a locality-sensitive hash function from your vectors. (A simple randomized hash like which side of a set of random hyperplanes the vector falls on might work. This would yield a boolean vector as the hash value.) Now as you receive vectors, you compute the hash value of the vector and store the hash value (the boolean vector in the case of hyperplanes) and the counts in a dictionary. You also store the total number of vectors seen thus far. At any given time you can flag a given vector as being an outlier if the total number of vectors that collide with it in the hash is less than a predefined percentage of the total. You can view this as building a histogram in an incremental fashion. But since the data is not univariate we use the hashing trick to make it behave like it.
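A toy version of this scheme (numpy; the number of hyperplanes, the warm-up and the 1% rarity threshold are arbitrary illustrative choices):

```python
import numpy as np
from collections import defaultdict

class LshOutlier:
    def __init__(self, planes, min_frac=0.01, warmup=100):
        self.planes, self.min_frac, self.warmup = planes, min_frac, warmup
        self.counts = defaultdict(int)          # hash bucket -> count
        self.total = 0

    def observe(self, x):
        # hash = which side of each random hyperplane the vector falls on
        key = tuple(bool(b) for b in (self.planes @ x) > 0)
        rare = (self.total > self.warmup
                and self.counts[key] < self.min_frac * self.total)
        self.counts[key] += 1
        self.total += 1
        return rare

rng = np.random.default_rng(0)
det = LshOutlier(planes=rng.normal(size=(8, 5)))

for _ in range(1000):                           # a tight cluster of "normal" vectors
    det.observe(rng.normal(0, 0.1, 5) + 1.0)

flag = det.observe(-10.0 * np.ones(5))          # far from everything seen so far
```

Memory is bounded by the number of occupied buckets (at most $2^{8}$ here), so the state stays small no matter how long the stream runs.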
29,335
How to find relationships between different types of events (defined by their 2D location)?
The type of data you describe is usually called "marked point patterns"; R has a task view for spatial statistics that offers many good packages for this type of analysis, most of which are probably not able to deal with the kind of humongous data you have :( For example, maybe events of type A usually don't occur where events of type B do. Or maybe in some area, there are mostly events of type C. These are two fairly different types of questions: The second asks about the positioning of one type of mark/event. Buzzwords to look for in this context are e.g. intensity estimation or K-function estimation if you are interested in discovering patterns of clustering (events of a kind tend to group together) or repulsion (events of a kind tend to be separated). The first asks about the correlation between different types of events. This is usually measured with mark correlation functions. I think subsampling the data to get a more tractable data size is dangerous (see comment to @hamner's reply), but maybe you could aggregate your data: Divide the observation window into a manageable number of cells of equal size and tabulate the event counts in each. Each cell is then described by the location of its centre and a 10-vector of counts for your 10 mark types. You should be able to use the standard methods for marked point processes on this aggregated process.
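The aggregation step can be sketched in a few lines of numpy (the window, grid size and marks are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_types = 5000, 10
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)
mark = rng.integers(0, n_types, n)              # event type of each point

# 20 x 20 grid of cells; each cell described by a 10-vector of counts
edges = np.linspace(0, 100, 21)
ix = np.digitize(x, edges) - 1
iy = np.digitize(y, edges) - 1
counts = np.zeros((20, 20, n_types), dtype=int)
np.add.at(counts, (ix, iy, mark), 1)

# crude cross-type association: correlation of counts across cells
corr = np.corrcoef(counts.reshape(-1, n_types).T)
```

This pass is a single scan over the data, so it scales to very large point sets; the `corr` matrix then gives a first look at which event types tend to co-occur (or avoid each other) at the cell scale.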
29,336
How to find relationships between different types of events (defined by their 2D location)?
First, the size of the dataset. I recommend taking small, tractable samples of the dataset (either by randomly choosing N datapoints, or by randomly choosing several relatively small rectangles in the X-Y plane and taking all points that fall within each rectangle) and then honing your analysis techniques on this subset. Once you have an idea of the form of analysis that works, you can apply it to larger portions of the dataset. PCA is primarily used as a dimensionality reduction technique; your dataset is only three dimensions (one of which is categorical), so I doubt it would apply here. Try working with Matlab or R to visualize the points you are analyzing in the X-Y plane (or their relative density if working with the entire data set), both for individual types and all types combined, and see what patterns emerge visually. That can help guide a more rigorous analysis.
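Both sampling schemes are one-liners in numpy (the sizes and the 10 x 10 rectangle are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)

# (a) a uniform random subsample of N points
idx = rng.choice(n, size=10_000, replace=False)
xs, ys = x[idx], y[idx]

# (b) every point inside one randomly placed 10 x 10 rectangle
x0, y0 = rng.uniform(0, 90, size=2)
in_rect = (x >= x0) & (x < x0 + 10) & (y >= y0) & (y < y0 + 10)
```

Scheme (a) preserves global structure but thins out local density; scheme (b) preserves local density within the rectangle at the cost of global coverage, so using several rectangles is usually the safer choice.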
29,337
Bayesian vs Maximum entropy
This may come a wee late, but the question should be rephrased: as defined by Jaynes, maximum entropy is a way to construct a prior distribution that (a) satisfies the constraints imposed by $E$ and (b) has the maximum entropy, relative to a reference measure in the continuous case: $$ \int -\log [ \pi(\theta) ] \text{d}\mu(\theta)\,. $$ Thus, (Jaynes') maximum entropy is clearly part of the Bayesian toolbox. And the maximum entropy prior does not provide the prior distribution that is closest to the true prior, as suggested by Ashok's question. Bayesian inference about a distribution $Q$ is an altogether different problem, handled by Bayesian non-parametrics (see, e.g., this recent book by Hjort et al.). It requires observations from $Q$, which does not seem to be the setting of the current question...
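For concreteness, the classical form of the constrained solution (a standard result, sketched here rather than quoted from the answer): maximising the entropy relative to $\mu$ subject to moment constraints $\int f_k(\theta)\,\pi(\theta)\,\text{d}\theta = c_k$ gives an exponential-family prior,

```latex
\pi(\theta) \;\propto\; \exp\!\Big( \sum_{k=1}^{K} \lambda_k f_k(\theta) \Big)\,\mu(\theta),
\qquad \text{with } \lambda_1,\dots,\lambda_K \text{ the Lagrange multipliers chosen so that }
\int f_k(\theta)\,\pi(\theta)\,\text{d}\theta = c_k .
```

This is why Jaynes-style maximum-entropy priors typically turn out to be exponential-family distributions.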
29,338
Difference between ridge regression implementation in R and SAS
Though ridge regression looks at first like a simple algorithm, the devil is in the details. Apparently the original variables are scaled, and the parameter $\lambda$ is not the parameter you would think it is, given the original description. From what I gathered reading the reference given in the R help page of lm.ridge, there is no one agreed way of doing ridge regression. So the difference in results can only be explained by the different algorithms used by R and SAS. Hopefully someone more knowledgeable can give a more detailed answer. You can see what kind of algorithm is applied in R by looking at the source of lm.ridge: just type lm.ridge at the R prompt.
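The scaling point can be seen directly with a hand-rolled ridge in Python (synthetic data; this reproduces neither SAS nor lm.ridge exactly, it just shows that the same $\lambda$ applied to raw vs standardized predictors gives different fits):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([1.0, 10.0, 100.0])  # very different scales
y = X @ np.array([1.0, 0.5, 0.01]) + rng.normal(size=100)

def ridge(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

lam = 5.0
b_raw = ridge(X, y, lam)                       # penalty applied to raw coefficients

Xs = (X - X.mean(0)) / X.std(0)                # standardize first (lm.ridge style)
b_std = ridge(Xs, y - y.mean(), lam)
b_back = b_std / X.std(0)                      # map back to the original scale
```

Because the L2 penalty is not scale invariant, `b_raw` and `b_back` disagree even though both used "$\lambda = 5$", which is exactly the kind of discrepancy one sees between two ridge implementations with different internal scaling conventions.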
29,339
Difference between ridge regression implementation in R and SAS
Using lm.ridge also produces a scaling vector (try head(model) to see all the output). To get the predicted values in R that you see in SAS, take the coefficients and divide by the scaling vector.
Difference between ridge regression implementation in R and SAS
Using lm.ridge also produces a scaling vector (try head(model) to see all the output). To get the predicted values in R that you see in SAS take the coefficients and divide by the scalar vector.
Difference between ridge regression implementation in R and SAS Using lm.ridge also produces a scaling vector (try head(model) to see all the output). To get the predicted values in R that you see in SAS take the coefficients and divide by the scalar vector.
Difference between ridge regression implementation in R and SAS Using lm.ridge also produces a scaling vector (try head(model) to see all the output). To get the predicted values in R that you see in SAS, take the coefficients and divide by the scaling vector.
29,340
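Here is a minimal numpy sketch of that back-transformation (illustrative only, not lm.ridge itself; it skips the centering that lm.ridge also performs, and the data and $\lambda$ are made up). Fitting on predictors divided by their scales and then dividing the resulting coefficients by the same scaling vector returns coefficients in the original units, and the fitted values are identical either way:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 2)) * np.array([1.0, 10.0])  # columns on different scales
beta = np.array([1.5, -0.3])
y = X @ beta + rng.normal(scale=0.1, size=n)

scales = X.std(axis=0)   # the "scaling vector"
Xs = X / scales          # standardized design, roughly what lm.ridge fits on

lam = 1.0
b_scaled = np.linalg.solve(Xs.T @ Xs + lam * np.eye(2), Xs.T @ y)
b_original = b_scaled / scales  # divide by the scaling vector, as in the answer
```

With a small $\lambda$, b_original lands close to the true beta, and X @ b_original reproduces Xs @ b_scaled exactly.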
Plotting a piecewise regression line
The only way I know how to do this easily is to predict from the model across the range of sqft and plot the predictions. There isn't a general way with abline or similar. You might also take a look at the segmented package which will fit these models and provide the plotting infrastructure for you. Doing this via predictions and base graphics. First, some dummy data: set.seed(1) sqft <- runif(100) sqft <- ifelse((tmp <- sqft > mean(sqft)), 1, 0) + rnorm(100, sd = 0.5) price <- 2 + 2.5 * sqft price <- ifelse(tmp, price, 0) + rnorm(100, sd = 0.6) DF <- data.frame(sqft = sqft, price = price, Ind = ifelse(sqft > mean(sqft), 1, 0)) rm(price, sqft) plot(price ~ sqft, data = DF) Fit the model: mod <- lm(price~sqft+I((sqft-mean(sqft))*Ind), data = DF) Generate some data to predict for and predict: m.sqft <- with(DF, mean(sqft)) pDF <- with(DF, data.frame(sqft = seq(min(sqft), max(sqft), length = 200))) pDF <- within(pDF, Ind <- ifelse(sqft > m.sqft, 1, 0)) pDF <- within(pDF, price <- predict(mod, newdata = pDF)) Plot the regression lines: ylim <- range(pDF$price, DF$price) xlim <- range(pDF$sqft, DF$sqft) plot(price ~ sqft, data = DF, ylim = ylim, xlim = xlim) lines(price ~ sqft, data = pDF, subset = Ind > 0, col = "red", lwd = 2) lines(price ~ sqft, data = pDF, subset = Ind < 1, col = "red", lwd = 2) You could code this up into a simple function - you only need the steps in the two preceding code chunks - which you can use in place of abline: myabline <- function(model, data, ...) { m.sqft <- with(data, mean(sqft)) pDF <- with(data, data.frame(sqft = seq(min(sqft), max(sqft), length = 200))) pDF <- within(pDF, Ind <- ifelse(sqft > m.sqft, 1, 0)) pDF <- within(pDF, price <- predict(model, newdata = pDF)) lines(price ~ sqft, data = pDF, subset = Ind > 0, ...) lines(price ~ sqft, data = pDF, subset = Ind < 1, ...) 
invisible(model) } Then: ylim <- range(pDF$price, DF$price) xlim <- range(pDF$sqft, DF$sqft) plot(price ~ sqft, data = DF, ylim = ylim, xlim = xlim) myabline(mod, DF, col = "red", lwd = 2) Via the segmented package require(segmented) mod2 <- lm(price ~ sqft, data = DF) mod.s <- segmented(mod2, seg.Z = ~ sqft, psi = 0.5, control = seg.control(stop.if.error = FALSE)) plot(price ~ sqft, data = DF) plot(mod.s, add = TRUE) lines(mod.s, col = "red") With these data it doesn't estimate the breakpoint at mean(sqft), but the plot and lines methods in that package might help you implement something more generic than myabline to do this job for you directly from the fitted lm() model. Edit: If you want segmented to estimate the location of the breakpoint, then set the 'psi' argument to NA: mod.s <- segmented(mod2, seg.Z = ~ sqft, psi = NA, control = seg.control(stop.if.error = FALSE)) Then segmented will try K quantiles of sqft as candidate breakpoints, with K being set in seg.control() and defaulting to 10. See ?seg.control for more.
Plotting a piecewise regression line
The only way I know how to do this easily is to predict from the model across the range of sqft and plot the predictions. There isn't a general way with abline or similar. You might also take a look a
Plotting a piecewise regression line The only way I know how to do this easily is to predict from the model across the range of sqft and plot the predictions. There isn't a general way with abline or similar. You might also take a look at the segmented package which will fit these models and provide the plotting infrastructure for you. Doing this via predictions and base graphics. First, some dummy data: set.seed(1) sqft <- runif(100) sqft <- ifelse((tmp <- sqft > mean(sqft)), 1, 0) + rnorm(100, sd = 0.5) price <- 2 + 2.5 * sqft price <- ifelse(tmp, price, 0) + rnorm(100, sd = 0.6) DF <- data.frame(sqft = sqft, price = price, Ind = ifelse(sqft > mean(sqft), 1, 0)) rm(price, sqft) plot(price ~ sqft, data = DF) Fit the model: mod <- lm(price~sqft+I((sqft-mean(sqft))*Ind), data = DF) Generate some data to predict for and predict: m.sqft <- with(DF, mean(sqft)) pDF <- with(DF, data.frame(sqft = seq(min(sqft), max(sqft), length = 200))) pDF <- within(pDF, Ind <- ifelse(sqft > m.sqft, 1, 0)) pDF <- within(pDF, price <- predict(mod, newdata = pDF)) Plot the regression lines: ylim <- range(pDF$price, DF$price) xlim <- range(pDF$sqft, DF$sqft) plot(price ~ sqft, data = DF, ylim = ylim, xlim = xlim) lines(price ~ sqft, data = pDF, subset = Ind > 0, col = "red", lwd = 2) lines(price ~ sqft, data = pDF, subset = Ind < 1, col = "red", lwd = 2) You could code this up into a simple function - you only need the steps in the two preceding code chunks - which you can use in place of abline: myabline <- function(model, data, ...) { m.sqft <- with(data, mean(sqft)) pDF <- with(data, data.frame(sqft = seq(min(sqft), max(sqft), length = 200))) pDF <- within(pDF, Ind <- ifelse(sqft > m.sqft, 1, 0)) pDF <- within(pDF, price <- predict(model, newdata = pDF)) lines(price ~ sqft, data = pDF, subset = Ind > 0, ...) lines(price ~ sqft, data = pDF, subset = Ind < 1, ...) 
invisible(model) } Then: ylim <- range(pDF$price, DF$price) xlim <- range(pDF$sqft, DF$sqft) plot(price ~ sqft, data = DF, ylim = ylim, xlim = xlim) myabline(mod, DF, col = "red", lwd = 2) Via the segmented package require(segmented) mod2 <- lm(price ~ sqft, data = DF) mod.s <- segmented(mod2, seg.Z = ~ sqft, psi = 0.5, control = seg.control(stop.if.error = FALSE)) plot(price ~ sqft, data = DF) plot(mod.s, add = TRUE) lines(mod.s, col = "red") With these data it doesn't estimate the breakpoint at mean(sqft), but the plot and lines methods in that package might help you implement something more generic than myabline to do this job for you directly from the fitted lm() model. Edit: If you want segmented to estimate the location of the breakpoint, then set the 'psi' argument to NA: mod.s <- segmented(mod2, seg.Z = ~ sqft, psi = NA, control = seg.control(stop.if.error = FALSE)) Then segmented will try K quantiles of sqft as candidate breakpoints, with K being set in seg.control() and defaulting to 10. See ?seg.control for more.
Plotting a piecewise regression line The only way I know how to do this easily is to predict from the model across the range of sqft and plot the predictions. There isn't a general way with abline or similar. You might also take a look a
29,341
Use of kernel density estimate in Naive Bayes Classifier?
I have read the first linked earlier question, especially whuber's answer and the comments on it. The answer is yes, you can do that, i.e. use the density from a KDE of a numeric variable as the conditional probability $P(X=x|C=c)$ in the Bayes theorem: $P(C=c|X=x)=P(C=c)*P(X=x|C=c)/P(X=x)$ By assuming that d(height) is equal across all classes, d(height) is normalized out when the theorem is applied, i.e. when $P(X=x|C=c)$ is divided by $P(X=x)$. This paper could be interesting for you: Estimating Continuous Distributions in Bayesian Classifiers
Use of kernel density estimate in Naive Bayes Classifier?
I have read both the first linked earlier question, especially the answer of whuber and the comments on this. The answer is yes, you can do that, i.e. using the density from a kde of a numeric variabl
Use of kernel density estimate in Naive Bayes Classifier? I have read the first linked earlier question, especially whuber's answer and the comments on it. The answer is yes, you can do that, i.e. use the density from a KDE of a numeric variable as the conditional probability $P(X=x|C=c)$ in the Bayes theorem: $P(C=c|X=x)=P(C=c)*P(X=x|C=c)/P(X=x)$ By assuming that d(height) is equal across all classes, d(height) is normalized out when the theorem is applied, i.e. when $P(X=x|C=c)$ is divided by $P(X=x)$. This paper could be interesting for you: Estimating Continuous Distributions in Bayesian Classifiers
Use of kernel density estimate in Naive Bayes Classifier? I have read both the first linked earlier question, especially the answer of whuber and the comments on this. The answer is yes, you can do that, i.e. using the density from a kde of a numeric variabl
29,342
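A small numpy sketch of the idea (the heights, class labels, bandwidth, and helper names are all made up for illustration; a real application would use a proper KDE with a chosen bandwidth, e.g. scipy's gaussian_kde). The hand-rolled kernel density stands in for $P(X=x|C=c)$, and $P(X=x)$ drops out in the normalization:

```python
import numpy as np

rng = np.random.default_rng(2)
h0 = rng.normal(120, 10, size=200)   # heights for class 0 (illustrative)
h1 = rng.normal(170, 10, size=200)   # heights for class 1

def kde(data, x, bw=5.0):
    """Minimal Gaussian-kernel density estimate at point x."""
    z = (x - data) / bw
    return np.exp(-0.5 * z**2).mean() / (bw * np.sqrt(2 * np.pi))

prior = np.array([0.5, 0.5])         # P(C=c)

def posterior(x):
    # P(C=c | X=x) via Bayes, with the KDE standing in for P(X=x | C=c);
    # the denominator P(X=x) normalizes out, as noted in the answer.
    lik = np.array([kde(h0, x), kde(h1, x)])
    unnorm = prior * lik
    return unnorm / unnorm.sum()

p = posterior(115.0)   # a height near the class-0 bulk
```

The returned probabilities sum to one, and a height of 115 is assigned overwhelmingly to class 0, as expected.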
Difference between distribution shift and data shift, concept drift and model drift
I'm not aware of a precise and accepted definition of each of these terms which sharply distinguishes them. There is an excellent blog post on the topic here. But broadly speaking: Model drift: This refers to the general idea that in some cases model predictions deteriorate over time. I.e. the distribution of model predictions and the distribution of true values drift apart from each other. This can happen for a number of reasons. Concept Drift: This is drift in the relationship between the inputs and the dependent variable. The distributions of the data might stay the same, but the relationship between input and output has been altered. For example, in a model to detect fraudulent activity, there may be a change in the definition of what is considered fraudulent. Data Drift: This is due to changes in the distributions of the input data. For example, using the fraud example again, we might see an increase in certain types of fraud which change the distributions of observations from what was seen in the training data.
Difference between distribution shift and data shift, concept drift and model drift
I'm not aware of a precise and accepted definition of each of these terms which sharply distinguishes them. There is an excellent blog post on the topic here. But broadly speaking: Model drift: This
Difference between distribution shift and data shift, concept drift and model drift I'm not aware of a precise and accepted definition of each of these terms which sharply distinguishes them. There is an excellent blog post on the topic here. But broadly speaking: Model drift: This refers to the general idea that in some cases model predictions deteriorate over time. I.e. the distribution of model predictions and the distribution of true values drift apart from each other. This can happen for a number of reasons. Concept Drift: This is drift in the relationship between the inputs and the dependent variable. The distributions of the data might stay the same, but the relationship between input and output has been altered. For example, in a model to detect fraudulent activity, there may be a change in the definition of what is considered fraudulent. Data Drift: This is due to changes in the distributions of the input data. For example, using the fraud example again, we might see an increase in certain types of fraud which change the distributions of observations from what was seen in the training data.
Difference between distribution shift and data shift, concept drift and model drift I'm not aware of a precise and accepted definition of each of these terms which sharply distinguishes them. There is an excellent blog post on the topic here. But broadly speaking: Model drift: This
29,343
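Data drift in that last sense can be checked directly by comparing feature distributions between training and live data. A hedged sketch (the data, sample sizes, and helper name are invented; in practice one would use a library routine such as scipy's ks_2samp) using a hand-rolled two-sample Kolmogorov-Smirnov statistic:

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between ECDFs."""
    allv = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), allv, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), allv, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(3)
train = rng.normal(0, 1, size=1000)       # a feature as seen at training time
live_same = rng.normal(0, 1, size=1000)   # no drift
live_shift = rng.normal(1, 1, size=1000)  # input distribution has drifted

d_same = ks_stat(train, live_same)
d_shift = ks_stat(train, live_shift)
```

The statistic stays small when the distributions match and becomes large under the shift, which is the kind of signal a drift monitor would alert on.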
AIC/BIC formula wrong in James/Witten?
There is no error but there is a subtlety. Note: In the second edition of ISLR model selection is discussed on pages 232-235 [1]. Let's start by deriving the log-likelihood for linear regression as it's at the heart of this question. The likelihood is a product of Normal densities. Evaluated at the MLE: $$ \hat{L} = \prod_{i=1}^n\frac{1}{\sqrt{2\pi\hat{\sigma}^2}}\exp\left\{-\frac{(y_i - \hat{y}_i)^2}{2\hat{\sigma}^2}\right\} $$ where n is the number of data points and $\hat{y}_i$ is the prediction, so $y_i - \hat{y}_i$ is the residual. We take the log and keep track of constants as they are important later on. $$ \log(\hat{L}) = -\frac{n}{2}\log(2\pi\hat{\sigma}^2) - \sum_{i=1}^n \frac{(y_i - \hat{y}_i)^2}{2\hat{\sigma}^2} = -\frac{n}{2}\log(2\pi\hat{\sigma}^2) - \frac{RSS}{2\hat{\sigma}^2} $$ where RSS is the residual sum of squares. What about the MLE $\hat{\sigma}^2$ of the error variance $\sigma^2$? It's also a function of the RSS. $$ \hat{\sigma}^2 = \frac{RSS}{n} $$ And here is the subtle point. For model selection with AIC and BIC ISLR uses the $\hat{\sigma}^2$ from the full model to compare all nested models. Let's call this residual variance $\hat{\sigma}^2_{full}$ for clarity. Finally we write down the Bayesian information criterion (BIC). d is the number of fixed effects. $$ BIC = -2 \log(\hat{L}) + \log(n)d = n\log(2\pi\hat{\sigma}^2_{full}) + \frac{RSS}{\hat{\sigma}^2_{full}} + \log(n)d \\ = c_0 + c_1\left(RSS + \log(n)d\hat{\sigma}^2_{full}\right) $$ This is Equation (6.3) in ISLR up to two constants, $c_0$ and $c_1=\hat{\sigma}^{-2}_{full}$, that are the same for all models under consideration. ISLR also divides BIC by the sample size n. What if we want to estimate $\sigma^2$ separately for each model? Then we plug in the MLE $\hat{\sigma}^2$ = RSS/n and we get the "more popular" formulation. We add 1 to the number of parameters because we estimate the error variance plus the d fixed effects. 
$$ BIC = n\log(2\pi\hat{\sigma}^2) + \frac{RSS}{\hat{\sigma}^2} + \log(n)(d+1)\\ = n\log(2\pi RSS/n) + \frac{RSS}{RSS/n} + \log(n)(d+1)\\ = c^*_0 + n\log(RSS) + \log(n)(d+1) $$ The residual sum of squares RSS is the same in both versions of the BIC. [Since the effect estimates are $\hat{\beta} = (X'X)^{-1}X'Y$ and the predictions $X(X'X)^{-1}X'Y$ don't depend on $\sigma^2$.] [1] G. James, D. Witten, T. Hastie, and R. Tibshirani. An Introduction to Statistical Learning with Applications in R. Springer, 2nd edition, 2021. Available online.
AIC/BIC formula wrong in James/Witten?
There is no error but there is a subtlety. Note: In the second edition of ISLR model selection is discussed on pages 232-235 [1]. Let's start by deriving the log-likelihood for linear regression as it
AIC/BIC formula wrong in James/Witten? There is no error but there is a subtlety. Note: In the second edition of ISLR model selection is discussed on pages 232-235 [1]. Let's start by deriving the log-likelihood for linear regression as it's at the heart of this question. The likelihood is a product of Normal densities. Evaluated at the MLE: $$ \hat{L} = \prod_{i=1}^n\frac{1}{\sqrt{2\pi\hat{\sigma}^2}}\exp\left\{-\frac{(y_i - \hat{y}_i)^2}{2\hat{\sigma}^2}\right\} $$ where n is the number of data points and $\hat{y}_i$ is the prediction, so $y_i - \hat{y}_i$ is the residual. We take the log and keep track of constants as they are important later on. $$ \log(\hat{L}) = -\frac{n}{2}\log(2\pi\hat{\sigma}^2) - \sum_{i=1}^n \frac{(y_i - \hat{y}_i)^2}{2\hat{\sigma}^2} = -\frac{n}{2}\log(2\pi\hat{\sigma}^2) - \frac{RSS}{2\hat{\sigma}^2} $$ where RSS is the residual sum of squares. What about the MLE $\hat{\sigma}^2$ of the error variance $\sigma^2$? It's also a function of the RSS. $$ \hat{\sigma}^2 = \frac{RSS}{n} $$ And here is the subtle point. For model selection with AIC and BIC ISLR uses the $\hat{\sigma}^2$ from the full model to compare all nested models. Let's call this residual variance $\hat{\sigma}^2_{full}$ for clarity. Finally we write down the Bayesian information criterion (BIC). d is the number of fixed effects. $$ BIC = -2 \log(\hat{L}) + \log(n)d = n\log(2\pi\hat{\sigma}^2_{full}) + \frac{RSS}{\hat{\sigma}^2_{full}} + \log(n)d \\ = c_0 + c_1\left(RSS + \log(n)d\hat{\sigma}^2_{full}\right) $$ This is Equation (6.3) in ISLR up to two constants, $c_0$ and $c_1=\hat{\sigma}^{-2}_{full}$, that are the same for all models under consideration. ISLR also divides BIC by the sample size n. What if we want to estimate $\sigma^2$ separately for each model? Then we plug in the MLE $\hat{\sigma}^2$ = RSS/n and we get the "more popular" formulation. We add 1 to the number of parameters because we estimate the error variance plus the d fixed effects. 
$$ BIC = n\log(2\pi\hat{\sigma}^2) + \frac{RSS}{\hat{\sigma}^2} + \log(n)(d+1)\\ = n\log(2\pi RSS/n) + \frac{RSS}{RSS/n} + \log(n)(d+1)\\ = c^*_0 + n\log(RSS) + \log(n)(d+1) $$ The residual sum of squares RSS is the same in both versions of the BIC. [Since the effect estimates are $\hat{\beta} = (X'X)^{-1}X'Y$ and the predictions $X(X'X)^{-1}X'Y$ don't depend on $\sigma^2$.] [1] G. James, D. Witten, T. Hastie, and R. Tibshirani. An Introduction to Statistical Learning with Applications in R. Springer, 2nd edition, 2021. Available online.
AIC/BIC formula wrong in James/Witten? There is no error but there is a subtlety. Note: In the second edition of ISLR model selection is discussed on pages 232-235 [1]. Let's start by deriving the log-likelihood for linear regression as it
29,344
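The algebra above is easy to verify numerically. A numpy sketch (simulated data; the coefficients and sample size are arbitrary) computing $-2\log(\hat{L}) + \log(n)d$ directly and via the $c_0 + c_1(RSS + \log(n)d\hat{\sigma}^2_{full})$ form, which should give the same number:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 100, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
rss = float(np.sum((y - X @ beta) ** 2))
sigma2_full = rss / n                       # MLE of the error variance

# BIC written directly as -2 log L(hat) + log(n) d ...
bic_direct = n * np.log(2 * np.pi * sigma2_full) + rss / sigma2_full + np.log(n) * d

# ... and in the ISLR-style form c0 + c1 * (RSS + log(n) * d * sigma2_full)
c0 = n * np.log(2 * np.pi * sigma2_full)
c1 = 1.0 / sigma2_full
bic_islr = c0 + c1 * (rss + np.log(n) * d * sigma2_full)
```

Since $c_0$ and $c_1$ are shared across the candidate models, ranking by the ISLR expression ranks models exactly as the direct BIC does.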
Backpropagation on Variational Autoencoders
Q1: Your description seems to be pretty much correct. Q2: The two options are equal: $$ \frac {\partial E} {\partial w} = \frac {\partial \frac 1 n \sum_{i=1}^n E_i} {\partial w} = \frac 1 n \sum_{i=1}^n \frac {\partial E_i} {\partial w} $$ Also, note that $n=1$ is a valid choice: In our experiments we found that the number of samples $L$ per datapoint can be set to 1 as long as the minibatch size $M$ was large enough, e.g. $M = 100$. Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv preprint arXiv:1312.6114 (2013).
Backpropagation on Variational Autoencoders
Q1: Your description seems to be pretty much correct. Q2: The two options are equal: $$ \frac {\partial E} {\partial w} = \frac {\partial \frac 1 n \sum_{i=1}^n E_i} {\partial w} = \frac 1 n \sum_{i=1
Backpropagation on Variational Autoencoders Q1: Your description seems to be pretty much correct. Q2: The two options are equal: $$ \frac {\partial E} {\partial w} = \frac {\partial \frac 1 n \sum_{i=1}^n E_i} {\partial w} = \frac 1 n \sum_{i=1}^n \frac {\partial E_i} {\partial w} $$ Also, note that $n=1$ is a valid choice: In our experiments we found that the number of samples $L$ per datapoint can be set to 1 as long as the minibatch size $M$ was large enough, e.g. $M = 100$. Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv preprint arXiv:1312.6114 (2013).
Backpropagation on Variational Autoencoders Q1: Your description seems to be pretty much correct. Q2: The two options are equal: $$ \frac {\partial E} {\partial w} = \frac {\partial \frac 1 n \sum_{i=1}^n E_i} {\partial w} = \frac 1 n \sum_{i=1
29,345
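The identity in Q2 (the gradient of the averaged loss equals the average of the per-sample gradients) can be checked numerically. A toy numpy sketch with a made-up squared-error loss, not the actual VAE objective:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=10)
y = rng.normal(size=10)
w = 0.7

# per-sample losses E_i = (w*x_i - y_i)^2 and their gradients dE_i/dw
grads = 2 * x * (w * x - y)

# option 1: average the per-sample gradients
g1 = grads.mean()

# option 2: differentiate the averaged loss E = mean(E_i), here numerically
E = lambda w_: np.mean((w_ * x - y) ** 2)
eps = 1e-6
g2 = (E(w + eps) - E(w - eps)) / (2 * eps)
```

Both routes produce the same gradient, so either order of averaging and differentiating is fine, as the answer states.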
Why is lasso more robust to outliers compared to ridge?
Let's first consider what an outlier does to the coefficients: If it has low leverage, nothing; If it has high leverage, it pulls the coefficient towards itself (either increasing or decreasing it). When you apply the LASSO penalty to OLS, you penalize the coefficients by summing their absolute values. An outlier with sufficient leverage increases/decreases a coefficient, also affecting the penalty linearly. This will somewhat increase/decrease the penalty on the other coefficients, but not by much. When you apply the ridge penalty, the coefficients are penalized by the sum of their squares. This means that outlyingness will not only increase the OLS loss quadratically, but also the penalty. As such, all the other coefficients might be shrunk considerably more/less (depending on what kind of outlier you're dealing with). This sensitivity of the penalty to changes in the coefficients (and thus to outliers) means that ridge is less robust to outliers than LASSO.
Why is lasso more robust to outliers compared to ridge?
Let's first consider what an outlier does to the coefficients: If it has low leverage, nothing; If it has high leverage, it pulls the coefficient towards itself (either increasing or decreasing it).
Why is lasso more robust to outliers compared to ridge? Let's first consider what an outlier does to the coefficients: If it has low leverage, nothing; If it has high leverage, it pulls the coefficient towards itself (either increasing or decreasing it). When you apply the LASSO penalty to OLS, you penalize the coefficients by summing their absolute values. An outlier with sufficient leverage increases/decreases a coefficient, also affecting the penalty linearly. This will somewhat increase/decrease the penalty on the other coefficients, but not by much. When you apply the ridge penalty, the coefficients are penalized by the sum of their squares. This means that outlyingness will not only increase the OLS loss quadratically, but also the penalty. As such, all the other coefficients might be shrunk considerably more/less (depending on what kind of outlier you're dealing with). This sensitivity of the penalty to changes in the coefficients (and thus to outliers) means that ridge is less robust to outliers than LASSO.
Why is lasso more robust to outliers compared to ridge? Let's first consider what an outlier does to the coefficients: If it has low leverage, nothing; If it has high leverage, it pulls the coefficient towards itself (either increasing or decreasing it).
29,346
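The penalty-sensitivity argument can be made concrete with a toy calculation (the numbers are invented, and this compares the penalty terms only, not full model fits): shifting one coefficient by a fixed $\delta$ changes the lasso penalty by the same amount regardless of the coefficient's size, while the ridge penalty change grows with the coefficient:

```python
import numpy as np

lam = 1.0
delta = 1.0   # outlier-induced shift in one coefficient

for b in [1.0, 5.0, 20.0]:
    lasso_change = lam * (abs(b + delta) - abs(b))    # always lam * delta
    ridge_change = lam * ((b + delta) ** 2 - b ** 2)  # grows with b
    print(b, lasso_change, ridge_change)
```

So a high-leverage point that inflates one coefficient perturbs the ridge objective (and hence the other coefficients) far more than it perturbs the lasso objective.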
Why does the Bayesian posterior concentrate around the minimiser of KL divergence?
Use of logarithms in calculations like this comes from information theory. In the particular case of the KL divergence, the measure can be interpreted as the relative information of two distributions: $$\begin{equation} \begin{aligned} KL(\tilde{f} \parallel f_\theta) &= \int \limits_{-\infty}^\infty \tilde{f}(x) (\log \tilde{f}(x) - \log f_\theta (x)) \ dx \\[6pt] &= \Bigg( \underbrace{- \int \limits_{-\infty}^\infty \tilde{f}(x) \log f_\theta(x) \ dx}_{H(\tilde{f}, f_\theta)} \Bigg) - \Bigg( \underbrace{- \int \limits_{-\infty}^\infty \tilde{f}(x) \log \tilde{f}(x) \ dx}_{H(\tilde{f})} \Bigg), \\[6pt] \end{aligned} \end{equation}$$ where $H(\tilde{f})$ is the entropy of $\tilde{f}$ and $H(\tilde{f}, f_\theta)$ is the cross-entropy of $\tilde{f}$ and $f_\theta$. Entropy can be regarded as a measure of the average rate of information produced by a density (though cross-entropy is a bit more complicated). Minimising the KL divergence for a fixed density $\tilde{f}$ (as in the problem you mention) is equivalent to minimising the cross-entropy, and so this optimisation can be given an information-theoretic interpretation. It is not possible for me to give a good account of information theory, and the properties of information measures, in a short post. However, I would recommend having a look at the field, as it has close connections to statistics. Many statistical measures involving integrals and sums over logarithms of densities are simple combinations of standard information measures used in information theory, and in such cases, they can be given interpretations in terms of the underlying levels of information in various densities, etc.
Why does the Bayesian posterior concentrate around the minimiser of KL divergence?
Use of logarithms in calculations like this comes from information theory. In the particular case of the KL divergence, the measure can be interpreted as the relative information of two distributions
Why does the Bayesian posterior concentrate around the minimiser of KL divergence? Use of logarithms in calculations like this comes from information theory. In the particular case of the KL divergence, the measure can be interpreted as the relative information of two distributions: $$\begin{equation} \begin{aligned} KL(\tilde{f} \parallel f_\theta) &= \int \limits_{-\infty}^\infty \tilde{f}(x) (\log \tilde{f}(x) - \log f_\theta (x)) \ dx \\[6pt] &= \Bigg( \underbrace{- \int \limits_{-\infty}^\infty \tilde{f}(x) \log f_\theta(x) \ dx}_{H(\tilde{f}, f_\theta)} \Bigg) - \Bigg( \underbrace{- \int \limits_{-\infty}^\infty \tilde{f}(x) \log \tilde{f}(x) \ dx}_{H(\tilde{f})} \Bigg), \\[6pt] \end{aligned} \end{equation}$$ where $H(\tilde{f})$ is the entropy of $\tilde{f}$ and $H(\tilde{f}, f_\theta)$ is the cross-entropy of $\tilde{f}$ and $f_\theta$. Entropy can be regarded as a measure of the average rate of information produced by a density (though cross-entropy is a bit more complicated). Minimising the KL divergence for a fixed density $\tilde{f}$ (as in the problem you mention) is equivalent to minimising the cross-entropy, and so this optimisation can be given an information-theoretic interpretation. It is not possible for me to give a good account of information theory, and the properties of information measures, in a short post. However, I would recommend having a look at the field, as it has close connections to statistics. Many statistical measures involving integrals and sums over logarithms of densities are simple combinations of standard information measures used in information theory, and in such cases, they can be given interpretations in terms of the underlying levels of information in various densities, etc.
Why does the Bayesian posterior concentrate around the minimiser of KL divergence? Use of logarithms in calculations like this comes from information theory. In the particular case of the KL divergence, the measure can be interpreted as the relative information of two distributions
29,347
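The decomposition of KL into cross-entropy minus entropy can be checked numerically. A numpy sketch (the two densities are arbitrary normals chosen for illustration, integrated on a fine grid):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def normal_pdf(t, mu, sd):
    return np.exp(-0.5 * ((t - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

f = normal_pdf(x, 0.0, 1.0)   # plays the role of f_tilde
g = normal_pdf(x, 1.0, 1.5)   # plays the role of f_theta

cross_entropy = -np.sum(f * np.log(g)) * dx    # H(f_tilde, f_theta)
entropy = -np.sum(f * np.log(f)) * dx          # H(f_tilde)
kl = np.sum(f * (np.log(f) - np.log(g))) * dx  # KL(f_tilde || f_theta)
```

The numeric KL matches cross_entropy - entropy, and both agree with the closed-form value for these two normals.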
What is the null hypothesis for the individual p-values in multiple regression?
The null hypothesis is $$ H_0: B1 = 0 \: \text{and} \: B2 \in \mathbb{R} \: \text{and} \: A \in \mathbb{R}, $$ which basically means that the null hypothesis does not restrict B2 and A. The alternative hypothesis is $$ H_1: B1 \neq 0 \: \text{and} \: B2 \in \mathbb{R} \: \text{and} \: A \in \mathbb{R}. $$ In a way, the null hypothesis in the multiple regression model is a composite hypothesis. It is "fortunate" that we can construct a pivotal test statistic that does not depend on the true value of B2 and A, so that we do not suffer a penalty from testing a composite null hypothesis. In other words, there are a lot of different distributions of $(Y, X1, X2)$ that are compatible with the null hypothesis $H_0$. However, all of these distributions lead to the same behavior of the test statistic that is used to test $H_0$. In my answer, I have not addressed the distribution of $\epsilon$ and implicitly assumed that it is an independent centered normal random variable. If we only assume something like $$ E[\epsilon \mid X1, X2] = 0 $$ then a similar conclusion holds asymptotically (under regularity assumptions).
What is the null hypothesis for the individual p-values in multiple regression?
The null hypothesis is $$ H_0: B1 = 0 \: \text{and} \: B2 \in \mathbb{R} \: \text{and} \: A \in \mathbb{R}, $$ which basically means that the null hypothesis does not restrict B2 and A. The alternat
What is the null hypothesis for the individual p-values in multiple regression? The null hypothesis is $$ H_0: B1 = 0 \: \text{and} \: B2 \in \mathbb{R} \: \text{and} \: A \in \mathbb{R}, $$ which basically means that the null hypothesis does not restrict B2 and A. The alternative hypothesis is $$ H_1: B1 \neq 0 \: \text{and} \: B2 \in \mathbb{R} \: \text{and} \: A \in \mathbb{R}. $$ In a way, the null hypothesis in the multiple regression model is a composite hypothesis. It is "fortunate" that we can construct a pivotal test statistic that does not depend on the true value of B2 and A, so that we do not suffer a penalty from testing a composite null hypothesis. In other words, there are a lot of different distributions of $(Y, X1, X2)$ that are compatible with the null hypothesis $H_0$. However, all of these distributions lead to the same behavior of the test statistic that is used to test $H_0$. In my answer, I have not addressed the distribution of $\epsilon$ and implicitly assumed that it is an independent centered normal random variable. If we only assume something like $$ E[\epsilon \mid X1, X2] = 0 $$ then a similar conclusion holds asymptotically (under regularity assumptions).
What is the null hypothesis for the individual p-values in multiple regression? The null hypothesis is $$ H_0: B1 = 0 \: \text{and} \: B2 \in \mathbb{R} \: \text{and} \: A \in \mathbb{R}, $$ which basically means that the null hypothesis does not restrict B2 and A. The alternat
29,348
What is the null hypothesis for the individual p-values in multiple regression?
The $p$-values are the result of a series of $t$-tests. The null hypothesis is that $B_j=0$, while the alternative hypothesis (again, for each coefficient) is, $B_j\ne0$ (see here for more details: http://reliawiki.org/index.php/Multiple_Linear_Regression_Analysis#Test_on_Individual_Regression_Coefficients_.28t__Test.29)
What is the null hypothesis for the individual p-values in multiple regression?
The $p$-values are the result of a series of $t$-tests. The null hypothesis is that $B_j=0$, while the alternative hypothesis (again, for each coefficient) is, $B_j\ne0$ (see here for more details: ht
What is the null hypothesis for the individual p-values in multiple regression? The $p$-values are the result of a series of $t$-tests. The null hypothesis is that $B_j=0$, while the alternative hypothesis (again, for each coefficient) is, $B_j\ne0$ (see here for more details: http://reliawiki.org/index.php/Multiple_Linear_Regression_Analysis#Test_on_Individual_Regression_Coefficients_.28t__Test.29)
What is the null hypothesis for the individual p-values in multiple regression? The $p$-values are the result of a series of $t$-tests. The null hypothesis is that $B_j=0$, while the alternative hypothesis (again, for each coefficient) is, $B_j\ne0$ (see here for more details: ht
29,349
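A numpy sketch of where those individual $p$-values come from (simulated data; the coefficient values are invented): each $t$-statistic is the coefficient estimate divided by its standard error, and $t_j$ tests $B_j = 0$ while leaving the other coefficients unrestricted:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n)  # A=1, B1=2, B2=0.5

X = np.column_stack([np.ones(n), x1, x2])   # design matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
df = n - X.shape[1]
s2 = resid @ resid / df                     # unbiased estimate of error variance
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
t = beta / se   # t[1] tests H0: B1 = 0 with A and B2 left free
```

Each t[j] would then be compared against a Student-$t$ distribution with n - 3 degrees of freedom to obtain the $p$-value reported in the regression output.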
What is the null hypothesis for the individual p-values in multiple regression?
You can make the same assumptions for the other variables as for X1. The ANOVA table of the regression gives specific information about each variable's significance as well as the overall significance. As far as regression analysis is concerned, accepting the null hypothesis implies that the coefficient of the variable is zero, given a certain level of significance. If you want to acquire a more intuitive understanding of the issue, you can study more about hypothesis testing.
What is the null hypothesis for the individual p-values in multiple regression?
You can make the same assumptions for the other variables as the X1. The ANOVA table of the regression gives specific information about each variable significance and the overall significance as well.
What is the null hypothesis for the individual p-values in multiple regression? You can make the same assumptions for the other variables as for X1. The ANOVA table of the regression gives specific information about each variable's significance as well as the overall significance. As far as regression analysis is concerned, accepting the null hypothesis implies that the coefficient of the variable is zero, given a certain level of significance. If you want to acquire a more intuitive understanding of the issue, you can study more about hypothesis testing.
What is the null hypothesis for the individual p-values in multiple regression? You can make the same assumptions for the other variables as the X1. The ANOVA table of the regression gives specific information about each variable significance and the overall significance as well.
29,350
Bayesian optimization for non-Gaussian noise
There are Gaussian process models with non-Gaussian likelihood: The prior distribution on the function $f$ is still a Gaussian process but the noise term is not Gaussian anymore, i.e. the likelihood $p(y | f)$ is not assumed to be Gaussian anymore. As a consequence the analytical results are lost and drawing inference now requires approximation methods such as MCMC or Laplace approximation. For several distributions this is implemented and explained as part of the GPML Matlab package, available and explained here. The table of inference methods in section 3d ("A More Detailed Overview") gives an overview of what distributions have been implemented for the likelihood and what inference method is available for each of them. The only articles that I can link you to right now (because I bookmarked them at some point) are on the Student's $t$ distribution: Shah, Amar, Andrew Wilson, and Zoubin Ghahramani. "Student-t processes as alternatives to Gaussian processes." Artificial Intelligence and Statistics. 2014. Jylänki, Pasi, Jarno Vanhatalo, and Aki Vehtari. "Robust Gaussian process regression with a Student-t likelihood." Journal of Machine Learning Research 12.Nov (2011): 3227-3257.
29,351
mlr compared to caret
I've been using caret for a long time, and love it. I only discovered mlr today, and have spent most of the day learning how to use it. I discovered mlr because I was searching for a way to produce a partial dependence plot of variable importances from random forest models. After one day's experience, I'm actually leaning toward switching to mlr! So I would say stick with mlr unless you have a compelling reason to devote time and energy into learning caret.
29,352
Time Series One Step Ahead vs N-Step Ahead
If the model is correct, then the optimal forecast is given by the iterated forecast (i.e. when you forecast each intermediate $y_{T+k}$ to finally produce $\hat y_{T+h}$). The direct forecast (when you estimate the model with $y_t$ as a function of $y_{t-h}$ in which the 'one'-step-ahead forecast is now a $h$-step ahead forecast in 'physical' time) is less efficient in this case, but on the upside it is more robust to model misspecification. Marcellino, Stock and Watson investigated this (in the AR context) in more detail and the abstract reads: “Iterated” multiperiod ahead time series forecasts are made using a one-period ahead model, iterated forward for the desired number of periods, whereas “direct” forecasts are made using a horizon-specific estimated model, where the dependent variable is the multi-period ahead value being forecasted. Which approach is better is an empirical matter: in theory, iterated forecasts are more efficient if correctly specified, but direct forecasts are more robust to model misspecification. This paper compares empirical iterated and direct forecasts from linear univariate and bivariate models by applying simulated out-of-sample methods to 171 U.S. monthly macroeconomic time series spanning 1959 – 2002. The iterated forecasts typically outperform the direct forecasts, particularly if the models can select long lag specifications. The relative performance of the iterated forecasts improves with the forecast horizon. A free version of their paper is available here: https://www.princeton.edu/~mwatson/papers/hstep_3.pdf Massimiliano Marcellino, James H. Stock, Mark W. Watson (2006) "A comparison of direct and iterated multistep AR methods for forecasting macroeconomic time series", Journal of Econometrics, (135):1–2, 499-526, https://doi.org/10.1016/j.jeconom.2005.07.020.
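A minimal numpy sketch of the two strategies the answer contrasts, on a simulated AR(1) series (the data, the horizon $h=4$, and the use of OLS without an intercept are all illustrative assumptions):

```python
import numpy as np

# Simulate an AR(1) series y_t = phi * y_{t-1} + e_t.
rng = np.random.default_rng(1)
T, h, phi = 500, 4, 0.8
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

# Iterated forecast: estimate the one-step model y_t on y_{t-1},
# then iterate the fitted coefficient forward h times.
phi1 = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
iterated = phi1 ** h * y[-1]

# Direct forecast: estimate a horizon-specific model y_t on y_{t-h},
# so its "one-step" prediction is an h-step forecast in physical time.
phih = (y[:-h] @ y[h:]) / (y[:-h] @ y[:-h])
direct = phih * y[-1]

print(iterated, direct)  # both estimate E[y_{T+h} | y_T] = phi^h * y_T
```

With the model correctly specified, both converge to the same forecast, and the iterated version is the more efficient estimator of $\phi^h$; the direct version would be the more robust choice if the AR(1) specification were wrong.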
29,353
Probabilistic interpretation of Thin Plate Smoothing Splines
Let the model of the question be written as \begin{equation} \tag{1} Y_i = \boldsymbol{\phi}(\mathbf{x}_i)^\top\boldsymbol{\beta} + h(\mathbf{x}_i) + \varepsilon_i \end{equation} where $h(\mathbf{x})$ is an unobserved GP with index $\mathbf{x} \in \mathbb{R}^d$ and $\varepsilon_i$ is a normal noise term with variance $\sigma^2$. The GP is usually assumed to be centered, stationary and non-deterministic. Note that the term $\boldsymbol{\phi}(\mathbf{x})^\top \boldsymbol{\beta}$ can be regarded as a (deterministic) GP with kernel $\boldsymbol{\phi}(\mathbf{x})^\top \mathbf{B}\, \boldsymbol{\phi}(\mathbf{x}')$ where $\mathbf{B}$ is an ``infinite-valued'' covariance matrix. Indeed, by taking $\mathbf{B} := \rho \, \mathbf{I}$ with $\rho \to \infty$ we get the kriging equations of the question. This is often named the diffuse prior for $\boldsymbol{\beta}$. A proper posterior for $\boldsymbol{\beta}$ results only when the matrix $\boldsymbol{\Phi}$ has full rank. So the model can equivalently be written as \begin{equation} \tag{2} Y_i = \zeta(\mathbf{x}_i) + \varepsilon_i \end{equation} where $\zeta(\mathbf{x})$ is a GP. The same Bayes interpretation can be used, with restrictions, when $\zeta(\mathbf{x})$ is no longer a GP but rather an Intrinsic Random Function (IRF). The derivation can be found in the book of G. Wahba. Readable presentations of the concept of IRF are e.g. in the book by N. Cressie and the article by Mardia et al cited below. IRFs are similar to the well-known integrated processes in the discrete-time context (such as ARIMA): an IRF is transformed into a classical GP by a kind of differencing operation. Here are two examples of IRFs for $d=1$. Firstly, consider a Wiener process $\zeta(x)$ with its initial condition $\zeta(0) = 0$ replaced by a diffuse initial condition: $\zeta(0)$ is normal with an infinite variance. Once a value $\zeta(x)$ is known, the IRF can be predicted just as the Wiener GP can. 
Secondly, consider an integrated Wiener process given by the equation $$ \text{d}^2 \zeta(x) / \text{d}x^2 = \text{d} W(x)/\text{d}x $$ where $W(x)$ is a Wiener process. To get a GP we now need two scalar parameters: two values $\zeta(x)$ and $\zeta(x')$ for $x \neq x'$, or the values $\zeta(x)$ and $\text{d}\zeta(x) / \text{d}x$ at some chosen $x$. We may consider that the two extra parameters are jointly Gaussian with an infinite $2 \times 2$ covariance matrix. In both examples, as soon as a suitable finite set of observations is available, the IRF can essentially be handled as a GP. Moreover we used a differential operator: $L := \text{d}/ \text{d}x$ and $L := \text{d}^2/ \text{d}x^2$ respectively. The nullspace is a linear space $\mathcal{F}$ of functions $\phi(x)$ such that $L \phi = 0$. It contains the constant function $\phi_1(x)=1$ in the first case and the functions $\phi_1(x)=1$ and $\phi_2(x) = x$ in the second case. Note that $\zeta(x) - \zeta(x + \delta)$ is a GP for any fixed $\delta$ in the first example, and similarly $\zeta(x-\delta) - 2 \zeta(x) + \zeta(x + \delta)$ is a GP in the second example. For a general dimension $d$, consider a linear space $\mathcal{F}$ of functions defined on $\mathbb{R}^d$. We call an increment relative to $\mathcal{F}$ a finite collection of $s$ locations $\mathbf{x}_i \in \mathbb{R}^d$ and $s$ real weights $\nu_i$ such that $$ \sum_{i=1}^s \, \nu_i \,\phi(\mathbf{x}_i) = 0 \text{ for all } \phi \in \mathcal{F}. $$ Consider $\mathcal{F}$ as being the nullspace of our examples. For the first example we can take e.g. $s=2$ with $x_1$ and $x_2$ arbitrary and $\boldsymbol{\nu} = [1, \, -1]$. For the second example we can take $s = 3$ equally spaced $x_i$s and $\boldsymbol{\nu} = [1,\,-2,\,1]$. The definition of an IRF involves a space of functions $\mathcal{F}$ and a function $g(\mathbf{x}, \, \mathbf{x}')$ which is conditionally positive w.r.t. 
$\mathcal{F}$, which means that $$ \sum_{i=1}^s \sum_{j=1}^s \nu_i \nu_j \, g(\mathbf{x}_i, \, \mathbf{x}_j) \geq 0 $$ holds as soon as $[\nu_i,\,\mathbf{x}_i]_{i=1}^s$ is an increment w.r.t. $\mathcal{F}$. From $\mathcal{F}$ and $g(\mathbf{x},\,\mathbf{x}')$ we can make a covariance kernel, hence a GP, as in Mardia et al. We can start from a linear differential operator $L$ and use its nullspace as $\mathcal{F}$; the IRF will then be connected to the equation $L \zeta =$ a Gaussian noise. The computation of the prediction of the IRF is nearly the same as in the question, with $k(\mathbf{x},\,\mathbf{x}')$ replaced by $g(\mathbf{x},\,\mathbf{x}')$, but with the $\phi_i(\mathbf{x})$ now forming a basis of $\mathcal{F}$. The extra constraint $\boldsymbol{\Phi}^\top \boldsymbol{\alpha} = \mathbf{0}$ must be added in the optimisation problem, which guarantees that $\boldsymbol{\alpha}^\top \mathbf{K} \boldsymbol{\alpha} \geq 0$. We can still add more basis functions which are not in $\mathcal{F}$ if needed; this will have the effect of adding a deterministic GP, say $\boldsymbol{\psi}(\mathbf{x})^\top\boldsymbol{\gamma}$, to the IRF $\zeta(\mathbf{x})$ in (2). The thin-plate spline depends on an integer $m$ such that $2m> d$; the space $\mathcal{F}$ contains polynomials of low degree, with dimension $p(m)$ depending on $m$ and $d$. It can be shown that if $E(r)$ is the following function for $r \geq 0$ $$ E(r) := \begin{cases} (-1)^{m + 1 + d /2} \, r^{2m-d} \log r & d \text{ even},\\ r^{2m-d} & d \text{ odd,} \end{cases} $$ then $g(\mathbf{x},\,\mathbf{x}') := E(\|\mathbf{x} - \mathbf{x}'\|)$ defines a kernel that is conditionally positive w.r.t. $\mathcal{F}$. The construction relates to a differential operator $L$. It turns out that for $d=1$ and $m=2$ the thin plate spline is nothing other than the usual natural cubic spline, which relates to the integrated Wiener example above, with $g(x,\,x') = |x - x'|^3$. So (2) is then nothing other than the usual smoothing spline model. 
When $d=2$ and $m=2$ the nullspace has dimension $p(m)=3$ and is generated by the functions $1$, $x_1$ and $x_2$.

Cressie, N. Statistics for Spatial Data. Wiley, 1993.
Mardia, K.V., Kent, J.T., Goodall, C.R. and Little, J.A. Kriging and splines with derivative information. Biometrika (1996), 83, 1, pp. 207-221.
Wahba, G. Spline Models for Observational Data. SIAM, 1990.
Wang, Y. Smoothing Splines: Methods and Applications. Chapman and Hall, 2011.
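A numerical sketch of the $d=1$, $m=2$ case discussed above: interpolation with the kernel $g(x,x') = |x - x'|^3$, nullspace basis $\{1, x\}$, and the constraint $\boldsymbol{\Phi}^\top \boldsymbol{\alpha} = \mathbf{0}$, which reproduces the natural cubic spline. The knots and target function are hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical knots and values to interpolate.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)

# Conditionally positive kernel for d=1, m=2, and the nullspace basis {1, x}.
K = np.abs(x[:, None] - x[None, :]) ** 3
Phi = np.column_stack([np.ones_like(x), x])

# Solve the kriging-like system augmented with the constraint Phi^T alpha = 0.
n, p = len(x), Phi.shape[1]
A = np.block([[K, Phi], [Phi.T, np.zeros((p, p))]])
sol = np.linalg.solve(A, np.concatenate([y, np.zeros(p)]))
alpha, beta = sol[:n], sol[n:]

def f(xnew):
    # Prediction: sum_i alpha_i g(xnew, x_i) + polynomial part in the nullspace.
    return np.abs(xnew - x) ** 3 @ alpha + beta[0] + beta[1] * xnew

print(f(1.0))  # interpolates the data at the knots
```

The smoothing-spline version of (2) would add a ridge term $\lambda \mathbf{I}$ to the $\mathbf{K}$ block instead of interpolating exactly.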
29,354
What does it mean for the training data to be generated by a probability distribution over datasets
Probability distribution over datasets: what are the datasets? How is the probability distribution generated?

Once we can estimate the underlying distribution of the input data, we essentially know how the data are picked and can make good predictions (a generative model). Normally, we assume an underlying distribution according to what we believe (an inductive bias). For example, if we believe that values are likely to be close to zero, we can take a Gaussian distribution with mean $0$ and tune parameters such as the variance when we train. A dataset is, for example, the set of all coin tosses, and the assumed distribution would be binomial. When we maximize the log-likelihood of the actual data points, we get the parameters that make the dataset fit the assumed distribution best.

"The examples are independent of each other." Can you give me an example of where the examples are dependent?

For example, we toss a coin, and if we get a head we toss another, otherwise we do not. Here there is a dependence between subsequent tosses.

"Drawn from the same probability distribution as each other." Suppose the probability distribution is Gaussian. Does "same probability distribution" mean that all the examples are drawn from a Gaussian distribution with the same mean and variance? And what does "this assumption enables us" mean?

Yes. That is why (4) is stated. Once you have the probability distribution from one example, you do not need other examples to describe the data-generating process.

Finally, for the last paragraph of page 122, it is given that the samples follow a Bernoulli distribution. What does this mean intuitively?

It means that each example can be thought of as a coin toss. If the experiment were multiple coin tosses, each toss would be independent with probability of heads $\frac{1}{2}$. Similarly, for any other experiment, the result of each example can be thought of as a coin toss or an $n$-sided die. Generating examples means finding the distribution closest to what we see in the training dataset; that is obtained by assuming a distribution, maximizing the likelihood of the given dataset, and outputting the optimal parameters.
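The "assume a distribution and maximize the likelihood" step can be sketched with the coin-toss example itself: i.i.d. Bernoulli examples, and a grid-search maximum-likelihood estimate of the parameter (the data are simulated, and the grid search stands in for the closed-form solution, which is the sample mean):

```python
import numpy as np

# Simulate i.i.d. Bernoulli examples: each example is an independent coin toss.
rng = np.random.default_rng(42)
p_true = 0.5
tosses = rng.random(10_000) < p_true

# Log-likelihood of the whole dataset under a Bernoulli(p) model.
def log_lik(p, x):
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Maximize over a grid of candidate parameters.
grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([log_lik(p, tosses) for p in grid])]

print(p_hat)  # close to the sample mean, which is close to p_true
```

Because the examples are assumed i.i.d., the dataset log-likelihood factors into a sum over examples, which is exactly what makes this estimation tractable.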
29,355
What good are Rademacher bounds?
Always fun to come back to these... :) Rademacher bounds are still useful in theoretical settings precisely because they're distribution-dependent! While it's unrealistic to know the data distribution ahead of time, you can still use additional context from your setting and the Rademacher bounds to get sharper results than you could otherwise. For instance:
- The tools from Rademacher complexity give us ways to show uniform convergence of empirical processes, which is useful for proving generalization in a variety of settings, like M-estimation, which require flexible function classes that VC dimension-based bounds wouldn't be able to support.
- We can use Rademacher-like sums to draw connections between learnability, stability, and uniform convergence. On a more practical note, this can inform procedures for training with SGD for generalization (SGD with a quickly decaying learning rate generalizes well).
- Perhaps most practically and directly answering the question -- "Is Rademacher complexity still useful without knowledge of the data distribution?" -- is Bartlett and Mendelson 2002, with a firm yes.
In particular, Rademacher complexities satisfy structural results (Section 3.1), which enable us to express complexities of weird function classes (such as neural networks) in terms of simpler ones (such as perceptrons). More precisely, using a technique called symmetrization you can show that your generalization error (or indeed, the uniform convergence error for empirical processes) is bounded by Rademacher averages. These, in turn, allow for control through covering numbers on our function space (so you can come up with bounds for very general classes of functions, from arbitrary parametric classes with a Lipschitz condition to nonparametric bounded Lipschitz or monotonic classes through the fat-shattering dimension). 
With VC dimension, parametric function classes will usually have similar bounds as those derived from the above, but you have to write a proof for each parametric function class you bound! With covering number bounds from Rademacher complexity, you can take advantage of structural results and know that as long as your function class has a certain type of regularity, you can use "building blocks" that let you build-a-complexity-bound, or even bound things you couldn't otherwise (like non-parametric classes with infinite VC dimension). Other approaches to bounding Rademacher averages exist as well, some of which give better bounds than covering numbers (e.g., bracketing).
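As a toy illustration of the quantity these bounds control, here is a Monte Carlo estimate of the empirical Rademacher complexity of a small hypothetical finite class (1-d threshold classifiers evaluated on a fixed sample); the data, class, and sizes are all made up for illustration:

```python
import numpy as np

# Fixed sample and a finite function class F = {x -> sign(x - t)}.
rng = np.random.default_rng(0)
n = 50
x = np.sort(rng.uniform(0, 1, n))
thresholds = np.linspace(0, 1, 21)
F = np.sign(x[None, :] - thresholds[:, None])  # each row: one f on the sample

def empirical_rademacher(F, n_draws=2000):
    # Estimate E_sigma[ sup_f (1/n) sum_i sigma_i f(x_i) ] by Monte Carlo.
    n = F.shape[1]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        total += np.max(F @ sigma) / n           # sup over the class
    return total / n_draws

print(empirical_rademacher(F))
```

For a finite class, Massart's lemma bounds this by roughly $\sqrt{2 \ln |F| / n}$; the covering-number machinery extends the same idea to infinite classes.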
29,356
Get p-value of coefficients in regression models using bootstrap
Suppose we want to test the null hypothesis that a regression coefficient = 0 using the bootstrap, and say we decide on 0.05 as the level of significance. We can generate the sampling distribution for each coefficient using the bootstrap. It is easy to check whether 0 falls within the 95% confidence interval, so we can easily decide whether to reject the null. To get a p-value, we need to check the quantile value of 0 in the sampling distribution. (I am using a quantile-based approach; there are other methods, which can be found in Fox on Regression.) After you get the quantile Q, the p-value is 2*Q or 2*(1-Q), depending on whether Q is less than or greater than 0.5. As an illustration of the approach, consider this:

library(faraway)

# Build linear model
mdl <- lm(divorce ~ ., data = divusa)

# Bootstrap
bootTest <- sapply(1:1e4, function(x) {
  rows <- sample(nrow(divusa), nrow(divusa), replace = TRUE)
  mdl <- lm(divorce ~ ., data = divusa[rows, ])
  return(mdl$coefficients)
})

Here is the model:

summary(mdl)

Call:
lm(formula = divorce ~ ., data = divusa)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.9087 -0.9212 -0.0935  0.7447  3.4689 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) 380.14761   99.20371   3.832 0.000274 ***
year         -0.20312    0.05333  -3.809 0.000297 ***
unemployed   -0.04933    0.05378  -0.917 0.362171    
femlab        0.80793    0.11487   7.033 1.09e-09 ***
marriage      0.14977    0.02382   6.287 2.42e-08 ***
birth        -0.11695    0.01470  -7.957 2.19e-11 ***
military     -0.04276    0.01372  -3.117 0.002652 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.513 on 70 degrees of freedom
Multiple R-squared:  0.9344,    Adjusted R-squared:  0.9288
F-statistic: 166.2 on 6 and 70 DF,  p-value: < 2.2e-16

Notice the p-values. Here is a subset of the regression coefficients generated using the bootstrap:

bootTest[, 1:5]
                    [,1]          [,2]         [,3]         [,4]         [,5]
(Intercept) 335.70970574 372.525260160 569.85830341 338.70069977 344.69261238
year         -0.18107568  -0.201798080  -0.30579380  -0.18125215  -0.18328105
unemployed   -0.02916575   0.006828023   0.01197723  -0.05610887  -0.11463230
femlab        0.79078784   0.842924808   1.02607863   0.77527548   0.76472406
marriage      0.17372382   0.199571033   0.18782967   0.15289119   0.15693996
birth        -0.11613752  -0.118507758  -0.11998122  -0.11666450  -0.13344442
military     -0.04051730  -0.056277118  -0.04062756  -0.05167556  -0.07251748

Generate p-values with the bootstrap:

pvals <- sapply(1:nrow(bootTest), function(x) {
  distribution <- ecdf(bootTest[x, ])
  qt0 <- distribution(0)
  if (qt0 < 0.5) {
    return(2 * qt0)
  } else {
    return(2 * (1 - qt0))
  }
})

Comparing p-values from the t-test and the bootstrap:

# t-test
summary(mdl)$coefficients[, 4]
 (Intercept)         year   unemployed       femlab     marriage        birth     military 
2.744830e-04 2.966776e-04 3.621708e-01 1.085196e-09 2.419284e-08 2.191964e-11 2.652003e-03 

# Bootstrap
> pvals
[1] 0.0008 0.0008 0.2196 0.0000 0.0000 0.0000 0.0188

The coefficients whose t-test p-values are below 1e-8 all come out as exactly 0 with the bootstrap at 1e4 iterations. Furthermore, the ranking of the p-values is comparable as well.
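The 2*Q / 2*(1-Q) rule above is easy to sketch outside of R as well; here is a minimal Python equivalent (quantile_pvalue is a hypothetical helper name, not part of the original R code):

```python
def quantile_pvalue(boot_samples):
    # Q = empirical CDF of the bootstrap distribution evaluated at 0
    q = sum(1 for b in boot_samples if b <= 0) / len(boot_samples)
    # Two-sided p-value: 2*Q if Q < 0.5, else 2*(1 - Q)
    return 2 * q if q < 0.5 else 2 * (1 - q)

# One of four bootstrap replicates falls at or below 0, so Q = 0.25
print(quantile_pvalue([-0.1, 0.5, 0.7, 0.9]))  # 0.5
```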
29,357
Get p-value of coefficients in regression models using bootstrap
Should my bootstrap function return the test statistic calculated for each sample, or the estimate? We can bootstrap both the coefficient estimates and the test statistics but it would be better to bootstrap the $t$-statistics. If we take care how we calculate the $t$-statistic in each bootstrap sample, we increase the power of the test as discussed in Correct creation of the null distribution for bootstrapped 𝑝-values. Should I calculate the proportion of the test statistic/estimate above 0 or above the point estimate of the base model? This is perhaps the most confusing step as the reasoning behind the p-value calculation is different depending on whether we bootstrap coefficient estimates or test statistics. The bootstrap principle states that the bootstrap distribution of $\beta^*$ is close to the sampling distribution of $\hat{\beta}$, and that $\hat{\beta}$ itself is close to the true value $\beta$. This is helpful as we can construct confidence intervals for $\beta$. However, unless the true $\beta$ is indeed equal to 0, the bootstrap sample is not simulated under the null hypothesis $H_0:\beta = 0$. Instead we can "invert" a confidence interval to compute a p-value; for example, $\operatorname{Pr}\left\{\beta^* \geq 0\right\}$ is the p-value for the one-sided right-tail test. The bootstrap principle also states that the distribution of $t^* = (\beta^* - \hat{\beta}) / \operatorname{se}(\beta^*)$ is close to the distribution of $t = (\hat{\beta} - \beta) / \operatorname{se}(\hat{\beta})$. This is even more helpful because the $t$-statistic is (approximately) pivotal. A pivot is a random variable whose distribution doesn't depend on the parameters. In this case, the distribution of the $t$-statistic doesn't depend on the true value of $\beta$. So while $\operatorname{E}\hat{\beta} = 0$ under the null and $\operatorname{E}\hat{\beta}\neq 0$ under the alternative, the $t$-statistic has the same distribution under the null and under the alternative. 
The p-value for the one-sided right-tail test is $\operatorname{Pr}\left\{t^* \geq \hat{t}\right\}$ where the $t^*$s are the bootstrapped test statistics and $\hat{t}$ is the observed test statistic. Should I multiply the result by 2 because the test is bilateral or use absolute values? To report a two-sided p-value, calculate both tail area probabilities and multiply the smaller one (which corresponds to the "more extreme" situation) by 2. I use the same example as @risingStar: a linear regression for the US divorce rate as a function of six predictors + an intercept. @risingStar shows how to bootstrap the coefficient estimates (+1); I show how to bootstrap the $t$-statistics. The p-values for all but the last predictor, military, are pretty much the same with both methods.

bootstrap.summary(beta.hat, t.stat, p)
#> # A tibble: 7 × 4
#>   Name        Estimate `t value` `Pr(>|t|)`
#>   <chr>          <dbl>     <dbl>      <dbl>
#> 1 (Intercept) 380.        3.83     0.000200
#> 2 year         -0.203    -3.81     0.000200
#> 3 unemployed   -0.0493   -0.917    0.292
#> 4 femlab        0.808     7.03     0.000200
#> 5 marriage      0.150     6.29     0.000200
#> 6 birth        -0.117    -7.96     0.000200
#> 7 military     -0.0428   -3.12     0.00160

Aside: None of the p-values are exactly 0 because I use the bias-corrected formula for the p-values, as described in After bootstrapping regression analysis, all p-values are multiple of 0.001996. And finally I plot histograms of the bootstrap distributions of the coefficient estimate [left] and the test statistic [right] for military. These nicely illustrate the effect of "bootstrap pivoting".
R code to bootstrap p-values:

library("tidyverse")

data(divusa, package = "faraway")

model <- function(data) {
  lm(divorce ~ ., data = data)
}

simulator <- function(data) {
  rows <- sample(nrow(data), nrow(data), replace = TRUE)
  data[rows, ]
}

estimator <- function(data) {
  coefficients(model(data))
}

test <- function(data, b.test) {
  fit <- model(data)
  b <- coefficients(fit)
  var <- diag(vcov(fit))
  t <- (b - b.test) / sqrt(var)
  t
}

pvalue <- function(t.star, t.hat, alternative = c("two.sided", "less", "greater")) {
  alternative <- match.arg(alternative)
  p.upper <- (sum(t.star >= t.hat) + 1) / (length(t.star) + 1)
  p.lower <- (sum(t.star <= t.hat) + 1) / (length(t.star) + 1)
  if (alternative == "greater") {
    p.upper
  } else if (alternative == "less") {
    p.lower
  } else {
    # The two-tailed p-value is twice the smaller of the two one-tailed p-values.
    2 * min(p.upper, p.lower)
  }
}

bootstrap.summary <- function(b, t, p) {
  tibble(
    `Name` = names(b),
    `Estimate` = b,
    `t value` = t,
    `Pr(>|t|)` = p
  )
}

set.seed(1234)
B <- 10000

# These are the coefficient estimates, $\{ \hat{\beta}_i \}$, and the $t$ statistics, respectively.
# We can also get those with the `summary` function.
beta.hat <- estimator(divusa)
beta.hat
t.stat <- test(divusa, 0)  # Calculate (beta.hat - 0) / se(beta.hat)
t.stat

# Bootstrap the coefficient estimates.
boot.estimate <- replicate(B, estimator(simulator(divusa)))
# Bootstrap the t statistics: (beta.star - beta.hat) / se(beta.star)
boot.statistic <- replicate(B, test(simulator(divusa), beta.hat))

# Bootstrapped p-values computed two ways:
p <- NULL
for (i in seq(beta.hat)) {
  p <- c(p, pvalue(boot.estimate[i, ], 0))
}
bootstrap.summary(beta.hat, t.stat, p)

p <- NULL
for (i in seq(t.stat)) {
  p <- c(p, pvalue(boot.statistic[i, ], t.stat[i]))
}
bootstrap.summary(beta.hat, t.stat, p)

# The 7th coefficient is the estimate for x = military
i <- 7
pvalue(boot.estimate[i, ], 0)
pvalue(boot.statistic[i, ], t.stat[i])

par(mfrow = c(1, 2))
hist(boot.estimate[i, ],
  breaks = 50, freq = TRUE, xlab = NULL, ylab = NULL,
  main = paste0("Histogram of β* (x = ", names(beta.hat)[i], ")"),
  font.main = 1
)
hist(boot.statistic[i, ],
  breaks = 50, freq = TRUE, xlab = NULL, ylab = NULL,
  main = paste0("Histogram of t* (x = ", names(t.stat)[i], ")"),
  font.main = 1
)
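As a cross-check of the pvalue() logic, the same bias-corrected two-sided p-value can be sketched in pure Python (a hedged sketch with a hypothetical helper name, not part of the original R code):

```python
def boot_pvalue(t_star, t_hat):
    # Bias-corrected bootstrap p-value: (#{t* >= t_hat} + 1) / (B + 1),
    # so the estimate is never exactly zero.
    B = len(t_star)
    p_upper = (sum(1 for t in t_star if t >= t_hat) + 1) / (B + 1)
    p_lower = (sum(1 for t in t_star if t <= t_hat) + 1) / (B + 1)
    # Two-sided: twice the smaller of the two one-tailed p-values
    return 2 * min(p_upper, p_lower)

# An observed statistic far in the right tail of 5 bootstrap replicates
print(boot_pvalue([-2, -1, 0, 1, 2], 3))  # 2 * (0 + 1) / (5 + 1) = 1/3
```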
29,358
Comparing 0/10 to 0/20
Suppose we know the probability of success in an attempt. In that case we could compute the probability of 0 successes out of 10 and out of 20 cases. However, here we go the other way around: we don't know the probability, we have the data, and we try to estimate the probability. The more cases we have, the more certain we can be regarding the results. If I flip a coin once and it comes up heads, you won't be very certain that it is double-headed. If I throw it 1,000 times and it comes up all heads, it is unlikely that it is balanced. There are methods designed to take the number of trials into account when producing estimates. One of them is additive smoothing, which @abukaj comments on above. In additive smoothing we add extra pseudo-samples into consideration. In our case, in addition to the trials we have seen, we add two more - one successful and one failed. In the first case the smoothed probability will be $\frac{1+0}{10+1+1} = \frac{1}{12} \approx 8.3\%$. In the second case we will get $\frac{1+0}{20+1+1} = \frac{1}{22} \approx 4.5\%$. Note that additive smoothing is only one method of estimation. You will get different results with different methods. Even with additive smoothing itself, you would have gotten different results if you had added 4 pseudo-samples. Another method is using the confidence interval, as @mdewey suggested. The more samples we have, the shorter the confidence interval will be. The size of the confidence interval is proportional to the inverse square root of the sample size - $\frac{1}{\sqrt{n}}$. Therefore, doubling the number of samples will lead to a $\sqrt{2}$-times shorter confidence interval. The mean in both cases is 0. If we take a confidence level of 90% ($z = 1.645$), in the first case we get $0 + \frac{1.645}{\sqrt{10}} \approx 52\%$, and in the second case $0 + \frac{1.645}{\sqrt{20}} \approx 37\%$. In case of missing data, there is uncertainty. The assumptions you make and the external data you use will change what you get.
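The two calculations above can be checked with a short Python sketch (the function name is mine, not from the answer):

```python
import math

def additive_smoothing(successes, trials, pseudo=1):
    # Add `pseudo` pseudo-successes and `pseudo` pseudo-failures
    return (successes + pseudo) / (trials + 2 * pseudo)

print(additive_smoothing(0, 10))  # 1/12 ≈ 0.083
print(additive_smoothing(0, 20))  # 1/22 ≈ 0.045

# 90% interval half-width shrinks like 1/sqrt(n); z = 1.645
print(1.645 / math.sqrt(10))  # ≈ 0.52
print(1.645 / math.sqrt(20))  # ≈ 0.37
```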
29,359
Comparing 0/10 to 0/20
Extending the idea of invoking confidence intervals, there is a concept of an exact binomial interval. The binomial distribution is that of the total number of successes in independent trials that each end up with either 0 (failure) or 1 (success). The probability of obtaining 1 (success) is traditionally denoted $p$, and its complement is $q=1-p$. Then the standard probability result is that the probability of exactly $k$ successes in $n$ trials is $$ p_{n,k} = {n \choose k} p^k q^{n-k} = \frac{n!}{k!(n-k)!} p^k q^{n-k} $$ The idea of the confidence interval is to bound a set of possible values of the model parameter (here, the probability of success $p$) so that we can make probabilistic (well, frequentist) statements about whether the true parameter value is inside this interval (namely, that if we repeat the probabilistic experiment of making 10 or 20 trials, and construct the confidence interval in a specified way, we will observe that the true value of the parameter is inside the interval 95% of the time). In this case, we can solve for $p$ in that formula: $$ p_{n,0}=(1-p)^n $$ So if we want a 95% one-sided interval, we set $p_{n,0}=5\%$ and solve for the $p$ that makes the probability of the observed zero count exactly 5%. For $n=20$, the answer is $[0\%, 13.9\%]$ (i.e., at the extreme, if the probability of a success in each trial is 13.9%, then the probability of observing zero successes is 5%). For $n=10$, the answer is $[0\%, 25.9\%]$. So from a sample of $n=20$ we learned more than from the sample of $n=10$, in the sense that we can "exclude" the range $[13.9\%, 25.9\%]$ that the sample of $n=10$ still leaves as plausible.
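The endpoints quoted above follow from solving $(1-p)^n = 0.05$ for $p$; a quick numerical check (the helper name is hypothetical):

```python
def upper_bound(n, alpha=0.05):
    # Largest p consistent at level alpha with observing 0 successes in n trials:
    # solve (1 - p)^n = alpha  =>  p = 1 - alpha^(1/n)
    return 1 - alpha ** (1 / n)

print(upper_bound(10))  # ≈ 0.259
print(upper_bound(20))  # ≈ 0.139
```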
29,360
Comparing 0/10 to 0/20
A Bayesian Approach

Let $X_i$ for $i=1,\ldots, n$ be a series of IID Bernoulli random variables with parameter $p$. Let us represent our uncertainty about the parameter $p$ by assuming it follows the Beta distribution with hyperparameters $\alpha$ and $\beta$. The likelihood function is Bernoulli and the Beta distribution is a conjugate prior for the Bernoulli distribution, hence the posterior follows the Beta distribution. Furthermore, the posterior is parameterized by: $$ \hat{\alpha} = \alpha + \sum_{i=1}^n X_i \quad \quad \hat{\beta} = \beta + n - \sum_{i=1}^n X_i$$ Consequently: \begin{align*} \mathrm{E}[p \mid X_1, \ldots, X_n] &= \frac{\hat{\alpha}}{\hat{\alpha} + \hat{\beta}}\\ &= \frac{\alpha + \sum_{i=1}^n X_i }{\alpha + \beta + n} \end{align*} Thus if you see 10 failures, your expectation of $p$ is $\frac{\alpha}{\alpha + \beta + 10}$, and if you see 20 failures, your expectation of $p$ is $\frac{\alpha}{\alpha + \beta + 20}$. The more failures you see, the lower your expectation of $p$. Is this a reasonable argument? It depends on how you feel about Bayesian statistics - on whether you are willing to model uncertainty over some parameter $p$ using the mechanics of probability. And it depends on how reasonable your choice of prior is.
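The posterior-mean formula takes a couple of lines; with a uniform prior ($\alpha = \beta = 1$) it reproduces the additive-smoothing numbers from the earlier answer (the helper name is mine):

```python
def posterior_mean(successes, failures, alpha=1.0, beta=1.0):
    # Beta(alpha, beta) prior on p; posterior mean after the observed trials
    return (alpha + successes) / (alpha + beta + successes + failures)

print(posterior_mean(0, 10))  # 1/12 ≈ 0.083
print(posterior_mean(0, 20))  # 1/22 ≈ 0.045
```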
29,361
Applying stochastic variational inference to Bayesian Mixture of Gaussian
First, a few notes that help me make sense of the SVI paper:

In calculating the intermediate value for the variational parameter of the global parameters, we sample one data point and pretend our entire data set of size $N$ was that single point, $N$ times.

$\eta_g$ is the natural parameter for the full conditional of the global variable $\beta$. The notation is used to stress that it's a function of the conditioned variables, including the observed data.

In a mixture of $k$ Gaussians, our global parameters are the mean and precision (inverse variance) parameters $\mu_k, \tau_k$ for each component. That is, $\eta_g$ is the natural parameter for this distribution, a Normal-Gamma of the form $$\mu, \tau \sim N(\mu \mid \gamma, \tau(2\alpha -1))\, \mathrm{Ga}(\tau \mid \alpha, \beta)$$ with $\eta_0 = 2\alpha - 1$, $\eta_1 = \gamma(2\alpha -1)$ and $\eta_2 = 2\beta+\gamma^2(2\alpha-1)$. (Bernardo and Smith, Bayesian Theory; note this varies a little from the four-parameter Normal-Gamma you'll commonly see.) We'll use $a, b, m$ to refer to the variational parameters for $\alpha, \beta, \mu$.

The full conditional of $\mu_k, \tau_k$ is a Normal-Gamma with params $\dot\eta + \langle\sum_N z_{n,k},\ \sum_N z_{n,k}x_n,\ \sum_N z_{n,k}x^2_{n}\rangle$, where $\dot\eta$ is the prior. (The $z_{n,k}$ in there can also be confusing; it makes sense starting with an $\exp(\ln(p))$ trick applied to $\prod_N p(x_n|z_n, \alpha, \beta, \gamma) = \prod_N\prod_K\big(p(x_n|\alpha_k,\beta_k,\gamma_k)\big)^{z_{n,k}}$, and ending with a fair amount of algebra left to the reader.)
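The $(\alpha, \beta, \gamma) \leftrightarrow (\eta_0, \eta_1, \eta_2)$ mapping above is easy to get wrong, so here is a small sketch that round-trips it (pure-Python helpers of my own naming, not from the original code):

```python
def to_natural(alpha, beta, gamma):
    # (alpha, beta, gamma) -> natural parameters (eta0, eta1, eta2) as in the text
    eta0 = 2 * alpha - 1
    eta1 = gamma * eta0
    eta2 = 2 * beta + gamma ** 2 * eta0
    return eta0, eta1, eta2

def from_natural(eta0, eta1, eta2):
    # Invert the mapping to recover the mean parameters
    alpha = (eta0 + 1) / 2
    gamma = eta1 / eta0
    beta = (eta2 - eta1 ** 2 / eta0) / 2
    return alpha, beta, gamma

print(from_natural(*to_natural(1.5, 2.0, 0.5)))  # (1.5, 2.0, 0.5)
```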
With that, we can complete step (5) of the SVI pseudocode with:

$$\phi_{n,k} \propto \exp\Big(\ln(\pi) + \mathbb E_q \big[\ln p(x_n|\alpha_k, \beta_k, \gamma_k)\big]\Big) = \exp\Big(\ln(\pi) + \mathbb E_q \Big[\langle \mu_k\tau_k, \tfrac{-\tau_k}{2} \rangle \cdot \langle x_n, x_n^2\rangle - \tfrac{\mu_k^2\tau_k - \ln \tau_k}{2}\Big]\Big)$$

Updating the global parameters is easier, since each parameter corresponds to a count of the data or one of its sufficient statistics:

$$ \hat \lambda = \dot \eta + N\phi_n \langle 1, x, x^2 \rangle $$

Here's what the marginal likelihood of the data looks like over many iterations, when trained on very artificial, easily separable data (code below). The first plot shows the likelihood with initial, random variational parameters and $0$ iterations; each subsequent one is after the next power of two iterations. In the code, $a, b, m$ refer to the variational parameters for $\alpha, \beta, \mu$.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Aug 12 12:49:15 2018

@author: SeanEaster
"""

import numpy as np
from matplotlib import pylab as plt
from scipy.stats import t
from scipy.special import digamma

def calc_rho(t, delay=16, forgetting=1.):
    return np.power(t + delay, -forgetting)

# These are priors for mu, alpha and beta
m_prior, alpha_prior, beta_prior = 0., 1., 1.
eta_0 = 2 * alpha_prior - 1
eta_1 = m_prior * (2 * alpha_prior - 1)
eta_2 = 2 * beta_prior + np.power(m_prior, 2.) * (2 * alpha_prior - 1)

k = 3
eta_shape = (k, 3)
eta_prior = np.ones(eta_shape)
eta_prior[:, 0] = eta_0
eta_prior[:, 1] = eta_1
eta_prior[:, 2] = eta_2

np.random.seed(123)

size = 1000
dummy_data = np.concatenate((
    np.random.normal(-1., scale=.25, size=size),
    np.random.normal(0., scale=.25, size=size),
    np.random.normal(1., scale=.25, size=size)
))
N = len(dummy_data)
S = 1

# randomly init global params
alpha = np.random.gamma(3., scale=1. / 3., size=k)
m = np.random.normal(scale=1, size=k)
beta = np.random.gamma(3., scale=1. / 3., size=k)

eta = np.zeros(eta_shape)
eta[:, 0] = 2 * alpha - 1
eta[:, 1] = m * eta[:, 0]
eta[:, 2] = 2. * beta + np.power(m, 2.) * eta[:, 0]

phi = np.random.dirichlet(np.ones(k) / k, size=dummy_data.shape[0])

nrows, ncols = 4, 5
total_plots = nrows * ncols
total_iters = np.power(2, total_plots - 1)
iter_idx = 0
x = np.linspace(dummy_data.min(), dummy_data.max(), num=200)

while iter_idx < total_iters:

    if np.log2(iter_idx + 1) % 1 == 0:
        alpha = 0.5 * (eta[:, 0] + 1)
        beta = 0.5 * (eta[:, 2] - np.power(eta[:, 1], 2.) / eta[:, 0])
        m = eta[:, 1] / eta[:, 0]
        idx = int(np.log2(iter_idx + 1)) + 1
        f = plt.subplot(nrows, ncols, idx)
        s = np.zeros(x.shape)
        for _ in range(k):
            y = t.pdf(x, alpha[_], m[_], 2 * beta[_] / (2 * alpha[_] - 1))
            s += y
            plt.plot(x, y)
        plt.plot(x, s)
        f.axes.get_xaxis().set_visible(False)
        f.axes.get_yaxis().set_visible(False)

    # randomly sample data point, update parameters
    interm_eta = np.zeros(eta_shape)
    for _ in range(S):
        datum = np.random.choice(dummy_data, 1)

        # mean params for ease of calculating expectations
        alpha = 0.5 * (eta[:, 0] + 1)
        beta = 0.5 * (eta[:, 2] - np.power(eta[:, 1], 2) / eta[:, 0])
        m = eta[:, 1] / eta[:, 0]

        exp_mu = m
        exp_tau = alpha / beta
        exp_tau_m_sq = 1. / (2 * alpha - 1) + np.power(m, 2.) * alpha / beta
        exp_log_tau = digamma(alpha) - np.log(beta)

        like_term = datum * (exp_mu * exp_tau) - np.power(datum, 2.) * exp_tau / 2 \
            - (0.5 * exp_tau_m_sq - 0.5 * exp_log_tau)
        log_phi = np.log(1. / k) + like_term
        phi = np.exp(log_phi)
        phi = phi / phi.sum()

        interm_eta[:, 0] += phi
        interm_eta[:, 1] += phi * datum
        interm_eta[:, 2] += phi * np.power(datum, 2.)

    interm_eta = interm_eta * N / S
    interm_eta += eta_prior

    rho = calc_rho(iter_idx + 1)
    eta = (1 - rho) * eta + rho * interm_eta
    iter_idx += 1
Applying stochastic variational inference to Bayesian Mixture of Gaussian
This tutorial (https://chrisdxie.files.wordpress.com/2016/06/in-depth-variational-inference-tutorial.pdf) answers most of your questions, and it is probably easier to understand than the original SVI paper, as it works through all of the details of implementing SVI (as well as coordinate ascent VI and Gibbs sampling) for a Gaussian mixture model with known variance.
Applying stochastic variational inference to Bayesian Mixture of Gaussian
"Local variational parameters" are the variational parameters of the local variables. E.g., in a GMM they are the cluster assignments. They are "local" because the latent variable for each data point $x_i$ corresponds only to that point. In the paper this is denoted by $z_i$, or $z$ for the entire data vector. We place a variational distribution over these $z_i$'s; because of the mean-field assumption each one has its own, and we denote each one by $q(z_i)$. These distributions have parameters. E.g., if they are categorical, they have a vector of probabilities that must sum to 1. These are the (local) variational parameters, denoted by $\phi$. The "global variational parameters" are the parameters of the global variables. The global variables are denoted in the paper by $\beta$; in a GMM they are the cluster means. They are "global" because they don't correspond to a specific data point. They also have a distribution, denoted by $q(\beta)$, and this distribution has parameters denoted in the paper by $\lambda$. Maybe you are confused to see $z$'s also in the update formula for the global parameters, but this is a property of the model: the global parameters depend on the local ones. E.g., in a GMM, the estimate of the cluster means depends on the estimate of which points belong to which cluster. And vice versa, the local parameter update depends on the global parameters, i.e., the estimate of the cluster assignments depends on where I think the means are. Note that the update rule for the cluster assignments ($z$'s) doesn't suffer when you increase the data size, so it remains a CAVI update rule (for the exponential family, which has a simple update formula of just updating the natural parameters; in a GMM the exponential-family assumption is valid, as all of the model's distributions belong to it).
So, for the $i$'th point, we update the cluster assignment (step 5 in the algorithm): $$\phi_i^{(t+1)} = \mathbb E_{q} [\eta_{\mathcal l} (x_i, \beta)] $$ The expectation is w.r.t. all the other variational distributions ($q$'s) except $q(\phi_i)$; in a GMM, because of the model structure, this means it is only w.r.t. the $\beta$'s. $\eta$ is the natural parameter of the resulting distribution, and $\mathcal l$ denotes that it's the local one. It might still depend on the values of the data and all the other parameters; again, in a GMM, because of the model structure, it depends only on the $i$'th data point $x_i$ and the global parameters $\beta$. It won't depend on other $z$'s. The CAVI update rule for the global parameters $\beta$ might be hard to scale, as it requires going over the entire data set. To "combat" this, SVI uses (stochastic) gradient ascent instead of coordinate ascent (CAVI) for the global parameters. It does so by taking the natural gradient in Riemannian space instead of the regular gradient in Euclidean space. After a lot of math, this turns out to be almost identical to CAVI. In CAVI the update rule is (almost) the term specified in step 6 of the algorithm: $$ \lambda^{(t+1)} = \mathbb E_{q} [\eta_{\mathcal g} (x, z)] $$ In regular CAVI you would take the expectation w.r.t. the entire data set ($x$'s) and local variables ($z$'s). In SVI you replace it with $N$ replications of the same (randomly sampled) $x_i, z_i$.
E.g., for a GMM, and for $\beta_1$, supposing 1-D data, the true (natural-parameter) CAVI update rules will be (corresponding to the $\mu/\sigma^2$ and $-1/(2\sigma^2)$ natural parameters): $$\lambda_{11}^{(t+1)} = \sum_{i=1}^{n} \mathbb E[z_{i1}]x_i \\ \lambda_{12}^{(t+1)} = -0.5(\frac{1}{\sigma^2} + \sum_{i=1}^n \mathbb E[z_{i1}]) $$ These turn into: $$\lambda_{11}^{(t+1)} = n \cdot \mathbb E[z_{i1}]x_i \\ \lambda_{12}^{(t+1)} = -0.5(\frac{1}{\sigma^2} + n \cdot \mathbb E[z_{i1}]) $$ As mentioned, this is the CAVI update, but we want gradient ascent, and so step 7 does the (stochastic) gradient-ascent step. If you want to learn more, I suggest you check out my YouTube video on the topic, and my Medium article.
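To make steps 5-7 concrete, here is a minimal NumPy sketch of SVI for a 1-D, $K=2$ GMM with known unit variance and uniform mixing weights. This is a toy illustration of the updates above (not code from the paper); `lam1` and `lam2` are assumed names for the global variational natural parameters corresponding to $\mu/\sigma^2$ and $-1/(2\sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])  # toy 1-D data
n, K, sigma2 = len(x), 2, 1.0

# Global variational natural parameters of q(beta_k), one pair per cluster,
# corresponding to (mu/sigma^2, -1/(2 sigma^2)) of a Gaussian over each mean
lam1 = rng.normal(size=K)
lam2 = -0.5 * np.ones(K)

def rho(t, delay=1.0, kappa=0.7):
    # Robbins-Monro step sizes: sum of rho_t diverges, sum of rho_t^2 converges
    return (t + delay) ** -kappa

for t in range(1, 2001):
    i = rng.integers(n)            # sample one data point
    mu = -0.5 * lam1 / lam2        # mean of q(beta_k) from the natural params
    var = -0.5 / lam2              # variance of q(beta_k)
    # Local step (step 5): phi_k proportional to exp E_q[log p(x_i | beta_k)]
    logp = -0.5 * ((x[i] - mu) ** 2 + var)
    phi = np.exp(logp - logp.max())
    phi /= phi.sum()
    # Intermediate global parameters (step 6): pretend the whole data set is
    # n copies of x_i, i.e. (n E[z_ik] x_i, -0.5 (1/sigma^2 + n E[z_ik]))
    hat1 = n * phi * x[i]
    hat2 = -0.5 * (1.0 / sigma2 + n * phi)
    # Natural-gradient step (step 7): convex combination with the CAVI target
    r = rho(t)
    lam1 = (1 - r) * lam1 + r * hat1
    lam2 = (1 - r) * lam2 + r * hat2

print(np.sort(-0.5 * lam1 / lam2))  # estimated posterior means of the two clusters
```

With well-separated data the two recovered means typically land near the true component means; the key point is that each iteration touches a single data point, yet the update has the same fixed point as full CAVI.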
Difference between Adjusted R Squared and Predicted R Squared
I won't attempt to give you a highly technical answer here. I am inclined to trust the PRESS statistic more than adjusted R-squared. Adjusted R-squared is still an "in-sample" measure, while PRESS is an "out-of-sample" measure, and I would consider the out-of-sample measure to be more powerful in general. Additionally, adjusted R-squared might not be very different from R-squared if you have a small number of predictors compared to the number of observations you are fitting, so it might not be as informative as the PRESS statistic. However, if what you are describing is correct and the PRESS statistic is calculated correctly, then it is clear that the predictive performance of your model suffers when you add predictors. But it is also not clear why your PRESS statistic should suffer so much relative to the adjusted R-squared; it doesn't feel like that should be the case. I would recommend checking that you are calculating the PRESS statistic correctly (if it is not something provided by the tool you are using to fit the model). It is quite difficult to diagnose what is happening without more detail of the issue at hand.
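For concreteness, here is a small NumPy sketch (an illustration, not from the post) of how the PRESS statistic is usually computed for ordinary least squares. The leave-one-out prediction error has the closed form $e_i/(1-h_{ii})$, with $h_{ii}$ the leverage, so no refitting is needed, and predicted R-squared is $1-\mathrm{PRESS}/\mathrm{SST}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # design with intercept
beta = np.array([1.0, 2.0, 0.0, 0.0])                       # only one real predictor
y = X @ beta + rng.normal(size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS fit
resid = y - X @ b

# Leverages h_ii = diagonal of the hat matrix H = X (X'X)^{-1} X'
h = np.diag(X @ np.linalg.solve(X.T @ X, X.T))

# PRESS: sum of squared leave-one-out errors, via e_(i) = e_i / (1 - h_ii)
press = np.sum((resid / (1 - h)) ** 2)

sst = np.sum((y - y.mean()) ** 2)
r2 = 1 - np.sum(resid ** 2) / sst       # in-sample R^2
pred_r2 = 1 - press / sst               # predicted R^2

print(round(r2, 3), round(pred_r2, 3))
```

Because $0 < 1-h_{ii} < 1$, PRESS always exceeds the in-sample SSE, so predicted R-squared always sits below R-squared; a very large gap, as in the question, points at a few high-leverage points dominating the leave-one-out errors.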
Difference between Adjusted R Squared and Predicted R Squared
I will give a shot at a technical answer. Let us assume that all regression assumptions are met and that the predictors have a multivariate normal distribution. In notation, we can summarize this as $Y=\beta^{\top}X+\epsilon, \text{with} \quad X \sim N(\mu_X, \Sigma_X),\quad \epsilon \sim N(0,\sigma_\epsilon^2)$ with $Y$ representing the dependent variable, $X$ the independent variables/predictors, and $\epsilon$ the error. When using adjusted R-squared, what we actually want to estimate is $$\rho^2=1-\frac{\sigma_\epsilon^2}{Var(Y)}.$$ In words: this is the amount of variance of the dependent variable that the best linear function $f$ (as represented by the true regression weights $\beta$) explains. On a side note, I wrote a paper [1] that shows that there are often better ways to do this than standard adjusted R-squared. For predicted R-squared, we use almost the same formula, only with a different error term. Instead of estimating the irreducible error $\sigma_\epsilon^2$, we are interested in $$E_{X,Y}([\hat{f}(X)-Y]^2)$$ where $\hat{f}$ is an estimate of the best function $f$ that we got from applying linear regression on a training set $D$. Thus, the population value of predicted R-squared is $$\rho_c^2=1-\frac{E_{X,Y}([\hat{f}(X)-Y]^2)}{Var(Y)}$$ This is thus the amount of variance that the particular function $\hat{f}$ explains in the population. We can decompose (this is the start of the well-known bias-variance decomposition from machine learning) $$E_{X,Y}([\hat{f}(X)-Y]^2)=E_{X,Y}([\hat{f}(X)-f(X)]^2)+E_{X,Y}([f(X)-Y]^2)=E_{X,Y}([\hat{f}(X)-f(X)]^2)+\sigma_\epsilon^2.$$ Note that thus $E_{X,Y}([\hat{f}(X)-Y]^2) \geq\sigma_\epsilon^2$, with equality if $\hat{f}=f$, which is generally not the case. In words: the error that the estimated function $\hat{f}$ makes consists of the difference between the estimated function $\hat{f}$ and the true function $f$, and the difference between the true function $f$ and the true value $Y$.
Or, in other words, $\rho^2$ is an upper bound for $\rho^2_c$: no fitted function can predict better than the irreducible error $\sigma_\epsilon^2$ allows. So which measure is better for model selection? It depends. If you want to select the set of predictors that will lead to the most accurate predictions based on the current sample, then predicted R-squared is better. If you want to select the set of predictors that would lead to the most accurate predictions if you had the whole population available (which is arguably the question we often want to ask when we do explanatory modeling), then adjusted R-squared is better. One word of warning: predicted-R-squared estimation via cross-validation works under rather minimal assumptions (only independence is needed). In contrast, adjusted-R-squared estimation generally relies on all the regression assumptions plus the assumption that the predictors are multivariate normal. References $[1]$: Karch J (2020): Improving on adjusted R-squared. Collabra: Psychology 6 (1): 45. (link)
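The bound relating the two population quantities can be checked with a quick Monte Carlo sketch (an illustration under the stated multivariate-normal assumptions, not part of the answer above): fit $\hat f$ on a small training sample, approximate $E_{X,Y}[(\hat f(X)-Y)^2]$ on a very large fresh sample, and compare the implied $\rho_c^2$ with the population $\rho^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, sigma_eps = 5, 1.0
beta = rng.normal(size=p)                 # true regression weights

# With X ~ N(0, I), Var(Y) = ||beta||^2 + sigma_eps^2, so rho^2 is known exactly
var_y = beta @ beta + sigma_eps ** 2
rho2 = 1 - sigma_eps ** 2 / var_y

def draw(n):
    X = rng.normal(size=(n, p))
    return X, X @ beta + rng.normal(scale=sigma_eps, size=n)

# Fit f-hat on a small training sample
X_tr, y_tr = draw(30)
b_hat = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]

# Approximate the population prediction error on a huge fresh sample
X_te, y_te = draw(200_000)
mse = np.mean((y_te - X_te @ b_hat) ** 2)
rho2_c = 1 - mse / var_y

print(round(rho2, 3), round(rho2_c, 3))
```

The gap between the two shrinks as the training sample grows, since $E_{X,Y}[(\hat f(X)-f(X))^2]$ goes to zero; with only 30 training points it is clearly visible.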
Reference request: Classical statistics for working data scientists
Larry Wasserman's All of Statistics is a nice book for getting a whirlwind tour of mathematical statistics. It was the first book on mathematical statistics I used myself. It includes the classics like hypothesis testing and maximum likelihood estimation, but it also has plenty of coverage of more recently developed but equally important topics like bootstrapping. Wasserman always has one foot in statistics and the other foot in machine learning, which I think all contemporary data analysts should do; if you're only familiar with one field of the two, you're going to be missing a lot. Also, the book has a lot of good exercises. If you have a background in real analysis and you want the raw, uncut stuff, by which I mean a measure-theoretic treatment of probability and statistics, try Mark J. Schervish's Theory of Statistics. Schervish is half of DeGroot and Schervish, whose less technical book Probability and Statistics is maybe the most popular book on mathematical statistics today. Theory of Statistics is a helpfully talky book for a topic usually reserved for graduate students who are supposed to do all the work themselves. To be quite honest, I found this book very hard (although not as hard as Jun Shao's Mathematical Statistics) and eventually came to feel the immense effort required to master it wasn't a good use of my time as an applied data analyst. But I still learned a lot and came away with a good understanding of what measure theory is and how it can be used to clean up hairy theoretical difficulties that arise in the more naive traditional approach to probability theory. I also came to better appreciate the similarities and differences of exchangeability and independence.
Reference request: Classical statistics for working data scientists
Aside from Kodiologist's very good suggestions (+1), I would also recommend looking at the subject of observational studies. I think it is a very underappreciated field among data scientists, despite the fact that in many cases the data analysed are observational in nature. I think this is because the bulk of the bibliography (especially in Biostatistics) assumes at least some quasi-experimental design is already in place. Paul Rosenbaum's books Observational Studies and Design of Observational Studies are some of the most commonly used references.
Regression with zero inflated continuous response variable using gradient boosting trees and random forest
Updated problem statement

Given data:

- comprised of 50% zero padding, 50% of something else
- sufficient in size and complexity that xgboost (or equivalent) is needed
- nonzero data described by a multivariate linear relationship, so multivariate regression is apropos

Show:

- HOW to split the task into "is zero vs. is not" and "if not, then fit linear"
- WHEN to ...

Solution

Here is our data. We can use the Kaggle "Human Resource Analytics" challenge dataset, but with a modified goal. In the challenge, the goal is to predict "whether" the employee will leave, so the output is the class, not the regression. We have to modify that for our purposes. Let's suppose that the "satisfaction" is some HR self-congratulatory hack and poorly represents actual satisfaction. Let's presume also that a strongly, truly satisfied employee doesn't tend to leave, and one that is unsatisfied does tend to leave.

Process:

- first we use a gbm (xgboost or other) on all columns but satisfaction level and "left" to determine if they left.
- second we use the "left" class to regress on satisfaction.
- finally we compare to see if there are two fundamentally different sets of "physics" driving satisfaction.

Execution

I am going to use r + 'h2o', but the process and results should generalize to any gradient boosted machine including xgboost. I like the H2O flow interface through the browser. I also like to use a random forest as a robust estimator of central tendency. It is really hard to over-fit a random forest.
Fit of nominal (did employee leave)

#library
library(h2o) #gbm

#spin up h2o
h2o.init(nthreads = -1) #use this computer to the max

#import data
mydata <- h2o.importFile("HR_comma_sep.csv")
mydata[,7] <- as.factor(mydata[,7])

#split data
splits <- h2o.splitFrame(mydata, c(0.8))
train.hex <- h2o.assign(splits[[1]], "train.hex")
valid.hex <- h2o.assign(splits[[2]], "valid.hex")

#stage for gbm
idxx <- 1:10
idxx <- idxx[-c(1,7)]
idxy <- 7
Nn <- 300
Lambda <- 0.1

#fit data
my_fit.gbm <- h2o.gbm(y=idxy, x=idxx,
                      training_frame = train.hex,
                      validation_frame = valid.hex,
                      model_id = "my_fit.gbm",
                      ntrees=Nn,
                      learn_rate = Lambda,
                      score_each_iteration = TRUE)

h2o.confusionMatrix(my_fit.gbm)

The purpose of training/validation is to "dial in the parameters" to a decent level, and to estimate operational uncertainty. When the dials are set, and we have estimates of what the step 1 errors are, then we train on the whole data to move to the second step. In this case I am moving fast, so that is not done here; I predict on the model used for tuning parameters. Convergence is fair, although the example here is nothing close to rigorous.

Here is the baseline RF:

my_fit.rf <- h2o.randomForest(y=idxy, x=idxx,
                              training_frame = train.hex,
                              validation_frame = valid.hex,
                              model_id = "my_fit.rf",
                              ntrees=150,
                              score_each_iteration = TRUE)

h2o.confusionMatrix(my_fit.rf)

Its confusion matrix is:

Confusion Matrix (vertical: actual; across: predicted) for max f1 @ threshold = 0.469005852414851:
          0    1    Error        Rate
0      9008  100 0.010979   =100/9108
1        98 2762 0.034266    =98/2860
Totals 9106 2862 0.016544  =198/11968

Comparison of this with the confusion matrix from the GBM fit suggests we have around 93.6% positive predictive value, and we are in the right area to not be over-fitting.
Here is the confusion matrix for the GBM:

Confusion Matrix (vertical: actual; across: predicted) for max f1 @ threshold = 0.410323012296247:
          0    1    Error        Rate
0      9030   87 0.009543    =87/9117
1       103 2723 0.036447   =103/2826
Totals 9133 2810 0.015909  =190/11943

So let's predict the "did they leave" on the whole data, and use it to model the "satisfaction_level". Here we predict and augment the data:

pred_left.hex <- h2o.predict(my_fit.gbm, newdata = mydata, destination_frame="pred_left.hex")
mydata2 <- h2o.cbind(mydata, pred_left.hex)

Here we make the prediction of "satisfaction":

#stage for second gbm
idxx2 <- 1:13
idxx2 <- idxx2[-c(1,7)]
idxy2 <- 1
Nn <- 300
Lambda <- 0.05

#split data
splits2 <- h2o.splitFrame(mydata2, c(0.8))
train2.hex <- h2o.assign(splits2[[1]], "train2.hex")
valid2.hex <- h2o.assign(splits2[[2]], "valid2.hex")

#fit data
my_fit2.gbm <- h2o.gbm(y=idxy2, x=idxx2,
                       training_frame = train2.hex,
                       validation_frame = valid2.hex,
                       model_id = "my_fit2.gbm",
                       ntrees=Nn,
                       learn_rate = Lambda,
                       score_each_iteration = TRUE)

As long as it is a "fair" model, the variable importance is going to show whether this has utility. Here is the "RF as a gross reality check":

my_fit2.rf <- h2o.randomForest(y=idxy2, x=idxx2,
                               training_frame = train2.hex,
                               validation_frame = valid2.hex,
                               model_id = "my_fit2.rf",
                               ntrees=150,
                               score_each_iteration = TRUE)

The RF converged. Fit metrics give mae around 0.13. Here are the GBM results. Now, I did nearly nothing in the way of real tuning. A decent GBM can usually outperform an RF for accuracy by quite a bit. It can also over-fit, which is a bad thing that requires a little time and effort to resolve. Our typical error scale of 13% (mae = mean absolute error) isn't bad. It is consistent with the RF, but there is something much more interesting. Here is what the GBM gives for variable importance (and the key you are looking for). Notice that "P0", the probability of staying, is the number 2 most informative value in the set.
It is stunningly more important than salary, work hours, accidents, or previous review. It is, in fact, more informative than the bottom 8 variables combined, even though it is a function of them. From this we might say that any HR claim that all "satisfaction scores" are created equal is, given this data, junk; we shouldn't be as surprised as they are that the "best and most experienced employees are leaving prematurely". With only a little work, the predictive values should be moved to the late 90's, even on real-world data. This also shows how having the class probabilities as an input can be substantially informative.
Thoughts:
- If P0 were contrived as the log of the probability, or as log-odds, then it might be even more informative for the fundamental learner, the CART.
- Again, the GBM could be substantially improved by adjusting control parameters. This is practically "shoot from the hip".
UPDATE: There is also a package called "lime" that is about unpacking variable importance from black box models like random forests. (ref)
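The two-stage pattern above is not tied to h2o. Stripped to its essentials it is a hurdle-style model: estimate P(nonzero), fit the regression on the nonzero part (optionally feeding the stage-1 probability in as an extra feature, the "P0" trick), and combine. Below is a minimal sketch on synthetic data in plain NumPy, with ordinary least squares standing in for the GBM; all names and numbers are invented for illustration, and the nonzero part is made exactly linear so the recovery is easy to verify.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))

# Synthetic zero-inflated target: roughly half structural zeros,
# the rest exactly linear in X (no noise, to keep the check simple).
p_true = 1 / (1 + np.exp(-X[:, 0]))           # P(nonzero) depends on X
is_nonzero = rng.random(n) < p_true
y = np.where(is_nonzero, 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1], 0.0)

# Stage 1: model P(y != 0 | X). Here we simply plug in the true
# probability; in practice a GBM (e.g. xgboost) would go here.
p0 = p_true

# Stage 2: regression on the nonzero subset only, with the
# stage-1 probability appended as an extra feature.
mask = y != 0
A = np.column_stack([np.ones(mask.sum()), X[mask], p0[mask]])
coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)

# Combined prediction: E[y | X] = P(nonzero | X) * E[y | nonzero, X]
A_all = np.column_stack([np.ones(n), X, p0])
y_hat = p0 * (A_all @ coef)
```

Because the nonzero part is exactly linear here, stage 2 recovers the coefficients (1, 2, -0.5) with zero weight on the probability column; on real data such as the HR example, the P0 column can carry genuine signal, as the variable-importance plot shows.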
29,369
Easier way to find $\mathbb{E}\left[X_{(2)}| X_{(1)}, X_{(3)}\right]$?
Because the $X_i$ all have a uniform distribution, all (unordered) variables are assumed independent, and no other order statistic lies between $X_{(1)}$ and $X_{(3)}$, $X_{(2)}$ has a truncated uniform distribution supported on the interval $[X_{(1)}, X_{(3)}]$. Its mean obviously is $(X_{(1)}+X_{(3)})/2$, QED. If you would like a formal demonstration, note that when the $X_i$ are iid with an absolutely continuous distribution $F$, the conditional density of $X_{(k)}$ (conditional on all the other order statistics) is $dF(x_k)/(F(x_{(k+1)}) - F(x_{(k-1)}))$, which is the truncated distribution. (When $k=1$, $F(x_{0})$ is taken to be $0$; and when $k=n$, $F(x_{n+1})$ is taken to be $1$.) This follows from Joint pdf of functions of order statistics, for instance, together with the definition of conditional densities.
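The result is easy to check by simulation; here is a quick Monte Carlo sketch in Python (the window half-width 0.02 is an arbitrary choice for conditioning on a small neighbourhood of $X_{(1)}$ and $X_{(3)}$):

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw 3 iid Uniform(0,1) variables many times and sort each draw
# to get the order statistics X(1) <= X(2) <= X(3).
x = np.sort(rng.random((200_000, 3)), axis=1)

# Condition on X(1) near 0.2 and X(3) near 0.8.
sel = (np.abs(x[:, 0] - 0.2) < 0.02) & (np.abs(x[:, 2] - 0.8) < 0.02)

# Conditional mean of X(2) vs. the midpoint (X(1) + X(3)) / 2.
cond_mean = x[sel, 1].mean()
midpoint = ((x[sel, 0] + x[sel, 2]) / 2).mean()
```

The two quantities agree up to Monte Carlo error, and both are near 0.5, as the truncated-uniform argument predicts.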
29,370
What to do when a linear regression gives negative estimates which are not possible
You haven't given context, but you have linked to a post that offers one solution. I will assume that that solution is not applicable here. Then another solution is to not use linear regression (simple or multiple), since they do not solve the problem you have. First, though, let's use your example of income as a function of age and education. Here, negative predicted values are reasonable because you are probably not interested in the income of newborn babies. However, there, taking log(income) is also reasonable, unless some people in your data set have no income. But suppose that's not it. Then you can use a regression method that respects bounds on the dependent variable. One such is beta regression, which requires a DV that is between 0 and 1 - so you could scale your DV to be between 0 and 1 and then use beta regression. But I would really urge you to add your actual variables to the question.
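To illustrate the log(income) route on made-up data, here is a hedged Python sketch; the variable names and coefficients are invented, and the beta-regression option would instead need a dedicated package (e.g. betareg in R):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(20, 65, n)
educ = rng.integers(8, 21, n).astype(float)

# Made-up positive incomes with multiplicative noise
income = np.exp(1.0 + 0.02 * age + 0.10 * educ + rng.normal(0.0, 0.3, n))

# Ordinary least squares on log(income) -- valid only because
# every income in this toy data set is strictly positive.
A = np.column_stack([np.ones(n), age, educ])
coef, *_ = np.linalg.lstsq(A, np.log(income), rcond=None)

# Back-transformed predictions are positive by construction,
# even for implausible covariate values like age 0.
new = np.array([[1.0, 0.0, 0.0],
                [1.0, 30.0, 16.0]])
pred = np.exp(new @ coef)
```

The exponential back-transform guarantees positive fitted values for any covariate combination, which is exactly the property a plain linear regression lacks.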
29,371
What to do when a linear regression gives negative estimates which are not possible
Your definition of x may not include, as a previous poster said, all situations. In fact, you can have a negative income. If you spend more than you make, then that is a net negative. Explicitly defining age and/or education is just as important. Some folks have trust funds and are earning money at birth, and some are on welfare with a net negative income at 100 years. It is also true that a person with 10 years of school, like a doctor for instance, may commit a felony or fraud on a grand scale and have a net negative income, losing everything, perhaps being charged with paying restitution on top of that, and becoming a ward of the state in prison. In short, details matter. Clearly define the details and you will have no negative intercept.
29,372
How to prove cooperation from behavioural sequences
I post a second answer since your last comment

  By cooperation I mean "when male is attacking, the female make threats", and I would like to test this hypothesis against an alternative: "when male is attacking, the female do not prefer make threats" (in other words, behaviour of female is independent of male behaviour).

is a game-changer. It seems that the problem can be approached from a totally different perspective. First, you are interested only in the part of your sample when males are attacking. Second, you are interested in whether, in such cases, females make threats more often than we would expect if they made them randomly. To test such a hypothesis we can use a permutation test: randomly shuffle either male_seq or female_seq (it doesn't matter) and then count cases where male_seq == "attack" and female_seq == "threat" to obtain the null distribution. Next, compare the count obtained from your data to the counts in the null distribution to obtain the $p$-value.

prmfun <- function() {
  sum(female_seq[sample(male_seq) == "attack"] == "threat")
}

mean(replicate(1e5, prmfun()) >= sum(female_seq[male_seq == "attack"] == "threat"))
## [1] 5e-05

You can define your test statistic differently, based on how you define females' "preference". The permutation test in this case is a direct interpretation of your $H_0$: "behaviour of female is independent of male behaviour", which leads to: "female behaviour is random given male behaviour", so the behaviours can be randomly shuffled under $H_0$.
Moreover, even if you assumed that the behaviours appear in clusters of the same behaviour repeated for some period of time, with a permutation test you can shuffle whole clusters:

female_rle <- rle(female_seq)
n_rle <- length(female_rle$values)

prmfun2 <- function() {
  ord <- sample(n_rle)
  sim_female_seq <- rep(female_rle$values[ord], female_rle$lengths[ord])
  sum(sim_female_seq[male_seq == "attack"] == "threat")
}

mean(replicate(1e5, prmfun2()) >= sum(female_seq[male_seq == "attack"] == "threat"))
## [1] 0.00257

In either of the cases, the co-operation patterns in the data you provided seem to be far from random. Notice that in both cases we ignore the autocorrelated nature of this data; we are rather asking: if we picked a random point in time when the male was attacking, would the female be less or more likely to make threats at the same time? Since you seem to be talking about causality ("when ... then"), while conducting the permutation test you may be interested in comparing the male's behaviour at time $t-1$ to the female's behaviour at time $t$ (what was the female's "reaction" to the male's behaviour?), but this is something that you have to ask yourself. Permutation tests are flexible and can be easily adapted to the kind of problems you seem to be describing.
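For readers outside R, the same permutation logic is a few lines of NumPy. The sequences below are simulated with a built-in association (80% of the male's attacks answered by a threat), purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
states = np.array(["absent", "present", "attack", "threat"])

# Simulated sequences: the female threatens 80% of the time the male attacks.
male_seq = rng.choice(states, size=300)
female_seq = rng.choice(states, size=300)
attacking = male_seq == "attack"
female_seq[attacking] = np.where(rng.random(attacking.sum()) < 0.8,
                                 "threat", female_seq[attacking])

# Observed statistic: threats coinciding with attacks.
observed = np.sum(female_seq[attacking] == "threat")

# Null distribution: shuffle one sequence, recount the coincidences.
null = np.array([
    np.sum(female_seq[rng.permutation(male_seq) == "attack"] == "threat")
    for _ in range(5000)
])
p_value = np.mean(null >= observed)
```

With this strong a built-in association the observed count sits far in the upper tail of the null distribution, so the p-value is essentially zero.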
29,373
How to prove cooperation from behavioural sequences
You can think of your data in terms of a bivariate Markov chain. You have two different variables, $X$ for females and $Y$ for males, that describe the stochastic process of changes in $X$ and $Y$ at time $t$ to one of four different states. Let's denote by $X_{t-1,i} \rightarrow X_{t,j}$ the transition for $X$ from time $t-1$ to $t$, from the $i$-th to the $j$-th state. In this case, the transition in time to another state is conditional on the previous state in $X$ and in $Y$: $$ \Pr( X_{t-1,i} \rightarrow X_{t,j} ) = \Pr(X_{t,j} | X_{t-1,i},Y_{t-1,k}) \\ \Pr( Y_{t-1,h} \rightarrow Y_{t,k} ) = \Pr(Y_{t,k} | Y_{t-1,h},X_{t-1,i})$$ Transition probabilities can be easily calculated by counting transition histories and normalizing the probabilities afterwards:

states <- c("absent", "present", "attack", "threat")

# data is stored in 3-dimensional array, initialized with
# a very small "default" non-zero count to avoid zeros.
female_counts <- male_counts <- array(1e-16, c(4,4,4), list(states, states, states))
n <- length(male_seq)

for (i in 2:n) {  # start at 2: each transition needs the previous state
  male_counts[female_seq[i-1], male_seq[i-1], male_seq[i]] <- male_counts[female_seq[i-1], male_seq[i-1], male_seq[i]] + 1
  female_counts[male_seq[i-1], female_seq[i-1], female_seq[i]] <- female_counts[male_seq[i-1], female_seq[i-1], female_seq[i]] + 1
}

male_counts/sum(male_counts)
female_counts/sum(female_counts)

It can also be easily simulated using the marginal probabilities:

nsim <- 100  # length of the simulated sequences
male_sim <- female_sim <- "absent"
for (i in 2:nsim) {
  male_sim[i] <- sample(states, 1, prob = male_counts[female_sim[i-1], male_sim[i-1], ])
  female_sim[i] <- sample(states, 1, prob = female_counts[male_sim[i-1], female_sim[i-1], ])
}

The result of such a simulation is plotted below.
Moreover, it can be used to make one-step-ahead predictions:

male_pred <- female_pred <- NULL
for (i in 2:n) {
  curr_m <- male_counts[female_seq[i-1], male_seq[i-1], ]
  curr_f <- female_counts[male_seq[i-1], female_seq[i-1], ]
  male_pred[i] <- sample(names(curr_m)[curr_m == max(curr_m)], 1)
  female_pred[i] <- sample(names(curr_f)[curr_f == max(curr_f)], 1)
}

with 69-86% accuracy on the data you provided:

> mean(male_seq == male_pred, na.rm = TRUE)
[1] 0.8611111
> mean(female_seq == female_pred, na.rm = TRUE)
[1] 0.6944444

If the transitions occurred randomly, the transition probabilities would follow a discrete uniform distribution. This is not a proof, but can serve as a way of thinking about your data using a simple model.
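The counting step carries over directly to any language with multidimensional arrays. Here is a small Python sketch of the female's conditional transition probabilities; the six-step sequences are invented just to exercise the indexing:

```python
import numpy as np

states = ["absent", "present", "attack", "threat"]
idx = {s: i for i, s in enumerate(states)}

# Toy behaviour sequences (stand-ins for the observed data)
male_seq   = ["absent", "present", "attack", "attack", "present", "absent"]
female_seq = ["absent", "present", "threat", "threat", "present", "absent"]

# counts[prev partner state, prev own state, next own state],
# smoothed with a tiny constant so no row is all zeros
female_counts = np.full((4, 4, 4), 1e-16)
for t in range(1, len(female_seq)):
    female_counts[idx[male_seq[t-1]], idx[female_seq[t-1]], idx[female_seq[t]]] += 1

# Normalize over the last axis: conditional transition probabilities
probs = female_counts / female_counts.sum(axis=2, keepdims=True)
```

Each slice `probs[k, i, :]` is then a proper distribution over the female's next state, given the male's and the female's previous states.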
29,374
Difference between GMM classification and QDA
If you're given class labels $c$ and fit a generative model $p(x, c) = p(c) p(x|c)$, and use the conditional distribution $p(c|x)$ for classification, then yes you're essentially performing QDA (the decision boundary will be quadratic in $x$). Under this generative model, the marginal distribution of the data $x$ is exactly the GMM density (say you have $K$ classes): $$p(x) = \sum_{k \in \{1,...,K\}} p(c=k) p(x|c=k) = \sum_{k=1}^K \pi_k \mathcal{N}({x};{\mu}_k, {\Sigma}_k)$$ "Gaussian mixture" typically refers to the marginal distribution above, which is a distribution over $x$ alone, as we often don't have access to the class labels $c$.
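Numerically, the correspondence looks like this: fit class-wise Gaussians by maximum likelihood, then classify through $p(c|x)$, whose denominator is exactly the GMM density. A hypothetical NumPy sketch for two classes in two dimensions (the means, covariances, and sample sizes are invented):

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of a multivariate normal at a single point x."""
    d = x - mu
    quad = d @ np.linalg.inv(sigma) @ d
    norm = np.sqrt((2 * np.pi) ** len(mu) * np.linalg.det(sigma))
    return np.exp(-0.5 * quad) / norm

rng = np.random.default_rng(0)
x0 = rng.multivariate_normal([0, 0], np.eye(2), size=200)              # class 0
x1 = rng.multivariate_normal([3, 3], [[2, 0.5], [0.5, 1]], size=200)   # class 1

# Generative fit: class priors pi_k, class-wise means and covariances
data = [x0, x1]
n_total = sum(len(xk) for xk in data)
params = [(len(xk) / n_total, xk.mean(axis=0), np.cov(xk.T)) for xk in data]

def posterior(x):
    # p(c=k | x) = pi_k N(x; mu_k, Sigma_k) / p(x),
    # where the denominator p(x) is the GMM density from the text.
    joint = np.array([pi * gauss_pdf(x, mu, sig) for pi, mu, sig in params])
    return joint / joint.sum()

p = posterior(np.array([0.0, 0.0]))  # a point deep inside class 0
```

Because the class covariances differ, the implied decision boundary $p(c=0|x) = p(c=1|x)$ is quadratic in $x$, which is the QDA boundary.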
29,375
Best use of LSTM for within sequence event prediction
Your data seem to be just sequences of tokens. Try building an LSTM autoencoder: let the encoder learn a fixed-size representation of the first part of your sequence, and let the decoder predict the remainder. These representations would be your motifs. Refs: Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Srivastava, N., Mansimov, E., & Salakhutdinov, R. (2015). Unsupervised learning of video representations using LSTMs. arXiv preprint arXiv:1502.04681.
29,376
Best use of LSTM for within sequence event prediction
The most important part is how you "phrase" the classification problem, meaning how you represent the input and what you want to output. Seeing as you have so many different event types, you need to learn an embedding of these. This can be done directly in e.g. Keras. You can see this example on how to learn an embedding directly from data. Another approach would be to learn an embedding beforehand using an unsupervised approach such as word2vec. However, this requires more work on your part, as you need to come up with a relevant task and train that to generate the embedding. Given that you have enough data, it is easier (although slightly less effective) to learn the embedding directly. For the output I would not predict all the different types of events, but only the special events and a "background class", to keep the problem feasible. If you really want to be able to predict every single class then you need to use some tricks (look into how word2vec does it). Regarding the time between events: you could simply add that to your LSTM as an additional dimension (see e.g. this for an example of how to do that in Keras). This would be easy to do and would allow the LSTM to take the temporal difference into account. I do not know of any way to visualize the motifs by "unrolling" the temporal nature of the network. You might be able to generate some motifs using a generative network, but it would likely be difficult to interpret. One way to explore motifs could be to simply find the top 100000 most common sequences of non-special events of e.g. length 20-100, input them into the trained model, and extract the probability output from the final softmax layer. In this way you could find sequences that are connected to certain special events. However, it's difficult to say whether this motif approach is feasible/useful without looking at your data.
29,377
Q-Learning vs Fitted Q-Iteration
You are right. It means that the Q function is approximated linearly. Let $S$ be a state space and $A$ be an action space. $\textbf{x}(s,a) = (x_1(s,a),\ldots,x_n(s,a))$ where $s \in S$, is a vector of features of $S \times A$ and $\textbf{x}(s,a) \in \mathbb{R}^n$. Suppose, that $Q(a,s)$ is the real Q-value function. Now we may try to approximate it with the following estimation function: $$\hat{Q}(a,s,\textbf{w}) = \textbf{w} \cdot \textbf{x}(s,a) = \sum_{i=1}^nw_ix_i(s,a)$$ So you may want to make features for state-action pairs, instead of making features for states only. To fine-tune the $\textbf{w}$ vector, you can use gradient descent methods. For more on this issue see Sutton & Barto, control with function approximation.
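The linear approximation and its gradient update are small enough to sketch in plain Python. This is a toy illustration, not a full Q-learning loop: the feature vector and target below are made up, and in practice the target would be the bootstrapped value $r + \gamma \max_{a'} \hat{Q}(s', a', \textbf{w})$.

```python
def q_hat(w, x):
    """Linear approximation Q_hat(s, a, w) = w . x(s, a)."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sgd_step(w, x, target, alpha=0.1):
    """One semi-gradient step: w <- w + alpha * (target - Q_hat) * x(s, a).
    The gradient of Q_hat with respect to w is just the feature vector x."""
    err = target - q_hat(w, x)
    return [wi + alpha * err * xi for wi, xi in zip(w, x)]

# Toy check: repeated updates on one (hypothetical) feature vector
# drive the prediction toward the target value.
w = [0.0, 0.0, 0.0]
x = [1.0, 0.5, -0.2]   # made-up features x(s, a)
for _ in range(200):
    w = sgd_step(w, x, target=3.0)
```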
29,378
When to use LDA over GMM for clustering?
I would not use Gaussian mixture models, as they require the constituent distributions to all be normal. You have counts, so GMM is inappropriate by definition. Latent Dirichlet allocation (full disclosure: I don't really know topic modeling) requires your data to be multinomial, but you can have counts in that case—they would be counts of occurrences of different categories of a variable. Another possibility is that your counts are counts of different variables, as in having several Poisson variables. This is a bit of an ontological question for how you are thinking about your data. Consider a simple example where I go to the grocery store because I want some fruit. I will purchase a certain number of apples, oranges, peaches and bananas. Each of those could be considered a separate Poisson variable. When I get home I put all of them in a fruit bowl. Later, when I feel like snacking, I might reach into the bowl without looking and grab two pieces of fruit (e.g., an apple and a peach). That can be considered a draw from a multinomial distribution. In both cases, I have counts of categories, but we think of them differently. In the first case, the fruits I will buy are known before I get to the grocery store, but the number purchased in each category can vary. In the second case, I don't know which fruits I will pick but I know I'm grabbing two from the possible types. If your data are like the fruit bowl example, LDA may be appropriate for you. On the other hand, if they are like the grocery store example, you could try Poisson finite mixture modeling. That is, you can use mixture modeling with distributions other than Gaussian / normal. GMMs are the most common by far; other distributions (such as Poisson) are more exotic. I don't know how widely implemented they are in software. If you use R, Googling led to the discovery of ?PoisMixClus in the HTSCluster package and the rebmix package (note I've never used either, or done Poisson mixture modeling).
It may be possible to find implementations for other software as well. Adding some specifics: I would say LDA is at least as much a Bayesian technique as GMM. I suspect the most important differentiation between LDA and GMM is the type of data they assume you have. You cannot compare them, because they are for different kinds of data. (Nor would I really want to compare LDA and Poisson MM, as they conceptualize the counts differently.) I would not dichotomize your data into zero / non-zero.
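If you go the Poisson mixture route, the EM updates are simple enough to sketch in plain Python. This is a generic two-component fit under made-up data and starting values, not the algorithm used by HTSCluster or rebmix:

```python
import math

def pois_pmf(x, lam):
    """Poisson probability mass function at integer count x."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

def poisson_mixture_em(data, lam, iters=50):
    """Fit a finite Poisson mixture by EM. `lam` holds initial rates."""
    K = len(lam)
    pi = [1.0 / K] * K
    lam = list(lam)
    for _ in range(iters):
        # E-step: responsibilities r[i][k] proportional to pi_k * Poisson(x_i; lam_k)
        resp = []
        for x in data:
            joint = [pi[k] * pois_pmf(x, lam[k]) for k in range(K)]
            z = sum(joint)
            resp.append([j / z for j in joint])
        # M-step: weighted component sizes and rate estimates
        for k in range(K):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            lam[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
    return pi, lam

counts = [0, 1, 1, 2, 1, 9, 10, 11, 10, 12]   # made-up counts
pi, lam = poisson_mixture_em(counts, lam=[1.0, 8.0])
```

With clearly separated counts like these, the fitted rates land near the two group means, one low and one high.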
29,379
How can training and testing error comparisons be indicative of overfitting?
Overfitting does not refer to the gap between training and test error being large or even increasing. It might be true that both training and testing error are decreasing, but training error is decreasing at a faster rate. Overfitting specifically relates to the training error decreasing at the expense of model generalization (approximated through cross validation) as model hyperparameters are tuned (such as max tree depth, max nodes, min samples per split, and min samples per node for simple decision trees). From Wikipedia: Tree-based methods often have the training error decrease at a faster rate than the test error as specific hyperparameters are changed. If you are not testing different hyperparameters for a specific model, then you cannot identify overfitting. Perhaps the specific combination of hyperparameters chosen is the best and any other combination causes cross validated testing error to increase.
29,380
Confidence intervals from the Holm-Bonferroni test?
[This answer is completely rewritten from yesterday.] First, nomenclature. The Holm method is also called the Holm step-down method, or the Holm-Ryan method. Those are all the same. Whichever of those names you use, there are two alternative calculations. The original Holm method is based on Bonferroni. An alternative, slightly more powerful method is based on Sidak instead, so it is called the Holm-Sidak method. The Holm method can be used for multiple comparisons in a variety of contexts. Its input is a stack of P values. One use is following ANOVA, comparing pairs of means while correcting for multiple comparisons. When this is done, as far as I can see, it is very rare to report confidence intervals (corrected for multiple comparisons, so properly called simultaneous confidence intervals) as well as conclusions about statistical significance and multiplicity-adjusted P values. I've found two papers that explain how to compute such confidence intervals, but they differ. Serlin, R. (1993). Confidence intervals and the scientific method: A case for Holm on the range. Journal of Experimental Education, 61(4), 350–360. Ludbrook, J. (2000). Multiple inferences using confidence intervals. Clinical and Experimental Pharmacology and Physiology, 27, 212–215. For the comparisons with the smallest P values, the two methods are the same (but one uses C as the number of comparisons and the other uses m). But for the comparisons with larger P values, the two methods differ. For the comparison with the largest P value, Ludbrook would compute the 95% CI normally, with no correction for multiple comparisons. Serlin would use the same adjustment for all comparisons with an adjusted P value greater than 0.05 (assuming you want 95% intervals), so the intervals for the comparisons with large P values would be wider than the ones that Ludbrook's method generates. Both methods use the Bonferroni approach, but could easily be adjusted to the Sidak approach.
Any thoughts on which method is correct/better?
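While the confidence-interval question is open, the Holm step-down adjusted P values themselves are mechanical to compute; a plain-Python sketch of the Bonferroni flavor (the Sidak flavor would replace the multiplication by $1-(1-p)^{m-i}$):

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values (Bonferroni flavor).
    Sort ascending, multiply the i-th smallest p by (m - i), enforce
    monotonicity with a running maximum, and cap at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running)
    return adjusted
```

For example, `holm_adjust([0.01, 0.04, 0.03, 0.005])` returns adjusted values of 0.03, 0.06, 0.06, and 0.02 in the original order.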
29,381
"Export" machine learning model from R
One way to share models between the software that does the actual model fitting and the software that is used to do the predictions is the Predictive Model Markup Language (PMML). This is an XML-based standard maintained by the Data Mining Group consortium. It allows you to deploy models to other applications, to the cloud, or to database systems. So if the software that your partner wants to use is PMML-compliant, then you can employ the pmml package to export your models from R. Of course, there are more machine learning models implemented in R than supported by the PMML standard or the pmml R package, but there is quite a range of supported models. The pmml package is also employed by the rattle data mining GUI in R.
29,382
How to handle changing input vector length with neural networks
There are three general strategies I can think of for NNs with varying input sizes:

1. Preprocess the inputs to be the same size. For example, people often resize images (ignoring aspect ratio) to a standard square resolution for NNs. In the language case, you might convert all words to a symbolic representation (e.g. "john"=1, "james"=2, "maurice"=3, "kelly"=4, "doe"=5) if that makes sense in your application.

2. Use a sliding window. The network gets to see a fixed-size portion of the input, then you slide the window by some fixed stride and run it again (from scratch); repeat until you hit the end, and then combine all the outputs in some way.

3. Same as #2, but using a recurrent neural network, so that the network has some internal state that carries over between each stride. This is how NNs process speech audio, for example. Obviously this is a more dramatic change to the architecture than the other options, but for many language tasks it might be necessary (if you have long inputs and need to combine information across the string in a complicated way).
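The sliding-window strategy (#2) is easy to make precise; a small sketch, where the window size, stride, and end-anchoring policy are illustrative choices rather than standard ones:

```python
def sliding_windows(seq, size, stride):
    """Yield fixed-size windows over `seq`; the final window is anchored
    to the end of the sequence so no suffix is dropped."""
    if len(seq) <= size:
        yield seq
        return
    last = len(seq) - size
    starts = list(range(0, last, stride)) + [last]
    for s in starts:
        yield seq[s:s + size]
```

Each window would then be fed to the fixed-input network, and the per-window outputs combined, e.g. by averaging or max-pooling the predictions.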
29,383
Is there a formula for a general form of the coupon collector problem?
This is not easy to compute, but it can be done, provided $\binom{m+k}{k}$ is not too large. (This number counts the possible states you need to track while collecting coupons.) Let's begin with a simulation to get some sense of the answer. Here, I collected LEGO figures one million times. The black line in this plot tracks the frequencies of the numbers of purchases needed to collect at least three of ten different figures. The gray band is an approximate two-sided 95% confidence interval for each count. Underneath it all is a red curve: this is the true value. To obtain the true values, consider the state of affairs while you are collecting figures, of which there are $n=12$ possible types and you wish to collect at least $k=3$ of $m=10$ different types. The only information you need to keep track of is how many figures you haven't seen, how many you have seen just once, how many you have seen twice, and how many you have seen three or more times. We can represent this conveniently as a monomial $x_0^{i_0} x_1^{i_1} x_2^{i_2} x_3^{i_3}$, where the $i_j$ are the associated counts, indexed from $j=0$ through $j=k$. In general, we would use monomials of the form $\prod_{j=0}^k x_j^{i_j}$. Upon collecting a new random object, it will be one of the $i_0$ unseen objects with probability $i_0/n$, one of the objects seen just once with probability $i_1/n$, and so forth. The result can be expressed as a linear combination of monomials, $$x_0^{i_0} x_1^{i_1} x_2^{i_2} x_3^{i_3}\to \frac{1}{n}\left(i_0 x_0^{i_0-1}x_1^{i_1+1}x_2^{i_2}x_3^{i_3} + \cdots + i_3 x_0^{i_0}x_1^{i_1}x_2^{i_2}x_3^{i_3}\right).$$ This is the result of applying the linear differential operator $(x_1 D_{x_0} + x_2 D_{x_1} + x_3 D_{x_2} + x_3 D_{x_3})/n$ to the monomial.
Evidently, repeated applications to the initial state $x_0^{12}=x_0^n$ will give a polynomial $p$, having at most $\binom{n+k}{k}$ terms, where the coefficient of $\prod_{j=0}^k x_j^{i_j}$ is the chance of being in the state indicated by its exponents. We merely need to focus on terms in $p$ with $i_3 \ge m$: the sum of their coefficients will be the chance of having finished the coupon collecting. The whole calculation therefore requires up to $(m+1)\binom{n+k}{k}$ easy calculations at each step, repeated as many times as necessary to be almost certain of succeeding with the collection. Expressing the process in this fashion makes it possible to exploit efficiencies of computer algebra systems. Here, for instance, is a general Mathematica solution to compute the chances up to $6nk=216$ draws. That omits some possibilities, but their total chances are less than $10^{-17}$, giving us a nearly complete picture of the distribution.

n = 12; threshold = 10; k = 3;

(* Draw one object randomly from an urn with `n` of them *)
draw[p_] := Expand[Sum[Subscript[x, i] D[#, Subscript[x, i - 1]], {i, 1, k}] +
    Subscript[x, k] D[#, Subscript[x, k]] & @ p];

(* Find the chance that we have collected at least `k` each of `threshold` objects *)
f[p_] := Sum[Coefficient[p, Subscript[x, k]^t] /.
    Table[Subscript[x, i] -> 1, {i, 0, k - 1}], {t, threshold, n}]

(* Compute the chances for a long series of draws *)
q = f /@ NestList[draw[#]/n &, Subscript[x, 0]^n, 6 n k];

The result, which takes about two seconds to compute (faster than the simulation!) is an array of probabilities indexed by the number of draws. Here is a plot of its differences, which are the probabilities of ending your purchases as a function of the count: These are precisely the numbers used to draw the red background curve in the first figure. (A chi-squared test indicates the simulation is not significantly different from this computation.)
We may estimate the expected number of draws by summing $1-q$; the result should be good to 14-15 decimal places. I obtain $50.7619549386733$ (which is correct in every digit, as determined by a longer calculation).
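The same state-space recursion can be run without a computer algebra system: track the probability of each state $(i_0,i_1,i_2,i_3)$ directly. A plain-Python sketch for $n=12$, $m=10$, $k=3$ (the 400-draw cutoff is my choice; the omitted tail is negligible, per the $10^{-17}$ bound above):

```python
from collections import defaultdict

n, m, k = 12, 10, 3   # 12 types; want at least k=3 copies of m=10 of them

# State (i0, i1, i2, i3): number of types seen 0, 1, 2, and >= 3 times.
states = {(n, 0, 0, 0): 1.0}
q = []                 # q[t] = P(collection complete after t draws)
for t in range(400):
    q.append(sum(p for s, p in states.items() if s[3] >= m))
    nxt = defaultdict(float)
    for (i0, i1, i2, i3), p in states.items():
        # each draw promotes one type from "seen j times" to "seen j+1 times"
        if i0: nxt[(i0 - 1, i1 + 1, i2, i3)] += p * i0 / n
        if i1: nxt[(i0, i1 - 1, i2 + 1, i3)] += p * i1 / n
        if i2: nxt[(i0, i1, i2 - 1, i3 + 1)] += p * i2 / n
        if i3: nxt[(i0, i1, i2, i3)] += p * i3 / n
    states = dict(nxt)

# E[T] = sum over t >= 0 of P(T > t)
expected = sum(1.0 - x for x in q)
```

This reproduces the expected-draw count of about 50.76 quoted below, and the differences of `q` give the distribution plotted above.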
Is there a formula for a general form of the coupon collector problem?
This is not easy to compute, but it can be done, provided $\binom{m+k}{k}$ is not too large. (This number counts the possible states you need to track while collecting coupons.) Let's begin with a si
Is there a formula for a general form of the coupon collector problem? This is not easy to compute, but it can be done, provided $\binom{m+k}{k}$ is not too large. (This number counts the possible states you need to track while collecting coupons.) Let's begin with a simulation to get some sense of the answer. Here, I collected LEGO figures one million times. The black line in this plot tracks the frequencies of the numbers of purchases needed to collect at least three of ten different figures. The gray band is an approximate two-sided 95% confidence interval for each count. Underneath it all is a red curve: this is the true value. To obtain the true values, consider the state of affairs while you are collecting figures, of which there are $n=12$ possible types and you wish to collect at least $k=3$ of $m=10$ different types. The only information you need to keep track of is how many figures you haven't seen, how many you have seen just once, how many you have seen twice, and how many you have seen three or more times. We can represent this conveniently as a monomial $x_0^{i_0} x_1^{i_1} x_2^{i_2} x_3^{i_3}$ where the $i_j$ are the associated counts, indexes from $k=0$ through $k=t$. In general, we would use monomials of the form $\prod_{j=0}^k x_j^{i_j}$. Upon collecting a new random object, it will be one of the $i_0$ unseen objects with probability $i_0/n$, one of the objects seen just once with probability $i_1/n$, and so forth. The result can be expressed as a linear combination of monomials, $$x_0^{i_0} x_1^{i_1} x_2^{i_2} x_3^{i_3}\to \frac{1}{n}\left(i_0 x_0^{i_0-1}x_1^{i_1+1}x_2^{i_2}x_3^{i_3} + \cdots + i_3 x_0^{i_0}x_1^{i_1}x_2^{i_2-1}x_3^{i_3}\right).$$ This is the result of applying the linear differential operator $(x_1 D_{x_0} + x_2 D_{x_1} + x_3 D_{x_2} + x_3 D_{x_3})/n$ to the monomial. 
Evidently, repeated applications to the initial state $x_0^{12}=x_0^n$ will give a polynomial $p$, having at most $\binom{n+k}{k}$ terms, where the coefficient of $\prod_{j=0}^k x_j^{i_j}$ is the chance of being in the state indicated by its exponents. We merely need to focus on terms in $p$ with $i_3 \ge t$: the sum of their coefficients will be the chance of having finished the coupon collecting. The whole calculation therefore requires up to $(m+1)\binom{n+k}{k}$ easy calculations at each step, repeated as many times as necessary to be almost certain of succeeding with the collection. Expressing the process in this fashion makes it possible to exploit efficiencies of computer algebra systems. Here, for instance, is a general Mathematica solution to compute the chances up to $6nk=216$ draws. That omits some possibilities, but their total chances are less than $10^{-17}$, giving us a nearly complete picture of the distribution. n = 12; threshold = 10; k = 3; (* Draw one object randomly from an urn with `n` of them *) draw[p_] := Expand[Sum[Subscript[x, i] D[#, Subscript[x, i - 1]], {i, 1, k}] + Subscript[x, k] D[#, Subscript[x, k]] & @ p]; (* Find the chance that we have collected at least `k` each of `threshold` objects *) f[p_] := Sum[ Coefficient[p, Subscript[x, k]^t] /. Table[Subscript[x, i] -> 1, {i, 0, k - 1}], {t, threshold, n}] (* Compute the chances for a long series of draws *) q = f /@ NestList[draw[#]/n &, Subscript[x, 0]^n, 6 n k]; The result, which takes about two seconds to compute (faster than the simulation!) is an array of probabilities indexed by the number of draws. Here is a plot of its differences, which are the probabilities of ending your purchases as a function of the count: These are precisely the numbers used to draw the red background curve in the first figure. (A chi-squared test indicates the simulation is not significantly different from this computation.) 
We may estimate the expected number of draws by summing $1-q$; the result should be good to 14-15 decimal places. I obtain $50.7619549386733$ (which is correct in every digit, as determined by a longer calculation).
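The same expectation can be reproduced without a computer algebra system. Because each draw either leaves the state unchanged (an already-finished type, probability $i_3/n$) or moves it strictly forward, first-step analysis with memoization gives the answer directly. A minimal Python sketch (exact rational arithmetic; function names are my own):

```python
from functools import lru_cache
from fractions import Fraction

n, m = 12, 10  # 12 types; stop once at least m = 10 types have been seen 3+ times

@lru_cache(maxsize=None)
def expected_draws(i0, i1, i2, i3):
    """Expected further draws from state (unseen, seen once, seen twice, seen 3+)."""
    if i3 >= m:
        return Fraction(0)
    # First-step analysis: drawing an already-finished type (probability i3/n)
    # leaves the state unchanged, so solve
    #   E = 1 + (i0/n) E_a + (i1/n) E_b + (i2/n) E_c + (i3/n) E    for E.
    total = Fraction(1)
    if i0:
        total += Fraction(i0, n) * expected_draws(i0 - 1, i1 + 1, i2, i3)
    if i1:
        total += Fraction(i1, n) * expected_draws(i0, i1 - 1, i2 + 1, i3)
    if i2:
        total += Fraction(i2, n) * expected_draws(i0, i1, i2 - 1, i3 + 1)
    return total / (1 - Fraction(i3, n))

print(float(expected_draws(n, 0, 0, 0)))  # ≈ 50.7619549386733, matching the value above
```

The recursion terminates because the potential $3i_0+2i_1+i_2$ strictly decreases at every non-looping transition, and `Fraction` keeps the arithmetic exact.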
29,384
Is there a formula for a general form of the coupon collector problem?
In the original problem you would use the geometric distribution to compute the waiting time until everything has been seen at least once. Here the difficulty is that the times overlap. Still, seeing something for the third time has a negative binomial distribution. If we call these (dependent) times $T_i^{(3)}$, then the time until we see all items at least three times is $\max_{1\leq i\leq n}T_i^{(3)}.$ Then you can use the maximum-minimum identity $$E(\max_iT_i^{(3)})=\sum_{i=1}^nE(T_i^{(3)})-\sum_{i<j}E(\min(T_i^{(3)},T_j^{(3)}))+\sum_{i<j<k}E(\min(T_i^{(3)},T_j^{(3)},T_k^{(3)}))-\ldots+(-1)^{n+1}E(\min_i T_i^{(3)}),$$ noting that $\min_i T_i^{(3)}$ (the first time any single item has been seen three times) is at least $3$. What I haven't figured out (or even tried) is whether the computation of minima for the negative binomial distribution is as easy as for the geometric.
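The maximum-minimum identity invoked here holds pointwise, not merely in expectation, which is why taking expectations of both sides is legitimate. A quick numerical check of the identity itself (a sketch, not tied to the coupon problem):

```python
import itertools

def max_via_minima(xs):
    """Inclusion-exclusion (maximum-minimum) identity:
    max(x_1..x_n) = sum over nonempty subsets S of (-1)^(|S|+1) * min(S)."""
    total = 0
    for r in range(1, len(xs) + 1):
        sign = (-1) ** (r + 1)
        for subset in itertools.combinations(xs, r):
            total += sign * min(subset)
    return total

# Holds for any realization of the (possibly dependent) random variables:
assert max_via_minima([3, 1, 9]) == 9
assert max_via_minima([5, 5, 5, 5]) == 5
```

Taking expectations of both sides (linearity) yields exactly the displayed formula.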
29,385
ARIMA predictions constant
There's nothing wrong; that model indeed has constant forecasts -- your 'best' guess at any future value is the last one you observed, since the deviations from that are a sum of future 0-mean noise terms.

$I(1)$ model: $y_t=y_{t-1}+\varepsilon_t$

Predictions: $E(y_{T+1}\mid y_T)=y_{T}+E(\varepsilon_{T+1})=y_{T}+0=y_{T}$, $E(y_{T+2}\mid y_T)=E(y_{T+1}\mid y_T)+E(\varepsilon_{T+2})=y_{T}+0=y_{T}$, and so on.

[As indicated in the answer here, the more complicated ARIMA(0,1,1) model also has constant forecasts.]
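This can be checked numerically: simulate many continuations of the random walk past the last observed value and average them; at every horizon the Monte Carlo mean sits at the last observation. A small sketch (values and sample sizes arbitrary):

```python
import random

random.seed(42)
y_T = 3.7                     # last observed value (arbitrary)
n_paths, horizon = 100_000, 5
sums = [0.0] * horizon        # running sums of y_{T+h} over simulated paths
for _ in range(n_paths):
    level = y_T
    for h in range(horizon):
        level += random.gauss(0.0, 1.0)  # y_t = y_{t-1} + eps_t
        sums[h] += level

forecasts = [s / n_paths for s in sums]
# E(y_{T+h} | y_T) = y_T at every horizon, up to Monte Carlo error
assert all(abs(f - y_T) < 0.05 for f in forecasts)
```

The forecast variance, by contrast, grows linearly with the horizon -- only the point forecast is constant.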
29,386
What are examples of "flat priors"?
The term "flat" in reference to a prior generally means $f(\theta)\propto c$ over the support of $\theta$. So a flat prior for $p$ in a Bernoulli would usually be interpreted to mean $U(0,1)$. A flat prior for $\mu$ in a normal is an improper prior where $f(\mu)\propto c$ over the real line. "Flat" is not necessarily synonymous with 'uninformative', nor does it have invariance to transformations of the parameter. For example, a flat prior on $\sigma$ in a normal effectively says that we think that $\sigma$ will be large, while a flat prior on $\log(\sigma)$ does not. With flat priors, your conditional posterior will be proportional to the likelihood (possibly constrained to some interval/region if the prior was). (In this case MAP and ML will normally correspond, though if we're taking the flat prior over some region, it might change that.)
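For instance, with the flat $U(0,1)$ prior on a Bernoulli $p$, the posterior after $s$ successes in $n$ trials is $\text{Beta}(s+1,\,n-s+1)$, whose mode is exactly the MLE $s/n$, illustrating the "posterior proportional to likelihood" point. A quick check (numbers arbitrary):

```python
from math import isclose

# Bernoulli data: s successes in n trials, with a flat U(0,1) prior on p.
n, s = 20, 7
# Flat prior => posterior proportional to likelihood => Beta(s+1, n-s+1) posterior.
a, b = s + 1, n - s + 1
posterior_mode = (a - 1) / (a + b - 2)  # mode of a Beta(a, b) with a, b > 1
mle = s / n
assert isclose(posterior_mode, mle)     # MAP coincides with ML under the flat prior
```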
29,387
Fisher's z-transform in Python?
The Fisher transform equals the inverse hyperbolic tangent (arctanh), which is implemented for example in numpy. The inverse Fisher transform is tanh, which can be dealt with similarly. Moreover, SciPy's function for Pearson's correlation, scipy.stats.pearsonr, also gives a p value.
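As a sketch of the usual workflow (assuming bivariate normality; function name is my own), a confidence interval for a correlation uses atanh, an approximate standard error of $1/\sqrt{n-3}$ on the z scale, and tanh to map back:

```python
import math

def fisher_ci(r, n, z_crit=1.959964):
    """Approximate 95% CI for a Pearson correlation via the Fisher z-transform."""
    z = math.atanh(r)                # Fisher z-transform of the sample correlation
    se = 1.0 / math.sqrt(n - 3)      # approximate standard error on the z scale
    lo = math.tanh(z - z_crit * se)  # back-transform the interval endpoints
    hi = math.tanh(z + z_crit * se)
    return lo, hi

lo, hi = fisher_ci(0.5, 103)         # n - 3 = 100, so se = 0.1 on the z scale
assert lo < 0.5 < hi
```

Note the back-transformed interval is asymmetric around r, as it should be for a bounded parameter.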
29,388
The standard normal distribution vs the t-distribution
Actually $s$ doesn't need to systematically underestimate $\sigma$; this could happen even if that weren't true. As it is, $s$ is biased for $\sigma$ (the fact that $s^2$ is unbiased for $\sigma^2$ means that $s$ will be biased for $\sigma$, due to Jensen's inequality*), but that's not the central thing going on there.

* Jensen's inequality: if $g$ is a convex function, $g\left(\text{E}[X]\right) \leq \text{E}\left[g(X)\right]$, with equality only if $X$ is constant or $g$ is linear. Now $g(X)=-\sqrt{X}$ is convex, so $-\sqrt{\text{E}[X]} < \text{E}(-\sqrt{X})$, i.e. $\sqrt{\text{E}[X]} > \text{E}(\sqrt{X})\,$, implying $\sigma>E(s)$ if the random variable $s$ is not a fixed constant.

Edit: a simpler demonstration not invoking Jensen -- assume that the distribution of the underlying variable has $\sigma>0$. Note that $\text{Var}(s) = E(s^2)-E(s)^2$; this variance will always be positive for $\sigma>0$. Hence $E(s)^2 = E(s^2)-\text{Var}(s) < \sigma^2$, so $E(s)<\sigma$.

So what is the main issue? Let $Z=\frac{\overline{X} - \mu}{\frac{\sigma}{\sqrt{n}}}$. Note that you're dealing with $t=Z\cdot\frac{\sigma}{s}$. That inversion of $s$ is important. So the effect on the variance is not whether $s$ is smaller than $\sigma$ on average (though it is, very slightly), but whether $1/s$ is larger than $1/\sigma$ on average (and those two things are NOT the same thing). And it is larger, to a greater extent than its inverse is smaller. Which is to say $E(1/X)\neq 1/E(X)$; in fact, from Jensen's inequality: $g(X) = 1/X$ is convex, so if $X$ is not constant, $1/\left(\text{E}[X]\right) < \text{E}\left[1/X\right]$.

So consider, for example, normal samples of size 10; $s$ is about 2.7% smaller than $\sigma$ on average, but $1/s$ is about 9.4% larger than $1/\sigma$ on average.

So even if at $n=10$ we made our estimate of $\sigma$ 2.7-something percent larger** so that $E(\widehat\sigma)=\sigma$, the corresponding $t=Z\cdot\frac{\sigma}{\widehat\sigma}$ would not have unit variance - it would still be a fair bit larger than 1. **(At other $n$ the adjustment would be different, of course.)

"Since the t-distribution is like the standard normal distribution but with a higher variance (smaller peak and fatter tails)" -- If you adjust for the difference in spread, the peak is higher.

Related: Why does the t-distribution become more normal as sample size increases? The standard normal distribution vs the t-distribution
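The 2.7% and 9.4% figures for n = 10 can be reproduced exactly from the chi distribution of $s$ (since $(n-1)s^2/\sigma^2 \sim \chi^2_{n-1}$) via gamma functions; a sketch:

```python
from math import gamma, sqrt

n = 10       # sample size of iid normal draws
nu = n - 1   # degrees of freedom of s
# s = sigma * chi_nu / sqrt(nu), and E(chi_nu) = sqrt(2) * Gamma((nu+1)/2) / Gamma(nu/2):
mean_s_over_sigma = sqrt(2.0 / nu) * gamma((nu + 1) / 2) / gamma(nu / 2)
# E(1/chi_nu) = Gamma((nu-1)/2) / (sqrt(2) * Gamma(nu/2)), so:
mean_sigma_over_s = sqrt(nu / 2.0) * gamma((nu - 1) / 2) / gamma(nu / 2)

print(round((1 - mean_s_over_sigma) * 100, 1))    # -> 2.7  (% by which s falls below sigma)
print(round((mean_sigma_over_s - 1) * 100, 1))    # -> 9.4  (% by which 1/s exceeds 1/sigma)
```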
29,389
Distribution of the Rayleigh quotient
In case of the normal distribution, a solution can be found in Mathai and Provost, Quadratic Forms in Random Variables (1992). The inverse and product moments of such quadratic forms are derived there from the moment generating function. Quadratic forms in elliptic distributions and their moments are treated in Mathai, Provost and Hayakawa, Bilinear Forms and Zonal Polynomials (1995), but not to the same extent as in the normal case. As elliptical distributions are usually defined in terms of their characteristic function $e^{it'\mu}\xi(t'\Sigma t)$, this function $\xi$ will appear in the solution if one chooses the mgf approach. Yet, it has never been calculated, afaik.
29,390
Distribution of the Rayleigh quotient
There is a nice approximation described in the paper "Computing moments of ratios of quadratic forms in normal variables" (the approximation predates this paper though). It uses a second-order Taylor expansion that leads to a simple formula that is a good approximation in many cases (this approximation is used in this other answer of mine, see the comments of the original poster). Let's write $N = w^T A w$ and $D = w^T B w$. Then $\mathbb{E}\left(\frac{w^T A w}{w^T B w}\right)$ can be approximated with the following expression of the moments of $N$ and $D$: \begin{equation} \mathbb{E}\left(\frac{N}{D}\right) \approx \frac{\mu_N}{\mu_D}\left( 1 - \frac{Cov(N,D)}{\mu_N \mu_D} + \frac{Var(D)}{\mu_D^2} \right) \end{equation} where: \begin{equation} \begin{split} & \mu_N = tr(A\Sigma) + \mu_{w}^T A \mu_{w} \\ & \mu_D = tr(B\Sigma) + \mu_w^T B \mu_w \\ & Var(D) = 2tr([B \Sigma]^2) + 4 \mu_w^T B \Sigma B \mu_w \\ & Cov(N,D) = 2tr(B \Sigma A \Sigma) + 4 \mu_w^T B \Sigma A \mu_w \end{split} \end{equation} and $\mu_w$ and $\Sigma$ are the mean and covariance of normal vector $w$. That is, $w\sim \mathcal{N}(\mu_w, \Sigma)$.
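The formula is straightforward to code (function name is my own); two sanity checks are exact: when $A=B$ the ratio is identically 1 and the approximation returns 1, and when $\mu_w=0$, $\Sigma=I$, $B=I$ the approximation reproduces the exact value $\mathrm{tr}(A)/p$:

```python
import numpy as np

def ratio_expectation_approx(A, B, mu, Sigma):
    """Second-order Taylor approximation of E[(w'Aw)/(w'Bw)] for w ~ N(mu, Sigma)."""
    mu_N = np.trace(A @ Sigma) + mu @ A @ mu
    mu_D = np.trace(B @ Sigma) + mu @ B @ mu
    var_D = 2.0 * np.trace(B @ Sigma @ B @ Sigma) + 4.0 * mu @ B @ Sigma @ B @ mu
    cov_ND = 2.0 * np.trace(B @ Sigma @ A @ Sigma) + 4.0 * mu @ B @ Sigma @ A @ mu
    return (mu_N / mu_D) * (1.0 - cov_ND / (mu_N * mu_D) + var_D / mu_D**2)

p = 4
I = np.eye(p)
mu = np.array([1.0, 2.0, 0.5, -1.0])

# A == B: the ratio is identically 1, and so is the approximation.
assert np.isclose(ratio_expectation_approx(I, I, mu, I), 1.0)

# mu = 0, Sigma = I, B = I: the exact mean is tr(A)/p, and the formula matches it.
A = np.diag([1.0, 2.0, 3.0, 4.0])
assert np.isclose(ratio_expectation_approx(A, I, np.zeros(p), I), np.trace(A) / p)
```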
29,391
Treating ordinal variables as continuous for regression problems
It's important to distinguish, as pointed out by Nick Cox, between IVs and DVs. As far as the DV is concerned, why not use an ordinal regression model, as discussed excellently e.g. by Agresti: http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470082895.html I am less sure about the IV case. The standard approach would perhaps be dummy coding. I suppose this is what Frank Harrell means. Maybe Agresti discusses this as well.
29,392
Treating ordinal variables as continuous for regression problems
With the luxury of time we would use dummy variables as with nominal predictors, then penalize them (penalized MLE) towards ordinality of effects. Something like that was discussed in a paper by Hans van Houwelingen some years ago. Short of that, we often approximate the effect of ordinal variables by fitting a quadratic effect. It would also not be ridiculous to use AIC to select between a regular nominal dummy variables model and a restricted model that assumed the ordinal predictor was continuous (like the quadratic). I'm not sure that the SEM results would apply, but they might.
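As a sketch of the AIC comparison described here (the simulated data and all names are illustrative), one can fit the ordinal predictor both ways with ordinary least squares. Note the quadratic model is nested in the dummy-variable model, since any function of the level lies in the span of the level indicators, so its residual sum of squares can never be smaller:

```python
import numpy as np

rng = np.random.default_rng(0)
levels = rng.integers(1, 6, size=200)                 # ordinal predictor, levels 1..5
y = 0.8 * levels - 0.1 * levels**2 + rng.normal(0.0, 1.0, size=200)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def aic(rss_value, n_obs, n_params):
    return n_obs * np.log(rss_value / n_obs) + 2 * n_params

n = len(y)
ones = np.ones(n)
# Model 1: levels as nominal dummies (saturated in the predictor; 5 parameters)
X_dummy = np.column_stack([ones] + [(levels == j).astype(float) for j in (2, 3, 4, 5)])
# Model 2: ordinal codes treated as continuous, plus a quadratic term (3 parameters)
X_quad = np.column_stack([ones, levels, levels**2])

rss_dummy, rss_quad = rss(X_dummy, y), rss(X_quad, y)
assert rss_quad >= rss_dummy - 1e-8   # quadratic model is nested in the dummy model
print(aic(rss_dummy, n, 5), aic(rss_quad, n, 3))
```

AIC then trades the dummy model's extra flexibility against its two extra parameters, as suggested above.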
29,393
Treating ordinal variables as continuous for regression problems
I have one source, Snijders and Bosker's (2012) multilevel analysis book, page 310, saying: "if the number of categories is small (3 or 4), or if it is between 5 and 10, and the distribution cannot well be approximated by a normal distribution, then statistical methods for ordered categorical outcomes can be useful". My understanding is, if you have at least 10 categories and an approximately normally distributed dependent variable, it is safe to treat it as a continuous variable. For a more concrete answer, I would run a small-scale simulation analysis.
29,394
Treating ordinal variables as continuous for regression problems
Liddell & Kruschke (2018) is another source which discusses problems associated with treating ordinal data as continuous. The paper illustrates a number of the problems that can occur. They advocate using ordered-probit models to deal with ordinal data. While they specifically advocate for a Bayesian approach, they note that frequentist approaches may also work.
29,395
Treating ordinal variables as continuous for regression problems
If an outcome is ordinal, one should want a method of analysis that is invariant to the codes used to label the levels. For example, suppose the outcome has levels: SD, D, N, A and SA. Then one might label the levels with codes 1, 2, 3, 4, 5. If one analyzes this outcome with a t-test, the p-value is invariant only to location or scale changes, for example -2, -1, 0, 1, 2 or -4, -2, 0, 2, 4. The p-value from the t-test is not invariant to other codings like -10, -1, 0, 1, 100 or any coding that does not preserve 'distance'. The proportional odds model and the multinomial model give p-values that are invariant to the coding selection. [Maybe this point has been made earlier?]
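The point can be verified directly: the two-sample t statistic (and hence its p-value) is unchanged by a location/scale recoding but changes under an arbitrary order-preserving one. A sketch with made-up ordinal data:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch two-sample t statistic."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Two groups of ordinal responses coded 1..5 (SD, D, N, A, SA)
g1 = [1, 2, 2, 3, 3, 4, 5, 3, 2, 4]
g2 = [2, 3, 4, 4, 5, 5, 3, 4, 5, 4]

recode = lambda xs, m: [m[x] for x in xs]
affine = {1: -4, 2: -2, 3: 0, 4: 2, 5: 4}         # location/scale change only
arbitrary = {1: -10, 2: -1, 3: 0, 4: 1, 5: 100}   # order-preserving, unequal spacing

t0 = welch_t(g1, g2)
assert abs(welch_t(recode(g1, affine), recode(g2, affine)) - t0) < 1e-9
assert abs(welch_t(recode(g1, arbitrary), recode(g2, arbitrary)) - t0) > 0.1
```

A rank-based or proportional odds analysis would return the same answer under both recodings.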
29,396
When including a linear interaction between two continuous predictors, should one generally also include quadratic predictors?
Is this reasoning valid? Yes it is. See below. Is this a well known rule of thumb, e.g., is it stated as a recommendation in any standard books? I think it should be but I don't think it is, at least judging by the number of postgraduate students (and beyond) who haven't really considered it. This question made me think about a section in the brilliant paper by W N Venables "Exegeses on Linear Models" (which is required reading for all my students) and I encourage anyone who has not read it, or hasn't read it recently, to do so. I will provide the full reference and link at the end of this answer. Most of what follows is taken from the paper, almost verbatim.

Let's start with a model that, on the face of it, is not very interesting: $$ Y = f(x,Z)$$ where $x$ is a matrix of continuous explanatory variables and $Z$ is a random variable which we can think of as normally distributed around zero, but it does not have to be. If we take a first-order Taylor series approximation around $x_0$, then we have: $$Y \approx f(x_0, 0) + \sum_{i = 1}^{p} f^{(i)}(x_0,0)(x_i-x_{i0}) + f^{(p+1)}(x_0,0)Z $$ or equivalently $$Y \approx \beta_0 + \sum_{i = 1}^{p} \beta_{i}(x_i-x_{i0}) + \sigma Z $$ Note that it is common practice to subsume all the $x_{i0}$ into the intercept, and then the model takes on a very familiar form. At this point we could naturally discuss whether a global intercept is a good idea, and whether centring the data could be of value.
If we continue with the Taylor series, the next approximation will be: $$Y \approx \beta_0 + \sum_{i = 1}^{p} \beta_{i}(x_i-x_{i0}) + \sum_{i = 1}^{p}\sum_{j = 1}^{p} \beta_{ij}(x_i-x_{i0})(x_j-x_{j0}) + \left( \sigma + \sum_{i = 1}^{p} \gamma_i(x_i-x_{i0}) \right) Z + \sigma Z^2 $$ and so now we find:

- nonlinearity in the main effects (quadratic terms)
- a linear x linear interaction (cross product of two linear terms)
- heteroskedasticity (the $(x_i-x_{i0}) Z$ terms)
- skewness (the term in $Z^2$)

So this brings us back to the question about whether we should include quadratic terms when we include an interaction, and this approach tells us that we should. In general I think it is always a good idea to consider quadratic terms when fitting an interaction. Perhaps a good question to ask is why should we not do so?

I always encourage students and colleagues to step back and look at what we are doing from a wider viewpoint. Statistical models are models, an abstraction of reality, which as George Box famously said, are all wrong, but some are useful. It is our job to make them as useful as possible, whether that be for prediction or inference. It might very well be the case that in a particular context (e.g. a small region of the domain of $x$) the nonlinear (and/or) interaction terms will not be needed, but at the very least it is a good idea to think about this, and the same goes for heteroskedasticity and skewness.

Source for the above: Venables, W.N., 1998, October. Exegeses on linear models. In S-Plus User's Conference, Washington DC. http://www.stats.ox.ac.uk/pub/MASS3/Exegeses.pdf
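A small numerical illustration of the Taylor argument (the response surface and grid are invented for the sketch): fit a smooth function with an interaction-only model and with the full second-order model; the quadratic terms absorb most of the remaining lack of fit.

```python
import numpy as np

# A smooth "truth" evaluated on a grid: y = exp(0.5 x + 0.5 z)
x, z = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
x, z = x.ravel(), z.ravel()
y = np.exp(0.5 * x + 0.5 * z)

def fit_rss(columns, y):
    """OLS residual sum of squares for a design built from the given columns."""
    X = np.column_stack(columns)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

ones = np.ones_like(x)
rss_interaction = fit_rss([ones, x, z, x * z], y)        # no quadratic terms
rss_full = fit_rss([ones, x, z, x * z, x**2, z**2], y)   # full second order

# The second-order model captures the curvature the interaction-only model misses
assert rss_full < 0.5 * rss_interaction
```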
When including a linear interaction between two continuous predictors, should one generally also inc
Is this reasoning valid? Yes it is. See below. Is this a well known rule of thumb, e.g., is it stated as a recommendation in any standard books? I think it should be but I don't think it is, at lea
When including a linear interaction between two continuous predictors, should one generally also include quadratic predictors? Is this reasoning valid? Yes it is. See below. Is this a well known rule of thumb, e.g., is it stated as a recommendation in any standard books? I think it should be but I don't think it is, at least judging by the number of postgraduate students (and beyond) that haven't really considered it. This question made me think about a section in the brilliant paper by W N Venables "Exegeses on Linear Models" (which is required reading for all my students) and I encourage anyone who has not read it, or hasn't read it recently, to do so. I will provide the full reference and link at the end of this answer. Most of what follows is taken from the paper, almost verbatim. Let's start with a model that, on the face of it, is not very interesting: $$ Y = f(x,Z)$$ where $x$ is a matrix of continuous explanatory variables and $Z$ is a random variable which we can think of as normally distributed around zero, but it does not have to be. If we take a first-order Taylor series approximation around $x_0$, then we have: $$Y \approx f(x_0, 0) + \sum_{i = 1}^{p} f^{(i)}(x_0,0)(x_i-x_{i0}) + f^{(p+i)}(x_0,0)Z $$ or equivalently $$Y \approx \beta_0 + \sum_{i = 1}^{p} \beta_{i}(x_i-x_{i0}) + \sigma Z $$ Note that it is common practice to subsume all the $x_{i0}$ into the intercept and then the model takes on a very familiar form. At this point we could naturally discuss whether a global intercept is a good idea, and whether centring the data could be of value. 
If we continue with the Taylor series, the next approximation will be: $$Y \approx \beta_0 + \sum_{i = 1}^{p} \beta_{i}(x_i-x_{i0}) + \sum_{i = 1}^{p}\sum_{j = 1}^{p} \beta_{ij}(x_i-x_{i0})(x_j-x_{j0}) + \left( \sigma + \sum_{i = 1}^{p} \gamma_i(x_i-x_{i0}) \right) Z + \sigma Z^2 $$ and so now we find: nonlinearty in the main effect (quadratic terms) a linear x linear interaction (cross product of two linear terms) heteroskedasticity (the $(x_i-x_{i0}) Z$ terms) skewness (the term in $Z^2$) So this brings us back to the question about whether we should include quadratic terms when we include an interaction, and this approach tells us that we should. In general I think it is always a good idea to consider quadratic terms when fitting an interaction. Perhaps a good question to ask is why should we not do so ? I always encourage students and colleagues to step back and look at what we are doing from a wider viewpoint. Statistical models are models, an abstraction of reality, which as George Box famously said, are all wrong, but some are useful. It is our job to make them as useful as possible, whether that be for prediction or inference. It might very well be the case that in a particular context (eg a small region of the domain of $x$) that the nonlinear (and/or) interaction terms will not be needed, but at the very least it is a good idea to think about this, and the same goes for heteroskedasticity and skewness. Source for the above: Venables, W.N., 1998, October. Exegeses on linear models. In S-Plus User’s Conference, Washington DC. http://www.stats.ox.ac.uk/pub/MASS3/Exegeses.pdf
When including a linear interaction between two continuous predictors, should one generally also include quadratic predictors?
I have an argument for, and an argument against. My recommendation is not to force the quadratic terms in when adding an interaction. If they fit your problem, then go ahead and add them, but not as a general rule of thumb.

Yes: include quadratic terms, because a 2nd-order Taylor expansion would suggest so:
$$y=f(x,z)+e\approx f_x\Delta x+f_z\Delta z+f_{xz}\Delta x\Delta z+\tfrac{1}{2}f_{xx}\Delta x^2+\tfrac{1}{2}f_{zz}\Delta z^2+e$$
From the point of view of mathematical elegance it would make sense to throw in the quadratic terms whenever the interaction is added, if you like the Taylor-expansion interpretation of the regression coefficients.

No: this would make your approximation quadratic, which usually is not what you want:
$$y=\beta_0+\beta_xx+\beta_zz+\beta_{xz}xz+\beta_{xx}x^2+\beta_{zz}z^2+e$$
So, e.g., when $x\to-\infty$ then $y\sim x^2$, i.e. increasing at a quadratic rate. This is rarely desired behavior in linear regression, because we usually expect a monotonic effect of the regressors. On the other hand, without quadratic terms the dependence on $x$ remains linear given the interaction, $(\beta_x+\beta_{xz}z)x$, although the slope varies with $z$, which is a desired behavior. This is the reason, for instance, why LOESS-type local polynomial fits use odd-order polynomials, with $p=1$ or $p=3$.
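The varying-slope interpretation $(\beta_x+\beta_{xz}z)x$ can be verified directly. In this small sketch (Python/NumPy; the coefficient values are made up for illustration), fitting an interaction model to noise-free data recovers the coefficients exactly, and the slope with respect to $x$ at a fixed $z_0$ comes out as $\beta_x + \beta_{xz} z_0$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 1.0 + 2.0 * x + 3.0 * z + 4.0 * x * z  # noise-free for an exact check

# Fit the interaction model (no quadratic terms)
X = np.column_stack([np.ones(n), x, z, x * z])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

def predict(xv, zv):
    return beta[0] + beta[1] * xv + beta[2] * zv + beta[3] * xv * zv

# Slope with respect to x at fixed z0 is beta_x + beta_xz * z0
z0 = 2.0
slope = predict(1.0, z0) - predict(0.0, z0)
print(slope)  # 2 + 4*2 = 10, linear in x for any fixed z
```

For any fixed $z$ the model is still linear (and hence monotonic) in $x$; only the slope changes with $z$, which is what distinguishes it from the quadratic model.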
Advice/literature on combining items with different response scales into composite scales?
This is a great question! I think that in scale construction, there's a delicate balance between interpretability and psychometric considerations. Specifically, a scale sum or average is much easier to grasp than a sum or average taken of standardized or otherwise re-scaled items. However, there can be a somewhat subtle psychometric reason for re-scaling items prior to creating your scale composite (i.e., taking a sum or average). If your items have radically different standard deviations, the reliability of your composite scale will be decreased simply because of these differing standard deviations. One way to understand this intuitively is to realize that, as you point out, items with widely varying standard deviations are assigned different weights in the composite. So, measurement error in the item with the greater standard deviation will tend to dominate the scale composite. In effect, having widely varying standard deviations reduces the very benefit that you're trying to accrue by averaging together multiple items (i.e., normally, averaging together multiple items reduces the impact of measurement error from any one of the component items). I have created a demonstration of the effects of a single dominant item in some simulated data below. Here I create five correlated items and find the reliability (measured with Cronbach's alpha) of the resultant scale. 
require(psych)

# Create data
set.seed(13105)
item1 <- round(rnorm(100, sd = 3), digits = 0)
item2 <- round(item1 + rnorm(100, sd = 1), digits = 0)
item3 <- round(item1 + rnorm(100, sd = 1), digits = 0)
item4 <- round(item1 + rnorm(100, sd = 1), digits = 0)
item5 <- round(item1 + rnorm(100, sd = 1), digits = 0)
d <- data.frame(item1, item2, item3, item4, item5)

# Cronbach's alpha
alpha(d)

Reliability analysis
Call: alpha(x = d)

  raw_alpha std.alpha G6(smc) average_r  mean  sd
       0.97      0.97    0.97      0.87 -0.14 2.5

 Reliability if an item is dropped:
      raw_alpha std.alpha G6(smc) average_r
item1      0.96      0.96    0.94      0.84
item2      0.97      0.97    0.96      0.88
item3      0.97      0.97    0.96      0.89
item4      0.97      0.97    0.96      0.88
item5      0.96      0.97    0.96      0.87

 Item statistics
        n    r r.cor r.drop  mean  sd
item1 100 0.98  0.99   0.97 -0.10 2.5
item2 100 0.94  0.92   0.90 -0.27 2.8
item3 100 0.93  0.91   0.89 -0.09 2.7
item4 100 0.94  0.92   0.91 -0.19 2.6
item5 100 0.94  0.93   0.91 -0.06 2.7

And here I change the standard deviation of item2 by multiplying the item by $5$. Note the dramatic drop in Cronbach's alpha due to this procedure. Also note that multiplying an item by a positive constant does not affect the correlation matrix constructed with these five items in the slightest. The only thing that I have done by multiplying item2 by $5$ is change the scale on which item2 is measured, and yet changing this scale greatly impacts the reliability of the composite.
# Re-scale item 2 to have a much larger standard deviation than the other items
d$item2 <- d$item2 * 5

# Cronbach's alpha
alpha(d)

Reliability analysis
Call: alpha(x = d)

  raw_alpha std.alpha G6(smc) average_r  mean  sd
       0.74      0.97    0.97      0.87 -0.36 4.7

 Reliability if an item is dropped:
      raw_alpha std.alpha G6(smc) average_r
item1      0.68      0.96    0.94      0.84
item2      0.97      0.97    0.96      0.88
item3      0.69      0.97    0.96      0.89
item4      0.68      0.97    0.96      0.88
item5      0.68      0.97    0.96      0.87

 Item statistics
        n    r r.cor r.drop  mean  sd
item1 100 0.98  0.99   0.96 -0.10 2.5
item2 100 0.94  0.92   0.90 -1.35 13.9
item3 100 0.93  0.91   0.86 -0.09 2.7
item4 100 0.94  0.92   0.89 -0.19 2.6
item5 100 0.94  0.93   0.90 -0.06 2.7
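The same demonstration can be mirrored outside R. This sketch (Python/NumPy; `cronbach_alpha` is my own helper implementing the standard formula $\alpha = \frac{k}{k-1}\bigl(1 - \frac{\sum_i \sigma_i^2}{\sigma_T^2}\bigr)$, and the simulation follows the spirit of the R code above without the rounding step) shows the drop in raw alpha when one item dominates, and that standardizing the items before forming the composite undoes the damage:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total)."""
    items = np.asarray(items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(13105)
base = rng.normal(scale=3, size=100)
# Five correlated items: a common factor plus unit-variance noise
d = np.column_stack(
    [base] + [base + rng.normal(scale=1, size=100) for _ in range(4)]
)

a_raw = cronbach_alpha(d)  # high: items are on comparable scales

d_rescaled = d.copy()
d_rescaled[:, 1] *= 5      # one item now dominates the composite
a_dominant = cronbach_alpha(d_rescaled)

# Standardizing each item before averaging restores the reliability
d_std = (d_rescaled - d_rescaled.mean(axis=0)) / d_rescaled.std(axis=0, ddof=1)
a_std = cronbach_alpha(d_std)

print(round(a_raw, 2), round(a_dominant, 2), round(a_std, 2))
```

Alpha on the standardized items equals the "std.alpha" reported by `psych::alpha` in the R output, which is why that column stays high even when the raw alpha collapses.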
Good, authoritative recent book on factor analysis and principal component analysis
There was a conference in 2004 »Factor Analysis at 100. Historical Developments and Future Directions«. An edited book of chapters based on conference presentations followed: Cudeck/MacCallum. 2007. Factor Analysis at 100. Historical Developments and Future Directions. Lawrence Erlbaum. From the preface: This book is the result of a conference that was held at the University of North Carolina in the spring of 2004 to commemorate the 100 year anniversary of Spearman’s famous article. The purpose of the conference and of this book was to review the contributions of the last century that have produced the extensive body of knowledge associated with factor analysis and other latent variable models. The contributors also took the occasion to describe the main contemporary themes in statistical models for latent variables and to give an overview of how these ideas are being extended.
Good, authoritative recent book on factor analysis and principal component analysis
By just looking at (or seeing) this

  In 1977, [Yoshio Takane] earned his Ph.D. in Psychometrics from the University of North Carolina at Chapel Hill.

and this

  Peter Flom received his Ph.D. in Psychometrics in 1999 from Fordham University, where he was a Presidential fellow.

it is possible to assume that you are looking for the book entitled "Constrained Principal Component Analysis and Related Techniques", authored solely by Yoshio Takane and published by CRC Press in 2014. I think it is reasonably authoritative, but please let us know if it is modern enough (the Japanese version is dated 1995).