idx
int64
1
56k
question
stringlengths
15
155
answer
stringlengths
2
29.2k
βŒ€
question_cut
stringlengths
15
100
answer_cut
stringlengths
2
200
βŒ€
conversation
stringlengths
47
29.3k
conversation_cut
stringlengths
47
301
17,101
Mixed effects model: Compare random variance component across levels of a grouping variable
Looking at this problem from a slightly different perspective and starting from the "general" form of the linear mixed model, we have $$ y_{ijk} = \mu + \alpha_j + d_{ij} + e_{ijk}, \quad d_{i} \sim N(0, \Sigma), \quad e_{ijk} \sim N(0, \sigma^2) $$ where $\alpha_j$ is the fixed effect of the $j$'th condition and $d_i = (d_{i1}, \ldots, d_{iJ})^\top$ is a random vector (some call it vector-valued random effect, I think) for the $i$'th participant in the $j$'th condition. In your example we have two conditions $y_{i1k}$ and $y_{i2k}$ which I'll denote as $A$ and $B$ in what follows. So the covariance matrix of the two-dimensional random vector $d_{i}$ is of the general form $$\Sigma = \begin{bmatrix}\sigma^2_A & \sigma_{AB}\\ \sigma_{AB} & \sigma^2_{B} \end{bmatrix} $$ with non-negative $\sigma^2_A$ and $\sigma^2_{B}$. Let's first see how the re-parameterized version of $\Sigma$ looks when we use sum contrasts. The variance of the intercept, which corresponds to the grand mean, is $$ \sigma^2_{1} := \text{Var(grand mean)} = \text{Var}(\frac{1}{2} \cdot (A+B)) = \frac{1}{4} \cdot (\text{Var}(A) + \text{Var}(B) + 2 \cdot \text{Cov}(A,B)). $$ The variance of the contrast is $$ \sigma^2_{2} := \text{Var(contrast)} = \text{Var}(\frac{1}{2} \cdot (A-B)) = \frac{1}{4} \cdot (\text{Var}(A) + \text{Var}(B) - 2 \cdot \text{Cov}(A,B)). $$ And the covariance between the intercept and the contrast is $$ \sigma_{12} := \text{Cov}(\text{grand mean, contrast}) = \text{Cov}(\frac{1}{2} \cdot (A+B), \frac{1}{2} \cdot (A-B)) \\ = \frac{1}{4} \cdot (\text{Var}(A) - \text{Var}(B)). $$ Thus, the re-parameterized $\Sigma$ is $$\Sigma = \begin{bmatrix}\sigma^2_1 + \sigma^2_2 + 2\sigma_{12} & \sigma^2_1 - \sigma^2_2\\ \sigma^2_1 - \sigma^2_2 & \sigma^2_1 + \sigma^2_2 - 2\sigma_{12} \end{bmatrix} = \begin{bmatrix}\sigma^2_A & \sigma_{AB}\\ \sigma_{AB} & \sigma^2_{B} \end{bmatrix}. $$ $\Sigma$ can be decomposed into $$ \Sigma = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} + \begin{bmatrix}\sigma^2_2 & -\sigma^2_2\\ -\sigma^2_2 & \sigma^2_2\end{bmatrix} + 2\begin{bmatrix}\sigma_{12} & 0\\ 0 & -\sigma_{12}\end{bmatrix}. $$ Setting the covariance parameter $\sigma_{12}$ to zero we get $$ \Sigma = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} + \begin{bmatrix}\sigma^2_2 & -\sigma^2_2\\ -\sigma^2_2 & \sigma^2_2\end{bmatrix} = \begin{bmatrix}\sigma^2_1 + \sigma^2_2 & \sigma^2_1 - \sigma^2_2 \\ \sigma^2_1 - \sigma^2_2 & \sigma^2_1 + \sigma^2_2\end{bmatrix} $$ which, as @Jake Westfall derived slightly differently, tests the hypothesis of equal variances when we compare a model without this covariance parameter to a model where the covariance parameter is still included/not set to zero. Notably, introducing another crossed random grouping factor (such as stimuli) does not change the model comparison that has to be done, i.e., anova(mod1, mod2) (optionally with the argument refit = FALSE when you use REML estimation) where mod1 and mod2 are defined as @Jake Westfall did. Taking out $\sigma_{12}$ and the variance component for the contrast $\sigma^2_2$ (what @Henrik suggests) results in $$ \Sigma = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} $$ which tests the hypothesis that the variances in the two conditions are equal and that they are equal to the (positive) covariance between the two conditions. When we have two conditions, a model that fits a covariance matrix with two parameters in a (positive) compound symmetric structure can also be written as # code snippet from Jake Westfall d$contrast <- 2*(d$condition == 'experimental') - 1 # new model mod3 <- lmer(sim_1 ~ contrast + (1 | participant_id) + (1 | contrast:participant_id), data = d, REML = FALSE) or (using the categorical variable/factor condition) mod4 <- lmer(sim_1 ~ condition + (1 | participant_id) + (1 | condition:participant_id), data = d, REML = FALSE) with $$ \Sigma = \begin{bmatrix}\sigma^2_1 + \sigma^2_2 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1 + \sigma^2_2 \end{bmatrix} = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} + \begin{bmatrix}\sigma^2_2 & 0\\ 0 & \sigma^2_2\end{bmatrix} $$ where $\sigma^2_1$ and $\sigma^2_2$ are the variance parameters for the participant and the participant-condition-combination intercepts, respectively. Note that this $\Sigma$ has a non-negative covariance parameter. Below we see that mod1, mod3, and mod4 yield equivalent fits: # code snippet from Jake Westfall d$contrast <- 2*(d$condition == 'experimental') - 1 mod1 <- lmer(sim_1 ~ contrast + (contrast || participant_id), data = d, REML = FALSE) mod2 <- lmer(sim_1 ~ contrast + (contrast | participant_id), data = d, REML = FALSE) # new models mod3 <- lmer(sim_1 ~ contrast + (1 | participant_id) + (1 | contrast:participant_id), data = d, REML = FALSE) mod4 <- lmer(sim_1 ~ condition + (1 | participant_id) + (1 | condition:participant_id), data = d, REML = FALSE) anova(mod3, mod1) # Data: d # Models: # mod3: sim_1 ~ contrast + (1 | participant_id) + (1 | contrast:participant_id) # mod1: sim_1 ~ contrast + ((1 | participant_id) + (0 + contrast | participant_id)) # Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq) # mod3 5 2396.9 2420.3 -1193.5 2386.9 # mod1 5 2396.9 2420.3 -1193.5 2386.9 0 0 1 anova(mod4, mod3) # Data: d # Models: # mod4: sim_1 ~ condition + (1 | participant_id) + (1 | condition:participant_id) # mod3: sim_1 ~ contrast + (1 | participant_id) + (1 | contrast:participant_id) # Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq) # mod4 5 2396.9 2420.3 -1193.5 2386.9 # mod3 5 2396.9 2420.3 -1193.5 2386.9 0 0 1 With treatment contrasts (the default in R) the re-parameterized $\Sigma$ is $$ \Sigma = \begin{bmatrix}\sigma^2_1 & \sigma^2_1 + \sigma_{12}\\ \sigma^2_1 + \sigma_{12} & \sigma^2_1 + \sigma^2_2 + 2\sigma_{12} \end{bmatrix} = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} + \begin{bmatrix}0 & 0\\ 0 & \sigma^2_2\end{bmatrix} + \begin{bmatrix}0 & \sigma_{12}\\ \sigma_{12} & 2\sigma_{12}\end{bmatrix} $$ where $\sigma^2_1$ is the variance parameter for the intercept (condition $A$), $\sigma^2_2$ the variance parameter for the contrast ($A - B$), and $\sigma_{12}$ the corresponding covariance parameter. We can see that neither setting $\sigma_{12}$ to zero nor setting $\sigma^2_2$ to zero tests (only) the hypothesis of equal variances. However, as shown above, we can still use mod4 to test the hypothesis as changing the contrasts has no impact on the parameterization of $\Sigma$ for this model.
Mixed effects model: Compare random variance component across levels of a grouping variable
Looking at this problem from a slightly different perspective and starting from the "general" form of the linear mixed model, we have $$ y_{ijk} = \mu + \alpha_j + d_{ij} + e_{ijk}, \quad d_{i} \sim N
Mixed effects model: Compare random variance component across levels of a grouping variable Looking at this problem from a slightly different perspective and starting from the "general" form of the linear mixed model, we have $$ y_{ijk} = \mu + \alpha_j + d_{ij} + e_{ijk}, \quad d_{i} \sim N(0, \Sigma), \quad e_{ijk} \sim N(0, \sigma^2) $$ where $\alpha_j$ is the fixed effect of the $j$'th condition and $d_i = (d_{i1}, \ldots, d_{iJ})^\top$ is a random vector (some call it vector-valued random effect, I think) for the $i$'th participant in the $j$'th condition. In your example we have two conditions $y_{i1k}$ and $y_{i2k}$ which I'll denote as $A$ and $B$ in what follows. So the covariance matrix of the two-dimensional random vector $d_{i}$ is of the general form $$\Sigma = \begin{bmatrix}\sigma^2_A & \sigma_{AB}\\ \sigma_{AB} & \sigma^2_{B} \end{bmatrix} $$ with non-negative $\sigma^2_A$ and $\sigma^2_{B}$. Let's first see how the re-parameterized version of $\Sigma$ looks when we use sum contrasts. The variance of the intercept, which corresponds to the grand mean, is $$ \sigma^2_{1} := \text{Var(grand mean)} = \text{Var}(\frac{1}{2} \cdot (A+B)) = \frac{1}{4} \cdot (\text{Var}(A) + \text{Var}(B) + 2 \cdot \text{Cov}(A,B)). $$ The variance of the contrast is $$ \sigma^2_{2} := \text{Var(contrast)} = \text{Var}(\frac{1}{2} \cdot (A-B)) = \frac{1}{4} \cdot (\text{Var}(A) + \text{Var}(B) - 2 \cdot \text{Cov}(A,B)). $$ And the covariance between the intercept and the contrast is $$ \sigma_{12} := \text{Cov}(\text{grand mean, contrast}) = \text{Cov}(\frac{1}{2} \cdot (A+B), \frac{1}{2} \cdot (A-B)) \\ = \frac{1}{4} \cdot (\text{Var}(A) - \text{Var}(B)). $$ Thus, the re-parameterized $\Sigma$ is $$\Sigma = \begin{bmatrix}\sigma^2_1 + \sigma^2_2 + 2\sigma_{12} & \sigma^2_1 - \sigma^2_2\\ \sigma^2_1 - \sigma^2_2 & \sigma^2_1 + \sigma^2_2 - 2\sigma_{12} \end{bmatrix} = \begin{bmatrix}\sigma^2_A & \sigma_{AB}\\ \sigma_{AB} & \sigma^2_{B} \end{bmatrix}. $$ $\Sigma$ can be decomposed into $$ \Sigma = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} + \begin{bmatrix}\sigma^2_2 & -\sigma^2_2\\ -\sigma^2_2 & \sigma^2_2\end{bmatrix} + 2\begin{bmatrix}\sigma_{12} & 0\\ 0 & -\sigma_{12}\end{bmatrix}. $$ Setting the covariance parameter $\sigma_{12}$ to zero we get $$ \Sigma = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} + \begin{bmatrix}\sigma^2_2 & -\sigma^2_2\\ -\sigma^2_2 & \sigma^2_2\end{bmatrix} = \begin{bmatrix}\sigma^2_1 + \sigma^2_2 & \sigma^2_1 - \sigma^2_2 \\ \sigma^2_1 - \sigma^2_2 & \sigma^2_1 + \sigma^2_2\end{bmatrix} $$ which, as @Jake Westfall derived slightly differently, tests the hypothesis of equal variances when we compare a model without this covariance parameter to a model where the covariance parameter is still included/not set to zero. Notably, introducing another crossed random grouping factor (such as stimuli) does not change the model comparison that has to be done, i.e., anova(mod1, mod2) (optionally with the argument refit = FALSE when you use REML estimation) where mod1 and mod2 are defined as @Jake Westfall did. Taking out $\sigma_{12}$ and the variance component for the contrast $\sigma^2_2$ (what @Henrik suggests) results in $$ \Sigma = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} $$ which tests the hypothesis that the variances in the two conditions are equal and that they are equal to the (positive) covariance between the two conditions. When we have two conditions, a model that fits a covariance matrix with two parameters in a (positive) compound symmetric structure can also be written as # code snippet from Jake Westfall d$contrast <- 2*(d$condition == 'experimental') - 1 # new model mod3 <- lmer(sim_1 ~ contrast + (1 | participant_id) + (1 | contrast:participant_id), data = d, REML = FALSE) or (using the categorical variable/factor condition) mod4 <- lmer(sim_1 ~ condition + (1 | participant_id) + (1 | condition:participant_id), data = d, REML = FALSE) with $$ \Sigma = \begin{bmatrix}\sigma^2_1 + \sigma^2_2 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1 + \sigma^2_2 \end{bmatrix} = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} + \begin{bmatrix}\sigma^2_2 & 0\\ 0 & \sigma^2_2\end{bmatrix} $$ where $\sigma^2_1$ and $\sigma^2_2$ are the variance parameters for the participant and the participant-condition-combination intercepts, respectively. Note that this $\Sigma$ has a non-negative covariance parameter. Below we see that mod1, mod3, and mod4 yield equivalent fits: # code snippet from Jake Westfall d$contrast <- 2*(d$condition == 'experimental') - 1 mod1 <- lmer(sim_1 ~ contrast + (contrast || participant_id), data = d, REML = FALSE) mod2 <- lmer(sim_1 ~ contrast + (contrast | participant_id), data = d, REML = FALSE) # new models mod3 <- lmer(sim_1 ~ contrast + (1 | participant_id) + (1 | contrast:participant_id), data = d, REML = FALSE) mod4 <- lmer(sim_1 ~ condition + (1 | participant_id) + (1 | condition:participant_id), data = d, REML = FALSE) anova(mod3, mod1) # Data: d # Models: # mod3: sim_1 ~ contrast + (1 | participant_id) + (1 | contrast:participant_id) # mod1: sim_1 ~ contrast + ((1 | participant_id) + (0 + contrast | participant_id)) # Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq) # mod3 5 2396.9 2420.3 -1193.5 2386.9 # mod1 5 2396.9 2420.3 -1193.5 2386.9 0 0 1 anova(mod4, mod3) # Data: d # Models: # mod4: sim_1 ~ condition + (1 | participant_id) + (1 | condition:participant_id) # mod3: sim_1 ~ contrast + (1 | participant_id) + (1 | contrast:participant_id) # Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq) # mod4 5 2396.9 2420.3 -1193.5 2386.9 # mod3 5 2396.9 2420.3 -1193.5 2386.9 0 0 1 With treatment contrasts (the default in R) the re-parameterized $\Sigma$ is $$ \Sigma = \begin{bmatrix}\sigma^2_1 & \sigma^2_1 + \sigma_{12}\\ \sigma^2_1 + \sigma_{12} & \sigma^2_1 + \sigma^2_2 + 2\sigma_{12} \end{bmatrix} = \begin{bmatrix}\sigma^2_1 & \sigma^2_1\\ \sigma^2_1 & \sigma^2_1\end{bmatrix} + \begin{bmatrix}0 & 0\\ 0 & \sigma^2_2\end{bmatrix} + \begin{bmatrix}0 & \sigma_{12}\\ \sigma_{12} & 2\sigma_{12}\end{bmatrix} $$ where $\sigma^2_1$ is the variance parameter for the intercept (condition $A$), $\sigma^2_2$ the variance parameter for the contrast ($A - B$), and $\sigma_{12}$ the corresponding covariance parameter. We can see that neither setting $\sigma_{12}$ to zero nor setting $\sigma^2_2$ to zero tests (only) the hypothesis of equal variances. However, as shown above, we can still use mod4 to test the hypothesis as changing the contrasts has no impact on the parameterization of $\Sigma$ for this model.
Mixed effects model: Compare random variance component across levels of a grouping variable Looking at this problem from a slightly different perspective and starting from the "general" form of the linear mixed model, we have $$ y_{ijk} = \mu + \alpha_j + d_{ij} + e_{ijk}, \quad d_{i} \sim N
17,102
ANOVA to compare models
The output from anova() is a series of likelihood ratio tests. The lines in the output are: The first line in the output corresponds to the simplest model with only a smooth of x1 (I'm ignoring the factor x0 as it isn't up for consideration in your example) β€” this is not tested against anything simpler hence the last few column entries are empty. The second line is a likelihood ratio test between the model in line 1 and the model in line 2. At the cost of 0.97695 extra degrees of freedom, the residual deviance is decreased by 1180.2. This reduction in deviance (or conversely, increase in deviance explained), at the cost of <1 degree of freedom, is highly unlikely if the true effect of x2 were 0. Why 0.97695 degrees of freedom increase? Well, the linear function of x2 would add 1 df to the model but the smoother for x1 will be penalised back a little bit more than before and hence use slightly fewer effective degrees of freedom, hence the <1 change in overall degrees of freedom. The third line is exactly the same as I described above but for a comparison between the model in the second line and the model in the third line: i.e. the third line is evaluating the improvement in moving from modelling x2 as a linear term to modelling x2 as a smooth function. Again, this improvement in model fit (change in deviance is now 2211.8 at the cost of 7.37288 more degrees of freedom) is unlikely if the extra parameters associated with s(x2) were all equal to 0. In summary, line 2 says Model 2 fits better than Model 1, so a linear function of x2 is better than no effect of x1. But line 3 says that Model 3 fits the data better than Model 2, so a smooth function of x2 is preferred over a linear function of x2. This is a sequential analysis of models, not a series of comparisons against the simplest model. However… What they're showing is not the best way to do this β€” recent theory would suggest that the output from summary(m3) would have the most "correct" coverage properties. Furthermore, to select between models, one should probably use select = TRUE when fitting the full model (the one with two smooths), which would allow for shrinkage of terms that would include the model with linear x2 or even no effect of this variable. They're also not fitting using REML or ML smoothness selection, which many of us mgcv users would consider the default option (even though it isn't the actual default in gam()) What I would do is: library("mgcv") gam_data <- gamSim(eg=5) m3 <- gam(y ~ x0 + s(x1) + s(x2), data = gam_data, select = TRUE, method = "REML") summary(m3) The final line produces the following: > summary(m3) Family: gaussian Link function: identity Formula: y ~ x0 + s(x1) + s(x2) Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 8.4097 0.2153 39.053 < 2e-16 *** x02 1.9311 0.3073 6.284 8.93e-10 *** x03 4.4241 0.3052 14.493 < 2e-16 *** x04 5.7639 0.3042 18.948 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(x1) 2.487 9 25.85 <2e-16 *** s(x2) 7.627 9 76.03 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.769 Deviance explained = 77.7% -REML = 892.61 Scale est. = 4.5057 n = 400 We can see that both smooth terms are significantly different from null functions. What select = TRUE is doing is putting an extra penalty on the null space of the penalty (this is the part of the spline that is perfectly smooth). If you don't have this, smoothness selection can only penalise a smooth back to a linear function (because the penalty that's doing smoothness selection only works on the non-smooth (the wiggly) parts of the basis). To perform selection we need to be able to penalise the null space (the smooth parts of the basis) as well. select = TRUE achieves this through the use of a second penalty added to all smooth terms in the model (Marra and Wood, 2011). This acts as a kinds of shrinkage, pulling all smooth terms somewhat towards 0, but it will pull superfluous terms towards 0 much more quickly, hence selecting them out of the model if they don't have any explanatory power. We pay a price for this when evaluating the significance of the smooths; note the Ref.df column above (the 9 comes from the default value of k = 10, which for thin plate splines with centring constraints means 9 basis functions), instead of paying something like 2.5 and 7.7 degrees of freedom for the splines, we're paying 9 degrees of freedom each. This reflects that fact that we've done the selection, that we weren't sure which terms should be in the model. Note: it is important that you don't use anova(m1, m2, m3) type calls on models using select = TRUE. As noted in ?mgcv:::anova.gam, the approximation used can be very bad for smooths with penalties on their null spaces. In the comments, @BillyJean mentioned using AIC for selection. Recent work by Simon Wood and colleagues (Wood et al, 2016) derived an AIC that accounts for the extra uncertainty due to us having estimated the smoothness parameters in the model. This AIC works reasonably well, but there is some discussion as to the behaviour of their derivation of AIC when IIRC smooths are close to linear functions. Anyway, AIC would give us: m1 <- gam(y ~ x0 + s(x1), data = gam_data, method = "ML") m2 <- gam(y ~ x0 + s(x1) + x2, data = gam_data, method = "ML") m3 <- gam(y ~ x0 + s(x1) + s(x2), data = gam_data, method = "ML") AIC(m1, m2, m3) > AIC(m1, m2, m3) df AIC m1 7.307712 2149.046 m2 8.608444 2055.651 m3 16.589330 1756.890 Note I refitted all of these with ML smoothness selection as I'm not certain what the AIC does when select = TRUE and you have to be careful comparing models with different fixed effects, that aren't fully penalised, using REML. Again the inference is clear; the model with smooths of x1 and x2 has substantially better fit than either of the other two models. Marra, G. & Wood, S. N. Practical variable selection for generalized additive models. Comput. Stat. Data Anal. 55, 2372–2387 (2011). Wood, S. N., Pya, N. & SΓ€fken, B. Smoothing Parameter and Model Selection for General Smooth Models. J. Am. Stat. Assoc. 111, 1548–1563 (2016).
ANOVA to compare models
The output from anova() is a series of likelihood ratio tests. The lines in the output are: The first line in the output corresponds to the simplest model with only a smooth of x1 (I'm ignoring the f
ANOVA to compare models The output from anova() is a series of likelihood ratio tests. The lines in the output are: The first line in the output corresponds to the simplest model with only a smooth of x1 (I'm ignoring the factor x0 as it isn't up for consideration in your example) β€” this is not tested against anything simpler hence the last few column entries are empty. The second line is a likelihood ratio test between the model in line 1 and the model in line 2. At the cost of 0.97695 extra degrees of freedom, the residual deviance is decreased by 1180.2. This reduction in deviance (or conversely, increase in deviance explained), at the cost of <1 degree of freedom, is highly unlikely if the true effect of x2 were 0. Why 0.97695 degrees of freedom increase? Well, the linear function of x2 would add 1 df to the model but the smoother for x1 will be penalised back a little bit more than before and hence use slightly fewer effective degrees of freedom, hence the <1 change in overall degrees of freedom. The third line is exactly the same as I described above but for a comparison between the model in the second line and the model in the third line: i.e. the third line is evaluating the improvement in moving from modelling x2 as a linear term to modelling x2 as a smooth function. Again, this improvement in model fit (change in deviance is now 2211.8 at the cost of 7.37288 more degrees of freedom) is unlikely if the extra parameters associated with s(x2) were all equal to 0. In summary, line 2 says Model 2 fits better than Model 1, so a linear function of x2 is better than no effect of x1. But line 3 says that Model 3 fits the data better than Model 2, so a smooth function of x2 is preferred over a linear function of x2. This is a sequential analysis of models, not a series of comparisons against the simplest model. However… What they're showing is not the best way to do this β€” recent theory would suggest that the output from summary(m3) would have the most "correct" coverage properties. Furthermore, to select between models, one should probably use select = TRUE when fitting the full model (the one with two smooths), which would allow for shrinkage of terms that would include the model with linear x2 or even no effect of this variable. They're also not fitting using REML or ML smoothness selection, which many of us mgcv users would consider the default option (even though it isn't the actual default in gam()) What I would do is: library("mgcv") gam_data <- gamSim(eg=5) m3 <- gam(y ~ x0 + s(x1) + s(x2), data = gam_data, select = TRUE, method = "REML") summary(m3) The final line produces the following: > summary(m3) Family: gaussian Link function: identity Formula: y ~ x0 + s(x1) + s(x2) Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 8.4097 0.2153 39.053 < 2e-16 *** x02 1.9311 0.3073 6.284 8.93e-10 *** x03 4.4241 0.3052 14.493 < 2e-16 *** x04 5.7639 0.3042 18.948 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(x1) 2.487 9 25.85 <2e-16 *** s(x2) 7.627 9 76.03 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.769 Deviance explained = 77.7% -REML = 892.61 Scale est. = 4.5057 n = 400 We can see that both smooth terms are significantly different from null functions. What select = TRUE is doing is putting an extra penalty on the null space of the penalty (this is the part of the spline that is perfectly smooth). If you don't have this, smoothness selection can only penalise a smooth back to a linear function (because the penalty that's doing smoothness selection only works on the non-smooth (the wiggly) parts of the basis). To perform selection we need to be able to penalise the null space (the smooth parts of the basis) as well. select = TRUE achieves this through the use of a second penalty added to all smooth terms in the model (Marra and Wood, 2011). This acts as a kinds of shrinkage, pulling all smooth terms somewhat towards 0, but it will pull superfluous terms towards 0 much more quickly, hence selecting them out of the model if they don't have any explanatory power. We pay a price for this when evaluating the significance of the smooths; note the Ref.df column above (the 9 comes from the default value of k = 10, which for thin plate splines with centring constraints means 9 basis functions), instead of paying something like 2.5 and 7.7 degrees of freedom for the splines, we're paying 9 degrees of freedom each. This reflects that fact that we've done the selection, that we weren't sure which terms should be in the model. Note: it is important that you don't use anova(m1, m2, m3) type calls on models using select = TRUE. As noted in ?mgcv:::anova.gam, the approximation used can be very bad for smooths with penalties on their null spaces. In the comments, @BillyJean mentioned using AIC for selection. Recent work by Simon Wood and colleagues (Wood et al, 2016) derived an AIC that accounts for the extra uncertainty due to us having estimated the smoothness parameters in the model. This AIC works reasonably well, but there is some discussion as to the behaviour of their derivation of AIC when IIRC smooths are close to linear functions. Anyway, AIC would give us: m1 <- gam(y ~ x0 + s(x1), data = gam_data, method = "ML") m2 <- gam(y ~ x0 + s(x1) + x2, data = gam_data, method = "ML") m3 <- gam(y ~ x0 + s(x1) + s(x2), data = gam_data, method = "ML") AIC(m1, m2, m3) > AIC(m1, m2, m3) df AIC m1 7.307712 2149.046 m2 8.608444 2055.651 m3 16.589330 1756.890 Note I refitted all of these with ML smoothness selection as I'm not certain what the AIC does when select = TRUE and you have to be careful comparing models with different fixed effects, that aren't fully penalised, using REML. Again the inference is clear; the model with smooths of x1 and x2 has substantially better fit than either of the other two models. Marra, G. & Wood, S. N. Practical variable selection for generalized additive models. Comput. Stat. Data Anal. 55, 2372–2387 (2011). Wood, S. N., Pya, N. & SΓ€fken, B. Smoothing Parameter and Model Selection for General Smooth Models. J. Am. Stat. Assoc. 111, 1548–1563 (2016).
ANOVA to compare models The output from anova() is a series of likelihood ratio tests. The lines in the output are: The first line in the output corresponds to the simplest model with only a smooth of x1 (I'm ignoring the f
17,103
ANOVA to compare models
You may want to test the two models with lrest. lrtest(two_term_model, two_smooth_model) Model 1: y ~ x0 + s(x1) + x2 Model 2: y ~ x0 + s(x1) + s(x2) #Df LogLik Df Chisq Pr(>Chisq) 1 8.1107 -995.22 2 15.0658 -848.95 6.955 292.55 < 2.2e-16 *** While adding a smooth function to both terms indeed complicate the model, the improvement in the log-likelihood is significant. This shouldn't be surprising because the data was generated by a GAM simulator. You may also want to print out the summary statistics: Link function: identity Formula: y ~ x0 + s(x1) + x2 Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 11.6234 0.3950 29.429 < 2e-16 *** x02 2.1147 0.4180 5.059 6.48e-07 *** x03 4.3813 0.4172 10.501 < 2e-16 *** x04 6.2644 0.4173 15.010 < 2e-16 *** x2 -6.4110 0.5212 -12.300 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(x1) 2.111 2.626 64.92 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.583 Deviance explained = 58.9% GCV = 8.7944 Scale est. = 8.6381 n = 400 and Family: gaussian Link function: identity Formula: y ~ x0 + s(x1) + s(x2) Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 8.3328 0.2074 40.185 < 2e-16 *** x02 2.1057 0.2955 7.125 5.15e-12 *** x03 4.3715 0.2934 14.901 < 2e-16 *** x04 6.1197 0.2935 20.853 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(x1) 2.691 3.343 95.00 <2e-16 *** s(x2) 7.375 8.356 85.07 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.796 Deviance explained = 80.2% GCV = 4.3862 Scale est. = 4.232 n = 400 Note the difference in deviance explained (it's huge). The more complicated model also has better R-sq.(adj). The second smoothing term is highly significant and fits nicely with the data.
ANOVA to compare models
You may want to test the two models with lrest. lrtest(two_term_model, two_smooth_model) Model 1: y ~ x0 + s(x1) + x2 Model 2: y ~ x0 + s(x1) + s(x2) #Df LogLik Df Chisq Pr(>Chisq) 1
ANOVA to compare models You may want to test the two models with lrest. lrtest(two_term_model, two_smooth_model) Model 1: y ~ x0 + s(x1) + x2 Model 2: y ~ x0 + s(x1) + s(x2) #Df LogLik Df Chisq Pr(>Chisq) 1 8.1107 -995.22 2 15.0658 -848.95 6.955 292.55 < 2.2e-16 *** While adding a smooth function to both terms indeed complicate the model, the improvement in the log-likelihood is significant. This shouldn't be surprising because the data was generated by a GAM simulator. You may also want to print out the summary statistics: Link function: identity Formula: y ~ x0 + s(x1) + x2 Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 11.6234 0.3950 29.429 < 2e-16 *** x02 2.1147 0.4180 5.059 6.48e-07 *** x03 4.3813 0.4172 10.501 < 2e-16 *** x04 6.2644 0.4173 15.010 < 2e-16 *** x2 -6.4110 0.5212 -12.300 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(x1) 2.111 2.626 64.92 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.583 Deviance explained = 58.9% GCV = 8.7944 Scale est. = 8.6381 n = 400 and Family: gaussian Link function: identity Formula: y ~ x0 + s(x1) + s(x2) Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 8.3328 0.2074 40.185 < 2e-16 *** x02 2.1057 0.2955 7.125 5.15e-12 *** x03 4.3715 0.2934 14.901 < 2e-16 *** x04 6.1197 0.2935 20.853 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(x1) 2.691 3.343 95.00 <2e-16 *** s(x2) 7.375 8.356 85.07 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.796 Deviance explained = 80.2% GCV = 4.3862 Scale est. = 4.232 n = 400 Note the difference in deviance explained (it's huge). The more complicated model also has better R-sq.(adj). The second smoothing term is highly significant and fits nicely with the data.
ANOVA to compare models You may want to test the two models with lrest. lrtest(two_term_model, two_smooth_model) Model 1: y ~ x0 + s(x1) + x2 Model 2: y ~ x0 + s(x1) + s(x2) #Df LogLik Df Chisq Pr(>Chisq) 1
17,104
Discriminant analysis vs logistic regression
When the classes are well-separated, the parameter estimates for logistic regression are surprisingly unstable. Coefficients may go to infinity. LDA doesn't suffer from this problem. If there are covariate values that can predict the binary outcome perfectly then the algorithm of logistic regression, i.e. Fisher scoring, does not even converge. If you are using R or SAS you will get a warning that probabilities of zero and one were computed and that the algorithm has crashed. This is the extreme case of perfect separation but even if the data are only separated to a great degree and not perfectly, the maximum likelihood estimator might not exist and even if it does exist, the estimates are not reliable. The resulting fit is not good at all. There are many threads dealing with the problem of separation on this site so by all means take a look. By contrast, one does not often encounter estimation problems with Fisher's discriminant. It can still happen if either the between or within covariance matrix is singular but that is a rather rare instance. In fact, If there is complete or quasi-complete separation then all the better because the discriminant is more likely to be successful. It is also worth mentioning that contrary to popular belief LDA is not based on any distribution assumptions. We only implicitly require equality of the population covariance matrices since a pooled estimator is used for the within covariance matrix. Under the additional assumptions of normality, equal prior probabilities and misclassification costs, the LDA is optimal in the sense that it minimizes the misclassification probability. How does LDA provide low-dimensional views? It's easier to see that for the case of two populations and two variables. Here is a pictorial representation of how LDA works in that case. Remember that we are looking for linear combinations of the variables that maximize separability. Hence the data are projected on the vector whose direction better achieves this separation. How we find that vector is an interesting problem of linear algebra, we basically maximize a Rayleigh quotient, but let's leave that aside for now. If the data are projected on that vector, the dimension is reduced from two to one. The general case of more than two populations and variables is dealt similarly. If the dimension is large, then more linear combinations are used to reduce it, the data are projected on planes or hyperplanes in that case. There is a limit to how many linear combinations one can find of course and this limit results from the original dimension of the data. If we denote the number of predictor variables by $p$ and the number of populations by $g$, it turns out that the number is at most $\min(g-1,p)$. If you can name more pros or cons, that would be nice. The low-dimensional representantion does not come without drawbacks nevertheless, the most important one being of course the loss of information. This is less of a problem when the data are linearly separable but if they are not the loss of information might be substantial and the classifier will perform poorly. There might also be cases where the equality of covariance matrices might not be a tenable assumption. You can employ a test to make sure but these tests are very sensitive to departures from normality so you need to make this additional assumption and also test for it. If it is found that the populations are normal with unequal covariance matrices a quadratic classification rule might be used (QDA) instead but I find that this is a rather awkward rule, not to mention counterintuitive in high dimensions. Overall, the main advantage of the LDA is the existence of an explicit solution and its computational convenience which is not the case for more advanced classification techniques such as SVM or neural networks. The price we pay is the set of assumptions that go with it, namely linear separability and equality of covariance matrices. Hope this helps. EDIT: I suspect my claim that the LDA on the specific cases I mentioned does not require any distributional assumptions other than equality of the covariance matrices has cost me a downvote. This is no less true nevertheless so let me be more specific. If we let $\bar{\mathbf{x}}_i, \ i = 1,2$ denote the means from the first and second population, and $\mathbf{S}_{\text{pooled}}$ denote the pooled covariance matrix, Fisher's discriminant solves the problem $$\max_{\mathbf{a}} \frac{ \left( \mathbf{a}^{T} \bar{\mathbf{x}}_1 - \mathbf{a}^{T} \bar{\mathbf{x}}_2 \right)^2}{\mathbf{a}^{T} \mathbf{S}_{\text{pooled}} \mathbf{a} } = \max_{\mathbf{a}} \frac{ \left( \mathbf{a}^{T} \mathbf{d} \right)^2}{\mathbf{a}^{T} \mathbf{S}_{\text{pooled}} \mathbf{a} } $$ The solution of this problem (up to a constant) can be shown to be $$ \mathbf{a} = \mathbf{S}_{\text{pooled}}^{-1} \mathbf{d} = \mathbf{S}_{\text{pooled}}^{-1} \left( \bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2 \right) $$ This is equivalent to the LDA you derive under the assumption of normality, equal covariance matrices, misclassification costs and prior probabilities, right? Well yes, except now that we have not assumed normality. There is nothing stopping you from using the discriminant above in all settings, even if the covariance matrices are not really equal. It might not be optimal in the sense of the expected cost of misclassification (ECM) but this is supervised learning so you can always evaluate its performance, using for instance the hold-out procedure. References Bishop, Christopher M. Neural networks for pattern recognition. Oxford university press, 1995. Johnson, Richard Arnold, and Dean W. Wichern. Applied multivariate statistical analysis. Vol. 4. Englewood Cliffs, NJ: Prentice hall, 1992.
Discriminant analysis vs logistic regression
When the classes are well-separated, the parameter estimates for logistic regression are surprisingly unstable. Coefficients may go to infinity. LDA doesn't suffer from this problem. If there are cov
Discriminant analysis vs logistic regression When the classes are well-separated, the parameter estimates for logistic regression are surprisingly unstable. Coefficients may go to infinity. LDA doesn't suffer from this problem. If there are covariate values that can predict the binary outcome perfectly then the algorithm of logistic regression, i.e. Fisher scoring, does not even converge. If you are using R or SAS you will get a warning that probabilities of zero and one were computed and that the algorithm has crashed. This is the extreme case of perfect separation but even if the data are only separated to a great degree and not perfectly, the maximum likelihood estimator might not exist and even if it does exist, the estimates are not reliable. The resulting fit is not good at all. There are many threads dealing with the problem of separation on this site so by all means take a look. By contrast, one does not often encounter estimation problems with Fisher's discriminant. It can still happen if either the between or within covariance matrix is singular but that is a rather rare instance. In fact, If there is complete or quasi-complete separation then all the better because the discriminant is more likely to be successful. It is also worth mentioning that contrary to popular belief LDA is not based on any distribution assumptions. We only implicitly require equality of the population covariance matrices since a pooled estimator is used for the within covariance matrix. Under the additional assumptions of normality, equal prior probabilities and misclassification costs, the LDA is optimal in the sense that it minimizes the misclassification probability. How does LDA provide low-dimensional views? It's easier to see that for the case of two populations and two variables. Here is a pictorial representation of how LDA works in that case. Remember that we are looking for linear combinations of the variables that maximize separability. Hence the data are projected on the vector whose direction better achieves this separation. How we find that vector is an interesting problem of linear algebra, we basically maximize a Rayleigh quotient, but let's leave that aside for now. If the data are projected on that vector, the dimension is reduced from two to one. The general case of more than two populations and variables is dealt similarly. If the dimension is large, then more linear combinations are used to reduce it, the data are projected on planes or hyperplanes in that case. There is a limit to how many linear combinations one can find of course and this limit results from the original dimension of the data. If we denote the number of predictor variables by $p$ and the number of populations by $g$, it turns out that the number is at most $\min(g-1,p)$. If you can name more pros or cons, that would be nice. The low-dimensional representantion does not come without drawbacks nevertheless, the most important one being of course the loss of information. This is less of a problem when the data are linearly separable but if they are not the loss of information might be substantial and the classifier will perform poorly. There might also be cases where the equality of covariance matrices might not be a tenable assumption. You can employ a test to make sure but these tests are very sensitive to departures from normality so you need to make this additional assumption and also test for it. If it is found that the populations are normal with unequal covariance matrices a quadratic classification rule might be used (QDA) instead but I find that this is a rather awkward rule, not to mention counterintuitive in high dimensions. Overall, the main advantage of the LDA is the existence of an explicit solution and its computational convenience which is not the case for more advanced classification techniques such as SVM or neural networks. The price we pay is the set of assumptions that go with it, namely linear separability and equality of covariance matrices. Hope this helps. EDIT: I suspect my claim that the LDA on the specific cases I mentioned does not require any distributional assumptions other than equality of the covariance matrices has cost me a downvote. This is no less true nevertheless so let me be more specific. If we let $\bar{\mathbf{x}}_i, \ i = 1,2$ denote the means from the first and second population, and $\mathbf{S}_{\text{pooled}}$ denote the pooled covariance matrix, Fisher's discriminant solves the problem $$\max_{\mathbf{a}} \frac{ \left( \mathbf{a}^{T} \bar{\mathbf{x}}_1 - \mathbf{a}^{T} \bar{\mathbf{x}}_2 \right)^2}{\mathbf{a}^{T} \mathbf{S}_{\text{pooled}} \mathbf{a} } = \max_{\mathbf{a}} \frac{ \left( \mathbf{a}^{T} \mathbf{d} \right)^2}{\mathbf{a}^{T} \mathbf{S}_{\text{pooled}} \mathbf{a} } $$ The solution of this problem (up to a constant) can be shown to be $$ \mathbf{a} = \mathbf{S}_{\text{pooled}}^{-1} \mathbf{d} = \mathbf{S}_{\text{pooled}}^{-1} \left( \bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2 \right) $$ This is equivalent to the LDA you derive under the assumption of normality, equal covariance matrices, misclassification costs and prior probabilities, right? Well yes, except now that we have not assumed normality. There is nothing stopping you from using the discriminant above in all settings, even if the covariance matrices are not really equal. It might not be optimal in the sense of the expected cost of misclassification (ECM) but this is supervised learning so you can always evaluate its performance, using for instance the hold-out procedure. References Bishop, Christopher M. Neural networks for pattern recognition. Oxford university press, 1995. Johnson, Richard Arnold, and Dean W. Wichern. Applied multivariate statistical analysis. Vol. 4. Englewood Cliffs, NJ: Prentice hall, 1992.
Discriminant analysis vs logistic regression When the classes are well-separated, the parameter estimates for logistic regression are surprisingly unstable. Coefficients may go to infinity. LDA doesn't suffer from this problem. If there are cov
17,105
Discriminant analysis vs logistic regression
LDA makes severe distributional assumptions (multivariate normality of all predictors) unlike logistic regression. Try getting posterior probabilities of class membership on the basis of subjects' sex and you'll see what I mean - the probabilities will not be accurate. The instability of logistic regression when a set of predictor values gives rise to a probability of 0 or 1 that $Y=1$ is more or less an illusion. Newton-Raphson iterations will converge to $\beta$s that are close enough to $\pm \infty$ (e.g., $\pm 30$) so that predicted probabilities are essentially 0 or 1 when they should be. The only problem this causes is the Hauck-Donner effect in the Wald statistics. The solution is simple: don't use Wald tests in this case; use likelihood ratio tests, which behave very well even with infinite estimates. For confidence intervals use profile likelihood confidence intervals if there is complete separation. See this for more information. Note that if multivariable normality holds, by Bayes' theorem the assumptions of logistic regression hold. The reverse is not true. Normality (or at the very least symmetry) must almost hold for variances and covariances to "do the job". Non-multivariate normally distributed predictors will even hurt the discriminant extraction phase.
Discriminant analysis vs logistic regression
LDA makes severe distributional assumptions (multivariate normality of all predictors) unlike logistic regression. Try getting posterior probabilities of class membership on the basis of subjects' se
Discriminant analysis vs logistic regression LDA makes severe distributional assumptions (multivariate normality of all predictors) unlike logistic regression. Try getting posterior probabilities of class membership on the basis of subjects' sex and you'll see what I mean - the probabilities will not be accurate. The instability of logistic regression when a set of predictor values gives rise to a probability of 0 or 1 that $Y=1$ is more or less an illusion. Newton-Raphson iterations will converge to $\beta$s that are close enough to $\pm \infty$ (e.g., $\pm 30$) so that predicted probabilities are essentially 0 or 1 when they should be. The only problem this causes is the Hauck-Donner effect in the Wald statistics. The solution is simple: don't use Wald tests in this case; use likelihood ratio tests, which behave very well even with infinite estimates. For confidence intervals use profile likelihood confidence intervals if there is complete separation. See this for more information. Note that if multivariable normality holds, by Bayes' theorem the assumptions of logistic regression hold. The reverse is not true. Normality (or at the very least symmetry) must almost hold for variances and covariances to "do the job". Non-multivariate normally distributed predictors will even hurt the discriminant extraction phase.
Discriminant analysis vs logistic regression LDA makes severe distributional assumptions (multivariate normality of all predictors) unlike logistic regression. Try getting posterior probabilities of class membership on the basis of subjects' se
17,106
Discriminant analysis vs logistic regression
When the classes are well-separated, the parameter estimates for logistic regression are surprisingly unstable. Coefficients may go to infinity. LDA doesn't suffer from this problem. Disclaimer: What follows here lacks mathematical rigour completely. In order to fit a (nonlinear) function well you need observations in all regions of the function where "its shape changes". Logistic regression fits a sigmoid function to the data: In the case of well-separated classes all observations will fall onto the two "ends" where the sigmoid approaches its asymptotes (0 and 1). Since all sigmoids "look the same" in these regions, so to speak, no wonder the poor fitting algorithm will have difficulties to find "the right one". Let's have a look at two (hopefully instructive) examples calculated with R's glm() function. Case 1: The two groups overlap to quite some extent: and the observations distribute nicely around the inflexion point of the fitted sigmoid: These are the fitted the parameters with nice low standard errors: Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -17.21374 4.07741 -4.222 2.42e-05 *** wgt 0.35111 0.08419 4.171 3.04e-05 *** and the deviance also looks OK: Null deviance: 138.629 on 99 degrees of freedom Residual deviance: 30.213 on 98 degrees of freedom Case 2: The two groups are well separated: and the observations all lie on the asymptotes practically. The glm() function tried its best to fit something, but complained about numerically 0 or 1 probabilities, because there are simply no observations available to "get the shape of the sigmoid right" around its inflexion point: You can diagnose the problem by noting that the standard errors of the estimated parameters go through the roof: Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -232.638 421264.847 -0.001 1 wgt 5.065 9167.439 0.001 1 and at the same time the deviance looks suspiciously good (because the observations do fit the asymptotes well): Null deviance: 1.3863e+02 on 99 degrees of freedom Residual deviance: 4.2497e-10 on 98 degrees of freedom At least intuitively it should be clear from these considerations why "the parameter estimates for logistic regression are surprisingly unstable".
Discriminant analysis vs logistic regression
When the classes are well-separated, the parameter estimates for logistic regression are surprisingly unstable. Coefficients may go to infinity. LDA doesn't suffer from this problem. Disclaimer:
Discriminant analysis vs logistic regression When the classes are well-separated, the parameter estimates for logistic regression are surprisingly unstable. Coefficients may go to infinity. LDA doesn't suffer from this problem. Disclaimer: What follows here lacks mathematical rigour completely. In order to fit a (nonlinear) function well you need observations in all regions of the function where "its shape changes". Logistic regression fits a sigmoid function to the data: In the case of well-separated classes all observations will fall onto the two "ends" where the sigmoid approaches its asymptotes (0 and 1). Since all sigmoids "look the same" in these regions, so to speak, no wonder the poor fitting algorithm will have difficulties to find "the right one". Let's have a look at two (hopefully instructive) examples calculated with R's glm() function. Case 1: The two groups overlap to quite some extent: and the observations distribute nicely around the inflexion point of the fitted sigmoid: These are the fitted the parameters with nice low standard errors: Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -17.21374 4.07741 -4.222 2.42e-05 *** wgt 0.35111 0.08419 4.171 3.04e-05 *** and the deviance also looks OK: Null deviance: 138.629 on 99 degrees of freedom Residual deviance: 30.213 on 98 degrees of freedom Case 2: The two groups are well separated: and the observations all lie on the asymptotes practically. The glm() function tried its best to fit something, but complained about numerically 0 or 1 probabilities, because there are simply no observations available to "get the shape of the sigmoid right" around its inflexion point: You can diagnose the problem by noting that the standard errors of the estimated parameters go through the roof: Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -232.638 421264.847 -0.001 1 wgt 5.065 9167.439 0.001 1 and at the same time the deviance looks suspiciously good (because the observations do fit the asymptotes well): Null deviance: 1.3863e+02 on 99 degrees of freedom Residual deviance: 4.2497e-10 on 98 degrees of freedom At least intuitively it should be clear from these considerations why "the parameter estimates for logistic regression are surprisingly unstable".
Discriminant analysis vs logistic regression When the classes are well-separated, the parameter estimates for logistic regression are surprisingly unstable. Coefficients may go to infinity. LDA doesn't suffer from this problem. Disclaimer:
17,107
Why do we need Bootstrapping?
Two answers. What's the standard error of the ratio of two means? What's the standard error of the median? What's the standard error of any complex statistic? Maybe there's a closed form equation, but it's possible that no one has worked it out yet. In order to use the formula for (say) the standard error of the mean, we must make some assumptions. If those assumptions are violated, we can't necessarily use the method. As @Whuber points out in the comments, bootstrapping allows us to relax some of these assumptions and hence might provide more appropriate standard errors (although it may also make additional assumptions).
Why do we need Bootstrapping?
Two answers. What's the standard error of the ratio of two means? What's the standard error of the median? What's the standard error of any complex statistic? Maybe there's a closed form equation, b
Why do we need Bootstrapping? Two answers. What's the standard error of the ratio of two means? What's the standard error of the median? What's the standard error of any complex statistic? Maybe there's a closed form equation, but it's possible that no one has worked it out yet. In order to use the formula for (say) the standard error of the mean, we must make some assumptions. If those assumptions are violated, we can't necessarily use the method. As @Whuber points out in the comments, bootstrapping allows us to relax some of these assumptions and hence might provide more appropriate standard errors (although it may also make additional assumptions).
Why do we need Bootstrapping? Two answers. What's the standard error of the ratio of two means? What's the standard error of the median? What's the standard error of any complex statistic? Maybe there's a closed form equation, b
17,108
Why do we need Bootstrapping?
An example might help to illustrate. Suppose, in a causal modeling framework, you're interested in determining whether the relation between $X$ (an exposure of interest) an $Y$ (an outcome of interest) is mediated by a variable $W$. This means that in the two regression models: $$\begin{eqnarray} E[Y|X] &=& \beta_0 + \beta_1 X \\ E[Y|X, W] &=& \gamma_0 + \gamma_1 X + \gamma_2 W \\ \end{eqnarray}$$ The effect $\beta_1$ is different than the effect $\gamma_1$. As an example, consider the relationship between smoking and cardiovascular (CV) risk. Smoking obviously increases CV risk (for events like heart attack and stroke) by causing veins to become brittle and calcified. However, smoking is also an appetite suppressant. So we would be curious whether the estimated relationship between smoking and CV risk is mediated by BMI, which independently is a risk factor for CV risk. Here $Y$ could be a binary event (myocardial or neurological infarction) in a logistic regression model or a continuous variable like coronary arterial calcification (CAC), left ventricular ejection fraction (LVEF), or left ventricular mass (LVM). We would fit two models 1: adjusting for smoking and the outcome along with other confounders like age, sex, income, and family history of heart disease then 2: all the previous covariates as well as body mass index. The difference in the smoking effect between models 1 and 2 is where we base our inference. We are interested in testing the hypotheses $$\begin{eqnarray} \mathcal{H} &:& \beta_1 = \gamma_1\\ \mathcal{K} &:& \beta_1 \ne \gamma_1\\ \end{eqnarray}$$ One possible effect measurement could be: $T = \beta_1 - \gamma_1$ or $S = \beta_1 / \gamma_1$ or any number of measurements. You can use the usual estimators for $T$ and $S$. The standard error of these estimators is very complicated to derive. Bootstrapping the distribution of them, however, is a commonly applied technique, and it is easy to calculate the $p$-value directly from that.
Why do we need Bootstrapping?
An example might help to illustrate. Suppose, in a causal modeling framework, you're interested in determining whether the relation between $X$ (an exposure of interest) an $Y$ (an outcome of interest
Why do we need Bootstrapping? An example might help to illustrate. Suppose, in a causal modeling framework, you're interested in determining whether the relation between $X$ (an exposure of interest) an $Y$ (an outcome of interest) is mediated by a variable $W$. This means that in the two regression models: $$\begin{eqnarray} E[Y|X] &=& \beta_0 + \beta_1 X \\ E[Y|X, W] &=& \gamma_0 + \gamma_1 X + \gamma_2 W \\ \end{eqnarray}$$ The effect $\beta_1$ is different than the effect $\gamma_1$. As an example, consider the relationship between smoking and cardiovascular (CV) risk. Smoking obviously increases CV risk (for events like heart attack and stroke) by causing veins to become brittle and calcified. However, smoking is also an appetite suppressant. So we would be curious whether the estimated relationship between smoking and CV risk is mediated by BMI, which independently is a risk factor for CV risk. Here $Y$ could be a binary event (myocardial or neurological infarction) in a logistic regression model or a continuous variable like coronary arterial calcification (CAC), left ventricular ejection fraction (LVEF), or left ventricular mass (LVM). We would fit two models 1: adjusting for smoking and the outcome along with other confounders like age, sex, income, and family history of heart disease then 2: all the previous covariates as well as body mass index. The difference in the smoking effect between models 1 and 2 is where we base our inference. We are interested in testing the hypotheses $$\begin{eqnarray} \mathcal{H} &:& \beta_1 = \gamma_1\\ \mathcal{K} &:& \beta_1 \ne \gamma_1\\ \end{eqnarray}$$ One possible effect measurement could be: $T = \beta_1 - \gamma_1$ or $S = \beta_1 / \gamma_1$ or any number of measurements. You can use the usual estimators for $T$ and $S$. The standard error of these estimators is very complicated to derive. Bootstrapping the distribution of them, however, is a commonly applied technique, and it is easy to calculate the $p$-value directly from that.
Why do we need Bootstrapping? An example might help to illustrate. Suppose, in a causal modeling framework, you're interested in determining whether the relation between $X$ (an exposure of interest) an $Y$ (an outcome of interest
17,109
Why do we need Bootstrapping?
Having parametric solutions for each statistical measure would be desirable but, at the same time, quite unrealistic. Bootstrap comes in handy in those instances. The example that springs to my mind concerns the difference between two means of highly skewed cost distributions. In that case, the classic two-sample t-test fails to meet its theoretical requirements (the distributions from which the samples under investigation were drawn surely depart from normality, due to their long right-tail) and non-parametric tests lack to convey useful infromation to decision-makers (who are usually not interested in ranks). A possible solution to avoid being stalled on that issue is a two-sample bootstrap t-test.
Why do we need Bootstrapping?
Having parametric solutions for each statistical measure would be desirable but, at the same time, quite unrealistic. Bootstrap comes in handy in those instances. The example that springs to my mind c
Why do we need Bootstrapping? Having parametric solutions for each statistical measure would be desirable but, at the same time, quite unrealistic. Bootstrap comes in handy in those instances. The example that springs to my mind concerns the difference between two means of highly skewed cost distributions. In that case, the classic two-sample t-test fails to meet its theoretical requirements (the distributions from which the samples under investigation were drawn surely depart from normality, due to their long right-tail) and non-parametric tests lack to convey useful infromation to decision-makers (who are usually not interested in ranks). A possible solution to avoid being stalled on that issue is a two-sample bootstrap t-test.
Why do we need Bootstrapping? Having parametric solutions for each statistical measure would be desirable but, at the same time, quite unrealistic. Bootstrap comes in handy in those instances. The example that springs to my mind c
17,110
I have a line of best fit. I need data points that will not change my line of best fit
Pick any $(x_i)$ provided at least two of them differ. Set an intercept $\beta_0$ and slope $\beta_1$ and define $$y_{0i} = \beta_0 + \beta_1 x_i.$$ This fit is perfect. Without changing the fit, you can modify $y_0$ to $y = y_0 + \varepsilon$ by adding any error vector $\varepsilon=(\varepsilon_i)$ to it provided it is orthogonal both to the vector $x = (x_i)$ and the constant vector $(1,1,\ldots, 1)$. An easy way to obtain such an error is to pick any vector $e$ and let $\varepsilon$ be the residuals upon regressing $e$ against $x$. In the code below, $e$ is generated as a set of independent random normal values with mean $0$ and common standard deviation. Furthermore, you can even preselect the amount of scatter, perhaps by stipulating what $R^2$ should be. Letting $\tau^2 = \text{var}(y_i) = \beta_1^2 \text{var}(x_i)$, rescale those residuals to have a variance of $$\sigma^2 = \tau^2\left(1/R^2 - 1\right).$$ This method is fully general: all possible examples (for a given set of $x_i$) can be created in this way. Examples Anscombe's Quartet We can easily reproduce Anscombe's Quartet of four qualitatively distinct bivariate datasets having the same descriptive statistics (through second order). The code is remarkably simple and flexible. set.seed(17) rho <- 0.816 # Common correlation coefficient x.0 <- 4:14 peak <- 10 n <- length(x.0) # -- Describe a collection of datasets. x <- list(x.0, x.0, x.0, c(rep(8, n-1), 19)) # x-values e <- list(rnorm(n), -(x.0-peak)^2, 1:n==peak, rnorm(n)) # residual patterns f <- function(x) 3 + x/2 # Common regression line par(mfrow=c(2,2)) xlim <- range(as.vector(x)) ylim <- f(xlim + c(-2,2)) s <- sapply(1:4, function(i) { # -- Create data. y <- f(x[[i]]) # Model values sigma <- sqrt(var(y) * (1 / rho^2 - 1)) # Conditional S.D. y <- y + sigma * scale(residuals(lm(e[[i]] ~ x[[i]]))) # Observed values # -- Plot them and their OLS fit. plot(x[[i]], y, xlim=xlim, ylim=ylim, pch=16, col="Orange", xlab="x") abline(lm(y ~ x[[i]]), col="Blue") # -- Return some regression statistics. c(mean(x[[i]]), var(x[[i]]), mean(y), var(y), cor(x[[i]], y), coef(lm(y ~ x[[i]]))) }) # -- Tabulate the regression statistics from all the datasets. rownames(s) <- c("Mean x", "Var x", "Mean y", "Var y", "Cor(x,y)", "Intercept", "Slope") t(s) The output gives the second-order descriptive statistics for the $(x,y)$ data for each dataset. All four lines are identical. You can easily create more examples by altering x (the x-coordinates) and e (the error patterns) at the outset. Simulations This R function generates vectors $y$ according to the specifications of $\beta=(\beta_0,\beta_1)$ and $R^2$ (with $0 \le R^2 \le 1$), given a set of $x$ values. simulate <- function(x, beta, r.2) { sigma <- sqrt(var(x) * beta[2]^2 * (1/r.2 - 1)) e <- residuals(lm(rnorm(length(x)) ~ x)) return (y.0 <- beta[1] + beta[2]*x + sigma * scale(e)) } (It wouldn't be difficult to port this to Excel--but it's a little painful.) As an example of its use, here are four simulations of $(x,y)$ data using a common set of $60$ $x$ values, $\beta=(1,-1/2)$ (i.e., intercept $1$ and slope $-1/2$), and $R^2 = 0.5$. n <- 60 beta <- c(1,-1/2) r.2 <- 0.5 # Between 0 and 1 set.seed(17) x <- rnorm(n) par(mfrow=c(1,4)) invisible(replicate(4, { y <- simulate(x, beta, r.2) fit <- lm(y ~ x) plot(x, y) abline(fit, lwd=2, col="Red") })) By executing summary(fit) you can check that the estimated coefficients are exactly as specified and the multiple $R^2$ is the intended value. Other statistics, such as the regression p-value, can be adjusted by modifying the values of the $x_i$.
I have a line of best fit. I need data points that will not change my line of best fit
Pick any $(x_i)$ provided at least two of them differ. Set an intercept $\beta_0$ and slope $\beta_1$ and define $$y_{0i} = \beta_0 + \beta_1 x_i.$$ This fit is perfect. Without changing the fit, yo
I have a line of best fit. I need data points that will not change my line of best fit Pick any $(x_i)$ provided at least two of them differ. Set an intercept $\beta_0$ and slope $\beta_1$ and define $$y_{0i} = \beta_0 + \beta_1 x_i.$$ This fit is perfect. Without changing the fit, you can modify $y_0$ to $y = y_0 + \varepsilon$ by adding any error vector $\varepsilon=(\varepsilon_i)$ to it provided it is orthogonal both to the vector $x = (x_i)$ and the constant vector $(1,1,\ldots, 1)$. An easy way to obtain such an error is to pick any vector $e$ and let $\varepsilon$ be the residuals upon regressing $e$ against $x$. In the code below, $e$ is generated as a set of independent random normal values with mean $0$ and common standard deviation. Furthermore, you can even preselect the amount of scatter, perhaps by stipulating what $R^2$ should be. Letting $\tau^2 = \text{var}(y_i) = \beta_1^2 \text{var}(x_i)$, rescale those residuals to have a variance of $$\sigma^2 = \tau^2\left(1/R^2 - 1\right).$$ This method is fully general: all possible examples (for a given set of $x_i$) can be created in this way. Examples Anscombe's Quartet We can easily reproduce Anscombe's Quartet of four qualitatively distinct bivariate datasets having the same descriptive statistics (through second order). The code is remarkably simple and flexible. set.seed(17) rho <- 0.816 # Common correlation coefficient x.0 <- 4:14 peak <- 10 n <- length(x.0) # -- Describe a collection of datasets. x <- list(x.0, x.0, x.0, c(rep(8, n-1), 19)) # x-values e <- list(rnorm(n), -(x.0-peak)^2, 1:n==peak, rnorm(n)) # residual patterns f <- function(x) 3 + x/2 # Common regression line par(mfrow=c(2,2)) xlim <- range(as.vector(x)) ylim <- f(xlim + c(-2,2)) s <- sapply(1:4, function(i) { # -- Create data. y <- f(x[[i]]) # Model values sigma <- sqrt(var(y) * (1 / rho^2 - 1)) # Conditional S.D. y <- y + sigma * scale(residuals(lm(e[[i]] ~ x[[i]]))) # Observed values # -- Plot them and their OLS fit. plot(x[[i]], y, xlim=xlim, ylim=ylim, pch=16, col="Orange", xlab="x") abline(lm(y ~ x[[i]]), col="Blue") # -- Return some regression statistics. c(mean(x[[i]]), var(x[[i]]), mean(y), var(y), cor(x[[i]], y), coef(lm(y ~ x[[i]]))) }) # -- Tabulate the regression statistics from all the datasets. rownames(s) <- c("Mean x", "Var x", "Mean y", "Var y", "Cor(x,y)", "Intercept", "Slope") t(s) The output gives the second-order descriptive statistics for the $(x,y)$ data for each dataset. All four lines are identical. You can easily create more examples by altering x (the x-coordinates) and e (the error patterns) at the outset. Simulations This R function generates vectors $y$ according to the specifications of $\beta=(\beta_0,\beta_1)$ and $R^2$ (with $0 \le R^2 \le 1$), given a set of $x$ values. simulate <- function(x, beta, r.2) { sigma <- sqrt(var(x) * beta[2]^2 * (1/r.2 - 1)) e <- residuals(lm(rnorm(length(x)) ~ x)) return (y.0 <- beta[1] + beta[2]*x + sigma * scale(e)) } (It wouldn't be difficult to port this to Excel--but it's a little painful.) As an example of its use, here are four simulations of $(x,y)$ data using a common set of $60$ $x$ values, $\beta=(1,-1/2)$ (i.e., intercept $1$ and slope $-1/2$), and $R^2 = 0.5$. n <- 60 beta <- c(1,-1/2) r.2 <- 0.5 # Between 0 and 1 set.seed(17) x <- rnorm(n) par(mfrow=c(1,4)) invisible(replicate(4, { y <- simulate(x, beta, r.2) fit <- lm(y ~ x) plot(x, y) abline(fit, lwd=2, col="Red") })) By executing summary(fit) you can check that the estimated coefficients are exactly as specified and the multiple $R^2$ is the intended value. Other statistics, such as the regression p-value, can be adjusted by modifying the values of the $x_i$.
I have a line of best fit. I need data points that will not change my line of best fit Pick any $(x_i)$ provided at least two of them differ. Set an intercept $\beta_0$ and slope $\beta_1$ and define $$y_{0i} = \beta_0 + \beta_1 x_i.$$ This fit is perfect. Without changing the fit, yo
17,111
Estimate ARMA coefficients through ACF and PACF inspection
My answer is really an abridgement of javlacelle's but is is too long for a simple comment but not too short to be useless. While jvlacelle's response is technically correct at one level it "overly simplifies" as it premises certain "things" which are normally never true . It assumes that there is no deterministic structure required such as one or more time trends OR one or more level shifts or one or more seasonal pulses or one or more one-time pulses. Furthermore it assumes that the parameters of the identified model are invariant over time and the error process underlying the tentatively identified model is also invariant over time. Ignoring any of the above is often (always in my opinion !) a recipe for disaster or more precisely a "poorly identified model". A classic case of this is the unnecessary logarithmic transformation proposed for the airline series and for the series that the OP presents in his revised question. There is no need for any logarithmic transformation for his data as there are just a few "unusual" values at periods 198,207,218,219 and 256 which left untreated create the false impression that there is higher error variance with higher levels. Note that "unusual values" are identified taking into account any needed ARIMA structure which often escapes the human eye.Transformations are needed when the error variance is non-constant over time NOT when the variance of the the observed Y is non-constant over time. Primitive procedures still make the tactical error of selecting a transformation prematurely prior to any of the aforementioned remedies. One has to remember that the simple-minded ARIMA model identification strategy was developed in the early 60's BUT a lot of development/improvements have gone on since then. Edited after data was posted : A reasonable model was identified using http://www.autobox.com/cms/ which is a piece of software that incorporates some of my aforementioned ideas as I helped develop it. The Chow Test for parameter constancy suggested that the data be segmented and that the last 94 observations be used as model parameters had changed over time. .These last 94 values yielded an equation with all coefficients being significant. . The plot of the residuals suggests a reasonable scatter with the following ACF suggesting randomness . THe Actual and Cleansed graph is illuminating as it shows the subtle BUT significant outliers. . Finally a plot of actual,fit and forecast summarizes our work ALL WITHOUT TAKING LOGARITHMS . It is well known but often forgotten that power transforms are like drugs .... unwarranted usage can harm you. Finally notice that the model has an AR(2) BUT not an AR(1) structure.
Estimate ARMA coefficients through ACF and PACF inspection
My answer is really an abridgement of javlacelle's but is is too long for a simple comment but not too short to be useless. While jvlacelle's response is technically correct at one level it "overly si
Estimate ARMA coefficients through ACF and PACF inspection My answer is really an abridgement of javlacelle's but is is too long for a simple comment but not too short to be useless. While jvlacelle's response is technically correct at one level it "overly simplifies" as it premises certain "things" which are normally never true . It assumes that there is no deterministic structure required such as one or more time trends OR one or more level shifts or one or more seasonal pulses or one or more one-time pulses. Furthermore it assumes that the parameters of the identified model are invariant over time and the error process underlying the tentatively identified model is also invariant over time. Ignoring any of the above is often (always in my opinion !) a recipe for disaster or more precisely a "poorly identified model". A classic case of this is the unnecessary logarithmic transformation proposed for the airline series and for the series that the OP presents in his revised question. There is no need for any logarithmic transformation for his data as there are just a few "unusual" values at periods 198,207,218,219 and 256 which left untreated create the false impression that there is higher error variance with higher levels. Note that "unusual values" are identified taking into account any needed ARIMA structure which often escapes the human eye.Transformations are needed when the error variance is non-constant over time NOT when the variance of the the observed Y is non-constant over time. Primitive procedures still make the tactical error of selecting a transformation prematurely prior to any of the aforementioned remedies. One has to remember that the simple-minded ARIMA model identification strategy was developed in the early 60's BUT a lot of development/improvements have gone on since then. Edited after data was posted : A reasonable model was identified using http://www.autobox.com/cms/ which is a piece of software that incorporates some of my aforementioned ideas as I helped develop it. The Chow Test for parameter constancy suggested that the data be segmented and that the last 94 observations be used as model parameters had changed over time. .These last 94 values yielded an equation with all coefficients being significant. . The plot of the residuals suggests a reasonable scatter with the following ACF suggesting randomness . THe Actual and Cleansed graph is illuminating as it shows the subtle BUT significant outliers. . Finally a plot of actual,fit and forecast summarizes our work ALL WITHOUT TAKING LOGARITHMS . It is well known but often forgotten that power transforms are like drugs .... unwarranted usage can harm you. Finally notice that the model has an AR(2) BUT not an AR(1) structure.
Estimate ARMA coefficients through ACF and PACF inspection My answer is really an abridgement of javlacelle's but is is too long for a simple comment but not too short to be useless. While jvlacelle's response is technically correct at one level it "overly si
17,112
Estimate ARMA coefficients through ACF and PACF inspection
Just to clear up concepts, by visual inspection of the ACF or PACF you can choose (not estimate) a tentative ARMA model. Once a model is selected you can estimate the model by maximizing the likelihood function, minimizing the sum of squares or, in the case of the AR model, by means of the method of moments. An ARMA model can be chosen upon inspection of the ACF and PACF. This approach relies on the following facts: 1) the ACF of a stationary AR process of order p goes to zero at an exponential rate, while the PACF becomes zero after lag p. 2) For an MA process of order q the theoretical ACF and PACF exhibit the reverse behaviour (the ACF truncates after lag q and the PACF goes to zero relatively quickly). It is usually clear to detect the order of an AR or MA model. However, with processes that include both an AR and MA part the lag at which they are truncated may be blurred because both the ACF and PACF will decay to zero. One way to proceed is to fit first an AR or MA model (the one that seems more clear in the ACF and PACF) of low order. Then, if there is some further structure it will show up in the residuals, so the ACF and PACF of the residuals is checked to determine if additional AR or MA terms are necessary. Usually you will have to try and diagnose more than one model. You can also compare them by looking at the AIC. The ACF and PACF that you posted first suggested an ARMA(2,0,0)(0,0,1), that is, a regular AR(2) and a seasonal MA(1). The seasonal part of the model is determined similarly as the regular part but looking at the lags of seasonal order (e.g. 12, 24, 36,... in monthly data). If you are using R it is recommended to increase the default number of lags that are displayed, acf(x, lag.max = 60). The plot that you show now reveals suspicious negative correlation. If this plot is based on the same that as the previous plot you may have taken too many differences. See also this post. You can get further details, among other sources, here: Chapter 3 in Time Series: Theory and Methods by Peter J. Brockwell and Richard A. Davis and here.
Estimate ARMA coefficients through ACF and PACF inspection
Just to clear up concepts, by visual inspection of the ACF or PACF you can choose (not estimate) a tentative ARMA model. Once a model is selected you can estimate the model by maximizing the likelihoo
Estimate ARMA coefficients through ACF and PACF inspection Just to clear up concepts, by visual inspection of the ACF or PACF you can choose (not estimate) a tentative ARMA model. Once a model is selected you can estimate the model by maximizing the likelihood function, minimizing the sum of squares or, in the case of the AR model, by means of the method of moments. An ARMA model can be chosen upon inspection of the ACF and PACF. This approach relies on the following facts: 1) the ACF of a stationary AR process of order p goes to zero at an exponential rate, while the PACF becomes zero after lag p. 2) For an MA process of order q the theoretical ACF and PACF exhibit the reverse behaviour (the ACF truncates after lag q and the PACF goes to zero relatively quickly). It is usually clear to detect the order of an AR or MA model. However, with processes that include both an AR and MA part the lag at which they are truncated may be blurred because both the ACF and PACF will decay to zero. One way to proceed is to fit first an AR or MA model (the one that seems more clear in the ACF and PACF) of low order. Then, if there is some further structure it will show up in the residuals, so the ACF and PACF of the residuals is checked to determine if additional AR or MA terms are necessary. Usually you will have to try and diagnose more than one model. You can also compare them by looking at the AIC. The ACF and PACF that you posted first suggested an ARMA(2,0,0)(0,0,1), that is, a regular AR(2) and a seasonal MA(1). The seasonal part of the model is determined similarly as the regular part but looking at the lags of seasonal order (e.g. 12, 24, 36,... in monthly data). If you are using R it is recommended to increase the default number of lags that are displayed, acf(x, lag.max = 60). The plot that you show now reveals suspicious negative correlation. If this plot is based on the same that as the previous plot you may have taken too many differences. See also this post. You can get further details, among other sources, here: Chapter 3 in Time Series: Theory and Methods by Peter J. Brockwell and Richard A. Davis and here.
Estimate ARMA coefficients through ACF and PACF inspection Just to clear up concepts, by visual inspection of the ACF or PACF you can choose (not estimate) a tentative ARMA model. Once a model is selected you can estimate the model by maximizing the likelihoo
17,113
How to detect polarized user opinions (high and low star ratings)
One could construct a polarization index; exactly how one defines it depends on what constitutes being more polarized (i.e. what, exactly do you mean, in particular edge cases, by more or less polarized?): For example, if the mean is '4', is a 50-50 split between '3' and '5' more, or less polarized than 25% '1' and 75% '5'? Anyway, in the absence of that kind of specific definition of what you mean, I'll suggest a measure based off variance: Given a particular mean, define the most polarized possible split as the one that maximizes variance*. *(NB that would say that 25% '1' and 75% '5' is substantially more polarized than 50-50 split of '3's and '5's; if that doesn't match your intuition, don't use variance) So this polarization index is the proportion of the largest possible variance (with the observed mean) in the observed variance. Call the average rating $m$ ($m=\bar x$). The maximum variance occurs when a proportion $p=\frac{m-1}{4}$ is at $5$ and $1-p$ is at $1$; this has a variance of $(m-1)(5-m) \cdot \frac{n}{n-1}$. So simply take the sample variance and divide by $(m-1)(5-m) \cdot \frac{n}{n-1}$; this gives a number between $0$ (perfect agreement) and $1$ (completely polarized). For a number of cases where the mean rating is 4, this would give the following: You might instead prefer not to compute them relative to the biggest possible variance with the same mean, but instead as a percentage of the biggest possible variance for any mean rating. That would involve dividing instead by $4 \cdot \frac{n}{n-1}$, and again yields a value between 0 (perfect agreement) and $1$ (polarized at the extremes in a 50-50 ratio). This would yield the same relativities as the diagram above, but all the values would be 3/4 as large (that is, from left to right, top to bottom they'd be 0, 16.5%, 25%, 25%, 50% and 75%). Either of the two is a perfectly valid choice - as is any other number of alternative ways of constructing such an index.
How to detect polarized user opinions (high and low star ratings)
One could construct a polarization index; exactly how one defines it depends on what constitutes being more polarized (i.e. what, exactly do you mean, in particular edge cases, by more or less polariz
How to detect polarized user opinions (high and low star ratings) One could construct a polarization index; exactly how one defines it depends on what constitutes being more polarized (i.e. what, exactly do you mean, in particular edge cases, by more or less polarized?): For example, if the mean is '4', is a 50-50 split between '3' and '5' more, or less polarized than 25% '1' and 75% '5'? Anyway, in the absence of that kind of specific definition of what you mean, I'll suggest a measure based off variance: Given a particular mean, define the most polarized possible split as the one that maximizes variance*. *(NB that would say that 25% '1' and 75% '5' is substantially more polarized than 50-50 split of '3's and '5's; if that doesn't match your intuition, don't use variance) So this polarization index is the proportion of the largest possible variance (with the observed mean) in the observed variance. Call the average rating $m$ ($m=\bar x$). The maximum variance occurs when a proportion $p=\frac{m-1}{4}$ is at $5$ and $1-p$ is at $1$; this has a variance of $(m-1)(5-m) \cdot \frac{n}{n-1}$. So simply take the sample variance and divide by $(m-1)(5-m) \cdot \frac{n}{n-1}$; this gives a number between $0$ (perfect agreement) and $1$ (completely polarized). For a number of cases where the mean rating is 4, this would give the following: You might instead prefer not to compute them relative to the biggest possible variance with the same mean, but instead as a percentage of the biggest possible variance for any mean rating. That would involve dividing instead by $4 \cdot \frac{n}{n-1}$, and again yields a value between 0 (perfect agreement) and $1$ (polarized at the extremes in a 50-50 ratio). This would yield the same relativities as the diagram above, but all the values would be 3/4 as large (that is, from left to right, top to bottom they'd be 0, 16.5%, 25%, 25%, 50% and 75%). Either of the two is a perfectly valid choice - as is any other number of alternative ways of constructing such an index.
How to detect polarized user opinions (high and low star ratings) One could construct a polarization index; exactly how one defines it depends on what constitutes being more polarized (i.e. what, exactly do you mean, in particular edge cases, by more or less polariz
17,114
How to detect polarized user opinions (high and low star ratings)
"No graphical methods" is kind of a big handicap, but...here are a couple odd ideas. Both treat the ratings as continuous, which is something of a conceptual weakness, and probably not the only one... Kurtosis The kurtosis of {1,1,1,5,5,5} = 1. You won't get a lower kurtosis with any combo of 1–5 ratings. The kurtosis of {1,2,3,4,5} = 1.7. Lower means more extreme values; higher means more middle. This won't work if the distribution isn't roughly symmetrical. I'll demonstrate below. Negative binomial regression With a data frame like this:$$\begin{array}{c|c}\rm Rating&\rm Frequency\\\hline1&31\\2&15\\3&7\\4&9\\5&37\end{array}$$Fit the model $\rm Frequency\sim\rm Rating+\sqrt{Rating}$ using negative binomial regression. The $\rm\sqrt{Rating}$ coefficient should be near zero if ratings are uniformly distributed, positive if there are proportionally more middle-range values (cf. binomial distribution), or negative with polarized distributions like the one above, for which the coefficient is -11.8. FWIW, here's the r code I've been playing around with: x=rbinom(99,4,c(.1,.9))+1;y=sample(0:4,99,replace=T)+1 #Some polarized & uniform rating data table(x);table(y) #Frequencies require(moments);kurtosis(x);kurtosis(y) #Kurtosis Y=data.frame(n=as.numeric(table(y)),rating=as.numeric(levels(factor(y)))) #Data frame setup X=data.frame(n=as.numeric(table(x)),rating=as.numeric(levels(factor(x)))) #Data frame setup require(MASS);summary(glm.nb(n~rating+sqrt(rating),X)) #Negative binomial of polarized data summary(glm.nb(n~rating+sqrt(rating),Y)) #Negative binomial of uniform data Can't resist throwing in a plot... require(ggplot2);ggplot(X,aes(x=rating,y=n))+geom_point()+stat_smooth(formula=y~x+I(sqrt(x)),method='glm',family='poisson') The $\rm\sqrt{Rating}$ term determines the curvature (concavity in this case) of the regression line. Since I'm already cheating by using graphics, I fit this with Poisson regression instead of negative binomial because it's easier to code than doing it the right way. Edit: Just saw this question advertised on the sidebar: and when I clicked, I saw it in the Hot Network Questions linking back to itself, as sometimes happens, so I thought this might deserve revisiting in a more generally useful way. I decided to try my methods on the Amazon customer reviews for The Mountain Three Wolf Moon Short Sleeve Tee: $$\begin{array}{c|ccccc}\rm Rating&1&2&3&4&5\\\hline\rm Frequency&208&54&89&198&2273\end{array}$$As you can see, this is a pretty awesome t-shirt. George Takei said so. Anyway... The kurtosis of this distribution is quite high (7.1), so that method's not as simple as it seems. The negative binomial regression model still works though! $\beta_\sqrt{\rm Rating}=-19.1$. BTW, @Duncan's $\rm \sigma^2_{Frequency_\text{The Mountain Three Wolf Moon Short Sleeve Tee Ratings}}=1.31$... and with x=rep(5:1,c(2273,198,89,54,208)), @Glen_b's polarization index var(x)/(4*length(x)/(length(x)-1))= .33 ...just sayin'.
How to detect polarized user opinions (high and low star ratings)
"No graphical methods" is kind of a big handicap, but...here are a couple odd ideas. Both treat the ratings as continuous, which is something of a conceptual weakness, and probably not the only one...
How to detect polarized user opinions (high and low star ratings) "No graphical methods" is kind of a big handicap, but...here are a couple odd ideas. Both treat the ratings as continuous, which is something of a conceptual weakness, and probably not the only one... Kurtosis The kurtosis of {1,1,1,5,5,5} = 1. You won't get a lower kurtosis with any combo of 1–5 ratings. The kurtosis of {1,2,3,4,5} = 1.7. Lower means more extreme values; higher means more middle. This won't work if the distribution isn't roughly symmetrical. I'll demonstrate below. Negative binomial regression With a data frame like this:$$\begin{array}{c|c}\rm Rating&\rm Frequency\\\hline1&31\\2&15\\3&7\\4&9\\5&37\end{array}$$Fit the model $\rm Frequency\sim\rm Rating+\sqrt{Rating}$ using negative binomial regression. The $\rm\sqrt{Rating}$ coefficient should be near zero if ratings are uniformly distributed, positive if there are proportionally more middle-range values (cf. binomial distribution), or negative with polarized distributions like the one above, for which the coefficient is -11.8. FWIW, here's the r code I've been playing around with: x=rbinom(99,4,c(.1,.9))+1;y=sample(0:4,99,replace=T)+1 #Some polarized & uniform rating data table(x);table(y) #Frequencies require(moments);kurtosis(x);kurtosis(y) #Kurtosis Y=data.frame(n=as.numeric(table(y)),rating=as.numeric(levels(factor(y)))) #Data frame setup X=data.frame(n=as.numeric(table(x)),rating=as.numeric(levels(factor(x)))) #Data frame setup require(MASS);summary(glm.nb(n~rating+sqrt(rating),X)) #Negative binomial of polarized data summary(glm.nb(n~rating+sqrt(rating),Y)) #Negative binomial of uniform data Can't resist throwing in a plot... require(ggplot2);ggplot(X,aes(x=rating,y=n))+geom_point()+stat_smooth(formula=y~x+I(sqrt(x)),method='glm',family='poisson') The $\rm\sqrt{Rating}$ term determines the curvature (concavity in this case) of the regression line. Since I'm already cheating by using graphics, I fit this with Poisson regression instead of negative binomial because it's easier to code than doing it the right way. Edit: Just saw this question advertised on the sidebar: and when I clicked, I saw it in the Hot Network Questions linking back to itself, as sometimes happens, so I thought this might deserve revisiting in a more generally useful way. I decided to try my methods on the Amazon customer reviews for The Mountain Three Wolf Moon Short Sleeve Tee: $$\begin{array}{c|ccccc}\rm Rating&1&2&3&4&5\\\hline\rm Frequency&208&54&89&198&2273\end{array}$$As you can see, this is a pretty awesome t-shirt. George Takei said so. Anyway... The kurtosis of this distribution is quite high (7.1), so that method's not as simple as it seems. The negative binomial regression model still works though! $\beta_\sqrt{\rm Rating}=-19.1$. BTW, @Duncan's $\rm \sigma^2_{Frequency_\text{The Mountain Three Wolf Moon Short Sleeve Tee Ratings}}=1.31$... and with x=rep(5:1,c(2273,198,89,54,208)), @Glen_b's polarization index var(x)/(4*length(x)/(length(x)-1))= .33 ...just sayin'.
How to detect polarized user opinions (high and low star ratings) "No graphical methods" is kind of a big handicap, but...here are a couple odd ideas. Both treat the ratings as continuous, which is something of a conceptual weakness, and probably not the only one...
17,115
How to detect polarized user opinions (high and low star ratings)
I would think an easy way is to calculate the variance. In a simple system like that, a higher variance would mean more 1s/5s. EDIT Quick example: if your values are 1,3,3,5 your variance will be: $$\frac {(1-3)^2 + (3-3)^2 + (3-3)^2 + (5-3)^2}4 = 1$$If your numbers are 1,1,5,5 your variance will be:$$\frac {(1-3)^2 + (1-3)^2 + (5-3)^2 + (5-3)^2}4 = 2$$
How to detect polarized user opinions (high and low star ratings)
I would think an easy way is to calculate the variance. In a simple system like that, a higher variance would mean more 1s/5s. EDIT Quick example: if your values are 1,3,3,5 your variance will be: $$\
How to detect polarized user opinions (high and low star ratings) I would think an easy way is to calculate the variance. In a simple system like that, a higher variance would mean more 1s/5s. EDIT Quick example: if your values are 1,3,3,5 your variance will be: $$\frac {(1-3)^2 + (3-3)^2 + (3-3)^2 + (5-3)^2}4 = 1$$If your numbers are 1,1,5,5 your variance will be:$$\frac {(1-3)^2 + (1-3)^2 + (5-3)^2 + (5-3)^2}4 = 2$$
How to detect polarized user opinions (high and low star ratings) I would think an easy way is to calculate the variance. In a simple system like that, a higher variance would mean more 1s/5s. EDIT Quick example: if your values are 1,3,3,5 your variance will be: $$\
17,116
How to detect polarized user opinions (high and low star ratings)
I doubt that I can add something valuable to the clever answers already given. In particular, to @Glen_b's fine idea to assess how the variance observed is relatively close to the maximal variance possible under the observed mean. My own blunt and straight from the shoulder proposal is, instead, about some robust measure of dispersion based not on deviations from some centre but directly on distances between data points. Compute pairwise distances (absolute differences) between all the data points. Drop out $d_{ii}$ zero distances. Compute a central tendency in the distribution of the distances (the choice is yours; it may be, for example, mean, median, or Hodges-Lehmann centre). Rating scale Distances Mean Median Hodges-Lehmann 1 2 3 4 5 Frequency distributions: 1 2 1 0 2 2 2 2 4 2 2 2 2 2 0 0 4 4 4 4 2.7 4 2 1 2 1 0 1 1 3 3 4 2 2 2 1 1 1 1 1 1 2 2 3 4 2.2 2 2 1 1 1 1 1 1 2 3 3 4 2.3 2.5 2.5 1 3 0 0 0 4 4 4 2 2 2 As you can see, the 3 statistics may be very different as measures of "polarization" (if I were to measure "disagreement" rather than bipolar confrontation, I would probably choose HL). The choice is yours. One notion: if you compute squared distances, their mean will be directly related to usual variance in the data (and so you will arrive at @Duncan's suggestion to compute variance). Computation of distances won't be too hard even with big $N$ here because the rating scale is descrete and with relatively few grades, so frequency-weighting algorithm to compute distances offers itself naturally.
How to detect polarized user opinions (high and low star ratings)
I doubt that I can add something valuable to the clever answers already given. In particular, to @Glen_b's fine idea to assess how the variance observed is relatively close to the maximal variance pos
How to detect polarized user opinions (high and low star ratings) I doubt that I can add something valuable to the clever answers already given. In particular, to @Glen_b's fine idea to assess how the variance observed is relatively close to the maximal variance possible under the observed mean. My own blunt and straight from the shoulder proposal is, instead, about some robust measure of dispersion based not on deviations from some centre but directly on distances between data points. Compute pairwise distances (absolute differences) between all the data points. Drop out $d_{ii}$ zero distances. Compute a central tendency in the distribution of the distances (the choice is yours; it may be, for example, mean, median, or Hodges-Lehmann centre). Rating scale Distances Mean Median Hodges-Lehmann 1 2 3 4 5 Frequency distributions: 1 2 1 0 2 2 2 2 4 2 2 2 2 2 0 0 4 4 4 4 2.7 4 2 1 2 1 0 1 1 3 3 4 2 2 2 1 1 1 1 1 1 2 2 3 4 2.2 2 2 1 1 1 1 1 1 2 3 3 4 2.3 2.5 2.5 1 3 0 0 0 4 4 4 2 2 2 As you can see, the 3 statistics may be very different as measures of "polarization" (if I were to measure "disagreement" rather than bipolar confrontation, I would probably choose HL). The choice is yours. One notion: if you compute squared distances, their mean will be directly related to usual variance in the data (and so you will arrive at @Duncan's suggestion to compute variance). Computation of distances won't be too hard even with big $N$ here because the rating scale is descrete and with relatively few grades, so frequency-weighting algorithm to compute distances offers itself naturally.
How to detect polarized user opinions (high and low star ratings) I doubt that I can add something valuable to the clever answers already given. In particular, to @Glen_b's fine idea to assess how the variance observed is relatively close to the maximal variance pos
17,117
How to detect polarized user opinions (high and low star ratings)
How about, if the 3 star rating is smaller than the average of the 5 and 4, and also smaller than the average of the 1 and 2: if (number_of_ratings > 6) // kind of meaningless unless there's enough ratings { if ( ((rating(5)+rating(4))*0.5 > rating(3)) && ((rating(1)+rating(2))*0.5 > rating(3)) ) { // Opinion divided } else { // Opinion not divided } } else { // Hard to tell yet if opinion is divided } Off the top of my head I can't think of any situation in which that wouldn't work. Using the example above: Amazon customer reviews for The Mountain Three Wolf Moon Short Sleeve Tee: $$\begin{array}{c|ccccc}\rm Rating&1&2&3&4&5\\\hline\rm Frequency&208&54&89&198&2273\end{array}$$ In this case: $$\begin{array}{c|ccccc}\rm Rating&average(1,2)&3&average(4,5)\\\hline\rm Frequency&131&89&1235\end{array}$$ This would pass the test and be considered divided opinion.
How to detect polarized user opinions (high and low star ratings)
How about, if the 3 star rating is smaller than the average of the 5 and 4, and also smaller than the average of the 1 and 2: if (number_of_ratings > 6) // kind of meaningless unless there's enou
How to detect polarized user opinions (high and low star ratings) How about, if the 3 star rating is smaller than the average of the 5 and 4, and also smaller than the average of the 1 and 2: if (number_of_ratings > 6) // kind of meaningless unless there's enough ratings { if ( ((rating(5)+rating(4))*0.5 > rating(3)) && ((rating(1)+rating(2))*0.5 > rating(3)) ) { // Opinion divided } else { // Opinion not divided } } else { // Hard to tell yet if opinion is divided } Off the top of my head I can't think of any situation in which that wouldn't work. Using the example above: Amazon customer reviews for The Mountain Three Wolf Moon Short Sleeve Tee: $$\begin{array}{c|ccccc}\rm Rating&1&2&3&4&5\\\hline\rm Frequency&208&54&89&198&2273\end{array}$$ In this case: $$\begin{array}{c|ccccc}\rm Rating&average(1,2)&3&average(4,5)\\\hline\rm Frequency&131&89&1235\end{array}$$ This would pass the test and be considered divided opinion.
How to detect polarized user opinions (high and low star ratings) How about, if the 3 star rating is smaller than the average of the 5 and 4, and also smaller than the average of the 1 and 2: if (number_of_ratings > 6) // kind of meaningless unless there's enou
17,118
How to detect polarized user opinions (high and low star ratings)
I think what you are looking for is standard deviation: $$ \sigma = \sqrt{\frac{\sum_{i=0}^{n}(x_i-\mu )^2}{n}}\\\text{where }\sigma \text{ is standard deviation, } \\ n \text{ is the number of data points,}\\ x \text{ represents all of the data points, and}\\\mu\text{ is the mean.} $$ I don't know what programming language this is, but here's a java method that will give you standard deviation: public static double standardDeviation(double[] data) { //find the mean double sum = 0; for(double x:data) { sum+=x; } double mean = sum/data.length; //find standard deviation Double sd; sd=0.0; for(double x:data) { sd+=Math.pow((x-mean),2); } sd=sd/data.length; sd=Math.sqrt(sd); return sd; }
How to detect polarized user opinions (high and low star ratings)
I think what you are looking for is standard deviation: $$ \sigma = \sqrt{\frac{\sum_{i=0}^{n}(x_i-\mu )^2}{n}}\\\text{where }\sigma \text{ is standard deviation, } \\ n \text{ is the number of data
How to detect polarized user opinions (high and low star ratings) I think what you are looking for is standard deviation: $$ \sigma = \sqrt{\frac{\sum_{i=0}^{n}(x_i-\mu )^2}{n}}\\\text{where }\sigma \text{ is standard deviation, } \\ n \text{ is the number of data points,}\\ x \text{ represents all of the data points, and}\\\mu\text{ is the mean.} $$ I don't know what programming language this is, but here's a java method that will give you standard deviation: public static double standardDeviation(double[] data) { //find the mean double sum = 0; for(double x:data) { sum+=x; } double mean = sum/data.length; //find standard deviation Double sd; sd=0.0; for(double x:data) { sd+=Math.pow((x-mean),2); } sd=sd/data.length; sd=Math.sqrt(sd); return sd; }
How to detect polarized user opinions (high and low star ratings) I think what you are looking for is standard deviation: $$ \sigma = \sqrt{\frac{\sum_{i=0}^{n}(x_i-\mu )^2}{n}}\\\text{where }\sigma \text{ is standard deviation, } \\ n \text{ is the number of data
17,119
Plots to illustrate results of linear mixed effect model
It depends on your model, but, in my experience, even colleagues, who don't have a good understanding of mixed effects models, really like if you plot the predictions with different grouping levels: library(nlme) fm2 <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1|Subject) newdat <- expand.grid(Sex=unique(Orthodont$Sex), age=c(min(Orthodont$age), max(Orthodont$age))) library(ggplot2) p <- ggplot(Orthodont, aes(x=age, y=distance, colour=Sex)) + geom_point(size=3) + geom_line(aes(y=predict(fm2), group=Subject, size="Subjects")) + geom_line(data=newdat, aes(y=predict(fm2, level=0, newdata=newdat), size="Population")) + scale_size_manual(name="Predictions", values=c("Subjects"=0.5, "Population"=3)) + theme_bw(base_size=22) print(p)
Plots to illustrate results of linear mixed effect model
It depends on your model, but, in my experience, even colleagues, who don't have a good understanding of mixed effects models, really like if you plot the predictions with different grouping levels: l
Plots to illustrate results of linear mixed effect model It depends on your model, but, in my experience, even colleagues, who don't have a good understanding of mixed effects models, really like if you plot the predictions with different grouping levels: library(nlme) fm2 <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1|Subject) newdat <- expand.grid(Sex=unique(Orthodont$Sex), age=c(min(Orthodont$age), max(Orthodont$age))) library(ggplot2) p <- ggplot(Orthodont, aes(x=age, y=distance, colour=Sex)) + geom_point(size=3) + geom_line(aes(y=predict(fm2), group=Subject, size="Subjects")) + geom_line(data=newdat, aes(y=predict(fm2, level=0, newdata=newdat), size="Population")) + scale_size_manual(name="Predictions", values=c("Subjects"=0.5, "Population"=3)) + theme_bw(base_size=22) print(p)
Plots to illustrate results of linear mixed effect model It depends on your model, but, in my experience, even colleagues, who don't have a good understanding of mixed effects models, really like if you plot the predictions with different grouping levels: l
17,120
Where is density estimation useful?
One typical case for the application of density estimation is novelty detection, a.k.a. outlier detection, where the idea is that you only (or mostly) have data of one type, but you are interested in very rare, qualitative distinct data, that deviates significantly from those common cases. Examples are fraud detection, detection of failures in systems, and so on. These are situations where it is very hard and/or expensive to gather data of the sort you are interested in. These rare cases, i.e. cases with low probability of occurring. Most of the times you are not interested on estimating accurately the exact distribution, but on the relative odds (how likely is a given sample to be an actual outlier vs. not being one). There are dozens of tutorials and reviews on the topic. This one might be a good one to start with. EDIT: for some people seems odd using density estimation for outlier detection. Let us first agree on one thing: when somebody fits a mixture model to his data, he is actually performing density estimation. A mixture model represents a distribution of probability. kNN and GMM are actually related: they are two methods of estimating such a density of probability. This is the underlying idea for many approaches in novelty detection. For example, this one based on kNNs, this other one based on Parzen windows (which stress this very idea at the beginning of the paper), and many others. It seems to me (but it is just my personal perception) that most if not all work on this idea. How else would you express the idea of an anomalous/rare event?
Where is density estimation useful?
One typical case for the application of density estimation is novelty detection, a.k.a. outlier detection, where the idea is that you only (or mostly) have data of one type, but you are interested in
Where is density estimation useful? One typical case for the application of density estimation is novelty detection, a.k.a. outlier detection, where the idea is that you only (or mostly) have data of one type, but you are interested in very rare, qualitative distinct data, that deviates significantly from those common cases. Examples are fraud detection, detection of failures in systems, and so on. These are situations where it is very hard and/or expensive to gather data of the sort you are interested in. These rare cases, i.e. cases with low probability of occurring. Most of the times you are not interested on estimating accurately the exact distribution, but on the relative odds (how likely is a given sample to be an actual outlier vs. not being one). There are dozens of tutorials and reviews on the topic. This one might be a good one to start with. EDIT: for some people seems odd using density estimation for outlier detection. Let us first agree on one thing: when somebody fits a mixture model to his data, he is actually performing density estimation. A mixture model represents a distribution of probability. kNN and GMM are actually related: they are two methods of estimating such a density of probability. This is the underlying idea for many approaches in novelty detection. For example, this one based on kNNs, this other one based on Parzen windows (which stress this very idea at the beginning of the paper), and many others. It seems to me (but it is just my personal perception) that most if not all work on this idea. How else would you express the idea of an anomalous/rare event?
Where is density estimation useful? One typical case for the application of density estimation is novelty detection, a.k.a. outlier detection, where the idea is that you only (or mostly) have data of one type, but you are interested in
17,121
Where is density estimation useful?
Typically, KDE is touted as an alternative to histograms. The main advantage of KDE over histograms, in this context, is to alleviate the effects of arbitrarily chosen parameters on the visual output of the procedure. In particular (and as illustrated in the link above), KDE does not need the user to specify start and end points.
Where is density estimation useful?
Typically, KDE is touted as an alternative to histograms. The main advantage of KDE over histograms, in this context, is to alleviate the effects of arbitrarily chosen parameters on the visual output
Where is density estimation useful? Typically, KDE is touted as an alternative to histograms. The main advantage of KDE over histograms, in this context, is to alleviate the effects of arbitrarily chosen parameters on the visual output of the procedure. In particular (and as illustrated in the link above), KDE does not need the user to specify start and end points.
Where is density estimation useful? Typically, KDE is touted as an alternative to histograms. The main advantage of KDE over histograms, in this context, is to alleviate the effects of arbitrarily chosen parameters on the visual output
17,122
Where is density estimation useful?
I guess that the mean-shift algorithm (http://en.wikipedia.org/wiki/Mean-shift) is a good example for an efficient and suited application of kde. The purpose of this algorithm is to locate the maxima of a density function given data $(x_i)$ sampled from that density function and it is entirely based on a kde modeling: $$ f_h(x) \propto \sum_{x_i} \exp( -(x_{i}-x)^{T}\Sigma^{-1} (x_{i}-x)), $$ where $\Sigma^{-1}$ is a covariance matrix (most of the time estimated). This algorithm is widely used in clustering tasks when the number of components is unknown: each discovered mode is a cluster centroid and the closer a sample to a mode the more likely it belongs to the corresponding cluster (everything being weighted properly by the shape of the reconstructed density). The sample data $x_i$ are typically of dimension larger than one: for example, to perform a 2D color image segmentation, the samples can be 5d for (RComponent, GComponent, BComponent, xPosition, yPosition).
Where is density estimation useful?
I guess that the mean-shift algorithm (http://en.wikipedia.org/wiki/Mean-shift) is a good example for an efficient and suited application of kde. The purpose of this algorithm is to locate the maxima
Where is density estimation useful? I guess that the mean-shift algorithm (http://en.wikipedia.org/wiki/Mean-shift) is a good example for an efficient and suited application of kde. The purpose of this algorithm is to locate the maxima of a density function given data $(x_i)$ sampled from that density function and it is entirely based on a kde modeling: $$ f_h(x) \propto \sum_{x_i} \exp( -(x_{i}-x)^{T}\Sigma^{-1} (x_{i}-x)), $$ where $\Sigma^{-1}$ is a covariance matrix (most of the time estimated). This algorithm is widely used in clustering tasks when the number of components is unknown: each discovered mode is a cluster centroid and the closer a sample to a mode the more likely it belongs to the corresponding cluster (everything being weighted properly by the shape of the reconstructed density). The sample data $x_i$ are typically of dimension larger than one: for example, to perform a 2D color image segmentation, the samples can be 5d for (RComponent, GComponent, BComponent, xPosition, yPosition).
Where is density estimation useful? I guess that the mean-shift algorithm (http://en.wikipedia.org/wiki/Mean-shift) is a good example for an efficient and suited application of kde. The purpose of this algorithm is to locate the maxima
17,123
Hypothesis test for difference in medians among more than two samples
The Kruskal-Wallis test could also be used, as it's a non-parametric ANOVA. Additionally, it is often considered to be more powerful than Mood's median test. It can be implemented in R using the kruskal.test function in the stats package in R. To respond to your edit, interpreting K-W is similar to a one-way ANOVA. A significant p-value corresponds to rejected the null that all three means are equal. You must use a follow-up test (again, just like an ANOVA), to answer questions about specific groups. This typically follows specific research questions you may have. Just by looking at the parameters of the simulation, all three groups should be significantly different from one another if you do a follow-up test (as they're all 1 SD apart with N = 100).
Hypothesis test for difference in medians among more than two samples
The Kruskal-Wallis test could also be used, as it's a non-parametric ANOVA. Additionally, it is often considered to be more powerful than Mood's median test. It can be implemented in R using the krusk
Hypothesis test for difference in medians among more than two samples The Kruskal-Wallis test could also be used, as it's a non-parametric ANOVA. Additionally, it is often considered to be more powerful than Mood's median test. It can be implemented in R using the kruskal.test function in the stats package in R. To respond to your edit, interpreting K-W is similar to a one-way ANOVA. A significant p-value corresponds to rejected the null that all three means are equal. You must use a follow-up test (again, just like an ANOVA), to answer questions about specific groups. This typically follows specific research questions you may have. Just by looking at the parameters of the simulation, all three groups should be significantly different from one another if you do a follow-up test (as they're all 1 SD apart with N = 100).
Hypothesis test for difference in medians among more than two samples The Kruskal-Wallis test could also be used, as it's a non-parametric ANOVA. Additionally, it is often considered to be more powerful than Mood's median test. It can be implemented in R using the krusk
17,124
Hypothesis test for difference in medians among more than two samples
First, the Wilcoxon test (or Mann-Whitney test) is not a test of medians (unless you make very strict assumptions that also make it a test of means). And for comparing more than 2 groups the Wilcoxon test can lead to some paradoxical results (see Efron's Dice). Since the Wilcoxon test is just a special case of a permutation test and you are specifically interested in the medians, I would suggest a permutation test on the medians. First choose a measure of the difference, something like the largest of the 3 medians minus the smallest of the 3 (or the variance of the 3 medians, or the MAD, etc.). Now compute your stat for the original data. pool all the data in one set then randomly partition the values into 3 groups of the same sizes as the original and compute the same statistic. repeat many times (like 9998) Compare how the statistic from the real data compares to the distribution of all the stats for your test.
Hypothesis test for difference in medians among more than two samples
First, the Wilcoxon test (or Mann-Whitney test) is not a test of medians (unless you make very strict assumptions that also make it a test of means). And for comparing more than 2 groups the Wilcoxon
Hypothesis test for difference in medians among more than two samples First, the Wilcoxon test (or Mann-Whitney test) is not a test of medians (unless you make very strict assumptions that also make it a test of means). And for comparing more than 2 groups the Wilcoxon test can lead to some paradoxical results (see Efron's Dice). Since the Wilcoxon test is just a special case of a permutation test and you are specifically interested in the medians, I would suggest a permutation test on the medians. First choose a measure of the difference, something like the largest of the 3 medians minus the smallest of the 3 (or the variance of the 3 medians, or the MAD, etc.). Now compute your stat for the original data. pool all the data in one set then randomly partition the values into 3 groups of the same sizes as the original and compute the same statistic. repeat many times (like 9998) Compare how the statistic from the real data compares to the distribution of all the stats for your test.
Hypothesis test for difference in medians among more than two samples First, the Wilcoxon test (or Mann-Whitney test) is not a test of medians (unless you make very strict assumptions that also make it a test of means). And for comparing more than 2 groups the Wilcoxon
17,125
Hypothesis test for difference in medians among more than two samples
The Mood’s median test is a nonparametric test that is used to test the equality of medians from two or more populations. See here for the R part of your question. See also a related question here. Also from here: Mood's median test is the easiest one to do by hand: work out the overall median (of all the data), and count how many values are above and below the median in each group. If the groups are all about the same, the observations should be about 50-50 above and below the overall median in each group... The counts of below-median and above-median... form a two-way table, which is then analyzed using a chi-squared test. Mood's median test is a lot like the sign test generalized to two or more groups. Edit: For three groups, you may consider this simple generalisation of the R code I linked to: median.test2 <- function(x, y, z) { a <- c(x, y, z) g <- rep(1:3, c(length(x), length(y), length(z))) m <- median(a) fisher.test(a < m, g)$p.value }
Hypothesis test for difference in medians among more than two samples
The Mood’s median test is a nonparametric test that is used to test the equality of medians from two or more populations. See here for the R part of your question. See also a related question here. Al
Hypothesis test for difference in medians among more than two samples The Mood’s median test is a nonparametric test that is used to test the equality of medians from two or more populations. See here for the R part of your question. See also a related question here. Also from here: Mood's median test is the easiest one to do by hand: work out the overall median (of all the data), and count how many values are above and below the median in each group. If the groups are all about the same, the observations should be about 50-50 above and below the overall median in each group... The counts of below-median and above-median... form a two-way table, which is then analyzed using a chi-squared test. Mood's median test is a lot like the sign test generalized to two or more groups. Edit: For three groups, you may consider this simple generalisation of the R code I linked to: median.test2 <- function(x, y, z) { a <- c(x, y, z) g <- rep(1:3, c(length(x), length(y), length(z))) m <- median(a) fisher.test(a < m, g)$p.value }
Hypothesis test for difference in medians among more than two samples The Mood’s median test is a nonparametric test that is used to test the equality of medians from two or more populations. See here for the R part of your question. See also a related question here. Al
17,126
Hypothesis test for difference in medians among more than two samples
I know this is way late, but I couldn't find a good package for Mood's median test either, so I took it upon myself to make a function in R that seems to do the trick. #Mood's median test for a data frame with one column containing data (d), #and another containing a factor/grouping variable (f) moods.median = function(d,f) { #make a new matrix data frame m = cbind(f,d) colnames(m) = c("group", "value") #get the names of the factors/groups facs = unique(f) #count the number of factors/groups factorN = length(unique(f)) #Make a 2 by K table that will be saved to the global environment by using "<<-": #2 rows (number of values > overall median & number of values <= overall median) #K-many columns for each level of the factor MoodsMedianTable <<- matrix(NA, nrow = 2, ncol = factorN) rownames(MoodsMedianTable) <<- c("> overall median", "<= overall median") colnames(MoodsMedianTable) <<- c(facs[1:factorN]) colnames(MoodsMedianTable) <<- paste("Factor: ",colnames(MoodsMedianTable)) #get the overall median overallmedian = median(d) #put the following into the 2 by K table: for(j in 1:factorN){ #for each factor level g = facs[j] #assign a temporary "group name" #count the number of observations in the factor that are greater than #the overall median and save it to the table MoodsMedianTable[1,j] <<- sum(m[,2][ which(m[,1]==g)] > overallmedian) #count the number of observations in the factor that are less than # or equal to the overall median and save it to the table MoodsMedianTable[2,j] <<- sum(m[,2][ which(m[,1]==g)] <= overallmedian) } #percent of cells with expected values less than 5 percLT5 = ((sum(chisq.test(MoodsMedianTable)$expected < 5)) / (length(chisq.test(MoodsMedianTable)$expected))) #if >20% of cells have expected values less than 5 #then give chi-squared stat, df, and Fisher's exact p.value if (percLT5 > 0.2) { return(list( "Chi-squared" = chisq.test(MoodsMedianTable)$statistic, "df" = chisq.test(MoodsMedianTable)$parameter, "Fisher's exact p.value" = fisher.test(MoodsMedianTable)$p.value)) } #if <= 20% of cells have expected values less than 5 #then give chi-squared stat, df, and chi-squared p.value if (percLT5 <= 0.2) { return(list( "Chi-squared" = chisq.test(MoodsMedianTable)$statistic, "df" = chisq.test(MoodsMedianTable)$parameter, "Chi-squared p.value" = chisq.test(MoodsMedianTable)$p.value)) } } For the OP's question, you would first run this to make a new data frame to hold the values from your three group vectors with a matched "group" variable. require(reshape2) df = cbind(group1, group2, group3) df = melt(df) colnames(df) = c("observation", "group", "value") and run the function for Mood's median test with moods.median(df$value, df$group)
Hypothesis test for difference in medians among more than two samples
I know this is way late, but I couldn't find a good package for Mood's median test either, so I took it upon myself to make a function in R that seems to do the trick. #Mood's median test for a data
Hypothesis test for difference in medians among more than two samples I know this is way late, but I couldn't find a good package for Mood's median test either, so I took it upon myself to make a function in R that seems to do the trick. #Mood's median test for a data frame with one column containing data (d), #and another containing a factor/grouping variable (f) moods.median = function(d,f) { #make a new matrix data frame m = cbind(f,d) colnames(m) = c("group", "value") #get the names of the factors/groups facs = unique(f) #count the number of factors/groups factorN = length(unique(f)) #Make a 2 by K table that will be saved to the global environment by using "<<-": #2 rows (number of values > overall median & number of values <= overall median) #K-many columns for each level of the factor MoodsMedianTable <<- matrix(NA, nrow = 2, ncol = factorN) rownames(MoodsMedianTable) <<- c("> overall median", "<= overall median") colnames(MoodsMedianTable) <<- c(facs[1:factorN]) colnames(MoodsMedianTable) <<- paste("Factor: ",colnames(MoodsMedianTable)) #get the overall median overallmedian = median(d) #put the following into the 2 by K table: for(j in 1:factorN){ #for each factor level g = facs[j] #assign a temporary "group name" #count the number of observations in the factor that are greater than #the overall median and save it to the table MoodsMedianTable[1,j] <<- sum(m[,2][ which(m[,1]==g)] > overallmedian) #count the number of observations in the factor that are less than # or equal to the overall median and save it to the table MoodsMedianTable[2,j] <<- sum(m[,2][ which(m[,1]==g)] <= overallmedian) } #percent of cells with expected values less than 5 percLT5 = ((sum(chisq.test(MoodsMedianTable)$expected < 5)) / (length(chisq.test(MoodsMedianTable)$expected))) #if >20% of cells have expected values less than 5 #then give chi-squared stat, df, and Fisher's exact p.value if (percLT5 > 0.2) { return(list( "Chi-squared" = chisq.test(MoodsMedianTable)$statistic, "df" = chisq.test(MoodsMedianTable)$parameter, "Fisher's exact p.value" = fisher.test(MoodsMedianTable)$p.value)) } #if <= 20% of cells have expected values less than 5 #then give chi-squared stat, df, and chi-squared p.value if (percLT5 <= 0.2) { return(list( "Chi-squared" = chisq.test(MoodsMedianTable)$statistic, "df" = chisq.test(MoodsMedianTable)$parameter, "Chi-squared p.value" = chisq.test(MoodsMedianTable)$p.value)) } } For the OP's question, you would first run this to make a new data frame to hold the values from your three group vectors with a matched "group" variable. require(reshape2) df = cbind(group1, group2, group3) df = melt(df) colnames(df) = c("observation", "group", "value") and run the function for Mood's median test with moods.median(df$value, df$group)
Hypothesis test for difference in medians among more than two samples I know this is way late, but I couldn't find a good package for Mood's median test either, so I took it upon myself to make a function in R that seems to do the trick. #Mood's median test for a data
17,127
How to get pooled p-values on tests done in multiple imputed datasets?
Yes, it is possible and, yes, there are R functions that do it. Instead of computing the p-values of the repeated analyses by hand, you can use the package Zelig, which is also referred to in the vignette of the Amelia-package (for a more informative method see my update below). I'll use an example from the Amelia-vignette to demonstrate this: library("Amelia") data(freetrade) amelia.out <- amelia(freetrade, m = 15, ts = "year", cs = "country") library("Zelig") zelig.fit <- zelig(tariff ~ pop + gdp.pc + year + polity, data = amelia.out$imputations, model = "ls", cite = FALSE) summary(zelig.fit) This is the corresponding output including $p$-values: Model: ls Number of multiply imputed data sets: 15 Combined results: Call: lm(formula = formula, weights = weights, model = F, data = data) Coefficients: Value Std. Error t-stat p-value (Intercept) 3.18e+03 7.22e+02 4.41 6.20e-05 pop 3.13e-08 5.59e-09 5.59 4.21e-08 gdp.pc -2.11e-03 5.53e-04 -3.81 1.64e-04 year -1.58e+00 3.63e-01 -4.37 7.11e-05 polity 5.52e-01 3.16e-01 1.75 8.41e-02 For combined results from datasets i to j, use summary(x, subset = i:j). For separate results, use print(summary(x), subset = i:j). zelig can fit a host of models other than least squares. To get confidence intervals and degrees of freedom for your estimates you can use mitools: library("mitools") imp.data <- imputationList(amelia.out$imputations) mitools.fit <- MIcombine(with(imp.data, lm(tariff ~ polity + pop + gdp.pc + year))) mitools.res <- summary(mitools.fit) mitools.res <- cbind(mitools.res, df = mitools.fit$df) mitools.res This will give you confidence intervals and proportion of the total variance that is attributable to the missing data: results se (lower upper) missInfo df (Intercept) 3.18e+03 7.22e+02 1.73e+03 4.63e+03 57 % 45.9 pop 3.13e-08 5.59e-09 2.03e-08 4.23e-08 19 % 392.1 gdp.pc -2.11e-03 5.53e-04 -3.20e-03 -1.02e-03 21 % 329.4 year -1.58e+00 3.63e-01 -2.31e+00 -8.54e-01 57 % 45.9 polity 5.52e-01 3.16e-01 -7.58e-02 1.18e+00 41 % 90.8 Of course you can just combine the interesting results into one object: combined.results <- merge(mitools.res, zelig.res$coefficients[, c("t-stat", "p-value")], by = "row.names", all.x = TRUE) Update After some playing around, I have found a more flexible way to get all necessary information using the mice-package. For this to work, you'll need to modify the package's as.mids()-function. Use Gerko's version posted in my follow-up question: as.mids2 <- function(data2, .imp=1, .id=2){ ini <- mice(data2[data2[, .imp] == 0, -c(.imp, .id)], m = max(as.numeric(data2[, .imp])), maxit=0) names <- names(ini$imp) if (!is.null(.id)){ rownames(ini$data) <- data2[data2[, .imp] == 0, .id] } for (i in 1:length(names)){ for(m in 1:(max(as.numeric(data2[, .imp])))){ if(!is.null(ini$imp[[i]])){ indic <- data2[, .imp] == m & is.na(data2[data2[, .imp]==0, names[i]]) ini$imp[[names[i]]][m] <- data2[indic, names[i]] } } } return(ini) } With this defined, you can go on to analyze the imputed data sets: library("mice") imp.data <- do.call("rbind", amelia.out$imputations) imp.data <- rbind(freetrade, imp.data) imp.data$.imp <- as.numeric(rep(c(0:15), each = nrow(freetrade))) mice.data <- as.mids2(imp.data, .imp = ncol(imp.data), .id = NULL) mice.fit <- with(mice.data, lm(tariff ~ polity + pop + gdp.pc + year)) mice.res <- summary(pool(mice.fit, method = "rubin1987")) This will give you all results you get using Zelig and mitools and more: est se t df Pr(>|t|) lo 95 hi 95 nmis fmi lambda (Intercept) 3.18e+03 7.22e+02 4.41 45.9 6.20e-05 1.73e+03 4.63e+03 NA 0.571 0.552 pop 3.13e-08 5.59e-09 5.59 392.1 4.21e-08 2.03e-08 4.23e-08 0 0.193 0.189 gdp.pc -2.11e-03 5.53e-04 -3.81 329.4 1.64e-04 -3.20e-03 -1.02e-03 0 0.211 0.206 year -1.58e+00 3.63e-01 -4.37 45.9 7.11e-05 -2.31e+00 -8.54e-01 0 0.570 0.552 polity 5.52e-01 3.16e-01 1.75 90.8 8.41e-02 -7.58e-02 1.18e+00 2 0.406 0.393 Note, using pool() you can also calculate $p$-values with $df$ adjusted for small samples by omitting the method-parameter. What is even better, you can now also calculate $R^2$ and compare nested models: pool.r.squared(mice.fit) mice.fit2 <- with(mice.data, lm(tariff ~ polity + pop + gdp.pc)) pool.compare(mice.fit, mice.fit2, method = "Wald")$pvalue
How to get pooled p-values on tests done in multiple imputed datasets?
Yes, it is possible and, yes, there are R functions that do it. Instead of computing the p-values of the repeated analyses by hand, you can use the package Zelig, which is also referred to in the vign
How to get pooled p-values on tests done in multiple imputed datasets? Yes, it is possible and, yes, there are R functions that do it. Instead of computing the p-values of the repeated analyses by hand, you can use the package Zelig, which is also referred to in the vignette of the Amelia-package (for a more informative method see my update below). I'll use an example from the Amelia-vignette to demonstrate this: library("Amelia") data(freetrade) amelia.out <- amelia(freetrade, m = 15, ts = "year", cs = "country") library("Zelig") zelig.fit <- zelig(tariff ~ pop + gdp.pc + year + polity, data = amelia.out$imputations, model = "ls", cite = FALSE) summary(zelig.fit) This is the corresponding output including $p$-values: Model: ls Number of multiply imputed data sets: 15 Combined results: Call: lm(formula = formula, weights = weights, model = F, data = data) Coefficients: Value Std. Error t-stat p-value (Intercept) 3.18e+03 7.22e+02 4.41 6.20e-05 pop 3.13e-08 5.59e-09 5.59 4.21e-08 gdp.pc -2.11e-03 5.53e-04 -3.81 1.64e-04 year -1.58e+00 3.63e-01 -4.37 7.11e-05 polity 5.52e-01 3.16e-01 1.75 8.41e-02 For combined results from datasets i to j, use summary(x, subset = i:j). For separate results, use print(summary(x), subset = i:j). zelig can fit a host of models other than least squares. To get confidence intervals and degrees of freedom for your estimates you can use mitools: library("mitools") imp.data <- imputationList(amelia.out$imputations) mitools.fit <- MIcombine(with(imp.data, lm(tariff ~ polity + pop + gdp.pc + year))) mitools.res <- summary(mitools.fit) mitools.res <- cbind(mitools.res, df = mitools.fit$df) mitools.res This will give you confidence intervals and proportion of the total variance that is attributable to the missing data: results se (lower upper) missInfo df (Intercept) 3.18e+03 7.22e+02 1.73e+03 4.63e+03 57 % 45.9 pop 3.13e-08 5.59e-09 2.03e-08 4.23e-08 19 % 392.1 gdp.pc -2.11e-03 5.53e-04 -3.20e-03 -1.02e-03 21 % 329.4 year -1.58e+00 3.63e-01 -2.31e+00 -8.54e-01 57 % 45.9 polity 5.52e-01 3.16e-01 -7.58e-02 1.18e+00 41 % 90.8 Of course you can just combine the interesting results into one object: combined.results <- merge(mitools.res, zelig.res$coefficients[, c("t-stat", "p-value")], by = "row.names", all.x = TRUE) Update After some playing around, I have found a more flexible way to get all necessary information using the mice-package. For this to work, you'll need to modify the package's as.mids()-function. Use Gerko's version posted in my follow-up question: as.mids2 <- function(data2, .imp=1, .id=2){ ini <- mice(data2[data2[, .imp] == 0, -c(.imp, .id)], m = max(as.numeric(data2[, .imp])), maxit=0) names <- names(ini$imp) if (!is.null(.id)){ rownames(ini$data) <- data2[data2[, .imp] == 0, .id] } for (i in 1:length(names)){ for(m in 1:(max(as.numeric(data2[, .imp])))){ if(!is.null(ini$imp[[i]])){ indic <- data2[, .imp] == m & is.na(data2[data2[, .imp]==0, names[i]]) ini$imp[[names[i]]][m] <- data2[indic, names[i]] } } } return(ini) } With this defined, you can go on to analyze the imputed data sets: library("mice") imp.data <- do.call("rbind", amelia.out$imputations) imp.data <- rbind(freetrade, imp.data) imp.data$.imp <- as.numeric(rep(c(0:15), each = nrow(freetrade))) mice.data <- as.mids2(imp.data, .imp = ncol(imp.data), .id = NULL) mice.fit <- with(mice.data, lm(tariff ~ polity + pop + gdp.pc + year)) mice.res <- summary(pool(mice.fit, method = "rubin1987")) This will give you all results you get using Zelig and mitools and more: est se t df Pr(>|t|) lo 95 hi 95 nmis fmi lambda (Intercept) 3.18e+03 7.22e+02 4.41 45.9 6.20e-05 1.73e+03 4.63e+03 NA 0.571 0.552 pop 3.13e-08 5.59e-09 5.59 392.1 4.21e-08 2.03e-08 4.23e-08 0 0.193 0.189 gdp.pc -2.11e-03 5.53e-04 -3.81 329.4 1.64e-04 -3.20e-03 -1.02e-03 0 0.211 0.206 year -1.58e+00 3.63e-01 -4.37 45.9 7.11e-05 -2.31e+00 -8.54e-01 0 0.570 0.552 polity 5.52e-01 3.16e-01 1.75 90.8 8.41e-02 -7.58e-02 1.18e+00 2 0.406 0.393 Note, using pool() you can also calculate $p$-values with $df$ adjusted for small samples by omitting the method-parameter. What is even better, you can now also calculate $R^2$ and compare nested models: pool.r.squared(mice.fit) mice.fit2 <- with(mice.data, lm(tariff ~ polity + pop + gdp.pc)) pool.compare(mice.fit, mice.fit2, method = "Wald")$pvalue
How to get pooled p-values on tests done in multiple imputed datasets? Yes, it is possible and, yes, there are R functions that do it. Instead of computing the p-values of the repeated analyses by hand, you can use the package Zelig, which is also referred to in the vign
17,128
How to get pooled p-values on tests done in multiple imputed datasets?
Normally you would take the p-value by applying Rubin's rules on conventional statistical parameters like regression weights. Thus, there is often no need to pool p-values directly. Also, the likelihood ratio statistic can be pooled to compare models. Pooling procedures for other statistics can be found in my book Flexible Imputation of Missing Data, chapter 6. In cases where there is no known distribution or method, there is an unpublished procedure by Licht and Rubin for one-sided tests. I used this procedure to pool p-values from the wilcoxon() procedure, but it is general and straightforward to adapt to other uses. Use procedure below ONLY if all else fails, as for now, we know little about its statistical properties. lichtrubin <- function(fit){ ## pools the p-values of a one-sided test according to the Licht-Rubin method ## this method pools p-values in the z-score scale, and then transforms back ## the result to the 0-1 scale ## Licht C, Rubin DB (2011) unpublished if (!is.mira(fit)) stop("Argument 'fit' is not an object of class 'mira'.") fitlist <- fit$analyses if (!inherits(fitlist[[1]], "htest")) stop("Object fit$analyses[[1]] is not an object of class 'htest'.") m <- length(fitlist) p <- rep(NA, length = m) for (i in 1:m) p[i] <- fitlist[[i]]$p.value z <- qnorm(p) # transform to z-scale num <- mean(z) den <- sqrt(1 + var(z)) pnorm( num / den) # average and transform back }
How to get pooled p-values on tests done in multiple imputed datasets?
Normally you would take the p-value by applying Rubin's rules on conventional statistical parameters like regression weights. Thus, there is often no need to pool p-values directly. Also, the likeliho
How to get pooled p-values on tests done in multiple imputed datasets? Normally you would take the p-value by applying Rubin's rules on conventional statistical parameters like regression weights. Thus, there is often no need to pool p-values directly. Also, the likelihood ratio statistic can be pooled to compare models. Pooling procedures for other statistics can be found in my book Flexible Imputation of Missing Data, chapter 6. In cases where there is no known distribution or method, there is an unpublished procedure by Licht and Rubin for one-sided tests. I used this procedure to pool p-values from the wilcoxon() procedure, but it is general and straightforward to adapt to other uses. Use procedure below ONLY if all else fails, as for now, we know little about its statistical properties. lichtrubin <- function(fit){ ## pools the p-values of a one-sided test according to the Licht-Rubin method ## this method pools p-values in the z-score scale, and then transforms back ## the result to the 0-1 scale ## Licht C, Rubin DB (2011) unpublished if (!is.mira(fit)) stop("Argument 'fit' is not an object of class 'mira'.") fitlist <- fit$analyses if (!inherits(fitlist[[1]], "htest")) stop("Object fit$analyses[[1]] is not an object of class 'htest'.") m <- length(fitlist) p <- rep(NA, length = m) for (i in 1:m) p[i] <- fitlist[[i]]$p.value z <- qnorm(p) # transform to z-scale num <- mean(z) den <- sqrt(1 + var(z)) pnorm( num / den) # average and transform back }
How to get pooled p-values on tests done in multiple imputed datasets? Normally you would take the p-value by applying Rubin's rules on conventional statistical parameters like regression weights. Thus, there is often no need to pool p-values directly. Also, the likeliho
17,129
Non negative lasso implementation in R
In glmnet there is the option lower.limits=0 that you can use and that would be the appropriate way to enforce positivity constraints on the fitted coefficients and if you set parameter alpha to 1 you will be fitting LASSO. In combination with the argument upper.limits you can also specify box constraints. The glmnet package is also much faster than the penalized package, suggested in another answer here. An Rcpp version of glmnet that can fit the lasso & elastic net with support for positivity and box constraints is also in preparation, and is available for testing at https://github.com/jaredhuling/ordinis
Non negative lasso implementation in R
In glmnet there is the option lower.limits=0 that you can use and that would be the appropriate way to enforce positivity constraints on the fitted coefficients and if you set parameter alpha to 1 yo
Non negative lasso implementation in R In glmnet there is the option lower.limits=0 that you can use and that would be the appropriate way to enforce positivity constraints on the fitted coefficients and if you set parameter alpha to 1 you will be fitting LASSO. In combination with the argument upper.limits you can also specify box constraints. The glmnet package is also much faster than the penalized package, suggested in another answer here. An Rcpp version of glmnet that can fit the lasso & elastic net with support for positivity and box constraints is also in preparation, and is available for testing at https://github.com/jaredhuling/ordinis
Non negative lasso implementation in R In glmnet there is the option lower.limits=0 that you can use and that would be the appropriate way to enforce positivity constraints on the fitted coefficients and if you set parameter alpha to 1 yo
17,130
Non negative lasso implementation in R
See the penalized package for one option. The Vignette (PDF!) that comes with the package has an example of this in section 3.9. Essentially set argument positive = TRUE in the call to the penalized() function.
Non negative lasso implementation in R
See the penalized package for one option. The Vignette (PDF!) that comes with the package has an example of this in section 3.9. Essentially set argument positive = TRUE in the call to the penalized()
Non negative lasso implementation in R See the penalized package for one option. The Vignette (PDF!) that comes with the package has an example of this in section 3.9. Essentially set argument positive = TRUE in the call to the penalized() function.
Non negative lasso implementation in R See the penalized package for one option. The Vignette (PDF!) that comes with the package has an example of this in section 3.9. Essentially set argument positive = TRUE in the call to the penalized()
17,131
Non negative lasso implementation in R
This and this paper demonstrate that under some conditions, hard thresholding of the non-negative least squares solution may perform equivalent or better than L1 regularization (LASSO), in terms of performance. One example is if your design matrix has only non-negative entries, which is often the case. Worth checking out, as NNLS is very widely supported and will also be easier/faster to solve.
Non negative lasso implementation in R
This and this paper demonstrate that under some conditions, hard thresholding of the non-negative least squares solution may perform equivalent or better than L1 regularization (LASSO), in terms of pe
Non negative lasso implementation in R This and this paper demonstrate that under some conditions, hard thresholding of the non-negative least squares solution may perform equivalent or better than L1 regularization (LASSO), in terms of performance. One example is if your design matrix has only non-negative entries, which is often the case. Worth checking out, as NNLS is very widely supported and will also be easier/faster to solve.
Non negative lasso implementation in R This and this paper demonstrate that under some conditions, hard thresholding of the non-negative least squares solution may perform equivalent or better than L1 regularization (LASSO), in terms of pe
17,132
How do I evaluate standard deviation?
Standard deviations aren't "good" or "bad". They are indicators of how spread out your data is. Sometimes, in ratings scales, we want wide spread because it indicates that our questions/ratings cover the range of the group we are rating. Other times, we want a small sd because we want everyone to be "high". For example, if you were testing the math skills of students in a calculus course, you could get a very small sd by asking them questions of elementary arithmetic such as $3+2$. But suppose you gave a more serious placement test for calculus (that is, students who passed would go into Calculus I, those who did not would take lower level courses first). You might expect a lower sd (and a higher average) among freshman at MIT than at South Podunk State, given the same test. So. What is the purpose of your test? Who are in the sample?
How do I evaluate standard deviation?
Standard deviations aren't "good" or "bad". They are indicators of how spread out your data is. Sometimes, in ratings scales, we want wide spread because it indicates that our questions/ratings cover
How do I evaluate standard deviation? Standard deviations aren't "good" or "bad". They are indicators of how spread out your data is. Sometimes, in ratings scales, we want wide spread because it indicates that our questions/ratings cover the range of the group we are rating. Other times, we want a small sd because we want everyone to be "high". For example, if you were testing the math skills of students in a calculus course, you could get a very small sd by asking them questions of elementary arithmetic such as $3+2$. But suppose you gave a more serious placement test for calculus (that is, students who passed would go into Calculus I, those who did not would take lower level courses first). You might expect a lower sd (and a higher average) among freshman at MIT than at South Podunk State, given the same test. So. What is the purpose of your test? Who are in the sample?
How do I evaluate standard deviation? Standard deviations aren't "good" or "bad". They are indicators of how spread out your data is. Sometimes, in ratings scales, we want wide spread because it indicates that our questions/ratings cover
17,133
How do I evaluate standard deviation?
Short answer, it's fine and a bit lower than I might have expected from survey data. But probably your business story is more in the mean or the top-2-box percent. For discrete scales from social science research, in practice the standard deviation is a direct function of the mean. In particular, I have found through empirical analysis of many such studies that the actual standard deviation in surveys on 5-point scales is 40%-60% of the maximum possible variation (alas undocumented here). At the simplest level, consider the extremes, imagine that the mean was 5.0. The standard deviation must be zero, as the only way to average 5 is for everyone to answer 5. Conversely, if the mean were 1.0 then the standard error must be 0 as well. So the standard deviation is precisely defined given the mean. Now in between there's more grey area. Imagine that people could answer either 5.0 or 1.0 but nothing in between. Then the standard deviation is a precise function of the mean: stdev = sqrt ( (5-mean)*(mean-1)) The maximum standard deviation for answers on any bounded scale is half the scale width. Here that's sqrt((5-3)(3-1)) = sqrt(2*2)=2. Now of course people can answer values in between. From metastudies of survey data in our firm, I find that the standard deviation for numeric scales in practice is 40%-60% of the maximum. Specifically 40% for 100% point scales, 50% for 10-point scales and 60% for 5-point scales and 100% for binary scales So for your dataset, I would expect a standard deviation of 60% x 2.0 = 1.2. You got 0.54, which is about half what i would have expected if the results were self-explicated ratings. Are the skills ratings results of more complicated batteries of tests that are averages and thus would have a lower variance? The real story, though, is probably the ability is so low or so high relative to other tasks. Report the means or top-2-box percentages between skills and focus your analysis on that.
How do I evaluate standard deviation?
Short answer, it's fine and a bit lower than I might have expected from survey data. But probably your business story is more in the mean or the top-2-box percent. For discrete scales from social sci
How do I evaluate standard deviation? Short answer, it's fine and a bit lower than I might have expected from survey data. But probably your business story is more in the mean or the top-2-box percent. For discrete scales from social science research, in practice the standard deviation is a direct function of the mean. In particular, I have found through empirical analysis of many such studies that the actual standard deviation in surveys on 5-point scales is 40%-60% of the maximum possible variation (alas undocumented here). At the simplest level, consider the extremes, imagine that the mean was 5.0. The standard deviation must be zero, as the only way to average 5 is for everyone to answer 5. Conversely, if the mean were 1.0 then the standard error must be 0 as well. So the standard deviation is precisely defined given the mean. Now in between there's more grey area. Imagine that people could answer either 5.0 or 1.0 but nothing in between. Then the standard deviation is a precise function of the mean: stdev = sqrt ( (5-mean)*(mean-1)) The maximum standard deviation for answers on any bounded scale is half the scale width. Here that's sqrt((5-3)(3-1)) = sqrt(2*2)=2. Now of course people can answer values in between. From metastudies of survey data in our firm, I find that the standard deviation for numeric scales in practice is 40%-60% of the maximum. Specifically 40% for 100% point scales, 50% for 10-point scales and 60% for 5-point scales and 100% for binary scales So for your dataset, I would expect a standard deviation of 60% x 2.0 = 1.2. You got 0.54, which is about half what i would have expected if the results were self-explicated ratings. Are the skills ratings results of more complicated batteries of tests that are averages and thus would have a lower variance? The real story, though, is probably the ability is so low or so high relative to other tasks. Report the means or top-2-box percentages between skills and focus your analysis on that.
How do I evaluate standard deviation? Short answer, it's fine and a bit lower than I might have expected from survey data. But probably your business story is more in the mean or the top-2-box percent. For discrete scales from social sci
17,134
How do I evaluate standard deviation?
If the data is normally distributed, you could see how population is located. 68% of all people lie within 1 standard deviation of the mean (2.26 - 3.34): 95% of all people lie within 2 standard deviations of the mean (1.72 - 3.88): It tells you how "spread out" your numbers are.
How do I evaluate standard deviation?
If the data is normally distributed, you could see how population is located. 68% of all people lie within 1 standard deviation of the mean (2.26 - 3.34): 95% of all people lie within 2 standard d
How do I evaluate standard deviation? If the data is normally distributed, you could see how population is located. 68% of all people lie within 1 standard deviation of the mean (2.26 - 3.34): 95% of all people lie within 2 standard deviations of the mean (1.72 - 3.88): It tells you how "spread out" your numbers are.
How do I evaluate standard deviation? If the data is normally distributed, you could see how population is located. 68% of all people lie within 1 standard deviation of the mean (2.26 - 3.34): 95% of all people lie within 2 standard d
17,135
A survey of data-mining software tools
This is probably the most comprehensive list you'll find: mloss.org
A survey of data-mining software tools
This is probably the most comprehensive list you'll find: mloss.org
A survey of data-mining software tools This is probably the most comprehensive list you'll find: mloss.org
A survey of data-mining software tools This is probably the most comprehensive list you'll find: mloss.org
17,136
A survey of data-mining software tools
Have a look at Weka (java, strong in classification) Orange (python scripting, mostly classification) GNU R (R language, somewhat vector table oriented, see the Machine Learning taskview, and Rattle UI) ELKI (java, strong on clustering and outlier detection, index structure support for speedups, algorithm list) Mahout (Java, belongs to Hadoop, if you have a cluster and huge data sets) and the UCI Machine Learning Repository for data sets.
A survey of data-mining software tools
Have a look at Weka (java, strong in classification) Orange (python scripting, mostly classification) GNU R (R language, somewhat vector table oriented, see the Machine Learning taskview, and Rattle
A survey of data-mining software tools Have a look at Weka (java, strong in classification) Orange (python scripting, mostly classification) GNU R (R language, somewhat vector table oriented, see the Machine Learning taskview, and Rattle UI) ELKI (java, strong on clustering and outlier detection, index structure support for speedups, algorithm list) Mahout (Java, belongs to Hadoop, if you have a cluster and huge data sets) and the UCI Machine Learning Repository for data sets.
A survey of data-mining software tools Have a look at Weka (java, strong in classification) Orange (python scripting, mostly classification) GNU R (R language, somewhat vector table oriented, see the Machine Learning taskview, and Rattle
17,137
A survey of data-mining software tools
Rattle is a data mining GUI that provides a front end to a wide range of R packages.
A survey of data-mining software tools
Rattle is a data mining GUI that provides a front end to a wide range of R packages.
A survey of data-mining software tools Rattle is a data mining GUI that provides a front end to a wide range of R packages.
A survey of data-mining software tools Rattle is a data mining GUI that provides a front end to a wide range of R packages.
17,138
A survey of data-mining software tools
Have a look at KNIME. Very easy to learn. With lots of scope for further progress. Integrates nicely with Weka and R.
A survey of data-mining software tools
Have a look at KNIME. Very easy to learn. With lots of scope for further progress. Integrates nicely with Weka and R.
A survey of data-mining software tools Have a look at KNIME. Very easy to learn. With lots of scope for further progress. Integrates nicely with Weka and R.
A survey of data-mining software tools Have a look at KNIME. Very easy to learn. With lots of scope for further progress. Integrates nicely with Weka and R.
17,139
A survey of data-mining software tools
From the popularity perspective, this paper (2008) surveys top 10 algorithms in data mining.
A survey of data-mining software tools
From the popularity perspective, this paper (2008) surveys top 10 algorithms in data mining.
A survey of data-mining software tools From the popularity perspective, this paper (2008) surveys top 10 algorithms in data mining.
A survey of data-mining software tools From the popularity perspective, this paper (2008) surveys top 10 algorithms in data mining.
17,140
A survey of data-mining software tools
RapidMiner (Java) [open source]
A survey of data-mining software tools
RapidMiner (Java) [open source]
A survey of data-mining software tools RapidMiner (Java) [open source]
A survey of data-mining software tools RapidMiner (Java) [open source]
17,141
A survey of data-mining software tools
There is ELKI, an open-source university project somewhat comparable to WEKA, but much stronger when it comes to clustering and outlier detection. WEKA actually isn't really data-mining, but machine learning software.
A survey of data-mining software tools
There is ELKI, an open-source university project somewhat comparable to WEKA, but much stronger when it comes to clustering and outlier detection. WEKA actually isn't really data-mining, but machine l
A survey of data-mining software tools There is ELKI, an open-source university project somewhat comparable to WEKA, but much stronger when it comes to clustering and outlier detection. WEKA actually isn't really data-mining, but machine learning software.
A survey of data-mining software tools There is ELKI, an open-source university project somewhat comparable to WEKA, but much stronger when it comes to clustering and outlier detection. WEKA actually isn't really data-mining, but machine l
17,142
A survey of data-mining software tools
There is this Red-R which has a nice GUI and visual programming interface. It make use of R to process the various data analysis.
A survey of data-mining software tools
There is this Red-R which has a nice GUI and visual programming interface. It make use of R to process the various data analysis.
A survey of data-mining software tools There is this Red-R which has a nice GUI and visual programming interface. It make use of R to process the various data analysis.
A survey of data-mining software tools There is this Red-R which has a nice GUI and visual programming interface. It make use of R to process the various data analysis.
17,143
A survey of data-mining software tools
Rexer Anlaytics does a toolkit survey every year. KDnuggets has software descriptions by industry as well as intent.
A survey of data-mining software tools
Rexer Anlaytics does a toolkit survey every year. KDnuggets has software descriptions by industry as well as intent.
A survey of data-mining software tools Rexer Anlaytics does a toolkit survey every year. KDnuggets has software descriptions by industry as well as intent.
A survey of data-mining software tools Rexer Anlaytics does a toolkit survey every year. KDnuggets has software descriptions by industry as well as intent.
17,144
A survey of data-mining software tools
SQL Server Data Mining (SSDM) hasn't been updated in a long time, but it's still quite competitive if you're mining large relational databases and cubes. I'm slowly but systematically slogging my way through tests of as many mining tools as I can and SQL Server's Windows interface is the most productive and stable I've found to date (particularly when it comes to enterprise databases, some of which have surprisingly sloppy interfaces) despite its age. I'd prefer a modern Windows Presentation Foundation (WPF) interface but this is the next-best thing. I wrote a whole series of detailed amateur tutorials on it titled A Rickety Stairway to SQL Server Data Mining, back when I was trying to acquire some basic mining skills. Despite my inexperience they are still useful in helping identify some of the "gotchas" in advance.
A survey of data-mining software tools
SQL Server Data Mining (SSDM) hasn't been updated in a long time, but it's still quite competitive if you're mining large relational databases and cubes. I'm slowly but systematically slogging my way
A survey of data-mining software tools SQL Server Data Mining (SSDM) hasn't been updated in a long time, but it's still quite competitive if you're mining large relational databases and cubes. I'm slowly but systematically slogging my way through tests of as many mining tools as I can and SQL Server's Windows interface is the most productive and stable I've found to date (particularly when it comes to enterprise databases, some of which have surprisingly sloppy interfaces) despite its age. I'd prefer a modern Windows Presentation Foundation (WPF) interface but this is the next-best thing. I wrote a whole series of detailed amateur tutorials on it titled A Rickety Stairway to SQL Server Data Mining, back when I was trying to acquire some basic mining skills. Despite my inexperience they are still useful in helping identify some of the "gotchas" in advance.
A survey of data-mining software tools SQL Server Data Mining (SSDM) hasn't been updated in a long time, but it's still quite competitive if you're mining large relational databases and cubes. I'm slowly but systematically slogging my way
17,145
Cosine Distance as Similarity Measure in KMeans [duplicate]
It should be the same, for normalized vectors cosine similarity and euclidean similarity are connected linearly. Here's the explanation: Cosine distance is actually cosine similarity: $\cos(x,y) = \frac{\sum x_iy_i}{\sqrt{\sum x_i^2 \sum y_i^2 }}$. Now, let's see what we can do with euclidean distance for normalized vectors $(\sum x_i^2 =\sum y_i^2 =1)$: $$\begin{align} ||x-y||^2 &=\sum(x_i -y_i)^2 \\ &=\sum (x_i^2 +y_i^2 -2x_iy_i) \\ &= \sum x_i ^2 +\sum y_i^2 -2\sum x_iy_i \\ &= 1+1-2\cos(x,y)\\ &=2(1-\cos(x,y)) \end{align}$$ Note that for normalized vectors $\cos(x,y) = \frac{\sum x_iy_i}{\sqrt{\sum x_i^2 \sum y_i^2 }} =\sum x_iy_i$ So you can see that there is a direct linear connection between these distances for normalized vectors.
Cosine Distance as Similarity Measure in KMeans [duplicate]
It should be the same, for normalized vectors cosine similarity and euclidean similarity are connected linearly. Here's the explanation: Cosine distance is actually cosine similarity: $\cos(x,y) = \fr
Cosine Distance as Similarity Measure in KMeans [duplicate] It should be the same, for normalized vectors cosine similarity and euclidean similarity are connected linearly. Here's the explanation: Cosine distance is actually cosine similarity: $\cos(x,y) = \frac{\sum x_iy_i}{\sqrt{\sum x_i^2 \sum y_i^2 }}$. Now, let's see what we can do with euclidean distance for normalized vectors $(\sum x_i^2 =\sum y_i^2 =1)$: $$\begin{align} ||x-y||^2 &=\sum(x_i -y_i)^2 \\ &=\sum (x_i^2 +y_i^2 -2x_iy_i) \\ &= \sum x_i ^2 +\sum y_i^2 -2\sum x_iy_i \\ &= 1+1-2\cos(x,y)\\ &=2(1-\cos(x,y)) \end{align}$$ Note that for normalized vectors $\cos(x,y) = \frac{\sum x_iy_i}{\sqrt{\sum x_i^2 \sum y_i^2 }} =\sum x_iy_i$ So you can see that there is a direct linear connection between these distances for normalized vectors.
Cosine Distance as Similarity Measure in KMeans [duplicate] It should be the same, for normalized vectors cosine similarity and euclidean similarity are connected linearly. Here's the explanation: Cosine distance is actually cosine similarity: $\cos(x,y) = \fr
17,146
How to fill in missing data in time series?
The answer will depend on your study design (e.g., cross-sectional time series? cohort time series, serial cohorts time series?). Honaker and King have developed an approach that is useful for cross-sectional time series (possibly useful for serial cohorts time series, depending on your assumptions), including the R package Amelia II for imputing such data. Meanwhile Spratt &Co. have described a different approach that can be used in some cohort time series designs, but is sparse on software implementations. A cross-sectional time series design (aka panel study design) is one in which a population(s) is (are) repeatedly sampled (e.g., every year), using the same study protocol (e.g., same variables, instruments, etc.). If the sampling strategy is representative, these kinds of data produce an annual picture (one measurement per participant or subject) of the distributions of those variables for each population in the study. A cohort time series design (aka repeated cohorts study design, longitudinal study design, also sometimes called a panel study design) is one in which individual units of analysis are sampled once and followed over a long period of time. The individuals may be sampled in a representative fashion from one or more populations. However, a representative cohort time series sample will become an increasingly poor representative of the target population (at least in human populations) as time passes, because of people being born or aging into the target population, and dying or aging out of it, along with immigration and emigration. A serial cohorts time series design (aka repeated, multi-, and multiple cohorts, or panel study design) is one in which a population(s) is (are) repeatedly sampled (e.g., every year), using the same study protocol (e.g., same variables, instruments, etc.), which measures individual units of analysis within a population at two points of time during the period (e.g., during the year) in order to create measures of rate of change. If the sampling strategy is representative, these kinds of data produce an annual picture of the rates of change in those variables for each population in the study. References Honaker, J. and King, G. (2010). What to do about missing values in time-series cross-section data. American Journal of Political Science, 54(2):561–581. Spratt, M., Carpenter, J., Sterne, J. A. C., Carlin, J. B., Heron, J., Henderson, J., and Tilling, K. (2010). Strategies for multiple imputation in longitudinal studies. American Journal of Epidemiology, 172(4):478–4876.
How to fill in missing data in time series?
The answer will depend on your study design (e.g., cross-sectional time series? cohort time series, serial cohorts time series?). Honaker and King have developed an approach that is useful for cross-s
How to fill in missing data in time series? The answer will depend on your study design (e.g., cross-sectional time series? cohort time series, serial cohorts time series?). Honaker and King have developed an approach that is useful for cross-sectional time series (possibly useful for serial cohorts time series, depending on your assumptions), including the R package Amelia II for imputing such data. Meanwhile Spratt &Co. have described a different approach that can be used in some cohort time series designs, but is sparse on software implementations. A cross-sectional time series design (aka panel study design) is one in which a population(s) is (are) repeatedly sampled (e.g., every year), using the same study protocol (e.g., same variables, instruments, etc.). If the sampling strategy is representative, these kinds of data produce an annual picture (one measurement per participant or subject) of the distributions of those variables for each population in the study. A cohort time series design (aka repeated cohorts study design, longitudinal study design, also sometimes called a panel study design) is one in which individual units of analysis are sampled once and followed over a long period of time. The individuals may be sampled in a representative fashion from one or more populations. However, a representative cohort time series sample will become an increasingly poor representative of the target population (at least in human populations) as time passes, because of people being born or aging into the target population, and dying or aging out of it, along with immigration and emigration. A serial cohorts time series design (aka repeated, multi-, and multiple cohorts, or panel study design) is one in which a population(s) is (are) repeatedly sampled (e.g., every year), using the same study protocol (e.g., same variables, instruments, etc.), which measures individual units of analysis within a population at two points of time during the period (e.g., during the year) in order to create measures of rate of change. If the sampling strategy is representative, these kinds of data produce an annual picture of the rates of change in those variables for each population in the study. References Honaker, J. and King, G. (2010). What to do about missing values in time-series cross-section data. American Journal of Political Science, 54(2):561–581. Spratt, M., Carpenter, J., Sterne, J. A. C., Carlin, J. B., Heron, J., Henderson, J., and Tilling, K. (2010). Strategies for multiple imputation in longitudinal studies. American Journal of Epidemiology, 172(4):478–4876.
How to fill in missing data in time series? The answer will depend on your study design (e.g., cross-sectional time series? cohort time series, serial cohorts time series?). Honaker and King have developed an approach that is useful for cross-s
17,147
How to fill in missing data in time series?
you can use imputeTS package in R . I believe the data you are working on is uni-variate time series.The imputeTS package specializes on (univariate) time series imputation. It offers several different imputation algorithm implementations. Beyond the imputation algorithms the package also provides plotting and printing functions of missing data statistics. Well I recommend you to look into State Space Models for Missing Values.This package should help you with your analysis.
How to fill in missing data in time series?
you can use imputeTS package in R . I believe the data you are working on is uni-variate time series.The imputeTS package specializes on (univariate) time series imputation. It offers several differen
How to fill in missing data in time series? you can use imputeTS package in R . I believe the data you are working on is uni-variate time series.The imputeTS package specializes on (univariate) time series imputation. It offers several different imputation algorithm implementations. Beyond the imputation algorithms the package also provides plotting and printing functions of missing data statistics. Well I recommend you to look into State Space Models for Missing Values.This package should help you with your analysis.
How to fill in missing data in time series? you can use imputeTS package in R . I believe the data you are working on is uni-variate time series.The imputeTS package specializes on (univariate) time series imputation. It offers several differen
17,148
How to find 95% credible interval?
As noted by Henry, you are assuming normal distribution and it's perfectly ok if your data follows normal distribution, but will be incorrect if you cannot assume normal distribution for it. Below I describe two different approaches that you could use for unknown distribution given only datapoints x and accompanying density estimates px. The first thing to consider is what exactly do you want to summarize using your intervals. For example, you could be interested in intervals obtained using quantiles, but you could also be interested in highest density region (see here, or here) of your distribution. While this should not make much (if any) difference in simple cases like symmetric, unimodal distributions, this will make a difference for more "complicated" distributions. Generally, quantiles will give you interval containing probability mass concentrated around the median (the middle $100\alpha\%$ of your distribution), while highest density region is a region around the modes of the distribution. This will be more clear if you compare the two plots on the picture below -- quantiles "cut" the distribution vertically, while highest density region "cuts" it horizontally. Next thing to consider is how to deal with the fact that you have incomplete information about the distribution (assuming that we are talking about continuous distribution, you have only a bunch of points rather then a function). What you could do about it is to take the values "as is", or use some kind of interpolation, or smoothing, to obtain the "in between" values. One approach would be to use linear interpolation (see ?approxfun in R), or alternatively something more smooth like splines (see ?splinefun in R). If you choose such approach you have to remember that interpolation algorithms have no domain knowledge about your data and can return invalid results like values below zero etc. # grid of points xx <- seq(min(x), max(x), by = 0.001) # interpolate function from the sample fx <- splinefun(x, px) # interpolating function pxx <- pmax(0, fx(xx)) # normalize so prob >0 Second approach that you could consider is to use kernel density/mixture distribution to approximate your distribution using the data you have. The tricky part in here is to decide about optimal bandwidth. # density of kernel density/mixture distribution dmix <- function(x, m, s, w) { k <- length(m) rowSums(vapply(1:k, function(j) w[j]*dnorm(x, m[j], s[j]), numeric(length(x)))) } # approximate function using kernel density/mixture distribution pxx <- dmix(xx, x, rep(0.4, length.out = length(x)), px) # bandwidth 0.4 chosen arbitrary Next, you are going to find the intervals of interest. You can either proceed numerically, or by simulation. 1a) Sampling to obtain quantile intervals # sample from the "empirical" distribution samp <- sample(xx, 1e5, replace = TRUE, prob = pxx) # or sample from kernel density idx <- sample.int(length(x), 1e5, replace = TRUE, prob = px) samp <- rnorm(1e5, x[idx], 0.4) # this is arbitrary sd # and take sample quantiles quantile(samp, c(0.05, 0.975)) 1b) Sampling to obtain highest density region samp <- sample(pxx, 1e5, replace = TRUE, prob = pxx) # sample probabilities crit <- quantile(samp, 0.05) # boundary for the lower 5% of probability mass # values from the 95% highest density region xx[pxx >= crit] 2a) Find quantiles numerically cpxx <- cumsum(pxx) / sum(pxx) xx[which(cpxx >= 0.025)[1]] # lower boundary xx[which(cpxx >= 0.975)[1]-1] # upper boundary 2b) Find highest density region numerically const <- sum(pxx) spxx <- sort(pxx, decreasing = TRUE) / const crit <- spxx[which(cumsum(spxx) >= 0.95)[1]] * const As you can see on the plots below, in case of unimodal, symmetric distribution both methods return the same interval. Of course, you could also try to find $100\alpha\%$ interval around some central value such that $\Pr(X \in \mu \pm \zeta) \ge \alpha$ and use some kind of optimization to find appropriate $\zeta$, but the two approaches described above seem to be used more commonly and are more intuitive.
How to find 95% credible interval?
As noted by Henry, you are assuming normal distribution and it's perfectly ok if your data follows normal distribution, but will be incorrect if you cannot assume normal distribution for it. Below I d
How to find 95% credible interval? As noted by Henry, you are assuming normal distribution and it's perfectly ok if your data follows normal distribution, but will be incorrect if you cannot assume normal distribution for it. Below I describe two different approaches that you could use for unknown distribution given only datapoints x and accompanying density estimates px. The first thing to consider is what exactly do you want to summarize using your intervals. For example, you could be interested in intervals obtained using quantiles, but you could also be interested in highest density region (see here, or here) of your distribution. While this should not make much (if any) difference in simple cases like symmetric, unimodal distributions, this will make a difference for more "complicated" distributions. Generally, quantiles will give you interval containing probability mass concentrated around the median (the middle $100\alpha\%$ of your distribution), while highest density region is a region around the modes of the distribution. This will be more clear if you compare the two plots on the picture below -- quantiles "cut" the distribution vertically, while highest density region "cuts" it horizontally. Next thing to consider is how to deal with the fact that you have incomplete information about the distribution (assuming that we are talking about continuous distribution, you have only a bunch of points rather then a function). What you could do about it is to take the values "as is", or use some kind of interpolation, or smoothing, to obtain the "in between" values. One approach would be to use linear interpolation (see ?approxfun in R), or alternatively something more smooth like splines (see ?splinefun in R). If you choose such approach you have to remember that interpolation algorithms have no domain knowledge about your data and can return invalid results like values below zero etc. # grid of points xx <- seq(min(x), max(x), by = 0.001) # interpolate function from the sample fx <- splinefun(x, px) # interpolating function pxx <- pmax(0, fx(xx)) # normalize so prob >0 Second approach that you could consider is to use kernel density/mixture distribution to approximate your distribution using the data you have. The tricky part in here is to decide about optimal bandwidth. # density of kernel density/mixture distribution dmix <- function(x, m, s, w) { k <- length(m) rowSums(vapply(1:k, function(j) w[j]*dnorm(x, m[j], s[j]), numeric(length(x)))) } # approximate function using kernel density/mixture distribution pxx <- dmix(xx, x, rep(0.4, length.out = length(x)), px) # bandwidth 0.4 chosen arbitrary Next, you are going to find the intervals of interest. You can either proceed numerically, or by simulation. 1a) Sampling to obtain quantile intervals # sample from the "empirical" distribution samp <- sample(xx, 1e5, replace = TRUE, prob = pxx) # or sample from kernel density idx <- sample.int(length(x), 1e5, replace = TRUE, prob = px) samp <- rnorm(1e5, x[idx], 0.4) # this is arbitrary sd # and take sample quantiles quantile(samp, c(0.05, 0.975)) 1b) Sampling to obtain highest density region samp <- sample(pxx, 1e5, replace = TRUE, prob = pxx) # sample probabilities crit <- quantile(samp, 0.05) # boundary for the lower 5% of probability mass # values from the 95% highest density region xx[pxx >= crit] 2a) Find quantiles numerically cpxx <- cumsum(pxx) / sum(pxx) xx[which(cpxx >= 0.025)[1]] # lower boundary xx[which(cpxx >= 0.975)[1]-1] # upper boundary 2b) Find highest density region numerically const <- sum(pxx) spxx <- sort(pxx, decreasing = TRUE) / const crit <- spxx[which(cumsum(spxx) >= 0.95)[1]] * const As you can see on the plots below, in case of unimodal, symmetric distribution both methods return the same interval. Of course, you could also try to find $100\alpha\%$ interval around some central value such that $\Pr(X \in \mu \pm \zeta) \ge \alpha$ and use some kind of optimization to find appropriate $\zeta$, but the two approaches described above seem to be used more commonly and are more intuitive.
How to find 95% credible interval? As noted by Henry, you are assuming normal distribution and it's perfectly ok if your data follows normal distribution, but will be incorrect if you cannot assume normal distribution for it. Below I d
17,149
Different definitions of the cross entropy loss function
These three definitions are essentially the same. 1) The Tensorflow introduction, $$C = -\frac{1}{n} \sum\limits_x\sum\limits_{j} (y_j \ln a_j).$$ 2) For binary classifications $j=2$, it becomes $$C = -\frac{1}{n} \sum\limits_x (y_1 \ln a_1 + y_2 \ln a_2)$$ and because of the constraints $\sum_ja_j=1$ and $\sum_jy_j=1$, it can be rewritten as $$C = -\frac{1}{n} \sum\limits_x (y_1 \ln a_1 + (1-y_1) \ln (1-a_1))$$ which is the same as in the 3rd chapter. 3) Moreover, if $y$ is a one-hot vector (which is commonly the case for classification labels) with $y_k$ being the only non-zero element, then the cross entropy loss of the corresponding sample is $$C_x=-\sum\limits_{j} (y_j \ln a_j)=-(0+0+...+y_k\ln a_k)=-\ln a_k.$$ In the cs231 notes, the cross entropy loss of one sample is given together with softmax normalization as $$C_x=-\ln(a_k)=-\ln\left(\frac{e^{f_k}}{\sum_je^{f_j}}\right).$$
Different definitions of the cross entropy loss function
These three definitions are essentially the same. 1) The Tensorflow introduction, $$C = -\frac{1}{n} \sum\limits_x\sum\limits_{j} (y_j \ln a_j).$$ 2) For binary classifications $j=2$, it becomes $$C =
Different definitions of the cross entropy loss function These three definitions are essentially the same. 1) The Tensorflow introduction, $$C = -\frac{1}{n} \sum\limits_x\sum\limits_{j} (y_j \ln a_j).$$ 2) For binary classifications $j=2$, it becomes $$C = -\frac{1}{n} \sum\limits_x (y_1 \ln a_1 + y_2 \ln a_2)$$ and because of the constraints $\sum_ja_j=1$ and $\sum_jy_j=1$, it can be rewritten as $$C = -\frac{1}{n} \sum\limits_x (y_1 \ln a_1 + (1-y_1) \ln (1-a_1))$$ which is the same as in the 3rd chapter. 3) Moreover, if $y$ is a one-hot vector (which is commonly the case for classification labels) with $y_k$ being the only non-zero element, then the cross entropy loss of the corresponding sample is $$C_x=-\sum\limits_{j} (y_j \ln a_j)=-(0+0+...+y_k\ln a_k)=-\ln a_k.$$ In the cs231 notes, the cross entropy loss of one sample is given together with softmax normalization as $$C_x=-\ln(a_k)=-\ln\left(\frac{e^{f_k}}{\sum_je^{f_j}}\right).$$
Different definitions of the cross entropy loss function These three definitions are essentially the same. 1) The Tensorflow introduction, $$C = -\frac{1}{n} \sum\limits_x\sum\limits_{j} (y_j \ln a_j).$$ 2) For binary classifications $j=2$, it becomes $$C =
17,150
Different definitions of the cross entropy loss function
In the third chapter, equation (63) is the cross entropy applied to multiple sigmoids (that may not sum to 1) while in the Tensoflow intro the cross-entropy is computed on a softmax output layer. As explained by dontloo both formula are essentially equivalent for two classes but it is not when more than two classes are considered. Softmax makes sense for multiclass with exclusive classes (i.e when there is only one label per sample, that allow the one-hot encoding of labels) while (multiple) sigmoids can be used to describe a multilabel problem (i.e with samples that are possibly positive for several classes). See this other dontloo answer as well.
Different definitions of the cross entropy loss function
In the third chapter, equation (63) is the cross entropy applied to multiple sigmoids (that may not sum to 1) while in the Tensoflow intro the cross-entropy is computed on a softmax output layer. As e
Different definitions of the cross entropy loss function In the third chapter, equation (63) is the cross entropy applied to multiple sigmoids (that may not sum to 1) while in the Tensoflow intro the cross-entropy is computed on a softmax output layer. As explained by dontloo both formula are essentially equivalent for two classes but it is not when more than two classes are considered. Softmax makes sense for multiclass with exclusive classes (i.e when there is only one label per sample, that allow the one-hot encoding of labels) while (multiple) sigmoids can be used to describe a multilabel problem (i.e with samples that are possibly positive for several classes). See this other dontloo answer as well.
Different definitions of the cross entropy loss function In the third chapter, equation (63) is the cross entropy applied to multiple sigmoids (that may not sum to 1) while in the Tensoflow intro the cross-entropy is computed on a softmax output layer. As e
17,151
Classification with ordered classes?
I had a look at this recently with a convolutional neural network classifier working with six ordinal classes. I tried three different methods: Method 1: Standard independent classification This is what you mentioned as a baseline in the question, with the mapping: class 0 -> [1, 0, 0, 0, 0, 0] class 1 -> [0, 1, 0, 0, 0, 0] class 2 -> [0, 0, 1, 0, 0, 0] class 3 -> [0, 0, 0, 1, 0, 0] class 4 -> [0, 0, 0, 0, 1, 0] class 5 -> [0, 0, 0, 0, 0, 1] We would typically use softmax activation, and categorical crossentropy loss with this. However, this does not, as you say, take into account the relationship between the classes, so that the loss function is only affected by whether you hit the right class or not, and is not affected by whether you come close. Method 2: Ordinal target function This is an approach published by Cheng et al. (2008), which has also been referred to on StackExchange here and here. The mapping is now: class 0 -> [0, 0, 0, 0, 0] class 1 -> [1, 0, 0, 0, 0] class 2 -> [1, 1, 0, 0, 0] class 3 -> [1, 1, 1, 0, 0] class 4 -> [1, 1, 1, 1, 0] class 5 -> [1, 1, 1, 1, 1] This is used with a sigmoid activation and binary crossentropy loss. This target function means that the loss is smaller the closer you get to the right class. You can predict a class from the output $\{y_k\}$ of this classifier by finding the first index $k$ where $y_k < 0.5$. $k$ then gives you the predicted class. Method 3: Turning classification into regression This is the same idea as your second one. The mapping here would be: class 0 -> [0] class 1 -> [1] class 2 -> [2] class 3 -> [3] class 4 -> [4] class 5 -> [5] I used a linear activation and mean-squared-error loss with this. Like the previous approach, this also gives you a smaller loss the less you miss. When predicting a class based on the output of this, you can simply round the output to the nearest integer. Some example results I evaluated the different methods with the same data set. The metrics were precise accuracy (hitting the correct class) and adjacent accuracy (hitting the correct class or one of its neighbours), in class-unbalanced and class-balanced versions. Each metric value shown below is found as the average of three runs. For Method 1 / Method 2 / Method 3, the metrics gave: Unbalanced precise accuracy: 0.582 / 0.606 / 0.564 Balanced precise accuracy: 0.460 / 0.499 / 0.524 Unbalanced adjacent accuracy: 0.827 / 0.835 / 0.855 Balanced adjacent accuracy: 0.827 / 0.832 / 0.859 Thus, for my particular dataset and network setup, the regression approach generally does the best, and the standard approach with independent classes generally does the worst. I don't know how well these results generalise to other cases, but it should not be that difficult to adapt any ordinal classifier to be able to use all three methods so that you can test for yourself.
Classification with ordered classes?
I had a look at this recently with a convolutional neural network classifier working with six ordinal classes. I tried three different methods: Method 1: Standard independent classification This is wh
Classification with ordered classes? I had a look at this recently with a convolutional neural network classifier working with six ordinal classes. I tried three different methods: Method 1: Standard independent classification This is what you mentioned as a baseline in the question, with the mapping: class 0 -> [1, 0, 0, 0, 0, 0] class 1 -> [0, 1, 0, 0, 0, 0] class 2 -> [0, 0, 1, 0, 0, 0] class 3 -> [0, 0, 0, 1, 0, 0] class 4 -> [0, 0, 0, 0, 1, 0] class 5 -> [0, 0, 0, 0, 0, 1] We would typically use softmax activation, and categorical crossentropy loss with this. However, this does not, as you say, take into account the relationship between the classes, so that the loss function is only affected by whether you hit the right class or not, and is not affected by whether you come close. Method 2: Ordinal target function This is an approach published by Cheng et al. (2008), which has also been referred to on StackExchange here and here. The mapping is now: class 0 -> [0, 0, 0, 0, 0] class 1 -> [1, 0, 0, 0, 0] class 2 -> [1, 1, 0, 0, 0] class 3 -> [1, 1, 1, 0, 0] class 4 -> [1, 1, 1, 1, 0] class 5 -> [1, 1, 1, 1, 1] This is used with a sigmoid activation and binary crossentropy loss. This target function means that the loss is smaller the closer you get to the right class. You can predict a class from the output $\{y_k\}$ of this classifier by finding the first index $k$ where $y_k < 0.5$. $k$ then gives you the predicted class. Method 3: Turning classification into regression This is the same idea as your second one. The mapping here would be: class 0 -> [0] class 1 -> [1] class 2 -> [2] class 3 -> [3] class 4 -> [4] class 5 -> [5] I used a linear activation and mean-squared-error loss with this. Like the previous approach, this also gives you a smaller loss the less you miss. When predicting a class based on the output of this, you can simply round the output to the nearest integer. Some example results I evaluated the different methods with the same data set. The metrics were precise accuracy (hitting the correct class) and adjacent accuracy (hitting the correct class or one of its neighbours), in class-unbalanced and class-balanced versions. Each metric value shown below is found as the average of three runs. For Method 1 / Method 2 / Method 3, the metrics gave: Unbalanced precise accuracy: 0.582 / 0.606 / 0.564 Balanced precise accuracy: 0.460 / 0.499 / 0.524 Unbalanced adjacent accuracy: 0.827 / 0.835 / 0.855 Balanced adjacent accuracy: 0.827 / 0.832 / 0.859 Thus, for my particular dataset and network setup, the regression approach generally does the best, and the standard approach with independent classes generally does the worst. I don't know how well these results generalise to other cases, but it should not be that difficult to adapt any ordinal classifier to be able to use all three methods so that you can test for yourself.
Classification with ordered classes? I had a look at this recently with a convolutional neural network classifier working with six ordinal classes. I tried three different methods: Method 1: Standard independent classification This is wh
17,152
Classification with ordered classes?
1) change the loss, say increase the loss of predicting young as old or old as young. Sounds like a reasonable approach. 2) turn it into a regression problem, young, middle-aged, and old are represented as say 0, 1 and 2. It depends on the regression learner you are employing, but this can be a bad idea (trees and derivatives would probably be safe against it, for example). Are you sure the "distance" (whatever it may mean) between young and middle-aged is the same as between middle-aged and old? As you are learning nominal variables, I recommend you to treat this problem as classification. More specifically, as you know there's a latent relationship between classes, ordinal classification. You can try the strategy proposed by Frank & Hall [1], where you code your $N$ response variable to $N-1$ binary problems. So you try to learn the distincion between old and not-old and young and not-young, and these actually give you information about the three categories. This is a really simply heuristic that can beat the naive multiclass approach and does not change the underlining workings of the learners. [1] Frank, E., & Hall, M. (2001, September). A simple approach to ordinal classification. In European Conference on Machine Learning (pp. 145-156). Springer Berlin Heidelberg.
Classification with ordered classes?
1) change the loss, say increase the loss of predicting young as old or old as young. Sounds like a reasonable approach. 2) turn it into a regression problem, young, middle-aged, and old are represe
Classification with ordered classes? 1) change the loss, say increase the loss of predicting young as old or old as young. Sounds like a reasonable approach. 2) turn it into a regression problem, young, middle-aged, and old are represented as say 0, 1 and 2. It depends on the regression learner you are employing, but this can be a bad idea (trees and derivatives would probably be safe against it, for example). Are you sure the "distance" (whatever it may mean) between young and middle-aged is the same as between middle-aged and old? As you are learning nominal variables, I recommend you to treat this problem as classification. More specifically, as you know there's a latent relationship between classes, ordinal classification. You can try the strategy proposed by Frank & Hall [1], where you code your $N$ response variable to $N-1$ binary problems. So you try to learn the distincion between old and not-old and young and not-young, and these actually give you information about the three categories. This is a really simply heuristic that can beat the naive multiclass approach and does not change the underlining workings of the learners. [1] Frank, E., & Hall, M. (2001, September). A simple approach to ordinal classification. In European Conference on Machine Learning (pp. 145-156). Springer Berlin Heidelberg.
Classification with ordered classes? 1) change the loss, say increase the loss of predicting young as old or old as young. Sounds like a reasonable approach. 2) turn it into a regression problem, young, middle-aged, and old are represe
17,153
How to read p,d and q of auto.arima()?
Try this: fit <- auto.arima(WWWusage) arimaorder(fit)
How to read p,d and q of auto.arima()?
Try this: fit <- auto.arima(WWWusage) arimaorder(fit)
How to read p,d and q of auto.arima()? Try this: fit <- auto.arima(WWWusage) arimaorder(fit)
How to read p,d and q of auto.arima()? Try this: fit <- auto.arima(WWWusage) arimaorder(fit)
17,154
How to read p,d and q of auto.arima()?
If you look at the help file of auto.arima and navigate to the section "Value", you are directed to the help file of arima function and there you find the following (under the section "Value") regarding the arma slot: A compact form of the specification, as a vector giving the number of AR, MA, seasonal AR and seasonal MA coefficients, plus the period and the number of non-seasonal and seasonal differences. That is what the seven elements you reported correspond to. In your case, you have a non-seasonal ARIMA(1,2,0).
How to read p,d and q of auto.arima()?
If you look at the help file of auto.arima and navigate to the section "Value", you are directed to the help file of arima function and there you find the following (under the section "Value") regardi
How to read p,d and q of auto.arima()? If you look at the help file of auto.arima and navigate to the section "Value", you are directed to the help file of arima function and there you find the following (under the section "Value") regarding the arma slot: A compact form of the specification, as a vector giving the number of AR, MA, seasonal AR and seasonal MA coefficients, plus the period and the number of non-seasonal and seasonal differences. That is what the seven elements you reported correspond to. In your case, you have a non-seasonal ARIMA(1,2,0).
How to read p,d and q of auto.arima()? If you look at the help file of auto.arima and navigate to the section "Value", you are directed to the help file of arima function and there you find the following (under the section "Value") regardi
17,155
How to read p,d and q of auto.arima()?
In case, it might be easier to understand for some people: non_seasonal_ar_order = model_fit$arma[1] non_seasonal_ma_order = model_fit$arma[2] seasonal_ar_order = model_fit$arma[3] seasonal_ma_order = model_fit$arma[4] period_of_data = model_fit$arma[5] # 1 for is non-seasonal data non_seasonal_diff_order = model_fit$arma[6] seasonal_diff_order = model_fit$arma[7]
How to read p,d and q of auto.arima()?
In case, it might be easier to understand for some people: non_seasonal_ar_order = model_fit$arma[1] non_seasonal_ma_order = model_fit$arma[2] seasonal_ar_order = model_fit$arma[3] seasonal_ma_order
How to read p,d and q of auto.arima()? In case, it might be easier to understand for some people: non_seasonal_ar_order = model_fit$arma[1] non_seasonal_ma_order = model_fit$arma[2] seasonal_ar_order = model_fit$arma[3] seasonal_ma_order = model_fit$arma[4] period_of_data = model_fit$arma[5] # 1 for is non-seasonal data non_seasonal_diff_order = model_fit$arma[6] seasonal_diff_order = model_fit$arma[7]
How to read p,d and q of auto.arima()? In case, it might be easier to understand for some people: non_seasonal_ar_order = model_fit$arma[1] non_seasonal_ma_order = model_fit$arma[2] seasonal_ar_order = model_fit$arma[3] seasonal_ma_order
17,156
How to characterize abrupt change?
If the observations of your time series data are correlated with the immediately previous observations, the paper by Chen and Liu (1993)$^{[1]}$ may interest you. It describes a method to detect level shifts and temporary changes in the framework of autoregressive moving-average time series models. [1]: Chen, C. and Liu, L-M. (1993), "Joint Estimation of Model Parameters and Outlier Effects in Time Series," Journal of the American Statistical Association, 88:421, 284-297
How to characterize abrupt change?
If the observations of your time series data are correlated with the immediately previous observations, the paper by Chen and Liu (1993)$^{[1]}$ may interest you. It describes a method to detect level
How to characterize abrupt change? If the observations of your time series data are correlated with the immediately previous observations, the paper by Chen and Liu (1993)$^{[1]}$ may interest you. It describes a method to detect level shifts and temporary changes in the framework of autoregressive moving-average time series models. [1]: Chen, C. and Liu, L-M. (1993), "Joint Estimation of Model Parameters and Outlier Effects in Time Series," Journal of the American Statistical Association, 88:421, 284-297
How to characterize abrupt change? If the observations of your time series data are correlated with the immediately previous observations, the paper by Chen and Liu (1993)$^{[1]}$ may interest you. It describes a method to detect level
17,157
How to characterize abrupt change?
This problem in Stats is referred to as the (univariate) Temporal Event Detection. The simplest idea is to use a moving average and standard deviation. Any reading that is "out of" 3-standard deviations (rule-of-thumb) is considered an "event". There of course, are more advanced models that use HMMs, or Regression. Here is an introductory overview of the field.
How to characterize abrupt change?
This problem in Stats is referred to as the (univariate) Temporal Event Detection. The simplest idea is to use a moving average and standard deviation. Any reading that is "out of" 3-standard deviatio
How to characterize abrupt change? This problem in Stats is referred to as the (univariate) Temporal Event Detection. The simplest idea is to use a moving average and standard deviation. Any reading that is "out of" 3-standard deviations (rule-of-thumb) is considered an "event". There of course, are more advanced models that use HMMs, or Regression. Here is an introductory overview of the field.
How to characterize abrupt change? This problem in Stats is referred to as the (univariate) Temporal Event Detection. The simplest idea is to use a moving average and standard deviation. Any reading that is "out of" 3-standard deviatio
17,158
How to characterize abrupt change?
This inference problem has many names, including change points, switch points, break points, broken line regression, broken stick regression, bilinear regression, piecewise linear regression, local linear regression, segmented regression, and discontinuity models. Here is an overview of change point packages with pros/cons and worked examples. If you know the number of change points a priori, check out the mcp package. First, let's simulate the data: df = data.frame(x = seq(1, 12, by = 0.1)) df$y = c(rnorm(21, 0, 5), rnorm(80, 180, 5), rnorm(10, 20, 5)) For your first problem, it's three intercept-only segments: model = list( y ~ 1, # Intercept ~ 1, # etc... ~ 1 ) library(mcp) fit = mcp(model, df, par_x = "x") We can plot the resulting fit: plot(fit) Here, the change points are very well defined (narrow). Let's summarise the fit to see their inferred locations (cp_1 and cp_2): summary(fit) Family: gaussian(link = 'identity') Iterations: 9000 from 3 chains. Segments: 1: y ~ 1 2: y ~ 1 ~ 1 3: y ~ 1 ~ 1 Population-level parameters: name mean lower upper Rhat n.eff cp_1 3.05 3.0 3.1 1 6445 cp_2 11.05 11.0 11.1 1 6401 int_1 0.14 -1.9 2.1 1 5979 int_2 179.86 178.8 180.9 1 6659 int_3 22.76 19.8 25.5 1 5906 sigma_1 4.68 4.1 5.3 1 5282 You can do much more complicated models with mcp, including modeling Nth-order autoregression (useful for time series), etc. Disclosure: I am the developer of mcp.
How to characterize abrupt change?
This inference problem has many names, including change points, switch points, break points, broken line regression, broken stick regression, bilinear regression, piecewise linear regression, local li
How to characterize abrupt change? This inference problem has many names, including change points, switch points, break points, broken line regression, broken stick regression, bilinear regression, piecewise linear regression, local linear regression, segmented regression, and discontinuity models. Here is an overview of change point packages with pros/cons and worked examples. If you know the number of change points a priori, check out the mcp package. First, let's simulate the data: df = data.frame(x = seq(1, 12, by = 0.1)) df$y = c(rnorm(21, 0, 5), rnorm(80, 180, 5), rnorm(10, 20, 5)) For your first problem, it's three intercept-only segments: model = list( y ~ 1, # Intercept ~ 1, # etc... ~ 1 ) library(mcp) fit = mcp(model, df, par_x = "x") We can plot the resulting fit: plot(fit) Here, the change points are very well defined (narrow). Let's summarise the fit to see their inferred locations (cp_1 and cp_2): summary(fit) Family: gaussian(link = 'identity') Iterations: 9000 from 3 chains. Segments: 1: y ~ 1 2: y ~ 1 ~ 1 3: y ~ 1 ~ 1 Population-level parameters: name mean lower upper Rhat n.eff cp_1 3.05 3.0 3.1 1 6445 cp_2 11.05 11.0 11.1 1 6401 int_1 0.14 -1.9 2.1 1 5979 int_2 179.86 178.8 180.9 1 6659 int_3 22.76 19.8 25.5 1 5906 sigma_1 4.68 4.1 5.3 1 5282 You can do much more complicated models with mcp, including modeling Nth-order autoregression (useful for time series), etc. Disclosure: I am the developer of mcp.
How to characterize abrupt change? This inference problem has many names, including change points, switch points, break points, broken line regression, broken stick regression, bilinear regression, piecewise linear regression, local li
17,159
How to characterize abrupt change?
The area of statistics that you are looking for is changepoint analysis. There is a website here that will give you an overview of the area and also have a page for software. If you are an R user then i'd recommend the changepoint package for changes in mean and the strucchange package for changes in regression. If you want to be Bayesian then the bcp package is good too. In general you have to choose a threshold which indicates the strength of the changes you are looking for. There are, of course, threshold choices that people advocate in certain situations and you can use asymptotic confidence levels or bootstrapping to get confidence too.
How to characterize abrupt change?
The area of statistics that you are looking for is changepoint analysis. There is a website here that will give you an overview of the area and also have a page for software. If you are an R user the
How to characterize abrupt change? The area of statistics that you are looking for is changepoint analysis. There is a website here that will give you an overview of the area and also have a page for software. If you are an R user then i'd recommend the changepoint package for changes in mean and the strucchange package for changes in regression. If you want to be Bayesian then the bcp package is good too. In general you have to choose a threshold which indicates the strength of the changes you are looking for. There are, of course, threshold choices that people advocate in certain situations and you can use asymptotic confidence levels or bootstrapping to get confidence too.
How to characterize abrupt change? The area of statistics that you are looking for is changepoint analysis. There is a website here that will give you an overview of the area and also have a page for software. If you are an R user the
17,160
How to characterize abrupt change?
Here is a quick and easy way to do it. Create a bunch of jump functions like this: $$ J_i = \left\{\begin{array}{l@{\qquad}l} 0 & x < x_i\\ 1 & x \ge x_i \end{array}\right. $$ for candidate cutoff points $x_1<x_2<\cdots<x_m$. Now use stepwise regression to select the best model with the $J_i$ as possible predictors. In your first example, assuming you select two predictors, you'll get one for $J_{april}$ with a positive coefficient equal to the size of the jump upward, and one for $J_{december}$ with a negative coefficient equal to the size of the jump downward. You need to decide how finely you want to divide the candidate jump times, $x_i$, e.g., one per month, one per fortnight, one per week, one per day. There are more elegant and exacting solutions involving nonlinear regression, where you use a model with $J_1$ and $J_2$ and estimate $x_1$ and $x_2$ as parameters. It's a bit messy to set up.
How to characterize abrupt change?
Here is a quick and easy way to do it. Create a bunch of jump functions like this: $$ J_i = \left\{\begin{array}{l@{\qquad}l} 0 & x < x_i\\ 1 & x \ge x_i \end{array}\right. $$ for candida
How to characterize abrupt change? Here is a quick and easy way to do it. Create a bunch of jump functions like this: $$ J_i = \left\{\begin{array}{l@{\qquad}l} 0 & x < x_i\\ 1 & x \ge x_i \end{array}\right. $$ for candidate cutoff points $x_1<x_2<\cdots<x_m$. Now use stepwise regression to select the best model with the $J_i$ as possible predictors. In your first example, assuming you select two predictors, you'll get one for $J_{april}$ with a positive coefficient equal to the size of the jump upward, and one for $J_{december}$ with a negative coefficient equal to the size of the jump downward. You need to decide how finely you want to divide the candidate jump times, $x_i$, e.g., one per month, one per fortnight, one per week, one per day. There are more elegant and exacting solutions involving nonlinear regression, where you use a model with $J_1$ and $J_2$ and estimate $x_1$ and $x_2$ as parameters. It's a bit messy to set up.
How to characterize abrupt change? Here is a quick and easy way to do it. Create a bunch of jump functions like this: $$ J_i = \left\{\begin{array}{l@{\qquad}l} 0 & x < x_i\\ 1 & x \ge x_i \end{array}\right. $$ for candida
17,161
How to characterize abrupt change?
There is a related problem of dividing a series or sequence into spells with ideally constant values. See How can I group numerical data into naturally forming "brackets"? (e.g. income) It's not quite the same problem as the question doesn't exclude spells with slow drift in any or all directions, but without abrupt changes. A more direct answer is to say that we are looking for big jumps, so the only real issue is to define jump. The first idea is then just to look at first differences between neighbouring values. It's not even clear that you need to refine that by removing noise first, as if jumps can't be distinguished from differences in noise, they surely can't be abrupt. On the other hand, the questioner evidently wants abrupt change to include ramped as well as stepped change, so some criterion such as variance or range within fixed-length windows seems called for.
How to characterize abrupt change?
There is a related problem of dividing a series or sequence into spells with ideally constant values. See How can I group numerical data into naturally forming "brackets"? (e.g. income) It's not quit
How to characterize abrupt change? There is a related problem of dividing a series or sequence into spells with ideally constant values. See How can I group numerical data into naturally forming "brackets"? (e.g. income) It's not quite the same problem as the question doesn't exclude spells with slow drift in any or all directions, but without abrupt changes. A more direct answer is to say that we are looking for big jumps, so the only real issue is to define jump. The first idea is then just to look at first differences between neighbouring values. It's not even clear that you need to refine that by removing noise first, as if jumps can't be distinguished from differences in noise, they surely can't be abrupt. On the other hand, the questioner evidently wants abrupt change to include ramped as well as stepped change, so some criterion such as variance or range within fixed-length windows seems called for.
How to characterize abrupt change? There is a related problem of dividing a series or sequence into spells with ideally constant values. See How can I group numerical data into naturally forming "brackets"? (e.g. income) It's not quit
17,162
Difference between ep-SVR and nu-SVR (and least squares SVR)
In $\nu$-SVR, the parameter $\nu$ is used to determine the proportion of the number of support vectors you desire to keep in your solution with respect to the total number of samples in the dataset. In $\nu$-SVR the parameter $\epsilon$ is introduced into the optimization problem formulation and it is estimated automatically (optimally) for you. However, in $\epsilon$-SVR you have no control on how many data vectors from the dataset become support vectors, it could be a few, it could be many. Nonetheless, you will have total control of how much error you will allow your model to have, and anything beyond the specified $\epsilon$ will be penalized in proportion to $C$, which is the regularization parameter. Depending of what I want, I choose between the two. If I am really desperate for a small solution (fewer support vectors) I choose $\nu$-SVR and hope to obtain a decent model. But if I really want to control the amount of error in my model and go for the best performance, I choose $\epsilon$-SVR and hope that the model is not too complex (lots of support vectors).
Difference between ep-SVR and nu-SVR (and least squares SVR)
In $\nu$-SVR, the parameter $\nu$ is used to determine the proportion of the number of support vectors you desire to keep in your solution with respect to the total number of samples in the dataset. I
Difference between ep-SVR and nu-SVR (and least squares SVR) In $\nu$-SVR, the parameter $\nu$ is used to determine the proportion of the number of support vectors you desire to keep in your solution with respect to the total number of samples in the dataset. In $\nu$-SVR the parameter $\epsilon$ is introduced into the optimization problem formulation and it is estimated automatically (optimally) for you. However, in $\epsilon$-SVR you have no control on how many data vectors from the dataset become support vectors, it could be a few, it could be many. Nonetheless, you will have total control of how much error you will allow your model to have, and anything beyond the specified $\epsilon$ will be penalized in proportion to $C$, which is the regularization parameter. Depending of what I want, I choose between the two. If I am really desperate for a small solution (fewer support vectors) I choose $\nu$-SVR and hope to obtain a decent model. But if I really want to control the amount of error in my model and go for the best performance, I choose $\epsilon$-SVR and hope that the model is not too complex (lots of support vectors).
Difference between ep-SVR and nu-SVR (and least squares SVR) In $\nu$-SVR, the parameter $\nu$ is used to determine the proportion of the number of support vectors you desire to keep in your solution with respect to the total number of samples in the dataset. I
17,163
Difference between ep-SVR and nu-SVR (and least squares SVR)
The difference between $\epsilon$-SVR and $\nu$-SVR is how the training problem is parametrized. Both use a type of hinge loss in the cost function. The $\nu$ parameter in $\nu$-SVM can be used to control the amount of support vectors in the resulting model. Given appropriate parameters, the exact same problem is solved.1 Least squares SVR differs from the other two by using squared residuals in the cost function instead of hinge loss. 1: C.-C. Chang and C.-J. Lin. Training $\nu$-support vector regression: Theory and algorithms. Neural Computation, 14(8):1959-1977, 2002.
Difference between ep-SVR and nu-SVR (and least squares SVR)
The difference between $\epsilon$-SVR and $\nu$-SVR is how the training problem is parametrized. Both use a type of hinge loss in the cost function. The $\nu$ parameter in $\nu$-SVM can be used to con
Difference between ep-SVR and nu-SVR (and least squares SVR) The difference between $\epsilon$-SVR and $\nu$-SVR is how the training problem is parametrized. Both use a type of hinge loss in the cost function. The $\nu$ parameter in $\nu$-SVM can be used to control the amount of support vectors in the resulting model. Given appropriate parameters, the exact same problem is solved.1 Least squares SVR differs from the other two by using squared residuals in the cost function instead of hinge loss. 1: C.-C. Chang and C.-J. Lin. Training $\nu$-support vector regression: Theory and algorithms. Neural Computation, 14(8):1959-1977, 2002.
Difference between ep-SVR and nu-SVR (and least squares SVR) The difference between $\epsilon$-SVR and $\nu$-SVR is how the training problem is parametrized. Both use a type of hinge loss in the cost function. The $\nu$ parameter in $\nu$-SVM can be used to con
17,164
Difference between ep-SVR and nu-SVR (and least squares SVR)
I like both Pablo and Marc answers. One additional point: In the paper cited by Marc there is written (section 4) "The motivation of $\nu$-SVR is that it may not be easy to decide the parameter $\epsilon$. Hence, here we are interested in the possible range of $\epsilon$. As expected, results show that $\epsilon$ is related to the target values $y$. [...] As the effective range of $\epsilon$ is affected by the target values $y$, a way to solve this difficulty for $\epsilon$-SVM is by scaling the target values before training the data. For example, if all target values are scaled to $[-1,+1]$, then the effective range of $\epsilon$ will be $[0, 1]$, the same as that of $\nu$. Then it may be easier to choose $\epsilon$." That brings me to think that it should be easier to scale your target variables and use $\epsilon$-SVR, than trying to decide whether to use $\epsilon -$ or $\nu -$ SVR. What do you think?
Difference between ep-SVR and nu-SVR (and least squares SVR)
I like both Pablo and Marc answers. One additional point: In the paper cited by Marc there is written (section 4) "The motivation of $\nu$-SVR is that it may not be easy to decide the parameter $\epsi
Difference between ep-SVR and nu-SVR (and least squares SVR) I like both Pablo and Marc answers. One additional point: In the paper cited by Marc there is written (section 4) "The motivation of $\nu$-SVR is that it may not be easy to decide the parameter $\epsilon$. Hence, here we are interested in the possible range of $\epsilon$. As expected, results show that $\epsilon$ is related to the target values $y$. [...] As the effective range of $\epsilon$ is affected by the target values $y$, a way to solve this difficulty for $\epsilon$-SVM is by scaling the target values before training the data. For example, if all target values are scaled to $[-1,+1]$, then the effective range of $\epsilon$ will be $[0, 1]$, the same as that of $\nu$. Then it may be easier to choose $\epsilon$." That brings me to think that it should be easier to scale your target variables and use $\epsilon$-SVR, than trying to decide whether to use $\epsilon -$ or $\nu -$ SVR. What do you think?
Difference between ep-SVR and nu-SVR (and least squares SVR) I like both Pablo and Marc answers. One additional point: In the paper cited by Marc there is written (section 4) "The motivation of $\nu$-SVR is that it may not be easy to decide the parameter $\epsi
17,165
Understanding dummy (manual or automated) variable creation in GLM
Categorical variables (called "factors" in R) need to be represented by numerical codes in multiple regression models. There are very many possible ways to construct numerical codes appropriately (see this great list at UCLA's stats help site). By default, R uses reference level coding (which R calls "contr.treatment"), and which is pretty much the default statistics-wide. This can be changed for all contrasts for your entire R session using ?options, or for specific analyses / variables using ?contrasts or ?C (note the capital). If you need more information about reference level coding, I explain it here: Regression based for example on days of the week. Some people find reference level coding confusing, and you don't have to use it. If you want, you can have two variables for male and female; this is called level means coding. However, if you do that, you will need to suppress the intercept or the model matrix will be singular and the regression cannot be fit as @Affine notes above and as I explain here: Qualitative variable coding leads to singularities. To suppress the intercept, you modify your formula by adding -1 or +0 like so: y~... -1 or y~... +0. Using level means coding instead of reference level coding will change the coefficients estimated and the meaning of the hypothesis tests that are printed with your output. When you have a two level factor (e.g., male vs. female) and you use reference level coding, you will see the intercept called (constant) and only one variable listed in the output (perhaps sexM). The intercept is the mean of the reference group (perhaps females) and sexM is the difference between the mean of males and the mean of females. The p-value associated with the intercept is a one-sample $t$-test of whether the reference level has a mean of $0$ and the p-value associated with sexM tells you if the sexes differ on your response. But if you use level means coding instead, you will have two variables listed and each p-value will correspond to a one-sample $t$-test of whether the mean of that level is $0$. That is, none of the p-values will be a test of whether the sexes differ. set.seed(1) y = c( rnorm(30), rnorm(30, mean=1) ) sex = rep(c("Female", "Male" ), each=30) fem = ifelse(sex=="Female", 1, 0) male = ifelse(sex=="Male", 1, 0) ref.level.coding.model = lm(y~sex) level.means.coding.model = lm(y~fem+male+0) summary(ref.level.coding.model) # ... # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) 0.08246 0.15740 0.524 0.602 # sexMale 1.05032 0.22260 4.718 1.54e-05 *** # --- # Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 # ... summary(level.means.coding.model) # ... # Coefficients: # Estimate Std. Error t value Pr(>|t|) # fem 0.08246 0.15740 0.524 0.602 # male 1.13277 0.15740 7.197 1.37e-09 *** # --- # Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 # ...
Understanding dummy (manual or automated) variable creation in GLM
Categorical variables (called "factors" in R) need to be represented by numerical codes in multiple regression models. There are very many possible ways to construct numerical codes appropriately (se
Understanding dummy (manual or automated) variable creation in GLM Categorical variables (called "factors" in R) need to be represented by numerical codes in multiple regression models. There are very many possible ways to construct numerical codes appropriately (see this great list at UCLA's stats help site). By default, R uses reference level coding (which R calls "contr.treatment"), and which is pretty much the default statistics-wide. This can be changed for all contrasts for your entire R session using ?options, or for specific analyses / variables using ?contrasts or ?C (note the capital). If you need more information about reference level coding, I explain it here: Regression based for example on days of the week. Some people find reference level coding confusing, and you don't have to use it. If you want, you can have two variables for male and female; this is called level means coding. However, if you do that, you will need to suppress the intercept or the model matrix will be singular and the regression cannot be fit as @Affine notes above and as I explain here: Qualitative variable coding leads to singularities. To suppress the intercept, you modify your formula by adding -1 or +0 like so: y~... -1 or y~... +0. Using level means coding instead of reference level coding will change the coefficients estimated and the meaning of the hypothesis tests that are printed with your output. When you have a two level factor (e.g., male vs. female) and you use reference level coding, you will see the intercept called (constant) and only one variable listed in the output (perhaps sexM). The intercept is the mean of the reference group (perhaps females) and sexM is the difference between the mean of males and the mean of females. The p-value associated with the intercept is a one-sample $t$-test of whether the reference level has a mean of $0$ and the p-value associated with sexM tells you if the sexes differ on your response. But if you use level means coding instead, you will have two variables listed and each p-value will correspond to a one-sample $t$-test of whether the mean of that level is $0$. That is, none of the p-values will be a test of whether the sexes differ. set.seed(1) y = c( rnorm(30), rnorm(30, mean=1) ) sex = rep(c("Female", "Male" ), each=30) fem = ifelse(sex=="Female", 1, 0) male = ifelse(sex=="Male", 1, 0) ref.level.coding.model = lm(y~sex) level.means.coding.model = lm(y~fem+male+0) summary(ref.level.coding.model) # ... # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) 0.08246 0.15740 0.524 0.602 # sexMale 1.05032 0.22260 4.718 1.54e-05 *** # --- # Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 # ... summary(level.means.coding.model) # ... # Coefficients: # Estimate Std. Error t value Pr(>|t|) # fem 0.08246 0.15740 0.524 0.602 # male 1.13277 0.15740 7.197 1.37e-09 *** # --- # Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 # ...
Understanding dummy (manual or automated) variable creation in GLM Categorical variables (called "factors" in R) need to be represented by numerical codes in multiple regression models. There are very many possible ways to construct numerical codes appropriately (se
17,166
Understanding dummy (manual or automated) variable creation in GLM
The estimated coefficients would be the same subject to the condition that you create your dummy variables (i.e. the numerical ones) consistent to R. For example: lets' create a fake data and fit a Poisson glm using factor. Note that gl function creates a factor variable. > counts <- c(18,17,15,20,10,20,25,13,12) > outcome <- gl(3,1,9) > outcome [1] 1 2 3 1 2 3 1 2 3 Levels: 1 2 3 > class(outcome) [1] "factor" > glm.1<- glm(counts ~ outcome, family = poisson()) > summary(glm.1) Call: glm(formula = counts ~ outcome, family = poisson()) Deviance Residuals: Min 1Q Median 3Q Max -0.9666 -0.6713 -0.1696 0.8471 1.0494 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 3.0445 0.1260 24.165 <2e-16 *** outcome2 -0.4543 0.2022 -2.247 0.0246 * outcome3 -0.2930 0.1927 -1.520 0.1285 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 10.5814 on 8 degrees of freedom Residual deviance: 5.1291 on 6 degrees of freedom AIC: 52.761 Number of Fisher Scoring iterations: 4 Since outcome has three levels, I create two dummy variables (dummy.1=0 if outcome=2 and dummy.2=1 if outcome=3) and refit using these numerical values: > dummy.1=rep(0,9) > dummy.2=rep(0,9) > dummy.1[outcome==2]=1 > dummy.2[outcome==3]=1 > glm.2<- glm(counts ~ dummy.1+dummy.2, family = poisson()) > summary(glm.2) Call: glm(formula = counts ~ dummy.1 + dummy.2, family = poisson()) Deviance Residuals: Min 1Q Median 3Q Max -0.9666 -0.6713 -0.1696 0.8471 1.0494 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 3.0445 0.1260 24.165 <2e-16 *** dummy.1 -0.4543 0.2022 -2.247 0.0246 * dummy.2 -0.2930 0.1927 -1.520 0.1285 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 10.5814 on 8 degrees of freedom Residual deviance: 5.1291 on 6 degrees of freedom AIC: 52.761 Number of Fisher Scoring iterations: 4 As you can see the estimated coefficients are the same. But you need to be careful when creating your dummy variables if you want to get the same result. For example if I create two dummy variables as (dummy.1=0 if outcome=1 and dummy.2=1 if outcome=2) then the estimated results are different as follow: > dummy.1=rep(0,9) > dummy.2=rep(0,9) > dummy.1[outcome==1]=1 > dummy.2[outcome==2]=1 > glm.3<- glm(counts ~ dummy.1+dummy.2, family = poisson()) > summary(glm.3) Call: glm(formula = counts ~ dummy.1 + dummy.2, family = poisson()) Deviance Residuals: Min 1Q Median 3Q Max -0.9666 -0.6713 -0.1696 0.8471 1.0494 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 2.7515 0.1459 18.86 <2e-16 *** dummy.1 0.2930 0.1927 1.52 0.128 dummy.2 -0.1613 0.2151 -0.75 0.453 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 10.5814 on 8 degrees of freedom Residual deviance: 5.1291 on 6 degrees of freedom AIC: 52.761 Number of Fisher Scoring iterations: 4 This is because when you add outcome variable in glm.1, R by default creates two dummy variables namely outcome2 and outcome3 and defines them similarly to dummy.1 and dummy.2 in glm.2 i.e. the first level of outcome is when all other dummy variables (outcome2 and outcome3) are set to be zero.
Understanding dummy (manual or automated) variable creation in GLM
The estimated coefficients would be the same subject to the condition that you create your dummy variables (i.e. the numerical ones) consistent to R. For example: lets' create a fake data and fit a P
Understanding dummy (manual or automated) variable creation in GLM The estimated coefficients would be the same subject to the condition that you create your dummy variables (i.e. the numerical ones) consistent to R. For example: lets' create a fake data and fit a Poisson glm using factor. Note that gl function creates a factor variable. > counts <- c(18,17,15,20,10,20,25,13,12) > outcome <- gl(3,1,9) > outcome [1] 1 2 3 1 2 3 1 2 3 Levels: 1 2 3 > class(outcome) [1] "factor" > glm.1<- glm(counts ~ outcome, family = poisson()) > summary(glm.1) Call: glm(formula = counts ~ outcome, family = poisson()) Deviance Residuals: Min 1Q Median 3Q Max -0.9666 -0.6713 -0.1696 0.8471 1.0494 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 3.0445 0.1260 24.165 <2e-16 *** outcome2 -0.4543 0.2022 -2.247 0.0246 * outcome3 -0.2930 0.1927 -1.520 0.1285 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 10.5814 on 8 degrees of freedom Residual deviance: 5.1291 on 6 degrees of freedom AIC: 52.761 Number of Fisher Scoring iterations: 4 Since outcome has three levels, I create two dummy variables (dummy.1=0 if outcome=2 and dummy.2=1 if outcome=3) and refit using these numerical values: > dummy.1=rep(0,9) > dummy.2=rep(0,9) > dummy.1[outcome==2]=1 > dummy.2[outcome==3]=1 > glm.2<- glm(counts ~ dummy.1+dummy.2, family = poisson()) > summary(glm.2) Call: glm(formula = counts ~ dummy.1 + dummy.2, family = poisson()) Deviance Residuals: Min 1Q Median 3Q Max -0.9666 -0.6713 -0.1696 0.8471 1.0494 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 3.0445 0.1260 24.165 <2e-16 *** dummy.1 -0.4543 0.2022 -2.247 0.0246 * dummy.2 -0.2930 0.1927 -1.520 0.1285 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 10.5814 on 8 degrees of freedom Residual deviance: 5.1291 on 6 degrees of freedom AIC: 52.761 Number of Fisher Scoring iterations: 4 As you can see the estimated coefficients are the same. But you need to be careful when creating your dummy variables if you want to get the same result. For example if I create two dummy variables as (dummy.1=0 if outcome=1 and dummy.2=1 if outcome=2) then the estimated results are different as follow: > dummy.1=rep(0,9) > dummy.2=rep(0,9) > dummy.1[outcome==1]=1 > dummy.2[outcome==2]=1 > glm.3<- glm(counts ~ dummy.1+dummy.2, family = poisson()) > summary(glm.3) Call: glm(formula = counts ~ dummy.1 + dummy.2, family = poisson()) Deviance Residuals: Min 1Q Median 3Q Max -0.9666 -0.6713 -0.1696 0.8471 1.0494 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 2.7515 0.1459 18.86 <2e-16 *** dummy.1 0.2930 0.1927 1.52 0.128 dummy.2 -0.1613 0.2151 -0.75 0.453 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 10.5814 on 8 degrees of freedom Residual deviance: 5.1291 on 6 degrees of freedom AIC: 52.761 Number of Fisher Scoring iterations: 4 This is because when you add outcome variable in glm.1, R by default creates two dummy variables namely outcome2 and outcome3 and defines them similarly to dummy.1 and dummy.2 in glm.2 i.e. the first level of outcome is when all other dummy variables (outcome2 and outcome3) are set to be zero.
Understanding dummy (manual or automated) variable creation in GLM The estimated coefficients would be the same subject to the condition that you create your dummy variables (i.e. the numerical ones) consistent to R. For example: lets' create a fake data and fit a P
17,167
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$
The issue you need to worry about is called endogeneity. More specifically, it depends on whether $x_3$ is correlated in the population with $x_1$ or $x_2$. If it is, then the associated $b_j$s will be biased. That is because OLS regression methods force the residuals, $u_i$, to be uncorrelated with your covariates, $x_j$s. However, your residuals are composed of some irreducible randomness, $\varepsilon_i$, and the unobserved (but relevant) variable, $x_3$, which by stipulation is correlated with $x_1$ and / or $x_2$. On the other hand, if both $x_1$ and $x_2$ are uncorrelated with $x_3$ in the population, then their $b$s won't be biased by this (they may well be biased by something else, of course). One way econometricians try to deal with this issue is by using instrumental variables. For the sake of greater clarity, I've written a quick simulation in R that demonstrates the sampling distribution of $b_2$ is unbiased / centered on the true value of $\beta_2$, when it is uncorrelated with $x_3$. In the second run, however, note that $x_3$ is uncorrelated with $x_1$, but not $x_2$. Not coincidentally, $b_1$ is unbiased, but $b_2$ is biased. library(MASS) # you'll need this package below N = 100 # this is how much data we'll use beta0 = -71 # these are the true values of the beta1 = .84 # parameters beta2 = .64 beta3 = .34 ############## uncorrelated version b0VectU = vector(length=10000) # these will store the parameter b1VectU = vector(length=10000) # estimates b2VectU = vector(length=10000) set.seed(7508) # this makes the simulation reproducible for(i in 1:10000){ # we'll do this 10k times x1 = rnorm(N) x2 = rnorm(N) # these variables are uncorrelated x3 = rnorm(N) y = beta0 + beta1*x1 + beta2*x2 + beta3*x3 + rnorm(100) mod = lm(y~x1+x2) # note all 3 variables are relevant # but the model omits x3 b0VectU[i] = coef(mod)[1] # here I'm storing the estimates b1VectU[i] = coef(mod)[2] b2VectU[i] = coef(mod)[3] } mean(b0VectU) # [1] -71.00005 # all 3 of these are centered on the mean(b1VectU) # [1] 0.8399306 # the true values / are unbiased mean(b2VectU) # [1] 0.6398391 # e.g., .64 = .64 ############## correlated version r23 = .7 # this will be the correlation in the b0VectC = vector(length=10000) # population between x2 & x3 b1VectC = vector(length=10000) b2VectC = vector(length=10000) set.seed(2734) for(i in 1:10000){ x1 = rnorm(N) X = mvrnorm(N, mu=c(0,0), Sigma=rbind(c( 1, r23), c(r23, 1))) x2 = X[,1] x3 = X[,2] # x3 is correated w/ x2, but not x1 y = beta0 + beta1*x1 + beta2*x2 + beta3*x3 + rnorm(100) # once again, all 3 variables are relevant mod = lm(y~x1+x2) # but the model omits x3 b0VectC[i] = coef(mod)[1] b1VectC[i] = coef(mod)[2] # we store the estimates again b2VectC[i] = coef(mod)[3] } mean(b0VectC) # [1] -70.99916 # the 1st 2 are unbiased mean(b1VectC) # [1] 0.8409656 # but the sampling dist of x2 is biased mean(b2VectC) # [1] 0.8784184 # .88 not equal to .64
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$
The issue you need to worry about is called endogeneity. More specifically, it depends on whether $x_3$ is correlated in the population with $x_1$ or $x_2$. If it is, then the associated $b_j$s will
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$ The issue you need to worry about is called endogeneity. More specifically, it depends on whether $x_3$ is correlated in the population with $x_1$ or $x_2$. If it is, then the associated $b_j$s will be biased. That is because OLS regression methods force the residuals, $u_i$, to be uncorrelated with your covariates, $x_j$s. However, your residuals are composed of some irreducible randomness, $\varepsilon_i$, and the unobserved (but relevant) variable, $x_3$, which by stipulation is correlated with $x_1$ and / or $x_2$. On the other hand, if both $x_1$ and $x_2$ are uncorrelated with $x_3$ in the population, then their $b$s won't be biased by this (they may well be biased by something else, of course). One way econometricians try to deal with this issue is by using instrumental variables. For the sake of greater clarity, I've written a quick simulation in R that demonstrates the sampling distribution of $b_2$ is unbiased / centered on the true value of $\beta_2$, when it is uncorrelated with $x_3$. In the second run, however, note that $x_3$ is uncorrelated with $x_1$, but not $x_2$. Not coincidentally, $b_1$ is unbiased, but $b_2$ is biased. library(MASS) # you'll need this package below N = 100 # this is how much data we'll use beta0 = -71 # these are the true values of the beta1 = .84 # parameters beta2 = .64 beta3 = .34 ############## uncorrelated version b0VectU = vector(length=10000) # these will store the parameter b1VectU = vector(length=10000) # estimates b2VectU = vector(length=10000) set.seed(7508) # this makes the simulation reproducible for(i in 1:10000){ # we'll do this 10k times x1 = rnorm(N) x2 = rnorm(N) # these variables are uncorrelated x3 = rnorm(N) y = beta0 + beta1*x1 + beta2*x2 + beta3*x3 + rnorm(100) mod = lm(y~x1+x2) # note all 3 variables are relevant # but the model omits x3 b0VectU[i] = coef(mod)[1] # here I'm storing the estimates b1VectU[i] = coef(mod)[2] b2VectU[i] = coef(mod)[3] } mean(b0VectU) # [1] -71.00005 # all 3 of these are centered on the mean(b1VectU) # [1] 0.8399306 # the true values / are unbiased mean(b2VectU) # [1] 0.6398391 # e.g., .64 = .64 ############## correlated version r23 = .7 # this will be the correlation in the b0VectC = vector(length=10000) # population between x2 & x3 b1VectC = vector(length=10000) b2VectC = vector(length=10000) set.seed(2734) for(i in 1:10000){ x1 = rnorm(N) X = mvrnorm(N, mu=c(0,0), Sigma=rbind(c( 1, r23), c(r23, 1))) x2 = X[,1] x3 = X[,2] # x3 is correated w/ x2, but not x1 y = beta0 + beta1*x1 + beta2*x2 + beta3*x3 + rnorm(100) # once again, all 3 variables are relevant mod = lm(y~x1+x2) # but the model omits x3 b0VectC[i] = coef(mod)[1] b1VectC[i] = coef(mod)[2] # we store the estimates again b2VectC[i] = coef(mod)[3] } mean(b0VectC) # [1] -70.99916 # the 1st 2 are unbiased mean(b1VectC) # [1] 0.8409656 # but the sampling dist of x2 is biased mean(b2VectC) # [1] 0.8784184 # .88 not equal to .64
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$ The issue you need to worry about is called endogeneity. More specifically, it depends on whether $x_3$ is correlated in the population with $x_1$ or $x_2$. If it is, then the associated $b_j$s will
17,168
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$
Let's think of this in geometric terms. Think of a "ball", the surface of a ball. It is described as $ r^2 = ax^2+by^2+cz^2 + \epsilon$. Now if you have the values for $ x^2$, $ y^2$, $ z^2$, and you have measurements of $ r^2$ then you can determine your coefficients "a", "b", and "c". (You could call it ellipsoid, but to call it a ball is simpler.) If you have only the $ x^2$, and $ y^2$ terms then you can make a circle. Instead of defining the surface of a ball, you will describe a filled in circle. The equation you instead fit is $ r^2 \le ax^2 + by^2 + \epsilon$. You are projecting the "ball", whatever shape it is, into the expression for the circle. It could be a diagonally oriented "ball" that is shaped more like a sewing needle, and so the $ z$ components utterly wreck the estimates of the two axes. It could be a ball that looks like a nearly crushed m&m where the coin-axes are "x" and "y", and there is zero projection. You can't know which it is without the "$ z$" information. That last paragraph was talking about a "pure information" case and didn't account for the noise. Real world measurements have the signal with noise. The noise along the perimeter that is aligned to the axes is going to have a much stronger impact on your fit. Even though you have the same number of samples, you are going to have more uncertainty in your parameter estimates. If it is a different equation than this simple linear axis-oriented case, then things can go "pear shaped". Your current equations are plane-shaped, so instead of having a bound (the surface of the ball), the z-data might just go all over the map - projection could be a serious problem. Is it okay to model? That is a judgment call. An expert who understands the particulars of the problem might answer that. I don't know if someone can give a good answer if they are far from the problem. You do lose several good things, including certainty in parameter estimates, and the nature of the model being transformed. The estimate for $ b_3$ disappears into epsilon and into the other parameter estimates. It is subsumed by the whole equation, depending on the underlying system.
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$
Let's think of this in geometric terms. Think of a "ball", the surface of a ball. It is described as $ r^2 = ax^2+by^2+cz^2 + \epsilon$. Now if you have the values for $ x^2$, $ y^2$, $ z^2$, and y
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$ Let's think of this in geometric terms. Think of a "ball", the surface of a ball. It is described as $ r^2 = ax^2+by^2+cz^2 + \epsilon$. Now if you have the values for $ x^2$, $ y^2$, $ z^2$, and you have measurements of $ r^2$ then you can determine your coefficients "a", "b", and "c". (You could call it ellipsoid, but to call it a ball is simpler.) If you have only the $ x^2$, and $ y^2$ terms then you can make a circle. Instead of defining the surface of a ball, you will describe a filled in circle. The equation you instead fit is $ r^2 \le ax^2 + by^2 + \epsilon$. You are projecting the "ball", whatever shape it is, into the expression for the circle. It could be a diagonally oriented "ball" that is shaped more like a sewing needle, and so the $ z$ components utterly wreck the estimates of the two axes. It could be a ball that looks like a nearly crushed m&m where the coin-axes are "x" and "y", and there is zero projection. You can't know which it is without the "$ z$" information. That last paragraph was talking about a "pure information" case and didn't account for the noise. Real world measurements have the signal with noise. The noise along the perimeter that is aligned to the axes is going to have a much stronger impact on your fit. Even though you have the same number of samples, you are going to have more uncertainty in your parameter estimates. If it is a different equation than this simple linear axis-oriented case, then things can go "pear shaped". Your current equations are plane-shaped, so instead of having a bound (the surface of the ball), the z-data might just go all over the map - projection could be a serious problem. Is it okay to model? That is a judgment call. An expert who understands the particulars of the problem might answer that. I don't know if someone can give a good answer if they are far from the problem. You do lose several good things, including certainty in parameter estimates, and the nature of the model being transformed. The estimate for $ b_3$ disappears into epsilon and into the other parameter estimates. It is subsumed by the whole equation, depending on the underlying system.
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$ Let's think of this in geometric terms. Think of a "ball", the surface of a ball. It is described as $ r^2 = ax^2+by^2+cz^2 + \epsilon$. Now if you have the values for $ x^2$, $ y^2$, $ z^2$, and y
17,169
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$
The other answers, while not wrong, over complicate the issue a bit. If $x_3$ is truly uncorrelated with $x_1$ and $x_2$ (and the true relationship is as specified) then you can estimate your second equation without an issue. As you suggest, $\beta_3 x_3$ will be absorbed by the (new) error term. The OLS estimates will be unbiased, as long as all the other OLS assumptions hold.
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$
The other answers, while not wrong, over complicate the issue a bit. If $x_3$ is truly uncorrelated with $x_1$ and $x_2$ (and the true relationship is as specified) then you can estimate your second e
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$ The other answers, while not wrong, over complicate the issue a bit. If $x_3$ is truly uncorrelated with $x_1$ and $x_2$ (and the true relationship is as specified) then you can estimate your second equation without an issue. As you suggest, $\beta_3 x_3$ will be absorbed by the (new) error term. The OLS estimates will be unbiased, as long as all the other OLS assumptions hold.
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$ The other answers, while not wrong, over complicate the issue a bit. If $x_3$ is truly uncorrelated with $x_1$ and $x_2$ (and the true relationship is as specified) then you can estimate your second e
17,170
Advantages of ROC curves
Many binary classification algorithms compute a sort of classification score (sometimes but not always this is a probability of being in the target state), and they classify based upon whether or not the score is above a certain threshold. Viewing the ROC curve lets you see the tradeoff between sensitivity and specificity for all possible thresholds rather than just the one that was chosen by the modeling technique. Different classification objectives might make one point on the curve more suitable for one task and another more suitable for a different task, so looking at the ROC curve is a way to assess the model independent of the choice of a threshold.
Advantages of ROC curves
Many binary classification algorithms compute a sort of classification score (sometimes but not always this is a probability of being in the target state), and they classify based upon whether or not
Advantages of ROC curves Many binary classification algorithms compute a sort of classification score (sometimes but not always this is a probability of being in the target state), and they classify based upon whether or not the score is above a certain threshold. Viewing the ROC curve lets you see the tradeoff between sensitivity and specificity for all possible thresholds rather than just the one that was chosen by the modeling technique. Different classification objectives might make one point on the curve more suitable for one task and another more suitable for a different task, so looking at the ROC curve is a way to assess the model independent of the choice of a threshold.
Advantages of ROC curves Many binary classification algorithms compute a sort of classification score (sometimes but not always this is a probability of being in the target state), and they classify based upon whether or not
17,171
Advantages of ROC curves
ROC curves are not informative in 99% of the cases I've seen over the past few years. They seem to be thought of as obligatory by many statisticians and even more machine learning practitioners. And make sure your problem is really a classification problem and not a risk estimation problem. At the heart of problems with ROC curves is that they invite users to use cutpoints for continuous variables, and they use backwards probabilities, i.e., probabilities of events that are in reverse time order (sensitivity and specificity). ROC curves cannot be used to find optimum tradeoffs except in very special cases where users of a decision rule abdicate their loss (cost; utility) function to the analyst.
Advantages of ROC curves
ROC curves are not informative in 99% of the cases I've seen over the past few years. They seem to be thought of as obligatory by many statisticians and even more machine learning practitioners. An
Advantages of ROC curves ROC curves are not informative in 99% of the cases I've seen over the past few years. They seem to be thought of as obligatory by many statisticians and even more machine learning practitioners. And make sure your problem is really a classification problem and not a risk estimation problem. At the heart of problems with ROC curves is that they invite users to use cutpoints for continuous variables, and they use backwards probabilities, i.e., probabilities of events that are in reverse time order (sensitivity and specificity). ROC curves cannot be used to find optimum tradeoffs except in very special cases where users of a decision rule abdicate their loss (cost; utility) function to the analyst.
Advantages of ROC curves ROC curves are not informative in 99% of the cases I've seen over the past few years. They seem to be thought of as obligatory by many statisticians and even more machine learning practitioners. An
17,172
Advantages of ROC curves
After creating a ROC curve, the AUC (area under the curve) can be calculated. The AUC is accuracy of the test across many thresholds. AUC = 1 means the test is perfect. AUC = .5 means performs at chance for binary classification. If there are multiple models, AUC provides a single measurement to compare across different models. There are always trade-offs with any single measure but AUC is a good place to start.
Advantages of ROC curves
After creating a ROC curve, the AUC (area under the curve) can be calculated. The AUC is accuracy of the test across many thresholds. AUC = 1 means the test is perfect. AUC = .5 means performs at chan
Advantages of ROC curves After creating a ROC curve, the AUC (area under the curve) can be calculated. The AUC is accuracy of the test across many thresholds. AUC = 1 means the test is perfect. AUC = .5 means performs at chance for binary classification. If there are multiple models, AUC provides a single measurement to compare across different models. There are always trade-offs with any single measure but AUC is a good place to start.
Advantages of ROC curves After creating a ROC curve, the AUC (area under the curve) can be calculated. The AUC is accuracy of the test across many thresholds. AUC = 1 means the test is perfect. AUC = .5 means performs at chan
17,173
Advantages of ROC curves
The AUC does not compare classes real vs. predicted with each other. It is not looking at the predicted class, but the prediction score or the probability. You can do the prediction of the class by applying a cutoff to this score, say, every sample that got a score below 0.5 is classified as negative. But the ROC comes before that happens. It is working with the scores/class-probabilities. It takes these scores and sorts all samples according to that score. Now, whenever you find a positive sample the ROC-curve makes a step up (along the y-axis). Whenever you find a negative sample you move right (along the x-axis). If that score is different for the two classes, the positive samples come first (usually). That means you make more steps up than to the right. Further down the list the negative samples will come, so you move left. When you are through the whole list of samples you reach at the coordinate (1,1) which corresponds to 100% of the positive and 100% of the negative samples. If the score perfectly separates the positive from the negative samples you move all the way from (x=0, y=0) to (1,0) and then from there to (1, 1). So, the area under the curve is 1. If your score has the same distribution for positive and negative samples the probabilities to find a positive or negative sample in the sorted list are equal and therefore the probabilities to move up or left in the ROC-curve are equal. That is why you move along the diagonal, because you essentially move up and left, and up and left, and so on... which gives an AROC value of around 0.5. In the case of an imbalanced dataset, the stepsize is different. So, you make smaller steps to the left (if you have more negative samples). That is why the score is more or less independent of the imbalance. So with the ROC curve, you can visualize how your samples are separated and the area under the curve can be a very good metric to measure the performance of a binary classification algorithm or any variable that may be used to separate classes. The figure shows the same distributions with different sample sizes. The black area shows where ROC-curves of random mixtures of positive and negative samples would be expected.
Advantages of ROC curves
The AUC does not compare classes real vs. predicted with each other. It is not looking at the predicted class, but the prediction score or the probability. You can do the prediction of the class by ap
Advantages of ROC curves The AUC does not compare classes real vs. predicted with each other. It is not looking at the predicted class, but the prediction score or the probability. You can do the prediction of the class by applying a cutoff to this score, say, every sample that got a score below 0.5 is classified as negative. But the ROC comes before that happens. It is working with the scores/class-probabilities. It takes these scores and sorts all samples according to that score. Now, whenever you find a positive sample the ROC-curve makes a step up (along the y-axis). Whenever you find a negative sample you move right (along the x-axis). If that score is different for the two classes, the positive samples come first (usually). That means you make more steps up than to the right. Further down the list the negative samples will come, so you move left. When you are through the whole list of samples you reach at the coordinate (1,1) which corresponds to 100% of the positive and 100% of the negative samples. If the score perfectly separates the positive from the negative samples you move all the way from (x=0, y=0) to (1,0) and then from there to (1, 1). So, the area under the curve is 1. If your score has the same distribution for positive and negative samples the probabilities to find a positive or negative sample in the sorted list are equal and therefore the probabilities to move up or left in the ROC-curve are equal. That is why you move along the diagonal, because you essentially move up and left, and up and left, and so on... which gives an AROC value of around 0.5. In the case of an imbalanced dataset, the stepsize is different. So, you make smaller steps to the left (if you have more negative samples). That is why the score is more or less independent of the imbalance. So with the ROC curve, you can visualize how your samples are separated and the area under the curve can be a very good metric to measure the performance of a binary classification algorithm or any variable that may be used to separate classes. The figure shows the same distributions with different sample sizes. The black area shows where ROC-curves of random mixtures of positive and negative samples would be expected.
Advantages of ROC curves The AUC does not compare classes real vs. predicted with each other. It is not looking at the predicted class, but the prediction score or the probability. You can do the prediction of the class by ap
17,174
Cleaning data of inconsistent format in R?
I would use gsub() to identify the strings that I know and then perhaps do the rest by hand. test <- c("15min", "15 min", "Maybe a few hours", "4hr", "4hour", "3.5hr", "3-10", "3-10") new_var <- rep(NA, length(test)) my_sub <- function(regex, new_var, test){ t2 <- gsub(regex, "\\1", test) identified_vars <- which(test != t2) new_var[identified_vars] <- as.double(t2[identified_vars]) return(new_var) } new_var <- my_sub("([0-9]+)[ ]*min", new_var, test) new_var <- my_sub("([0-9]+)[ ]*(hour|hr)[s]{0,1}", new_var, test) To get work with the ones that you need to change by hand I suggest something like this: # Which have we not found by.hand <- which(is.na(new_var)) # View the unique ones not found unique(test[by.hand]) # Create a list with the ones my_interpretation <- list("3-10"= 5, "Maybe a few hours"=3) for(key_string in names(my_interpretation)){ new_var[test == key_string] <- unlist(my_interpretation[key_string]) } This gives: > new_var [1] 15.0 15.0 3.0 4.0 4.0 3.5 5.0 5.0 Regex can be a little tricky, every time I'm doing anything with regex I run a few simple tests. Se ?regex for the manual. Here's some basic behavior: > # Test some regex > grep("[0-9]", "12") [1] 1 > grep("[0-9]", "12a") [1] 1 > grep("[0-9]$", "12a") integer(0) > grep("^[0-9]$", "12a") integer(0) > grep("^[0-9][0-9]", "12a") [1] 1 > grep("^[0-9]{1,2}", "12a") [1] 1 > grep("^[0-9]*", "a") [1] 1 > grep("^[0-9]+", "a") integer(0) > grep("^[0-9]+", "12222a") [1] 1 > grep("^(yes|no)$", "yes") [1] 1 > grep("^(yes|no)$", "no") [1] 1 > grep("^(yes|no)$", "(yes|no)") integer(0) > # Test some gsub, the \\1 matches default or the found text within the () > gsub("^(yes|maybe) and no$", "\\1", "yes and no") [1] "yes"
Cleaning data of inconsistent format in R?
I would use gsub() to identify the strings that I know and then perhaps do the rest by hand. test <- c("15min", "15 min", "Maybe a few hours", "4hr", "4hour", "3.5hr", "3-10", "3-10") new_v
Cleaning data of inconsistent format in R? I would use gsub() to identify the strings that I know and then perhaps do the rest by hand. test <- c("15min", "15 min", "Maybe a few hours", "4hr", "4hour", "3.5hr", "3-10", "3-10") new_var <- rep(NA, length(test)) my_sub <- function(regex, new_var, test){ t2 <- gsub(regex, "\\1", test) identified_vars <- which(test != t2) new_var[identified_vars] <- as.double(t2[identified_vars]) return(new_var) } new_var <- my_sub("([0-9]+)[ ]*min", new_var, test) new_var <- my_sub("([0-9]+)[ ]*(hour|hr)[s]{0,1}", new_var, test) To get work with the ones that you need to change by hand I suggest something like this: # Which have we not found by.hand <- which(is.na(new_var)) # View the unique ones not found unique(test[by.hand]) # Create a list with the ones my_interpretation <- list("3-10"= 5, "Maybe a few hours"=3) for(key_string in names(my_interpretation)){ new_var[test == key_string] <- unlist(my_interpretation[key_string]) } This gives: > new_var [1] 15.0 15.0 3.0 4.0 4.0 3.5 5.0 5.0 Regex can be a little tricky, every time I'm doing anything with regex I run a few simple tests. Se ?regex for the manual. Here's some basic behavior: > # Test some regex > grep("[0-9]", "12") [1] 1 > grep("[0-9]", "12a") [1] 1 > grep("[0-9]$", "12a") integer(0) > grep("^[0-9]$", "12a") integer(0) > grep("^[0-9][0-9]", "12a") [1] 1 > grep("^[0-9]{1,2}", "12a") [1] 1 > grep("^[0-9]*", "a") [1] 1 > grep("^[0-9]+", "a") integer(0) > grep("^[0-9]+", "12222a") [1] 1 > grep("^(yes|no)$", "yes") [1] 1 > grep("^(yes|no)$", "no") [1] 1 > grep("^(yes|no)$", "(yes|no)") integer(0) > # Test some gsub, the \\1 matches default or the found text within the () > gsub("^(yes|maybe) and no$", "\\1", "yes and no") [1] "yes"
Cleaning data of inconsistent format in R? I would use gsub() to identify the strings that I know and then perhaps do the rest by hand. test <- c("15min", "15 min", "Maybe a few hours", "4hr", "4hour", "3.5hr", "3-10", "3-10") new_v
17,175
Cleaning data of inconsistent format in R?
@Max's suggestion is a good one. It seems that if you write an algorithm that recognizes numbers as well as common time-associated words/abbreviations, you'll get most of the way there. This will not be beautiful code, but it will work and you can improve it over time as you come across problem cases. But for a more robust (and initially time-consuming) approach, try Googling "parsing a natural language time string." Some interesting findings are This open time API, a good Python module, and one of many germane threads like this one on Stack Overflow. Basically, natural language parsing is a common problem and you should look for solutions in languages other than R. You can build tools in another language that you can access using R, or at the very least you can get good ideas for your own algorithm.
Cleaning data of inconsistent format in R?
@Max's suggestion is a good one. It seems that if you write an algorithm that recognizes numbers as well as common time-associated words/abbreviations, you'll get most of the way there. This will not
Cleaning data of inconsistent format in R? @Max's suggestion is a good one. It seems that if you write an algorithm that recognizes numbers as well as common time-associated words/abbreviations, you'll get most of the way there. This will not be beautiful code, but it will work and you can improve it over time as you come across problem cases. But for a more robust (and initially time-consuming) approach, try Googling "parsing a natural language time string." Some interesting findings are This open time API, a good Python module, and one of many germane threads like this one on Stack Overflow. Basically, natural language parsing is a common problem and you should look for solutions in languages other than R. You can build tools in another language that you can access using R, or at the very least you can get good ideas for your own algorithm.
Cleaning data of inconsistent format in R? @Max's suggestion is a good one. It seems that if you write an algorithm that recognizes numbers as well as common time-associated words/abbreviations, you'll get most of the way there. This will not
17,176
Cleaning data of inconsistent format in R?
For something like that, if it was sufficiently long, I think I'd want a list of regular expressions and transformation rules, and take the new values to another column (so you always have the chance to double check without reloading the raw data); the RE's would be applied in order to the not-so-far-transformed data until all the data was transformed or all the rules were exhausted. It's probably best to also keep a list of logical values that indicate which rows haven't yet been transformed. A few such rules are obvious of course and will probably handle 80-90% of cases, but the issue is that there will always be some you don't know will come up (people are very inventive). Then you need a script that goes through and presents the originals of the not-yet-transformed-by-the-list-of-obvious-rules values to you one at a time, giving you a chance to make a regular expression (say) to identify those cases and give a new transformation for the cases that fit it, which it adds to the original list and applies to the not-yet-transformed rows of the original vector before checking if there are any cases left to present to you. It might also be reasonable to have the option to skip a case (so that you can go on to easier ones), so you can pus the very hard cases right to the end. Worst case, you do a few by hand. You can then keep the full list of rules you generate, to apply again when the data grows or a new, similar data set comes along. I don't know if it's remotely approaching best practice (I'd think something much more formal would be needed there), but in terms of processing large amounts of such data quickly, it might have some value.
Cleaning data of inconsistent format in R?
For something like that, if it was sufficiently long, I think I'd want a list of regular expressions and transformation rules, and take the new values to another column (so you always have the chance
Cleaning data of inconsistent format in R? For something like that, if it was sufficiently long, I think I'd want a list of regular expressions and transformation rules, and take the new values to another column (so you always have the chance to double check without reloading the raw data); the RE's would be applied in order to the not-so-far-transformed data until all the data was transformed or all the rules were exhausted. It's probably best to also keep a list of logical values that indicate which rows haven't yet been transformed. A few such rules are obvious of course and will probably handle 80-90% of cases, but the issue is that there will always be some you don't know will come up (people are very inventive). Then you need a script that goes through and presents the originals of the not-yet-transformed-by-the-list-of-obvious-rules values to you one at a time, giving you a chance to make a regular expression (say) to identify those cases and give a new transformation for the cases that fit it, which it adds to the original list and applies to the not-yet-transformed rows of the original vector before checking if there are any cases left to present to you. It might also be reasonable to have the option to skip a case (so that you can go on to easier ones), so you can pus the very hard cases right to the end. Worst case, you do a few by hand. You can then keep the full list of rules you generate, to apply again when the data grows or a new, similar data set comes along. I don't know if it's remotely approaching best practice (I'd think something much more formal would be needed there), but in terms of processing large amounts of such data quickly, it might have some value.
Cleaning data of inconsistent format in R? For something like that, if it was sufficiently long, I think I'd want a list of regular expressions and transformation rules, and take the new values to another column (so you always have the chance
17,177
Cleaning data of inconsistent format in R?
R contains some standard functions for data manipulation, which can be used for data cleaning, in its base package (gsub, transform, etc.), as well as in various third-party packages, such as stringr, reshape, reshape2, and plyr. Examples and best practices of usage for these packages and their functions are described in the following paper: http://vita.had.co.nz/papers/tidy-data.pdf. Additionally, R offers some packages specifically focused on data cleaning and transformation: editrules (http://cran.r-project.org/web/packages/editrules/index.html) deducorrect (http://cran.r-project.org/web/packages/deducorrect/index.html) StatMatch (http://cran.r-project.org/web/packages/StatMatch/index.html) MatchIt (http://cran.r-project.org/web/packages/MatchIt/index.html) A comprehensive and coherent approach to data cleaning in R, including examples and use of editrules and deducorrect packages, as well as a description of workflow (framework) of data cleaning in R, is presented in the following paper, which I highly recommend: http://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf.
Cleaning data of inconsistent format in R?
R contains some standard functions for data manipulation, which can be used for data cleaning, in its base package (gsub, transform, etc.), as well as in various third-party packages, such as stringr,
Cleaning data of inconsistent format in R? R contains some standard functions for data manipulation, which can be used for data cleaning, in its base package (gsub, transform, etc.), as well as in various third-party packages, such as stringr, reshape, reshape2, and plyr. Examples and best practices of usage for these packages and their functions are described in the following paper: http://vita.had.co.nz/papers/tidy-data.pdf. Additionally, R offers some packages specifically focused on data cleaning and transformation: editrules (http://cran.r-project.org/web/packages/editrules/index.html) deducorrect (http://cran.r-project.org/web/packages/deducorrect/index.html) StatMatch (http://cran.r-project.org/web/packages/StatMatch/index.html) MatchIt (http://cran.r-project.org/web/packages/MatchIt/index.html) A comprehensive and coherent approach to data cleaning in R, including examples and use of editrules and deducorrect packages, as well as a description of workflow (framework) of data cleaning in R, is presented in the following paper, which I highly recommend: http://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf.
Cleaning data of inconsistent format in R? R contains some standard functions for data manipulation, which can be used for data cleaning, in its base package (gsub, transform, etc.), as well as in various third-party packages, such as stringr,
17,178
What is a good resource that includes a comparison of the pros and cons of different classifiers?
The ESL, as already mentioned by Peter Flom, is an excellent suggestion (note that my link is to the author's homepage where the book can be obtained as a pdf-file for free). Let me add a couple of more specific things to look for in the book: Table 10.1 (page 351) gives the authors assessment of certain characteristics of Neural Nets, SVM, Trees, MARS, and k-NN kernels, which somehow appear to be the methods the authors want to include in a list of "off-the-shelf" methods. Chapter 10 treats boosting, which I found missing in the list of methods in the poll cited by the OP. Gradient boosting seems to be one of the better performing methods in a number of examples. Chapter 9 treats generalized additive models (GAMs), which adds to the logistic regression model (top ranked in the poll) the flexibility of non-linear additive effects of the predictors. GAMs would not be nearly as easy to use as logistic regression with all the smoothing parameters that have to be chosen if it wasn't for nice implementations like the one in the R package mgcv. Add to the book the Machine Learning Task View for R, which gives some impression of what the many machine learning packages can actually do, though there is no real comparison. For Python users I imagine that scikit.learn is a good place to look. How much "out-of-the-box" or "off-the-shelf" a method is, is very much determined by how well the implementation deals with automatic adaptation to the data situation versus leaving the detailed tuning to the user. In my mind, mgcv for R is a good example that makes the fitting of a reasonably good generalized additive model really easy and basically without any need for the user to "hand-tune" anything.
What is a good resource that includes a comparison of the pros and cons of different classifiers?
The ESL, as already mentioned by Peter Flom, is an excellent suggestion (note that my link is to the author's homepage where the book can be obtained as a pdf-file for free). Let me add a couple of mo
What is a good resource that includes a comparison of the pros and cons of different classifiers? The ESL, as already mentioned by Peter Flom, is an excellent suggestion (note that my link is to the author's homepage where the book can be obtained as a pdf-file for free). Let me add a couple of more specific things to look for in the book: Table 10.1 (page 351) gives the authors assessment of certain characteristics of Neural Nets, SVM, Trees, MARS, and k-NN kernels, which somehow appear to be the methods the authors want to include in a list of "off-the-shelf" methods. Chapter 10 treats boosting, which I found missing in the list of methods in the poll cited by the OP. Gradient boosting seems to be one of the better performing methods in a number of examples. Chapter 9 treats generalized additive models (GAMs), which adds to the logistic regression model (top ranked in the poll) the flexibility of non-linear additive effects of the predictors. GAMs would not be nearly as easy to use as logistic regression with all the smoothing parameters that have to be chosen if it wasn't for nice implementations like the one in the R package mgcv. Add to the book the Machine Learning Task View for R, which gives some impression of what the many machine learning packages can actually do, though there is no real comparison. For Python users I imagine that scikit.learn is a good place to look. How much "out-of-the-box" or "off-the-shelf" a method is, is very much determined by how well the implementation deals with automatic adaptation to the data situation versus leaving the detailed tuning to the user. In my mind, mgcv for R is a good example that makes the fitting of a reasonably good generalized additive model really easy and basically without any need for the user to "hand-tune" anything.
What is a good resource that includes a comparison of the pros and cons of different classifiers? The ESL, as already mentioned by Peter Flom, is an excellent suggestion (note that my link is to the author's homepage where the book can be obtained as a pdf-file for free). Let me add a couple of mo
17,179
What is a good resource that includes a comparison of the pros and cons of different classifiers?
The resources listed by others are all certainly useful, but I'll chime in and add the following: the "best" classifier is likely to be context and data specific. In a recent foray into assessing different binary classifiers I found a Boosted Regression Tree to work consistently better than other methods I had access to. The key thing for me was learning how to use Orange data mining tools. They have some great documentation to get started on exploring these methods with your data. For example, here is a short Python script I wrote to assess the quality of multiple classifiers across multiple measures of accuracy using k-fold cross validation. import orange, orngTest, orngStat, orngTree , orngEnsemble, orngSVM, orngLR import numpy as np data = orange.ExampleTable("performance_orange_2.tab") bayes = orange.BayesLearner(name="Naive Bayes") svm = orngSVM.SVMLearner(name="SVM") tree = orngTree.TreeLearner(mForPruning=2, name="Regression Tree") bs = orngEnsemble.BoostedLearner(tree, name="Boosted Tree") bg = orngEnsemble.BaggedLearner(tree, name="Bagged Tree") forest = orngEnsemble.RandomForestLearner(trees=100, name="Random Forest") learners = [bayes, svm, tree, bs, bg, forest] results = orngTest.crossValidation(learners, data, folds=10) cm = orngStat.computeConfusionMatrices(results, classIndex=data.domain.classVar.values.index('1')) stat = (('ClsAcc', 'CA(results)'), ('Sens', 'sens(cm)'), ('Spec', 'spec(cm)'), ('AUC', 'AUC(results)'), ('Info', 'IS(results)'), ('Brier', 'BrierScore(results)')) scores = [eval("orngStat." + s[1]) for s in stat] print "Learner " + "".join(["%-9s" % s[0] for s in stat]) print "-----------------------------------------------------------------" for (i, L) in enumerate(learners): print "%-15s " % L.name + "".join(["%5.3f " % s[i] for s in scores]) print "\n\n" measure = orngEnsemble.MeasureAttribute_randomForests(trees=100) print "Random Forest Variable Importance" print "---------------------------------" imps = measure.importances(data) for i,imp in enumerate(imps): print "%-20s %6.2f" % (data.domain.attributes[i].name, imp) print '\n\n' print 'Predictions on new data...' bs_classifier = bs(data) new_data = orange.ExampleTable('performance_orange_new.tab') for obs in new_data: print bs_classifier(obs, orange.GetBoth) When I run this code on my data I get output like In [1]: %run binary_predict.py Learner ClsAcc Sens Spec AUC Info Brier ----------------------------------------------------------------- Naive Bayes 0.556 0.444 0.643 0.756 0.516 0.613 SVM 0.611 0.667 0.714 0.851 0.264 0.582 Regression Tree 0.736 0.778 0.786 0.836 0.945 0.527 Boosted Tree 0.778 0.778 0.857 0.911 1.074 0.444 Bagged Tree 0.653 0.667 0.786 0.816 0.564 0.547 Random Forest 0.736 0.667 0.929 0.940 0.455 0.512 Random Forest Variable Importance --------------------------------- Mileage 2.34 Trade_Area_QI 2.82 Site_Score 8.76 There is a lot more you can do with the Orange objects to introspect performance and make comparisons. I found this package to be extremely helpful in writing a small amount of code to actually apply methods to my data with a consistent API and problem abstraction (i.e., I did not need to use six different packages from six different authors, each with their own approach to API design and documentation, etc).
What is a good resource that includes a comparison of the pros and cons of different classifiers?
The resources listed by others are all certainly useful, but I'll chime in and add the following: the "best" classifier is likely to be context and data specific. In a recent foray into assessing diff
What is a good resource that includes a comparison of the pros and cons of different classifiers? The resources listed by others are all certainly useful, but I'll chime in and add the following: the "best" classifier is likely to be context and data specific. In a recent foray into assessing different binary classifiers I found a Boosted Regression Tree to work consistently better than other methods I had access to. The key thing for me was learning how to use Orange data mining tools. They have some great documentation to get started on exploring these methods with your data. For example, here is a short Python script I wrote to assess the quality of multiple classifiers across multiple measures of accuracy using k-fold cross validation. import orange, orngTest, orngStat, orngTree , orngEnsemble, orngSVM, orngLR import numpy as np data = orange.ExampleTable("performance_orange_2.tab") bayes = orange.BayesLearner(name="Naive Bayes") svm = orngSVM.SVMLearner(name="SVM") tree = orngTree.TreeLearner(mForPruning=2, name="Regression Tree") bs = orngEnsemble.BoostedLearner(tree, name="Boosted Tree") bg = orngEnsemble.BaggedLearner(tree, name="Bagged Tree") forest = orngEnsemble.RandomForestLearner(trees=100, name="Random Forest") learners = [bayes, svm, tree, bs, bg, forest] results = orngTest.crossValidation(learners, data, folds=10) cm = orngStat.computeConfusionMatrices(results, classIndex=data.domain.classVar.values.index('1')) stat = (('ClsAcc', 'CA(results)'), ('Sens', 'sens(cm)'), ('Spec', 'spec(cm)'), ('AUC', 'AUC(results)'), ('Info', 'IS(results)'), ('Brier', 'BrierScore(results)')) scores = [eval("orngStat." + s[1]) for s in stat] print "Learner " + "".join(["%-9s" % s[0] for s in stat]) print "-----------------------------------------------------------------" for (i, L) in enumerate(learners): print "%-15s " % L.name + "".join(["%5.3f " % s[i] for s in scores]) print "\n\n" measure = orngEnsemble.MeasureAttribute_randomForests(trees=100) print "Random Forest Variable Importance" print "---------------------------------" imps = measure.importances(data) for i,imp in enumerate(imps): print "%-20s %6.2f" % (data.domain.attributes[i].name, imp) print '\n\n' print 'Predictions on new data...' bs_classifier = bs(data) new_data = orange.ExampleTable('performance_orange_new.tab') for obs in new_data: print bs_classifier(obs, orange.GetBoth) When I run this code on my data I get output like In [1]: %run binary_predict.py Learner ClsAcc Sens Spec AUC Info Brier ----------------------------------------------------------------- Naive Bayes 0.556 0.444 0.643 0.756 0.516 0.613 SVM 0.611 0.667 0.714 0.851 0.264 0.582 Regression Tree 0.736 0.778 0.786 0.836 0.945 0.527 Boosted Tree 0.778 0.778 0.857 0.911 1.074 0.444 Bagged Tree 0.653 0.667 0.786 0.816 0.564 0.547 Random Forest 0.736 0.667 0.929 0.940 0.455 0.512 Random Forest Variable Importance --------------------------------- Mileage 2.34 Trade_Area_QI 2.82 Site_Score 8.76 There is a lot more you can do with the Orange objects to introspect performance and make comparisons. I found this package to be extremely helpful in writing a small amount of code to actually apply methods to my data with a consistent API and problem abstraction (i.e., I did not need to use six different packages from six different authors, each with their own approach to API design and documentation, etc).
What is a good resource that includes a comparison of the pros and cons of different classifiers? The resources listed by others are all certainly useful, but I'll chime in and add the following: the "best" classifier is likely to be context and data specific. In a recent foray into assessing diff
17,180
What is a good resource that includes a comparison of the pros and cons of different classifiers?
The book The Elements of Statistical Learning has a lot of information on this.
What is a good resource that includes a comparison of the pros and cons of different classifiers?
The book The Elements of Statistical Learning has a lot of information on this.
What is a good resource that includes a comparison of the pros and cons of different classifiers? The book The Elements of Statistical Learning has a lot of information on this.
What is a good resource that includes a comparison of the pros and cons of different classifiers? The book The Elements of Statistical Learning has a lot of information on this.
17,181
What is a good resource that includes a comparison of the pros and cons of different classifiers?
Other resources I found regarding this (free PDF available): The book Machine Learning, Neural and Statistical Classification, edited by D. Michie, D.J. Spiegelhalter and C.C. Taylor Rich Caruana and Alexandru Niculescu-mizil, An Empirical Comparison of Supervised Learning Algorithms
What is a good resource that includes a comparison of the pros and cons of different classifiers?
Other resources I found regarding this (free PDF available): The book Machine Learning, Neural and Statistical Classification, edited by D. Michie, D.J. Spiegelhalter and C.C. Taylor Rich Caruana and
What is a good resource that includes a comparison of the pros and cons of different classifiers? Other resources I found regarding this (free PDF available): The book Machine Learning, Neural and Statistical Classification, edited by D. Michie, D.J. Spiegelhalter and C.C. Taylor Rich Caruana and Alexandru Niculescu-mizil, An Empirical Comparison of Supervised Learning Algorithms
What is a good resource that includes a comparison of the pros and cons of different classifiers? Other resources I found regarding this (free PDF available): The book Machine Learning, Neural and Statistical Classification, edited by D. Michie, D.J. Spiegelhalter and C.C. Taylor Rich Caruana and
17,182
What is a good resource that includes a comparison of the pros and cons of different classifiers?
According to this exhaustive recent study (evaluation of 179 classifiers on 121 datasets), the best classifiers are random forests followed by support vector machines.
What is a good resource that includes a comparison of the pros and cons of different classifiers?
According to this exhaustive recent study (evaluation of 179 classifiers on 121 datasets), the best classifiers are random forests followed by support vector machines.
What is a good resource that includes a comparison of the pros and cons of different classifiers? According to this exhaustive recent study (evaluation of 179 classifiers on 121 datasets), the best classifiers are random forests followed by support vector machines.
What is a good resource that includes a comparison of the pros and cons of different classifiers? According to this exhaustive recent study (evaluation of 179 classifiers on 121 datasets), the best classifiers are random forests followed by support vector machines.
17,183
What is the effect of dichotomising variables?
What information is lost: It depends on the variable. Generally, by dichotomizing, you're asserting that there is a straight line of effect between one variable and another. For example, consider a continuous measure of exposure to a pollutant in a study on cancer. If you dichotomize it to "High" and "Low", you assert that those are the only two values that matter. There is a risk of cancer in high, and there is one in low. But what if the risk rises steadily for awhile, then flattens out, then rises again before finally spiking at high values? All of that is lost. What you gain: It's easier. Dichotomous variables are often much easier to deal with statistically. There are reasons to do it - if a continuous variable falls into two clear groupings anyway, but I tend to avoid dichotomizing unless its a natural form of the variable in the first place. It is often also useful if your field is dichotomizing things anyway to have a dichotomized form of a variable. For example, many consider CD4 cell count of less than 400 to be a critical threshold for HIV. As such, I'd often have a 0/1 variable for Above/Below 400, though I would retain the continuous CD4 count variable as well. This helps cohere your study with others. I'll disagree slightly with Peter. While dividing a continuous variable up into categories is often far more sensible than a crude dichotomization, I'm rather opposed to quantile categorization. Such categorizations are very difficult to give meaningful interpretations. I think your first step should be to see if there are biologically or clinically well supported categorization one can use, and only once those options are exhausted should you use quantiles.
What is the effect of dichotomising variables?
What information is lost: It depends on the variable. Generally, by dichotomizing, you're asserting that there is a straight line of effect between one variable and another. For example, consider a co
What is the effect of dichotomising variables? What information is lost: It depends on the variable. Generally, by dichotomizing, you're asserting that there is a straight line of effect between one variable and another. For example, consider a continuous measure of exposure to a pollutant in a study on cancer. If you dichotomize it to "High" and "Low", you assert that those are the only two values that matter. There is a risk of cancer in high, and there is one in low. But what if the risk rises steadily for awhile, then flattens out, then rises again before finally spiking at high values? All of that is lost. What you gain: It's easier. Dichotomous variables are often much easier to deal with statistically. There are reasons to do it - if a continuous variable falls into two clear groupings anyway, but I tend to avoid dichotomizing unless its a natural form of the variable in the first place. It is often also useful if your field is dichotomizing things anyway to have a dichotomized form of a variable. For example, many consider CD4 cell count of less than 400 to be a critical threshold for HIV. As such, I'd often have a 0/1 variable for Above/Below 400, though I would retain the continuous CD4 count variable as well. This helps cohere your study with others. I'll disagree slightly with Peter. While dividing a continuous variable up into categories is often far more sensible than a crude dichotomization, I'm rather opposed to quantile categorization. Such categorizations are very difficult to give meaningful interpretations. I think your first step should be to see if there are biologically or clinically well supported categorization one can use, and only once those options are exhausted should you use quantiles.
What is the effect of dichotomising variables? What information is lost: It depends on the variable. Generally, by dichotomizing, you're asserting that there is a straight line of effect between one variable and another. For example, consider a co
17,184
What is the effect of dichotomising variables?
Dichotimization adds magical thinking to data analysis. This is very rarely a good idea. Here is an article by Royston, Altman and Sauerbrei on some reasons why it is a bad idea. My own thoughts: if you dichotomize a dependent variable, say, birth weight at 2.5 kg (this is done all the time) then you are treating babies who are born at 2.49 kg just like those born at 1.5 kg, and babies born at 2.51 kg just like those who are 3.5 kg. This does not make sense. A better alternative is often quantile regression. I wrote about this for NESUG recently. That paper is here One exception to the above is when the categories are substantively motivated; for example, if you are working with driving behavior, it will be sensible to categorize based on the legal age for driving.
What is the effect of dichotomising variables?
Dichotimization adds magical thinking to data analysis. This is very rarely a good idea. Here is an article by Royston, Altman and Sauerbrei on some reasons why it is a bad idea. My own thoughts: if y
What is the effect of dichotomising variables? Dichotimization adds magical thinking to data analysis. This is very rarely a good idea. Here is an article by Royston, Altman and Sauerbrei on some reasons why it is a bad idea. My own thoughts: if you dichotomize a dependent variable, say, birth weight at 2.5 kg (this is done all the time) then you are treating babies who are born at 2.49 kg just like those born at 1.5 kg, and babies born at 2.51 kg just like those who are 3.5 kg. This does not make sense. A better alternative is often quantile regression. I wrote about this for NESUG recently. That paper is here One exception to the above is when the categories are substantively motivated; for example, if you are working with driving behavior, it will be sensible to categorize based on the legal age for driving.
What is the effect of dichotomising variables? Dichotimization adds magical thinking to data analysis. This is very rarely a good idea. Here is an article by Royston, Altman and Sauerbrei on some reasons why it is a bad idea. My own thoughts: if y
17,185
What is the effect of dichotomising variables?
I liked and support both @Epigrad's and @Peter's answers. I just wanted to add, that, binning interval variable into binary one makes (potentially) metrical variable just ordinal one. With binary variable it is improper to calculate mean or variance (despite that some people do), and, as I've noted elsewhere, some multivariate analyses become theoretically or logically inapplicable. For example, I think it is not correct to use centroid/Ward hierarchical clustering or factor analysis with binary variables. Clients of investigation often force us to dichotomise variables at output because thinking in terms of few classes rather than one continuous trait is simpler, information seems less foggy and (falsely) more bulky. There are, however, cases when dichotomization may be warranted. For example where there is strong bimodality or when analysis (e.g. MAMBAC or other) show presence of 2 latent classes.
What is the effect of dichotomising variables?
I liked and support both @Epigrad's and @Peter's answers. I just wanted to add, that, binning interval variable into binary one makes (potentially) metrical variable just ordinal one. With binary vari
What is the effect of dichotomising variables? I liked and support both @Epigrad's and @Peter's answers. I just wanted to add, that, binning interval variable into binary one makes (potentially) metrical variable just ordinal one. With binary variable it is improper to calculate mean or variance (despite that some people do), and, as I've noted elsewhere, some multivariate analyses become theoretically or logically inapplicable. For example, I think it is not correct to use centroid/Ward hierarchical clustering or factor analysis with binary variables. Clients of investigation often force us to dichotomise variables at output because thinking in terms of few classes rather than one continuous trait is simpler, information seems less foggy and (falsely) more bulky. There are, however, cases when dichotomization may be warranted. For example where there is strong bimodality or when analysis (e.g. MAMBAC or other) show presence of 2 latent classes.
What is the effect of dichotomising variables? I liked and support both @Epigrad's and @Peter's answers. I just wanted to add, that, binning interval variable into binary one makes (potentially) metrical variable just ordinal one. With binary vari
17,186
How to optimize my R script in order to use "multicore"
Use foreach and doMC. The detailed explanation can be found here. Your script will change very little, the line for(i in 1:plength){ should be changed to foreach(i=1:plength) %dopar% { The prerequisites for any multitasking script using these packages are library(foreach) library(doMC) registerDoMC() Note of caution. According to the documentation you cannot use this in GUI. As for your problem, do you really need multitasking? Your data.frame takes about 1.2GB of RAM, so it should fit into your memory. So you can simply use apply: p1smry <- apply(P1,1,summary) The result will be a matrix with summaries of each row. You can also use function mclapply which is in the package multicore. Then your script might look like this: loopfun <- function(i) { summary(P1[i,]) } res <- mclapply(1:nrow(P1),loopfun) This will return the list, where i-th element will be the summary of i-th row. You can convert it to matrix using sapply mres <- sapply(res,function(x)x)
How to optimize my R script in order to use "multicore"
Use foreach and doMC. The detailed explanation can be found here. Your script will change very little, the line for(i in 1:plength){ should be changed to foreach(i=1:plength) %dopar% { The prerequ
How to optimize my R script in order to use "multicore" Use foreach and doMC. The detailed explanation can be found here. Your script will change very little, the line for(i in 1:plength){ should be changed to foreach(i=1:plength) %dopar% { The prerequisites for any multitasking script using these packages are library(foreach) library(doMC) registerDoMC() Note of caution. According to the documentation you cannot use this in GUI. As for your problem, do you really need multitasking? Your data.frame takes about 1.2GB of RAM, so it should fit into your memory. So you can simply use apply: p1smry <- apply(P1,1,summary) The result will be a matrix with summaries of each row. You can also use function mclapply which is in the package multicore. Then your script might look like this: loopfun <- function(i) { summary(P1[i,]) } res <- mclapply(1:nrow(P1),loopfun) This will return the list, where i-th element will be the summary of i-th row. You can convert it to matrix using sapply mres <- sapply(res,function(x)x)
How to optimize my R script in order to use "multicore" Use foreach and doMC. The detailed explanation can be found here. Your script will change very little, the line for(i in 1:plength){ should be changed to foreach(i=1:plength) %dopar% { The prerequ
17,187
How to optimize my R script in order to use "multicore"
You've already got an answer as to how to use more than one core, but the real problem is with the way you have written your loops. Never extend your result vector/object at each iteration of a loop. If you do this, you force R to copy your result vector/object and extend it which all takes time. Instead, preallocate enough storage space before you start the loop and fill in as you go along. Here is an example: set.seed(1) p1 <- matrix(rnorm(10000), ncol=100) system.time({ p1max <- p1mean <- p1sum <- numeric(length = 100) for(i in seq_along(p1max)){ p1max[i] <- max(p1[i,]) p1mean[i] <- mean(p1[i,]) p1sum[i ]<- sum(p1[i,]) } }) user system elapsed 0.005 0.000 0.005 Or you can do these things via apply(): system.time({ p1max2 <- apply(p1, 1, max) p1mean2 <- apply(p1, 1, mean) p1sum2 <- apply(p1, 1, sum) }) user system elapsed 0.007 0.000 0.006 But note that this is no faster than doing the loop properly and sometimes slower. However, always be on the lookout for vectorised code. You can do row sums and means using rowSums() and rowMeans() which are quicker than either the loop or apply versions: system.time({ p1max3 <- apply(p1, 1, max) p1mean3 <- rowMeans(p1) p1sum3 <- rowSums(p1) }) user system elapsed 0.001 0.000 0.002 If I were a betting man, I would have money on the third approach I mention beating foreach() or the other multi-core options in a speed test on your matrix because they would have to speed things up considerably to justify the overhead incurred in setting up the separate processes that are farmed out the the different CPU cores. Update: Following the comment from @shabbychef is it faster to do the sums once and reuse in the computation of the mean? system.time({ p1max4 <- apply(p1, 1, max) p1sum4 <- rowSums(p1) p1mean4 <- p1sum4 / ncol(p1) }) user system elapsed 0.002 0.000 0.002 Not in this test run, but this is far from exhaustive...
How to optimize my R script in order to use "multicore"
You've already got an answer as to how to use more than one core, but the real problem is with the way you have written your loops. Never extend your result vector/object at each iteration of a loop.
How to optimize my R script in order to use "multicore" You've already got an answer as to how to use more than one core, but the real problem is with the way you have written your loops. Never extend your result vector/object at each iteration of a loop. If you do this, you force R to copy your result vector/object and extend it which all takes time. Instead, preallocate enough storage space before you start the loop and fill in as you go along. Here is an example: set.seed(1) p1 <- matrix(rnorm(10000), ncol=100) system.time({ p1max <- p1mean <- p1sum <- numeric(length = 100) for(i in seq_along(p1max)){ p1max[i] <- max(p1[i,]) p1mean[i] <- mean(p1[i,]) p1sum[i ]<- sum(p1[i,]) } }) user system elapsed 0.005 0.000 0.005 Or you can do these things via apply(): system.time({ p1max2 <- apply(p1, 1, max) p1mean2 <- apply(p1, 1, mean) p1sum2 <- apply(p1, 1, sum) }) user system elapsed 0.007 0.000 0.006 But note that this is no faster than doing the loop properly and sometimes slower. However, always be on the lookout for vectorised code. You can do row sums and means using rowSums() and rowMeans() which are quicker than either the loop or apply versions: system.time({ p1max3 <- apply(p1, 1, max) p1mean3 <- rowMeans(p1) p1sum3 <- rowSums(p1) }) user system elapsed 0.001 0.000 0.002 If I were a betting man, I would have money on the third approach I mention beating foreach() or the other multi-core options in a speed test on your matrix because they would have to speed things up considerably to justify the overhead incurred in setting up the separate processes that are farmed out the the different CPU cores. Update: Following the comment from @shabbychef is it faster to do the sums once and reuse in the computation of the mean? system.time({ p1max4 <- apply(p1, 1, max) p1sum4 <- rowSums(p1) p1mean4 <- p1sum4 / ncol(p1) }) user system elapsed 0.002 0.000 0.002 Not in this test run, but this is far from exhaustive...
How to optimize my R script in order to use "multicore" You've already got an answer as to how to use more than one core, but the real problem is with the way you have written your loops. Never extend your result vector/object at each iteration of a loop.
17,188
How to optimize my R script in order to use "multicore"
Have a look at the snow and snowfall packages. Plenty of examples with those... If you want to speed up that specific code rather than learning about R and parallelism you should do this P1 = matrix(rnorm(1000), ncol=10, nrow=10 apply(P1, 1, max) apply(P1, 1, mean) apply(P1, 1, sum)
How to optimize my R script in order to use "multicore"
Have a look at the snow and snowfall packages. Plenty of examples with those... If you want to speed up that specific code rather than learning about R and parallelism you should do this P1 = matrix(r
How to optimize my R script in order to use "multicore" Have a look at the snow and snowfall packages. Plenty of examples with those... If you want to speed up that specific code rather than learning about R and parallelism you should do this P1 = matrix(rnorm(1000), ncol=10, nrow=10 apply(P1, 1, max) apply(P1, 1, mean) apply(P1, 1, sum)
How to optimize my R script in order to use "multicore" Have a look at the snow and snowfall packages. Plenty of examples with those... If you want to speed up that specific code rather than learning about R and parallelism you should do this P1 = matrix(r
17,189
Can I use moments of a distribution to sample the distribution?
Three moments don't determine a distributional form; if you choose a distribution-famiy with three parameters which relate to the first three population moments, you can do moment matching ("method of moments") to estimate the three parameters and then generate values from such a distribution. There are many such distributions. Sometimes even having all the moments isn't sufficient to determine a distribution. If the moment generating function exists (in a neighborhood of 0) then it uniquely identifies a distribution (you could in principle do an inverse Laplace transform to obtain it). [If some moments are not finite this would mean the mgf doesn't exist, but there are also cases where all moments are finite but the mgf still doesn't exist in a neighborhood of 0.] Given there's a choice of distributions, one might be tempted to consider a maximum entropy solution with the constraint on the first three moments, but there's no distribution on the real line that attains it (since the resulting cubic in the exponent will be unbounded). How the process would work for a specific choice of distribution We can simplify the process of obtaining a distribution matching three moments by ignoring the mean and variance and working with a scaled third moment -- the moment-skewness ($\gamma_1=\mu_3/\mu_2^{3/2}$). We can do this because having selected a distribution with the relevant skewness, we can then back out the desired mean and variance by scaling and shifting. Let's consider an example. Yesterday I created a large data set (which still happens to be in my R session) whose distribution I haven't tried to calculate the functional form of (it's a large set of values of the log of the sample variance of a Cauchy at n=10). We have the first three raw moments as 1.519, 3.597 and 11.479 respectively, or correspondingly a mean of 1.518, a standard deviation* of 1.136 and a skewness of 1.429 (so these are sample values from a large sample). Formally, method of moments would attempt to match the raw moments, but the calculation is simpler if we start with the skewness (turning solving three equations in three unknowns into solving for one parameter at a time, a much simpler task). * I am going to handwave away the distinction between using an n-denominator on the variance - as would correspond to formal method of moments - and an n-1 denominator and simply use sample calculations. This skewness (~1.43) indicates we seek a distribution which is right-skew. I could choose, for example, a shifted lognormal distribution (three parameter lognormal, shape $\sigma$, scale $\mu$ and location-shift $\gamma$) with the same moments. Let's begin by matching the skewness. The population skewness of a two parameter lognormal is: $\gamma_1=(e^{\sigma ^{2}}\!\!+2){\sqrt {e^{\sigma ^{2}}\!\!-1}}$ So let's start by equating that to the desired sample value to obtain an estimate of $\sigma^2$, $\tilde{\sigma}^2$, say. Note that $\gamma_1^2$ is $(\tau+2)^2(\tau-1)$ where $\tau=e^{\sigma^2}$. This then yields a simple cubic equation $\tau^3+3\tau^2-4=\gamma_1^2$. Using the sample skewness in that equation yields $\tilde{\tau}\approx 1.1995$ or $\tilde{\sigma}^2\approx 0.1819$. (The cubic has only one real root so there's no issue with choosing between roots; nor is there any risk of choosing the wrong sign on $\gamma_1$ -- we can flip the distribution left-for-right if we need negative skewness) We can then in turn solve for $\mu$ by matching the variance (or standard deviation) and then for the location parameter by matching the mean. But we could as easily have chosen a shifted-gamma or a shifted-Weibull distribution (or a shifted-F or any number of other choices) and run through essentially the same process. Each of them would be different. [For the sample I was dealing with, a shifted gamma would probably have been a considerably better choice than a shifted lognormal, since the distribution of the logs of the values was left skew and the distribution of their cube root was very close to symmetric; these are consistent with what you will see with (unshifted) gamma densities, but a left-skewed density of the logs cannot be achieved with any shifted lognormal.] One could even take the skewness-kurtosis diagram in a Pearson plot and draw a line at the desired skewness and thereby obtain a two-point distribution, sequence of beta distributions, a gamma distribution, a sequence of beta-prime distributions, an inverse-gamma disribution and a sequence of Pearson type IV distributions all with the same skewness. We can see this illustrated in a skewness-kurtosis plot (Pearson plot) below (note that $\beta_1=\gamma_1^2$ and $\beta_2$ is the kurtosis), with the regions for the various Pearson-distributions marked in. The green horizontal line represents $\gamma_1^2 = 2.042$, and we see it pass through each of the mentioned distribution-families, each point corresponding to a different population kurtosis. (The dashed curve represents the lognormal, which is not a Pearson-family distribution; its intersection with the green line marks the particular lognormal-shape we identified. Note that the dashed curve is purely a function of $\sigma$.) More moments Moments don't pin distributions down very well, so even if you specify many moments, there will still be a lot of different distributions (particularly in relation to their extreme-tail behavior) that will match them. You can of course choose some distributional family with at least four parameters and attempt to match more than three moments; for example the Pearson distributions above allow us to match the first four moments, and there are other choices of distributions that would allow similar degree of flexibility. One can adopt other strategies to choose distributions that can match distributional features - mixture distributions, modelling the log-density using splines, and so forth. Frequently, however, if one goes back to the initial purpose for which one was trying to find a distribution, it often turns out there's something better that can be done than the sort of strategy outlined here.
Can I use moments of a distribution to sample the distribution?
Three moments don't determine a distributional form; if you choose a distribution-famiy with three parameters which relate to the first three population moments, you can do moment matching ("method of
Can I use moments of a distribution to sample the distribution? Three moments don't determine a distributional form; if you choose a distribution-famiy with three parameters which relate to the first three population moments, you can do moment matching ("method of moments") to estimate the three parameters and then generate values from such a distribution. There are many such distributions. Sometimes even having all the moments isn't sufficient to determine a distribution. If the moment generating function exists (in a neighborhood of 0) then it uniquely identifies a distribution (you could in principle do an inverse Laplace transform to obtain it). [If some moments are not finite this would mean the mgf doesn't exist, but there are also cases where all moments are finite but the mgf still doesn't exist in a neighborhood of 0.] Given there's a choice of distributions, one might be tempted to consider a maximum entropy solution with the constraint on the first three moments, but there's no distribution on the real line that attains it (since the resulting cubic in the exponent will be unbounded). How the process would work for a specific choice of distribution We can simplify the process of obtaining a distribution matching three moments by ignoring the mean and variance and working with a scaled third moment -- the moment-skewness ($\gamma_1=\mu_3/\mu_2^{3/2}$). We can do this because having selected a distribution with the relevant skewness, we can then back out the desired mean and variance by scaling and shifting. Let's consider an example. Yesterday I created a large data set (which still happens to be in my R session) whose distribution I haven't tried to calculate the functional form of (it's a large set of values of the log of the sample variance of a Cauchy at n=10). We have the first three raw moments as 1.519, 3.597 and 11.479 respectively, or correspondingly a mean of 1.518, a standard deviation* of 1.136 and a skewness of 1.429 (so these are sample values from a large sample). Formally, method of moments would attempt to match the raw moments, but the calculation is simpler if we start with the skewness (turning solving three equations in three unknowns into solving for one parameter at a time, a much simpler task). * I am going to handwave away the distinction between using an n-denominator on the variance - as would correspond to formal method of moments - and an n-1 denominator and simply use sample calculations. This skewness (~1.43) indicates we seek a distribution which is right-skew. I could choose, for example, a shifted lognormal distribution (three parameter lognormal, shape $\sigma$, scale $\mu$ and location-shift $\gamma$) with the same moments. Let's begin by matching the skewness. The population skewness of a two parameter lognormal is: $\gamma_1=(e^{\sigma ^{2}}\!\!+2){\sqrt {e^{\sigma ^{2}}\!\!-1}}$ So let's start by equating that to the desired sample value to obtain an estimate of $\sigma^2$, $\tilde{\sigma}^2$, say. Note that $\gamma_1^2$ is $(\tau+2)^2(\tau-1)$ where $\tau=e^{\sigma^2}$. This then yields a simple cubic equation $\tau^3+3\tau^2-4=\gamma_1^2$. Using the sample skewness in that equation yields $\tilde{\tau}\approx 1.1995$ or $\tilde{\sigma}^2\approx 0.1819$. (The cubic has only one real root so there's no issue with choosing between roots; nor is there any risk of choosing the wrong sign on $\gamma_1$ -- we can flip the distribution left-for-right if we need negative skewness) We can then in turn solve for $\mu$ by matching the variance (or standard deviation) and then for the location parameter by matching the mean. But we could as easily have chosen a shifted-gamma or a shifted-Weibull distribution (or a shifted-F or any number of other choices) and run through essentially the same process. Each of them would be different. [For the sample I was dealing with, a shifted gamma would probably have been a considerably better choice than a shifted lognormal, since the distribution of the logs of the values was left skew and the distribution of their cube root was very close to symmetric; these are consistent with what you will see with (unshifted) gamma densities, but a left-skewed density of the logs cannot be achieved with any shifted lognormal.] One could even take the skewness-kurtosis diagram in a Pearson plot and draw a line at the desired skewness and thereby obtain a two-point distribution, sequence of beta distributions, a gamma distribution, a sequence of beta-prime distributions, an inverse-gamma disribution and a sequence of Pearson type IV distributions all with the same skewness. We can see this illustrated in a skewness-kurtosis plot (Pearson plot) below (note that $\beta_1=\gamma_1^2$ and $\beta_2$ is the kurtosis), with the regions for the various Pearson-distributions marked in. The green horizontal line represents $\gamma_1^2 = 2.042$, and we see it pass through each of the mentioned distribution-families, each point corresponding to a different population kurtosis. (The dashed curve represents the lognormal, which is not a Pearson-family distribution; its intersection with the green line marks the particular lognormal-shape we identified. Note that the dashed curve is purely a function of $\sigma$.) More moments Moments don't pin distributions down very well, so even if you specify many moments, there will still be a lot of different distributions (particularly in relation to their extreme-tail behavior) that will match them. You can of course choose some distributional family with at least four parameters and attempt to match more than three moments; for example the Pearson distributions above allow us to match the first four moments, and there are other choices of distributions that would allow similar degree of flexibility. One can adopt other strategies to choose distributions that can match distributional features - mixture distributions, modelling the log-density using splines, and so forth. Frequently, however, if one goes back to the initial purpose for which one was trying to find a distribution, it often turns out there's something better that can be done than the sort of strategy outlined here.
Can I use moments of a distribution to sample the distribution? Three moments don't determine a distributional form; if you choose a distribution-famiy with three parameters which relate to the first three population moments, you can do moment matching ("method of
17,190
Can I use moments of a distribution to sample the distribution?
So, the answer is generally NO, you can't do this, but sometimes you can. When you can't The reasons you can't do this usually are two folds. First, if you have N observations, then at most you can calculates N moments. What about the other moments? You can't simply set them to zero. Second, higher moments calculations become less and less precise, because you have to raise the numbers into higher powers. Consider 100th non-central moment, you can't usually calculate it with any precision: $$\gamma_{100}=\sum_i\frac{x_i^{100}} n$$ When you can Now, sometimes you can get the distribution from moments. It's when you make an assumption about the distribution of some sort. For instance, you declare that it's normal. In this case all you need is just two moment, which can be calculated with decent precision, usually. Note, that normal distribution has higher moments, indeed, e.g. kurtosis, but we don't need them. If you were to calculate all moments of the normal distribution (without assuming it's normal), then tried to recover the characteristic function to sample from the distribution, it wouldn't work. However, when you forget about the higher moments and stick to the first two, it does work.
Can I use moments of a distribution to sample the distribution?
So, the answer is generally NO, you can't do this, but sometimes you can. When you can't The reasons you can't do this usually are two folds. First, if you have N observations, then at most you can c
Can I use moments of a distribution to sample the distribution? So, the answer is generally NO, you can't do this, but sometimes you can. When you can't The reasons you can't do this usually are two folds. First, if you have N observations, then at most you can calculates N moments. What about the other moments? You can't simply set them to zero. Second, higher moments calculations become less and less precise, because you have to raise the numbers into higher powers. Consider 100th non-central moment, you can't usually calculate it with any precision: $$\gamma_{100}=\sum_i\frac{x_i^{100}} n$$ When you can Now, sometimes you can get the distribution from moments. It's when you make an assumption about the distribution of some sort. For instance, you declare that it's normal. In this case all you need is just two moment, which can be calculated with decent precision, usually. Note, that normal distribution has higher moments, indeed, e.g. kurtosis, but we don't need them. If you were to calculate all moments of the normal distribution (without assuming it's normal), then tried to recover the characteristic function to sample from the distribution, it wouldn't work. However, when you forget about the higher moments and stick to the first two, it does work.
Can I use moments of a distribution to sample the distribution? So, the answer is generally NO, you can't do this, but sometimes you can. When you can't The reasons you can't do this usually are two folds. First, if you have N observations, then at most you can c
17,191
What distribution to use to model time before a train arrives?
Multiplication of two probabilities The probability for a first arrival at a time between $t$ and $t+dt$ (the waiting time) is equal to the multiplication of the probability for an arrival between $t$ and $t+dt$ (which can be related to the arrival rate $s(t)$ at time $t$) and the probability of no arrival before time $t$ (or otherwise it would not be the first). This latter term is related to: $$P(n=0,t+dt) = (1-s(t)dt) P(n=0,t)$$ or $$\frac{\partial P(n=0,t)}{\partial t} = -s(t) P(n=0,t) $$ giving: $$P(n=0,t) = e^{\int_0^t-s(t) dt}$$ and probability distribution for waiting times is: $$f(t) = s(t)e^{\int_0^t-s(t) dt}$$ Derivation of cumulative distribution. Alternatively you could use the expression for the probability of less than one arrival conditional that the time is $t$ $$P(n<1|t) = F(n=0;t)$$ and the probability for arrival between time $t$ and $t+dt$ is equal to the derivative $$f_{\text{arrival time}}(t) = - \frac{d}{d t} F(n=0 \vert t)$$ This approach/method is for instance useful in deriving the gamma distribution as the waiting time for the n-th arrival in a Poisson process. (waiting-time-of-poisson-process-follows-gamma-distribution) Two examples You might relate this to the waiting paradox (Please explain the waiting paradox). Exponential distribution: If the arrivals are random like a Poisson process then $s(t) = \lambda$ is constant. The probability of a next arrival is independent from the previous waiting time without arrival (say, if you roll a fair dice many times without six, then for the next roll you will not suddenly have a higher probability for a six, see gambler's fallacy). You will get the exponential distribution, and the pdf for the waiting times is: $$f(t) = \lambda e^{-\lambda t} $$ Constant distribution: If the arrivals are occurring at a constant rate (such as trains arriving according to a fixed schedule), then the probability of an arrival, when a person has already been waiting for some time, is increasing. Say a train is supposed to arrive every $T$ minutes then the frequency, after already waiting $t$ minutes is $s(t) = 1/(T-t)$ and the pdf for the waiting time will be: $$f(t)= \frac{e^{\int_0^t -\frac{1}{T-t} dt}}{T-t} = \frac{1}{T}$$ which makes sense since every time between $0$ and $T$ should have equal probability to be the first arrival. So it is this second case, with "then the probability of an arrival, when a person has already been waiting for some time is increasing", that relates to your question. It might need some adjustments depending on your situation. With more information the probability $s(t) dt$ for a train to arrive at a certain moment might be a more complex function.
What distribution to use to model time before a train arrives?
Multiplication of two probabilities The probability for a first arrival at a time between $t$ and $t+dt$ (the waiting time) is equal to the multiplication of the probability for an arrival between $t
What distribution to use to model time before a train arrives? Multiplication of two probabilities The probability for a first arrival at a time between $t$ and $t+dt$ (the waiting time) is equal to the multiplication of the probability for an arrival between $t$ and $t+dt$ (which can be related to the arrival rate $s(t)$ at time $t$) and the probability of no arrival before time $t$ (or otherwise it would not be the first). This latter term is related to: $$P(n=0,t+dt) = (1-s(t)dt) P(n=0,t)$$ or $$\frac{\partial P(n=0,t)}{\partial t} = -s(t) P(n=0,t) $$ giving: $$P(n=0,t) = e^{\int_0^t-s(t) dt}$$ and probability distribution for waiting times is: $$f(t) = s(t)e^{\int_0^t-s(t) dt}$$ Derivation of cumulative distribution. Alternatively you could use the expression for the probability of less than one arrival conditional that the time is $t$ $$P(n<1|t) = F(n=0;t)$$ and the probability for arrival between time $t$ and $t+dt$ is equal to the derivative $$f_{\text{arrival time}}(t) = - \frac{d}{d t} F(n=0 \vert t)$$ This approach/method is for instance useful in deriving the gamma distribution as the waiting time for the n-th arrival in a Poisson process. (waiting-time-of-poisson-process-follows-gamma-distribution) Two examples You might relate this to the waiting paradox (Please explain the waiting paradox). Exponential distribution: If the arrivals are random like a Poisson process then $s(t) = \lambda$ is constant. The probability of a next arrival is independent from the previous waiting time without arrival (say, if you roll a fair dice many times without six, then for the next roll you will not suddenly have a higher probability for a six, see gambler's fallacy). You will get the exponential distribution, and the pdf for the waiting times is: $$f(t) = \lambda e^{-\lambda t} $$ Constant distribution: If the arrivals are occurring at a constant rate (such as trains arriving according to a fixed schedule), then the probability of an arrival, when a person has already been waiting for some time, is increasing. Say a train is supposed to arrive every $T$ minutes then the frequency, after already waiting $t$ minutes is $s(t) = 1/(T-t)$ and the pdf for the waiting time will be: $$f(t)= \frac{e^{\int_0^t -\frac{1}{T-t} dt}}{T-t} = \frac{1}{T}$$ which makes sense since every time between $0$ and $T$ should have equal probability to be the first arrival. So it is this second case, with "then the probability of an arrival, when a person has already been waiting for some time is increasing", that relates to your question. It might need some adjustments depending on your situation. With more information the probability $s(t) dt$ for a train to arrive at a certain moment might be a more complex function.
What distribution to use to model time before a train arrives? Multiplication of two probabilities The probability for a first arrival at a time between $t$ and $t+dt$ (the waiting time) is equal to the multiplication of the probability for an arrival between $t
17,192
What distribution to use to model time before a train arrives?
The classical distribution to model waiting times is the exponential distribution. The exponential distribution occurs naturally when describing the lengths of the inter-arrival times in a homogeneous Poisson process.
What distribution to use to model time before a train arrives?
The classical distribution to model waiting times is the exponential distribution. The exponential distribution occurs naturally when describing the lengths of the inter-arrival times in a homogeneou
What distribution to use to model time before a train arrives? The classical distribution to model waiting times is the exponential distribution. The exponential distribution occurs naturally when describing the lengths of the inter-arrival times in a homogeneous Poisson process.
What distribution to use to model time before a train arrives? The classical distribution to model waiting times is the exponential distribution. The exponential distribution occurs naturally when describing the lengths of the inter-arrival times in a homogeneou
17,193
What distributional forms yield the "Pythagorean expectation"?
If we take two Exponential random variables$$X\sim\mathcal{E}(\theta_X)\qquad X\sim\mathcal{E}(\theta_Y)$$ we get that$$\mathbb{P}(X>Y|Y=y)=\exp\{-\theta_X y\}$$and $$\mathbb{E}^Y[\exp\{-\theta_X Y\}]=\int_0^\infty \exp\{-\theta_X y\}\,\theta_Y\exp\{-\theta_Y y\}\text{d}y=\dfrac{\theta_Y}{\theta_X+\theta_Y}$$Now, if$$X\sim\mathcal{E}(\theta_X^{-2})\qquad X\sim\mathcal{E}(\theta_Y^{-2})$$then$$\mathbb{P}(X>Y)=\dfrac{\theta_X^2}{\theta_X^2+\theta_Y^2}$$ A more interesting question is whether or not this is the only possible case of distribution for which it works. (For instance, this is the only element of the Gamma family for which it works.) Assuming a scale family structure, a necessary and sufficient on the underlying density $f$ of $X$ and $Y$ is that $$\int_0^\infty z\, f(z)\, f(\tau z) \,\text{d}z = \frac{1}{(1+\tau)^2}$$ But the generic answer is no: as noted in the answer by @soakley, this also works for Weibulls which is not a surprise since$$\mathbb{P}(X>Y)=\mathbb{P}(X^\alpha>Y^\alpha)$$for all $\alpha>0$ (and Weibulls are powers of exponentials). A more general class of examples is thus provided by$$X' = \phi(X) \qquad Y' = \phi(Y)$$for all strictly increasing functions $\phi$, where $X,Y$ are exponentials as above, since then we have $$\mathbb{P}(X'>Y') =\mathbb{P}(\phi(X)>\phi(Y))=\mathbb{P}(X>Y)=\dfrac{\theta_X^2}{\theta_X^2+\theta_Y^2}.$$
What distributional forms yield the "Pythagorean expectation"?
If we take two Exponential random variables$$X\sim\mathcal{E}(\theta_X)\qquad X\sim\mathcal{E}(\theta_Y)$$ we get that$$\mathbb{P}(X>Y|Y=y)=\exp\{-\theta_X y\}$$and $$\mathbb{E}^Y[\exp\{-\theta_X Y\}]
What distributional forms yield the "Pythagorean expectation"? If we take two Exponential random variables$$X\sim\mathcal{E}(\theta_X)\qquad X\sim\mathcal{E}(\theta_Y)$$ we get that$$\mathbb{P}(X>Y|Y=y)=\exp\{-\theta_X y\}$$and $$\mathbb{E}^Y[\exp\{-\theta_X Y\}]=\int_0^\infty \exp\{-\theta_X y\}\,\theta_Y\exp\{-\theta_Y y\}\text{d}y=\dfrac{\theta_Y}{\theta_X+\theta_Y}$$Now, if$$X\sim\mathcal{E}(\theta_X^{-2})\qquad X\sim\mathcal{E}(\theta_Y^{-2})$$then$$\mathbb{P}(X>Y)=\dfrac{\theta_X^2}{\theta_X^2+\theta_Y^2}$$ A more interesting question is whether or not this is the only possible case of distribution for which it works. (For instance, this is the only element of the Gamma family for which it works.) Assuming a scale family structure, a necessary and sufficient on the underlying density $f$ of $X$ and $Y$ is that $$\int_0^\infty z\, f(z)\, f(\tau z) \,\text{d}z = \frac{1}{(1+\tau)^2}$$ But the generic answer is no: as noted in the answer by @soakley, this also works for Weibulls which is not a surprise since$$\mathbb{P}(X>Y)=\mathbb{P}(X^\alpha>Y^\alpha)$$for all $\alpha>0$ (and Weibulls are powers of exponentials). A more general class of examples is thus provided by$$X' = \phi(X) \qquad Y' = \phi(Y)$$for all strictly increasing functions $\phi$, where $X,Y$ are exponentials as above, since then we have $$\mathbb{P}(X'>Y') =\mathbb{P}(\phi(X)>\phi(Y))=\mathbb{P}(X>Y)=\dfrac{\theta_X^2}{\theta_X^2+\theta_Y^2}.$$
What distributional forms yield the "Pythagorean expectation"? If we take two Exponential random variables$$X\sim\mathcal{E}(\theta_X)\qquad X\sim\mathcal{E}(\theta_Y)$$ we get that$$\mathbb{P}(X>Y|Y=y)=\exp\{-\theta_X y\}$$and $$\mathbb{E}^Y[\exp\{-\theta_X Y\}]
17,194
What distributional forms yield the "Pythagorean expectation"?
If $X$ is Weibull $\left( \alpha,\beta_1 \right)$ and $Y$ is an independent Weibull $\left( \alpha, \beta_2 \right) $, where alpha is the shape parameter and the betas are scale parameters, then it is known that $$P \left[ X > Y \right]= \frac{\beta_1^\alpha}{\beta_1^\alpha + \beta_2^\alpha} $$ This can be derived following the same approach given in Xi'an's answer. Now let $\alpha=2$ for both $X$ and $Y$. If $X$ has scale parameter $\theta_X$ and $Y$ has scale parameter $\theta_Y,$ we have $$P \left[ X > Y \right]= \frac{\theta_X^2}{\theta_X^2 + \theta_Y^2} $$
What distributional forms yield the "Pythagorean expectation"?
If $X$ is Weibull $\left( \alpha,\beta_1 \right)$ and $Y$ is an independent Weibull $\left( \alpha, \beta_2 \right) $, where alpha is the shape parameter and the betas are scale parameters, then it is
What distributional forms yield the "Pythagorean expectation"? If $X$ is Weibull $\left( \alpha,\beta_1 \right)$ and $Y$ is an independent Weibull $\left( \alpha, \beta_2 \right) $, where alpha is the shape parameter and the betas are scale parameters, then it is known that $$P \left[ X > Y \right]= \frac{\beta_1^\alpha}{\beta_1^\alpha + \beta_2^\alpha} $$ This can be derived following the same approach given in Xi'an's answer. Now let $\alpha=2$ for both $X$ and $Y$. If $X$ has scale parameter $\theta_X$ and $Y$ has scale parameter $\theta_Y,$ we have $$P \left[ X > Y \right]= \frac{\theta_X^2}{\theta_X^2 + \theta_Y^2} $$
What distributional forms yield the "Pythagorean expectation"? If $X$ is Weibull $\left( \alpha,\beta_1 \right)$ and $Y$ is an independent Weibull $\left( \alpha, \beta_2 \right) $, where alpha is the shape parameter and the betas are scale parameters, then it is
17,195
What is a robust statistical test? What is a powerful statistical test?
Robustness has various meanings in statistics, but all imply some resilience to changes in the type of data used. This may sound a bit ambiguous, but that is because robustness can refer to different kinds of insensitivities to changes. For example: Robustness to outliers Robustness to non-normality Robustness to non-constant variance (or heteroscedasticity) In the case of tests, robustness usually refers to the test still being valid given such a change. In other words, whether the outcome is significant or not is only meaningful if the assumptions of the test are met. When such assumptions are relaxed (i.e. not as important), the test is said to be robust. The power of a test is its ability to detect a significant difference if there is a true difference. The reason specific tests and models are used with various assumptions is that these assumptions simplify the problem (e.g. require less parameters to be estimated). The more assumptions a test makes, the less robust it is, because all these assumptions must be met for the test to be valid. On the other hand, a test with fewer assumptions is more robust. However, robustness generally comes at the cost of power, because either less information from the input is used, or more parameters need to be estimated. Robust A $t$-test could be said to be robust, because while it assumes normally distributed groups, it is still a valid test for comparing approximately normally distributed groups. A Wilcoxon test is less powerful when the assumptions of the $t$-test are met, but it is more robust, because it does not assume an underlying distribution and is thus valid for non-normal data. Its power is generally lower because it uses the ranks of the data, rather than the original numbers and thus essentially discards some information. Not Robust An $F$-test is a comparison of variances, but it is very sensitive to non-normality and therefore invalid for approximate normality. In other words, the $F$-test is not robust.
What is a robust statistical test? What is a powerful statistical test?
Robustness has various meanings in statistics, but all imply some resilience to changes in the type of data used. This may sound a bit ambiguous, but that is because robustness can refer to different
What is a robust statistical test? What is a powerful statistical test? Robustness has various meanings in statistics, but all imply some resilience to changes in the type of data used. This may sound a bit ambiguous, but that is because robustness can refer to different kinds of insensitivities to changes. For example: Robustness to outliers Robustness to non-normality Robustness to non-constant variance (or heteroscedasticity) In the case of tests, robustness usually refers to the test still being valid given such a change. In other words, whether the outcome is significant or not is only meaningful if the assumptions of the test are met. When such assumptions are relaxed (i.e. not as important), the test is said to be robust. The power of a test is its ability to detect a significant difference if there is a true difference. The reason specific tests and models are used with various assumptions is that these assumptions simplify the problem (e.g. require less parameters to be estimated). The more assumptions a test makes, the less robust it is, because all these assumptions must be met for the test to be valid. On the other hand, a test with fewer assumptions is more robust. However, robustness generally comes at the cost of power, because either less information from the input is used, or more parameters need to be estimated. Robust A $t$-test could be said to be robust, because while it assumes normally distributed groups, it is still a valid test for comparing approximately normally distributed groups. A Wilcoxon test is less powerful when the assumptions of the $t$-test are met, but it is more robust, because it does not assume an underlying distribution and is thus valid for non-normal data. Its power is generally lower because it uses the ranks of the data, rather than the original numbers and thus essentially discards some information. Not Robust An $F$-test is a comparison of variances, but it is very sensitive to non-normality and therefore invalid for approximate normality. In other words, the $F$-test is not robust.
What is a robust statistical test? What is a powerful statistical test? Robustness has various meanings in statistics, but all imply some resilience to changes in the type of data used. This may sound a bit ambiguous, but that is because robustness can refer to different
17,196
What is a robust statistical test? What is a powerful statistical test?
There is no formal definition of "robust statistical test", but there is a sort of general agreement as to what this means. The Wikipedia website has a good definition of this (in terms of the statistic rather than the test itself): Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normal. https://en.wikipedia.org/wiki/Robust_statistics
What is a robust statistical test? What is a powerful statistical test?
There is no formal definition of "robust statistical test", but there is a sort of general agreement as to what this means. The Wikipedia website has a good definition of this (in terms of the statist
What is a robust statistical test? What is a powerful statistical test? There is no formal definition of "robust statistical test", but there is a sort of general agreement as to what this means. The Wikipedia website has a good definition of this (in terms of the statistic rather than the test itself): Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normal. https://en.wikipedia.org/wiki/Robust_statistics
What is a robust statistical test? What is a powerful statistical test? There is no formal definition of "robust statistical test", but there is a sort of general agreement as to what this means. The Wikipedia website has a good definition of this (in terms of the statist
17,197
How does the Rectified Linear Unit (ReLU) activation function produce non-linear interaction of its inputs? [duplicate]
Suppose you want to approximate $f(x)=x^2$ using ReLUs $g(ax+b)$. One approximation might look like $h_1(x)=g(x)+g(-x)=|x|$. But this isn't a very good approximation. But you can add more terms with different choices of $a$ and $b$ to improve the approximation. One such improvement, in the sense that the error is "small" across a larger interval, is we have $h_2(x)=g(x)+g(-x)+g(2x-2)+g(-2x+2)$, and it gets better. You can continue this procedure of adding terms to as much complexity as you like. Notice that, in the first case, the approximation is best for $x\in[-1,1]$, while in the second case, the approximation is best for $x\in[-2,2]$. x <- seq(-3,3,length.out=1000) y_true <- x^2 relu <- function(x,a=1,b=0) sapply(x, function(t) max(a*t+b,0)) h1 <- function(x) relu(x)+relu(-x) png("fig1.png") plot(x, h1(x), type="l") lines(x, y_true, col="red") dev.off() h2 <- function(x) h1(x) + relu(2*(x-1)) + relu(-2*(x+1)) png("fig2.png") plot(x, h2(x), type="l") lines(x, y_true, col="red") dev.off() l2 <- function(y_true,y_hat) 0.5 * (y_true - y_hat)^2 png("fig3.png") plot(x, l2(y_true,h1(x)), type="l") lines(x, l2(y_true,h2(x)), col="red") dev.off()
How does the Rectified Linear Unit (ReLU) activation function produce non-linear interaction of its
Suppose you want to approximate $f(x)=x^2$ using ReLUs $g(ax+b)$. One approximation might look like $h_1(x)=g(x)+g(-x)=|x|$. But this isn't a very good approximation. But you can add more terms with
How does the Rectified Linear Unit (ReLU) activation function produce non-linear interaction of its inputs? [duplicate] Suppose you want to approximate $f(x)=x^2$ using ReLUs $g(ax+b)$. One approximation might look like $h_1(x)=g(x)+g(-x)=|x|$. But this isn't a very good approximation. But you can add more terms with different choices of $a$ and $b$ to improve the approximation. One such improvement, in the sense that the error is "small" across a larger interval, is we have $h_2(x)=g(x)+g(-x)+g(2x-2)+g(-2x+2)$, and it gets better. You can continue this procedure of adding terms to as much complexity as you like. Notice that, in the first case, the approximation is best for $x\in[-1,1]$, while in the second case, the approximation is best for $x\in[-2,2]$. x <- seq(-3,3,length.out=1000) y_true <- x^2 relu <- function(x,a=1,b=0) sapply(x, function(t) max(a*t+b,0)) h1 <- function(x) relu(x)+relu(-x) png("fig1.png") plot(x, h1(x), type="l") lines(x, y_true, col="red") dev.off() h2 <- function(x) h1(x) + relu(2*(x-1)) + relu(-2*(x+1)) png("fig2.png") plot(x, h2(x), type="l") lines(x, y_true, col="red") dev.off() l2 <- function(y_true,y_hat) 0.5 * (y_true - y_hat)^2 png("fig3.png") plot(x, l2(y_true,h1(x)), type="l") lines(x, l2(y_true,h2(x)), col="red") dev.off()
How does the Rectified Linear Unit (ReLU) activation function produce non-linear interaction of its Suppose you want to approximate $f(x)=x^2$ using ReLUs $g(ax+b)$. One approximation might look like $h_1(x)=g(x)+g(-x)=|x|$. But this isn't a very good approximation. But you can add more terms with
17,198
How does the Rectified Linear Unit (ReLU) activation function produce non-linear interaction of its inputs? [duplicate]
Think of it as a piecewise linear function. Any segment of the function that you want to model (assuming it's smooth) looks like a line if you zoom in far enough.
How does the Rectified Linear Unit (ReLU) activation function produce non-linear interaction of its
Think of it as a piecewise linear function. Any segment of the function that you want to model (assuming it's smooth) looks like a line if you zoom in far enough.
How does the Rectified Linear Unit (ReLU) activation function produce non-linear interaction of its inputs? [duplicate] Think of it as a piecewise linear function. Any segment of the function that you want to model (assuming it's smooth) looks like a line if you zoom in far enough.
How does the Rectified Linear Unit (ReLU) activation function produce non-linear interaction of its Think of it as a piecewise linear function. Any segment of the function that you want to model (assuming it's smooth) looks like a line if you zoom in far enough.
17,199
What is the smallest $\lambda$ that gives a 0 component in lasso?
The lasso estimate described in the question is the lagrange multiplier equivalent of the following optimization problem: $${\text{minimize } f(\beta) \text{ subject to } g(\beta) \leq t}$$ $$\begin{align} f(\beta) &= \frac{1}{2n} \vert\vert y-X\beta \vert\vert_2^2 \\ g(\beta) &= \vert\vert \beta \vert\vert_1 \end{align}$$ This optimizazion has a geometric representation of finding the point of contact between a multidimensional sphere and a polytope (spanned by the vectors of X). The surface of the polytope represents $g(\beta)$. The square of the radius of the sphere represents the function $f(\beta)$ and is minimized when the surfaces contact. The images below provides a graphical explanation. The images made use of the following simple problem with vectors of length 3 (for simplicity in order to be able to make a drawing): $$\begin{bmatrix} y_1 \\ y_2 \\ y_3\\ \end{bmatrix} = \begin{bmatrix} 1.4 \\ 1.84 \\ 0.32\\ \end{bmatrix} = \beta_1 \begin{bmatrix} 0.8 \\ 0.6 \\ 0\\ \end{bmatrix} +\beta_2 \begin{bmatrix} 0 \\ 0.6 \\ 0.8\\ \end{bmatrix} +\beta_3 \begin{bmatrix} 0.6 \\ 0.64 \\ -0.48\\ \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3\\ \end{bmatrix} $$ and we minimize $\epsilon_1^2+\epsilon_2^2+\epsilon_3^2$ with the constraint $abs(\beta_1)+abs(\beta_2)+abs(\beta_3) \leq t$ The images show: The red surface depicts the constraint, a polytope spanned by X. And the green surface depicts the minimalized surface, a sphere. The blue line shows the lasso path, the solutions that we find as we change $t$ or $\lambda$. The green vector shows the OLS solution $\hat{y}$ (which was chosen as $\beta_1=\beta_2=\beta_3=1$ or $\hat{y} = x_1 + x_2 + x_3$. The three black vectors are $x_1 = (0.8,0.6,0)$, $x_2 = (0,0.6,0.8)$ and $x_3 = (0.6,0.64,-0.48)$. We show three images: In the first image only a point of the polytope is touching the sphere. This image demonstrates very well why the lasso solution is not just a multiple of the OLS solution. The direction of the OLS solution adds stronger to the sum $\vert \beta \vert_1$. In this case only a single $\beta_i$ is non-zero. In the second image a ridge of the polytope is touching the sphere (in higher dimensions we get higher dimensional analogues). In this case multiple $\beta_i$ are non-zero. In the third image a facet tof the polytope is touching the sphere. In this case all the $\beta_i$ are nonzero. The range of $t$ or $\lambda$ for which we have the first and third cases can be easily calculated due to their simple geometric representation. ##Case 1: Only a single $\beta_i$ non-zero## The non-zero $\beta_i$ is the one for which the associated vector $x_i$ has the highest absolute value of the covariance with $\hat{y}$ (this is the point of the parrallelotope which closest to the OLS solution). We can calculate the Lagrange multiplier $\lambda_{max}$ below which we have at least a non-zero $\beta$ by taking the derivative with $\pm\beta_i$ (the sign depending on whether we increase the $\beta_i$ in negative or positive direction ): $$\frac{\partial ( \frac{1}{2n} \vert \vert y - X\beta \vert \vert_2^2 - \lambda \vert \vert \beta \vert \vert_1 )}{\pm \partial \beta_i} = 0$$ which leads to $$\lambda_{max} = \frac{ \left( \frac{1}{2n}\frac{\partial ( \vert \vert y - X\beta \vert \vert_2^2}{\pm \partial \beta_i} \right) }{ \left( \frac{ \vert \vert \beta \vert \vert_1 )}{\pm \partial \beta_i}\right)} = \pm \frac{\partial ( \frac{1}{2n} \vert \vert y - X\beta \vert \vert_2^2}{\partial \beta_i} = \pm \frac{1}{n} x_i \cdot y $$ which equals the $\vert \vert X^Ty \vert \vert_\infty$ mentioned in the comments. where we should notice that this is only true for the special case in which the tip of the polytope is touching the sphere (so this is not a general solution, although generalization is straightforward). ##Case 3: All $\beta_i$ are non-zero.## In this case that a facet of the polytope is touching the sphere. Then the direction of change of the lasso path is normal to the surface of the particular facet. The polytope has many facets, with positive and negative contributions of the $x_i$. In the case of the last lasso step, when the lasso solution is close to the ols solution, then the contributions of the $x_i$ must be defined by the sign of the OLS solution. The normal of the facet can be defined by taking the gradient of the function $\vert \vert \beta(r) \vert \vert_1 $, the value of the sum of beta at the point $r$, which is: $$ n = - \nabla_r ( \vert \vert \beta(r) \vert \vert_1) = -\nabla_r ( \text{sign} (\hat{\beta}) \cdot (X^TX)^{-1}X^Tr ) = -\text{sign} (\hat{\beta}) \cdot (X^TX)^{-1}X^T $$ and the equivalent change of beta for this direction is: $$ \vec{\beta}_{last} = (X^TX)^{-1}X n = -(X^TX)^{-1}X^T [\text{sign} (\hat{\beta}) \cdot (X^TX)^{-1}X^T]$$ which after some algebraic tricks with shifting the transposes ($A^TB^T = [BA]^T$) and distribution of brackets becomes $$ \vec{\beta}_{last} = - (X^TX)^{-1} \text{sign} (\hat{\beta}) $$ we normalize this direction: $$ \vec{\beta}_{last,normalized} = \frac{\vec{\beta}_{last}}{\sum \vec{\beta}_{last} \cdot sign(\hat{\beta})} $$ To find the $\lambda_{min}$ below which all coefficients are non-zero. We only have to calculate back from the OLS solution back to the point where one of the coefficients is zero, $$ d = min \left( \frac{\hat{\beta}}{\vec{\beta}_{last,normalized}} \right)\qquad \text{with the condition that } \frac{\hat{\beta}}{\vec{\beta}_{last,normalized}} >0$$ ,and at this point we evaluate the derivative (as before when we calculate $\lambda_{max}$). We use that for a quadratic function we have $q'(x) = 2 q(1) x$: $$\lambda_{min} = \frac{d}{n} \vert \vert X \vec{\beta}_{last,normalized} \vert \vert_2^2 $$ ##Images## a point of the polytope is touching the sphere, a single $\beta_i$ is non-zero: a ridge (or differen in multiple dimensions) of the polytope is touching the sphere, many $\beta_i$ are non-zero: a facet of the polytope is touching the sphere, all $\beta_i$ are non-zero: ##Code example: ## library(lars) data(diabetes) y <- diabetes$y - mean(diabetes$y) x <- diabetes$x # models lmc <- coef(lm(y~0+x)) modl <- lars(diabetes$x, diabetes$y, type="lasso") # matrix equation d_x <- matrix(rep(x[,1],9),length(x[,1])) %*% diag(sign(lmc[-c(1)]/lmc[1])) x_c = x[,-1]-d_x y_c = -x[,1] # solving equation cof <- coefficients(lm(y_c~0+x_c)) cof <- c(1-sum(cof*sign(lmc[-c(1)]/lmc[1])),cof) # alternatively the last direction of change in coefficients is found by: solve(t(x) %*% x) %*% sign(lmc) # solution by lars package cof_m <-(coefficients(modl)[13,]-coefficients(modl)[12,]) # last step dist <- x %*% (cof/sum(cof*sign(lmc[]))) #dist_m <- x %*% (cof_m/sum(cof_m*sign(lmc[]))) #for comparison # calculate back to zero shrinking_set <- which(-lmc[]/cof>0) #only the positive values step_last <- min((-lmc/cof)[shrinking_set]) d_err_d_beta <- step_last*sum(dist^2) # compare modl[4] #all computed lambda d_err_d_beta # lambda last change max(t(x) %*% y) # lambda first change enter code here note: those last three lines are the most important > modl[4] # all computed lambda by algorithm $lambda [1] 949.435260 889.315991 452.900969 316.074053 130.130851 88.782430 68.965221 19.981255 5.477473 5.089179 [11] 2.182250 1.310435 > d_err_d_beta # lambda last change by calculating only last step xhdl 1.310435 > max(t(x) %*% y) # lambda first change by max(x^T y) [1] 949.4353 (edit notice: this answer had been edited a lot. I have deleted old parts. But left this answer from July 14 which contains a nice explanation of the separation of $y$, $\epsilon$ and $\hat{y}$. The problems that occured in this initial answer have been solved. The final step of the LARS algorithm is easy to find, it is the normal of the polytope defined by the sign of the $\hat{\beta}$) #ANSWER JULI 14 2017# for $\lambda < \lambda_{max}$ we have at least one non-zero coefficient (and above all are zero) for $\lambda < \lambda_{min}$ we have all coefficients non-zero (and above at least one coefficient is zero) finding $\lambda_{max}$ You can use the following steps to determine $\lambda_{max}$ (and this technique will also help for $\lambda_{min}$ although a bit more difficult). For $\lambda>\lambda_{max}$ we have $\hat{\beta}^\lambda$ = 0, as the penalty in the term $\lambda \vert \vert \beta \vert \vert_1$ will be too large, and an increase of $\beta$ does not reduce $(1/2n) \vert \vert y-X \beta \vert \vert ^2_2$ sufficiently to optimize the following expression (1) \begin{equation}\frac{1}{2n} \vert \vert y-X \beta \vert \vert ^2_2 + \lambda \vert \vert \beta \vert \vert_1 \end{equation} The basic idea is that at some level of $\lambda$ the increase of $\vert \vert \beta \vert \vert_1$ will cause the term $(1/2n) \vert \vert y-X \beta \vert \vert ^2_2$ to decreases more than the term $\lambda \vert \vert \beta \vert \vert_1$ increases. This point at which the terms are equal can be calculated exactly and at this point the expression (1) can be optimized by an increase of $\beta$ and this is the point below which some terms of $\beta$ are non-zero. Determine the angle, or unit vector $\beta_0$, along which the sum $(y - X\beta)^2$ would decrease most. (in the same way as the first step in the LARS algorithm explained in the very nice article referenced by chRrr) This would relate to the variable(s) $x_i$ that correlates the most with $y$. Calculate the rate of change at which the sum of squares of the error change if you increase the length of the predicted vector $\hat{y}$ $r_1= \frac{1}{2n} \frac{\partial\sum{(y - X\beta_0\cdot s)^2}}{\partial s}$ this is related to the angle between $\vec{\beta_0}$ and $\vec{y}$. The change of the square of the length of the error term $y_{err}$ will be equal to $\frac{\partial y_{err}^2}{\partial \vert \vert \beta_0 \vert \vert _1} = 2 y_{err} \frac{\partial y_{err}}{\partial \vert \vert \beta_0 \vert \vert _1} = 2 \vert \vert y \vert \vert _2 \vert \vert X\beta_0 \vert \vert _2 cor(\beta_0,y)$ The term $\vert \vert X\beta_0 \vert \vert _2 cor(\beta_0,y)$ is how much the length $y_{err}$ changes as the coefficient $\beta_0$ changes and this includes a multiplication with $X$ (since the larger $X$ the larger the change as the coefficient changes) and a multiplication with the correlation (since only the projection part reduces the length of the error vector). Then $\lambda_{max}$ is equal to this rate of change and $\lambda_{max} = \frac{1}{2n} \frac{\partial\sum{(y - X\beta_0\cdot s)^2}}{\partial s} = \frac{1}{2n} 2 \vert \vert y \vert \vert _2 \vert \vert x \vert \vert _2 corr(\beta_0,y) = \frac{1}{n} \vert \vert X\beta_0 \cdot y \vert \vert _2 $ Where $\beta_0$ is the unit vector that corresponds to the angle in the first step of the LARS algorithm. If $k$ is the number of vectors that share the maximum correlation then we have $\lambda_{max} = \frac{\sqrt{k}}{n} \vert \vert X^T y\vert \vert_\infty $ In other words. LARS gives you the initial decline of the SSE as you increase the length of the vector $\beta$ and this is then equal to your $\lambda_{max}$ ##Graphical example## The image below explains the concept how the LARS algorithm can help in finding $\lambda_{max}$. (and also hints how we can find $\lambda_{min}$) In the image we see schematically the fitting of the vector $y$ by the vectors $x_1$ and $x_2$. The dotted gray vectors depict the OLS solution with $y_{\perp}$ the part of $y$ that is perpendicular (the error) to the span of the vectors $x_1$ and $x_2$ (and the perpendicular vector is the shortest distance and the smallest sum of squares) the gray lines are iso-lines for which $\vert \vert \beta \vert \vert_1$ is constant The vectors $\beta_0$ and $\beta_1$ depict the path that is being followed in a LARS regression. the blue lines and green lines are the vectors that depict the error term as the LARS regression algorithm is followed. the length of these lines is the SSE and when they reach the perpendicular vector $y_{\perp}$ you have the least sum of squares Now, initially the path is followed along the vector $x_i$ that has the highest correlation with $y$. For this vector the change of the $\sqrt{SSE}$ as function of the change of $\vert \vert \beta \vert \vert$ is largest (namely equal to the correlation of that vector $x_i$ and $y$). While you follow the path this change of $\sqrt{SSE}$/$\vert \vert \beta \vert \vert$ decreases until a path along a combination of two vectors becomes more efficient, and so on. Notice that the ratio of the change $\sqrt{SSE}$ and change $\vert \vert \beta \vert \vert$ is a monotonously decreasing function. Therefore, the solution for $\lambda_{max}$ must be at the origin. finding $\lambda_{min} $ The above illustration also shows how we can find $\lambda_{min}$ the point at which all of the coefficients are non zero. This occurs in the last step of the LARS procedure. We can find this points with similar steps. We can determine the angle of the last step and the pre-last step and determine the point where these steps turn from the one into the other. This point is then used to calculate $\lambda_{min}$. The expression is a bit more awkard since you do not calculate starting from zero. Actually, a solution may not exist. The vector along which the final step is made is in the direction of $X^T \cdot sign$ in which $sign$ is a vector with 1's and -1's depending on the direction of the change for the coefficient in the last step. This direction can be calculated if you know the result of the pre-last step (and could be achieved by itteration towards the first step but such a huge calculation does not seem to be what we want). I do not seem to find out if we can see directly what the sign is of the change of the coefficients in the last step. Note note that we can more easily determine the point $\lambda_{opt}$ for which $\beta^\lambda$ is equal to the OLS solution. In the image this is related to the change of the vector $y_{\perp}$ as we move along the slope $\hat{\beta_n}$ (this might be more practical to calculate) problems with sign changes The LARS solution is close to the LASSO solution. A difference is in the number of steps. In the above method one would find out the direction of the last change and then go back from the OLS solution untill a coefficient becomes zero. A problem occurs if this point is not a point at which a coefficient is added to the set of active coefficients. It could be also a sign change and the LASSO and LARS solutions may differ in these points.
What is the smallest $\lambda$ that gives a 0 component in lasso?
The lasso estimate described in the question is the lagrange multiplier equivalent of the following optimization problem: $${\text{minimize } f(\beta) \text{ subject to } g(\beta) \leq t}$$ $$\begin{a
What is the smallest $\lambda$ that gives a 0 component in lasso? The lasso estimate described in the question is the lagrange multiplier equivalent of the following optimization problem: $${\text{minimize } f(\beta) \text{ subject to } g(\beta) \leq t}$$ $$\begin{align} f(\beta) &= \frac{1}{2n} \vert\vert y-X\beta \vert\vert_2^2 \\ g(\beta) &= \vert\vert \beta \vert\vert_1 \end{align}$$ This optimizazion has a geometric representation of finding the point of contact between a multidimensional sphere and a polytope (spanned by the vectors of X). The surface of the polytope represents $g(\beta)$. The square of the radius of the sphere represents the function $f(\beta)$ and is minimized when the surfaces contact. The images below provides a graphical explanation. The images made use of the following simple problem with vectors of length 3 (for simplicity in order to be able to make a drawing): $$\begin{bmatrix} y_1 \\ y_2 \\ y_3\\ \end{bmatrix} = \begin{bmatrix} 1.4 \\ 1.84 \\ 0.32\\ \end{bmatrix} = \beta_1 \begin{bmatrix} 0.8 \\ 0.6 \\ 0\\ \end{bmatrix} +\beta_2 \begin{bmatrix} 0 \\ 0.6 \\ 0.8\\ \end{bmatrix} +\beta_3 \begin{bmatrix} 0.6 \\ 0.64 \\ -0.48\\ \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3\\ \end{bmatrix} $$ and we minimize $\epsilon_1^2+\epsilon_2^2+\epsilon_3^2$ with the constraint $abs(\beta_1)+abs(\beta_2)+abs(\beta_3) \leq t$ The images show: The red surface depicts the constraint, a polytope spanned by X. And the green surface depicts the minimalized surface, a sphere. The blue line shows the lasso path, the solutions that we find as we change $t$ or $\lambda$. The green vector shows the OLS solution $\hat{y}$ (which was chosen as $\beta_1=\beta_2=\beta_3=1$ or $\hat{y} = x_1 + x_2 + x_3$. The three black vectors are $x_1 = (0.8,0.6,0)$, $x_2 = (0,0.6,0.8)$ and $x_3 = (0.6,0.64,-0.48)$. We show three images: In the first image only a point of the polytope is touching the sphere. This image demonstrates very well why the lasso solution is not just a multiple of the OLS solution. The direction of the OLS solution adds stronger to the sum $\vert \beta \vert_1$. In this case only a single $\beta_i$ is non-zero. In the second image a ridge of the polytope is touching the sphere (in higher dimensions we get higher dimensional analogues). In this case multiple $\beta_i$ are non-zero. In the third image a facet tof the polytope is touching the sphere. In this case all the $\beta_i$ are nonzero. The range of $t$ or $\lambda$ for which we have the first and third cases can be easily calculated due to their simple geometric representation. ##Case 1: Only a single $\beta_i$ non-zero## The non-zero $\beta_i$ is the one for which the associated vector $x_i$ has the highest absolute value of the covariance with $\hat{y}$ (this is the point of the parrallelotope which closest to the OLS solution). We can calculate the Lagrange multiplier $\lambda_{max}$ below which we have at least a non-zero $\beta$ by taking the derivative with $\pm\beta_i$ (the sign depending on whether we increase the $\beta_i$ in negative or positive direction ): $$\frac{\partial ( \frac{1}{2n} \vert \vert y - X\beta \vert \vert_2^2 - \lambda \vert \vert \beta \vert \vert_1 )}{\pm \partial \beta_i} = 0$$ which leads to $$\lambda_{max} = \frac{ \left( \frac{1}{2n}\frac{\partial ( \vert \vert y - X\beta \vert \vert_2^2}{\pm \partial \beta_i} \right) }{ \left( \frac{ \vert \vert \beta \vert \vert_1 )}{\pm \partial \beta_i}\right)} = \pm \frac{\partial ( \frac{1}{2n} \vert \vert y - X\beta \vert \vert_2^2}{\partial \beta_i} = \pm \frac{1}{n} x_i \cdot y $$ which equals the $\vert \vert X^Ty \vert \vert_\infty$ mentioned in the comments. where we should notice that this is only true for the special case in which the tip of the polytope is touching the sphere (so this is not a general solution, although generalization is straightforward). ##Case 3: All $\beta_i$ are non-zero.## In this case that a facet of the polytope is touching the sphere. Then the direction of change of the lasso path is normal to the surface of the particular facet. The polytope has many facets, with positive and negative contributions of the $x_i$. In the case of the last lasso step, when the lasso solution is close to the ols solution, then the contributions of the $x_i$ must be defined by the sign of the OLS solution. The normal of the facet can be defined by taking the gradient of the function $\vert \vert \beta(r) \vert \vert_1 $, the value of the sum of beta at the point $r$, which is: $$ n = - \nabla_r ( \vert \vert \beta(r) \vert \vert_1) = -\nabla_r ( \text{sign} (\hat{\beta}) \cdot (X^TX)^{-1}X^Tr ) = -\text{sign} (\hat{\beta}) \cdot (X^TX)^{-1}X^T $$ and the equivalent change of beta for this direction is: $$ \vec{\beta}_{last} = (X^TX)^{-1}X n = -(X^TX)^{-1}X^T [\text{sign} (\hat{\beta}) \cdot (X^TX)^{-1}X^T]$$ which after some algebraic tricks with shifting the transposes ($A^TB^T = [BA]^T$) and distribution of brackets becomes $$ \vec{\beta}_{last} = - (X^TX)^{-1} \text{sign} (\hat{\beta}) $$ we normalize this direction: $$ \vec{\beta}_{last,normalized} = \frac{\vec{\beta}_{last}}{\sum \vec{\beta}_{last} \cdot sign(\hat{\beta})} $$ To find the $\lambda_{min}$ below which all coefficients are non-zero. We only have to calculate back from the OLS solution back to the point where one of the coefficients is zero, $$ d = min \left( \frac{\hat{\beta}}{\vec{\beta}_{last,normalized}} \right)\qquad \text{with the condition that } \frac{\hat{\beta}}{\vec{\beta}_{last,normalized}} >0$$ ,and at this point we evaluate the derivative (as before when we calculate $\lambda_{max}$). We use that for a quadratic function we have $q'(x) = 2 q(1) x$: $$\lambda_{min} = \frac{d}{n} \vert \vert X \vec{\beta}_{last,normalized} \vert \vert_2^2 $$ ##Images## a point of the polytope is touching the sphere, a single $\beta_i$ is non-zero: a ridge (or differen in multiple dimensions) of the polytope is touching the sphere, many $\beta_i$ are non-zero: a facet of the polytope is touching the sphere, all $\beta_i$ are non-zero: ##Code example: ## library(lars) data(diabetes) y <- diabetes$y - mean(diabetes$y) x <- diabetes$x # models lmc <- coef(lm(y~0+x)) modl <- lars(diabetes$x, diabetes$y, type="lasso") # matrix equation d_x <- matrix(rep(x[,1],9),length(x[,1])) %*% diag(sign(lmc[-c(1)]/lmc[1])) x_c = x[,-1]-d_x y_c = -x[,1] # solving equation cof <- coefficients(lm(y_c~0+x_c)) cof <- c(1-sum(cof*sign(lmc[-c(1)]/lmc[1])),cof) # alternatively the last direction of change in coefficients is found by: solve(t(x) %*% x) %*% sign(lmc) # solution by lars package cof_m <-(coefficients(modl)[13,]-coefficients(modl)[12,]) # last step dist <- x %*% (cof/sum(cof*sign(lmc[]))) #dist_m <- x %*% (cof_m/sum(cof_m*sign(lmc[]))) #for comparison # calculate back to zero shrinking_set <- which(-lmc[]/cof>0) #only the positive values step_last <- min((-lmc/cof)[shrinking_set]) d_err_d_beta <- step_last*sum(dist^2) # compare modl[4] #all computed lambda d_err_d_beta # lambda last change max(t(x) %*% y) # lambda first change enter code here note: those last three lines are the most important > modl[4] # all computed lambda by algorithm $lambda [1] 949.435260 889.315991 452.900969 316.074053 130.130851 88.782430 68.965221 19.981255 5.477473 5.089179 [11] 2.182250 1.310435 > d_err_d_beta # lambda last change by calculating only last step xhdl 1.310435 > max(t(x) %*% y) # lambda first change by max(x^T y) [1] 949.4353 (edit notice: this answer had been edited a lot. I have deleted old parts. But left this answer from July 14 which contains a nice explanation of the separation of $y$, $\epsilon$ and $\hat{y}$. The problems that occured in this initial answer have been solved. The final step of the LARS algorithm is easy to find, it is the normal of the polytope defined by the sign of the $\hat{\beta}$) #ANSWER JULI 14 2017# for $\lambda < \lambda_{max}$ we have at least one non-zero coefficient (and above all are zero) for $\lambda < \lambda_{min}$ we have all coefficients non-zero (and above at least one coefficient is zero) finding $\lambda_{max}$ You can use the following steps to determine $\lambda_{max}$ (and this technique will also help for $\lambda_{min}$ although a bit more difficult). For $\lambda>\lambda_{max}$ we have $\hat{\beta}^\lambda$ = 0, as the penalty in the term $\lambda \vert \vert \beta \vert \vert_1$ will be too large, and an increase of $\beta$ does not reduce $(1/2n) \vert \vert y-X \beta \vert \vert ^2_2$ sufficiently to optimize the following expression (1) \begin{equation}\frac{1}{2n} \vert \vert y-X \beta \vert \vert ^2_2 + \lambda \vert \vert \beta \vert \vert_1 \end{equation} The basic idea is that at some level of $\lambda$ the increase of $\vert \vert \beta \vert \vert_1$ will cause the term $(1/2n) \vert \vert y-X \beta \vert \vert ^2_2$ to decreases more than the term $\lambda \vert \vert \beta \vert \vert_1$ increases. This point at which the terms are equal can be calculated exactly and at this point the expression (1) can be optimized by an increase of $\beta$ and this is the point below which some terms of $\beta$ are non-zero. Determine the angle, or unit vector $\beta_0$, along which the sum $(y - X\beta)^2$ would decrease most. (in the same way as the first step in the LARS algorithm explained in the very nice article referenced by chRrr) This would relate to the variable(s) $x_i$ that correlates the most with $y$. Calculate the rate of change at which the sum of squares of the error change if you increase the length of the predicted vector $\hat{y}$ $r_1= \frac{1}{2n} \frac{\partial\sum{(y - X\beta_0\cdot s)^2}}{\partial s}$ this is related to the angle between $\vec{\beta_0}$ and $\vec{y}$. The change of the square of the length of the error term $y_{err}$ will be equal to $\frac{\partial y_{err}^2}{\partial \vert \vert \beta_0 \vert \vert _1} = 2 y_{err} \frac{\partial y_{err}}{\partial \vert \vert \beta_0 \vert \vert _1} = 2 \vert \vert y \vert \vert _2 \vert \vert X\beta_0 \vert \vert _2 cor(\beta_0,y)$ The term $\vert \vert X\beta_0 \vert \vert _2 cor(\beta_0,y)$ is how much the length $y_{err}$ changes as the coefficient $\beta_0$ changes and this includes a multiplication with $X$ (since the larger $X$ the larger the change as the coefficient changes) and a multiplication with the correlation (since only the projection part reduces the length of the error vector). Then $\lambda_{max}$ is equal to this rate of change and $\lambda_{max} = \frac{1}{2n} \frac{\partial\sum{(y - X\beta_0\cdot s)^2}}{\partial s} = \frac{1}{2n} 2 \vert \vert y \vert \vert _2 \vert \vert x \vert \vert _2 corr(\beta_0,y) = \frac{1}{n} \vert \vert X\beta_0 \cdot y \vert \vert _2 $ Where $\beta_0$ is the unit vector that corresponds to the angle in the first step of the LARS algorithm. If $k$ is the number of vectors that share the maximum correlation then we have $\lambda_{max} = \frac{\sqrt{k}}{n} \vert \vert X^T y\vert \vert_\infty $ In other words. LARS gives you the initial decline of the SSE as you increase the length of the vector $\beta$ and this is then equal to your $\lambda_{max}$ ##Graphical example## The image below explains the concept how the LARS algorithm can help in finding $\lambda_{max}$. (and also hints how we can find $\lambda_{min}$) In the image we see schematically the fitting of the vector $y$ by the vectors $x_1$ and $x_2$. The dotted gray vectors depict the OLS solution with $y_{\perp}$ the part of $y$ that is perpendicular (the error) to the span of the vectors $x_1$ and $x_2$ (and the perpendicular vector is the shortest distance and the smallest sum of squares) the gray lines are iso-lines for which $\vert \vert \beta \vert \vert_1$ is constant The vectors $\beta_0$ and $\beta_1$ depict the path that is being followed in a LARS regression. the blue lines and green lines are the vectors that depict the error term as the LARS regression algorithm is followed. the length of these lines is the SSE and when they reach the perpendicular vector $y_{\perp}$ you have the least sum of squares Now, initially the path is followed along the vector $x_i$ that has the highest correlation with $y$. For this vector the change of the $\sqrt{SSE}$ as function of the change of $\vert \vert \beta \vert \vert$ is largest (namely equal to the correlation of that vector $x_i$ and $y$). While you follow the path this change of $\sqrt{SSE}$/$\vert \vert \beta \vert \vert$ decreases until a path along a combination of two vectors becomes more efficient, and so on. Notice that the ratio of the change $\sqrt{SSE}$ and change $\vert \vert \beta \vert \vert$ is a monotonously decreasing function. Therefore, the solution for $\lambda_{max}$ must be at the origin. finding $\lambda_{min} $ The above illustration also shows how we can find $\lambda_{min}$ the point at which all of the coefficients are non zero. This occurs in the last step of the LARS procedure. We can find this points with similar steps. We can determine the angle of the last step and the pre-last step and determine the point where these steps turn from the one into the other. This point is then used to calculate $\lambda_{min}$. The expression is a bit more awkard since you do not calculate starting from zero. Actually, a solution may not exist. The vector along which the final step is made is in the direction of $X^T \cdot sign$ in which $sign$ is a vector with 1's and -1's depending on the direction of the change for the coefficient in the last step. This direction can be calculated if you know the result of the pre-last step (and could be achieved by itteration towards the first step but such a huge calculation does not seem to be what we want). I do not seem to find out if we can see directly what the sign is of the change of the coefficients in the last step. Note note that we can more easily determine the point $\lambda_{opt}$ for which $\beta^\lambda$ is equal to the OLS solution. In the image this is related to the change of the vector $y_{\perp}$ as we move along the slope $\hat{\beta_n}$ (this might be more practical to calculate) problems with sign changes The LARS solution is close to the LASSO solution. A difference is in the number of steps. In the above method one would find out the direction of the last change and then go back from the OLS solution untill a coefficient becomes zero. A problem occurs if this point is not a point at which a coefficient is added to the set of active coefficients. It could be also a sign change and the LASSO and LARS solutions may differ in these points.
What is the smallest $\lambda$ that gives a 0 component in lasso? The lasso estimate described in the question is the lagrange multiplier equivalent of the following optimization problem: $${\text{minimize } f(\beta) \text{ subject to } g(\beta) \leq t}$$ $$\begin{a
17,200
What is the smallest $\lambda$ that gives a 0 component in lasso?
The lasso is especially useful when $p>n$, i.e., more parameters than sample size. If $p>n$ and $X$ has continuous distributions, the least-squares estimate $\hat\beta^{LS}$ that minimizes $\min_{b\in R^p} \|y-Xb\|^2$ is not unique (the affine subspace of least-squares solutions is exactly $\hat\beta^{LS}_0 + \ker(X)$ for any specific solution $\hat\beta^{LS}_0$ and $\ker(X)$ has dimension at least $p-n$). Also, there exists a least-squares solution with at most $n$ nonzero coordinates and at least $p-n$ zero coordinates. To see this, write $X$ by block as $[X_n|X_{p-n}]$ where $X_n$ is a square $n\times n$ matrix. If $X$ has continuous distribution then $X_n$ is invertible with probability one and $\hat b$ in $R^p$ defined by $X_n^{-1}y$ on the first $n$ coordinates and 0 on the last $(p-n)$ coordinates is a least-squares solution with at least $p-n$ zero entries. So with probability one, when $p>n$, your $\lambda_{min}$ is equal to 0.
What is the smallest $\lambda$ that gives a 0 component in lasso?
The lasso is especially useful when $p>n$, i.e., more parameters than sample size. If $p>n$ and $X$ has continuous distributions, the least-squares estimate $\hat\beta^{LS}$ that minimizes $\min_{b\in
What is the smallest $\lambda$ that gives a 0 component in lasso? The lasso is especially useful when $p>n$, i.e., more parameters than sample size. If $p>n$ and $X$ has continuous distributions, the least-squares estimate $\hat\beta^{LS}$ that minimizes $\min_{b\in R^p} \|y-Xb\|^2$ is not unique (the affine subspace of least-squares solutions is exactly $\hat\beta^{LS}_0 + \ker(X)$ for any specific solution $\hat\beta^{LS}_0$ and $\ker(X)$ has dimension at least $p-n$). Also, there exists a least-squares solution with at most $n$ nonzero coordinates and at least $p-n$ zero coordinates. To see this, write $X$ by block as $[X_n|X_{p-n}]$ where $X_n$ is a square $n\times n$ matrix. If $X$ has continuous distribution then $X_n$ is invertible with probability one and $\hat b$ in $R^p$ defined by $X_n^{-1}y$ on the first $n$ coordinates and 0 on the last $(p-n)$ coordinates is a least-squares solution with at least $p-n$ zero entries. So with probability one, when $p>n$, your $\lambda_{min}$ is equal to 0.
What is the smallest $\lambda$ that gives a 0 component in lasso? The lasso is especially useful when $p>n$, i.e., more parameters than sample size. If $p>n$ and $X$ has continuous distributions, the least-squares estimate $\hat\beta^{LS}$ that minimizes $\min_{b\in