What are the pros and cons of using Mahalanobis distance instead of propensity scores in matching?
I don't think they are directly comparable, as they are trying to achieve different things. Mahalanobis distance matches based on covariate proximity, whereas the propensity score (PS) matches based on the probability of being assigned to the treatment group. As an example of where these differ, imagine you have a prognostic variable in your PS model that actually has very low predictive power. In this case you can have huge swings in this variable's value that ultimately make no difference to the patient's PS, and by extension make no difference to who they are matched with. In contrast, those big swings in that variable's value can have a huge impact on who the patient is matched with under Mahalanobis distance. In a way, I like to think of the PS as a weighted version of Mahalanobis distance, where we weight the importance of each variable according to our desired outcome (in this case, the probability of being assigned to treatment).
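A minimal numeric sketch of this contrast (Python/NumPy for illustration; the covariates and the logistic coefficients are made up, not fitted to any data):

```python
import numpy as np

# Hypothetical covariates for 3 patients: column 0 is prognostic,
# column 1 is noise (it gets a near-zero weight in the PS model below).
X = np.array([[1.0, 0.0],    # treated patient
              [1.1, 0.0],    # control 1: close on both covariates
              [1.0, 5.0]])   # control 2: far away on the noisy covariate only

# Mahalanobis distance d(x, y) = sqrt((x - y)' S^-1 (x - y)),
# using an identity covariance for simplicity.
S_inv = np.eye(2)

def mahal(x, y):
    d = x - y
    return float(np.sqrt(d @ S_inv @ d))

# Propensity score from a logistic model whose coefficient on the
# noisy covariate is tiny (illustrative numbers only).
beta = np.array([2.0, 0.01])

def ps(x):
    return 1.0 / (1.0 + np.exp(-(x @ beta)))

# Mahalanobis matching prefers control 1 by a wide margin, while the
# two controls are nearly interchangeable on the propensity score.
for control in X[1:]:
    print(mahal(X[0], control), abs(ps(X[0]) - ps(control)))
```

Big swings in the low-weight covariate dominate the Mahalanobis distance but barely move the PS, which is the point made above.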
Multiple Membership vs Crossed Random Effects
Note: this has been edited to address the issue of how to construct the model matrix for the random effects, and to add that there is now a package, lmerMultiMember, that adds multiple membership models to lme4.

I agree that this can be confusing. But before answering, I would just like to be a bit pedantic and mention that multiple membership (and nesting, and crossing) is not a property of the model. It is a property of the experimental/study design, which is then reflected in the data, which is then encapsulated by the model.

Are multiple membership models the same as cross-classified models? No, they are not. The reason why my answer that you linked to is ambiguous on this is that some people, erroneously in my opinion, use the two terms interchangeably in certain situations (more on this below), when in fact they are quite different (in my opinion).

The example you mentioned, patients in hospitals, is a very good one. The key here is to think about the lowest level of measurement, and where the repeated measures occur. If patients are the lowest level of measurement (that is, there are no repeated measures within patients), then patient will not be a grouping variable; that is, we would not fit random intercepts for it, so by definition there cannot be crossed random effects involving patient. On the other hand, if there are repeated measures within patients, then we would fit random intercepts for patients, and therefore we would have crossed random effects for patient and hospital. In the former case we would call this a model with multiple membership, but in the latter case we would call it a model with crossed random effects (in reality it will probably be partially nested and partially crossed). Some people seem to consider both to be multiple membership, and the latter to be just a special case (hence my ambiguous statement in the linked answer). I just think this confuses the situation.
So, to give a definition of multiple membership, I would say this occurs when the lowest-level units "belong" to more than one upper-level unit. Following the same example: where there are no repeated measures within patients, the lowest-level unit is the patient, and if a patient is treated in more than one hospital we have multiple membership; if there are repeated measures within patients, then the lowest-level unit is the measurement occasion, which is nested within patients, and patients are (probably partially) crossed with hospitals.

How do we fit them? In the multilevel modelling world, software such as MLwiN can fit multiple membership models "out of the box". With mixed effects models, things are not straightforward, at least with the packages I am familiar with. The problem is that the data will look something like this:

      Y PatientID HospA HospB HospC HospD HospE HospF HospG HospH
    0.1         1     1     0     0     0     0     1     0     1
    0.5         2     0     1     0     0     0     1     0     0
    2.3         3     0     0     1     0     0     1     0     0
    0.7         4     1     0     0     0     0     0     1     0
    1.0         5     0     1     0     0     0     1     0     1
    3.2         6     0     0     0     0     0     1     0     0
    2.1         7     0     0     0     0     0     0     1     0
    2.6         8     0     0     0     0     1     0     0     1

Other representations of the data are obviously possible, but I think this makes the most sense and makes what follows easier to understand. Edit: it also makes the construction of the model matrix for the random effects quite straightforward (see the edit below).

Clearly it does not make any sense to fit random intercepts for each hospital in the usual way, since each patient can belong to more than one hospital. However, we have repeated measures within hospitals, so we need to account for this somehow, since observations within hospitals are more likely to be similar to each other than to observations in other hospitals. Moreover, not only are there likely to be correlations within hospitals, but each hospital that a patient belongs to contributes to the (single) measured outcome for that patient.
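As a hypothetical sketch (Python rather than R; the patient memberships are read off the table above), the 0/1 indicator columns can be generated directly from each patient's list of hospitals:

```python
import numpy as np

# Hospital membership for each patient, read off the table above.
hospitals = list("ABCDEFGH")
memberships = {1: "AFH", 2: "BF", 3: "CF", 4: "AG",
               5: "BFH", 6: "F", 7: "G", 8: "EH"}

# One row per patient, one 0/1 indicator column per hospital.
Z = np.array([[1 if h in mem else 0 for h in hospitals]
              for mem in memberships.values()])

print(Z[0])  # patient 1 is in hospitals A, F and H
```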
I don't know if there is an agreed-upon way to handle this with mixed models, but Doug Bates and Ben Bolker have both shown how it can be done in lme4:

https://stat.ethz.ch/pipermail/r-sig-mixed-models/2011q2/006318.html
https://rstudio-pubs-static.s3.amazonaws.com/442445_4a48ad854b3e45168708cfe4f007d544.html

Edit: there is now an R package that implements the method proposed by Ben Bolker as a wrapper around lme4: https://github.com/jvparidon/lmerMultiMember (this package is not yet available on CRAN). You can look in the lmerMultiMember repository to see how multiple membership is implemented, but the idea is to:

1. Create a dummy grouping variable (HospitalID, with levels A-H using the above example).
2. Fit a model with random intercepts for the dummy. Some software (e.g., lme4) allows the model to be constructed internally without actually fitting it. We don't need it to be fitted, only to create the model matrix.
3. Construct the correct model matrix for the random effects yourself. This will be based on the HospA-HospH columns of the above example.
4. Update the model with the correct model matrix.
5. (Re)fit the updated model.

Edit: to address the question of how to construct the model matrix for the random effects. In a mixed model setting, we usually work with the general mixed model formula:

$$ y = X \beta + Zu + \epsilon $$

In the above example, we want to fit random intercepts for hospitals. The purpose of the model matrix $Z$ is to map the relevant random effects, $u$, onto the response. In the above example we have 8 hospitals, so the random effects (random intercepts) will be a vector of length 8. For simplicity, let's say that it is:

$$ u = \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ 6 \\ 7 \\ 8 \end{bmatrix} $$

Now, if we look at patient 1, they are in hospitals A, F and H. So that patient will get a contribution of 1 from hospital A, 6 from hospital F and 8 from hospital H.
We could alternatively write this as:

$$ (1 \times 1) + (0 \times 2) + (0 \times 3) + (0 \times 4) + (0 \times 5) + (1 \times 6) + (0 \times 7) + (1 \times 8) $$

We can now see that this is exactly the dot product of two vectors:

$$ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ 6 \\ 7 \\ 8 \end{bmatrix} $$

We can also observe that the row vector above is exactly the same as the corresponding row of the hospital columns in the data:

      Y PatientID HospA HospB HospC HospD HospE HospF HospG HospH
    0.1         1     1     0     0     0     0     1     0     1

Therefore each row of the model matrix is simply the corresponding row of the hospital "membership" indicators, and the full structure of $Zu$ for the above data is:

$$ Zu = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ 6 \\ 7 \\ 8 \end{bmatrix} $$
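The $Zu$ computation above can be checked numerically. This sketch uses Python/NumPy rather than R, with the indicator rows and the toy $u$ taken from the example:

```python
import numpy as np

# Hospital membership indicators (HospA-HospH) for the 8 patients above;
# each row is the Z row for that patient.
Z = np.array([
    [1, 0, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 0, 0, 1],
])

# Illustrative random intercepts for the 8 hospitals, as in the text.
u = np.arange(1, 9)

# Each patient's total random-effect contribution is the dot product
# of their membership row with u; patient 1 gets 1 + 6 + 8 = 15.
print(Z @ u)
```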
How to simplify a singular random structure when reported correlations are not near +1/-1
A good approach to this kind of problem is outlined in Bates et al (2015). But first, a bit of background.

Bates et al (2015) re-analysed several sets of experimental data where a maximal random structure was adopted. In particular, they re-analysed the dataset used by Barr et al (2013) as an example of "keeping it maximal" and found that the model was severely overfitted. In Barr et al (2013), the authors fit a model with crossed random effects and random slopes for 8 fixed effects across both grouping factors. This means 8 variance components and 28 correlations between them, for *each* grouping factor, that is, a total of 72 parameters. Bearing in mind that the data had only 56 subjects who responded to 32 items, common sense should suggest that such a model would be severely overfitted. Bates, rather diplomatically, assessed the idea that the data would support such a complex random structure as "optimistic"! However, the model actually did converge without warnings, using lme4 in R, although as noted by Bates this was rather "unfortunate", as they went on to show that it was indeed overfitted, using principal components analysis (PCA) to identify this.

More recent versions of lme4 actually use the very same PCA procedure explained below to determine whether the model has converged with a "singular fit", and produce a warning. Very often this is also accompanied by estimated correlations between the random effects of +1 or -1, and/or variance components estimated at zero; however, when the random structure is complex (typically of dimension 3 or higher), these "symptoms" can be absent.

In lme4, a Cholesky decomposition of the variance-covariance (VCV) matrix is used during estimation. If the Cholesky factor (a lower triangular matrix) contains one or more columns of zero values, then it is rank deficient, which means there is no variability in one or more of the random effects. This is equivalent to having variance components with no variability.
PCA is a dimensionality-reduction procedure, and when applied to the estimated VCV matrix of the random effects it will immediately indicate whether this matrix is of full rank. If we can reduce the dimensionality of the VCV matrix, that is, if the number of principal components that account for 100% of the variance is less than the number of columns in the VCV matrix, then we have prima facie evidence that the random effects structure is too complex to be supported by the data and can therefore be reduced.

Thus Bates suggests the following iterative procedure:

1. Apply PCA to the VCV matrix to determine whether the model is overfitted (singular).
2. Fit a "zero correlation parameter" (ZCP) model, which will identify random effects with zero, or very small, variance.
3. Remove these random effects from the model, fit the newly reduced model, and check for any other near-zero random effects. Repeat as needed.
4. Re-introduce correlations among the remaining random effects, and if a non-singular fit is obtained, use a likelihood ratio test to compare this model with the previous one. If there is still a singular fit, go back to 2.

At this point it is worth noting that lme4 now incorporates step 1 above during the fitting procedure and will produce a warning that the fit is singular. In models where the random structure is simple, such as random intercepts with a single random slope, it is usually obvious where the problem lies, and removing the random slope will usually cure the problem. It is important to note that this does not mean that there is no random slope in the population, only that the current data do not support it. However, things can be a little confusing when lme4 reports that the fit is singular but there are no correlations of +/-1 or variance components of zero. Applying the above procedure can usually result in a more parsimonious model that is not singular.
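The rank check in step 1 can be sketched numerically. This is not lme4's rePCA itself, just an illustration (Python/NumPy) of how PCA applied to a covariance matrix reveals rank deficiency, using a made-up VCV matrix for 3 random effects that actually has rank 2:

```python
import numpy as np

# A hypothetical 3x3 VCV matrix that is rank deficient: the third
# random effect is an exact linear combination of the first two.
A = np.array([[1.0, 0.3],
              [0.3, 1.0]])
L = np.vstack([np.linalg.cholesky(A), [0.5, 0.5]])  # 3 effects, rank 2
vcv = L @ L.T

# Eigenvalues of the VCV matrix are the PCA variances; count how many
# components are needed to explain (numerically) 100% of the variance.
eigvals = np.linalg.eigvalsh(vcv)[::-1]          # descending order
cumprop = np.cumsum(eigvals) / eigvals.sum()
n_components = int(np.searchsorted(cumprop, 1.0 - 1e-10) + 1)

print(vcv.shape[1], n_components)  # 3 columns, but only 2 components
```

Fewer components than columns is exactly the "singular fit" evidence described above.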
A worked example can demonstrate this. The dataset has 3 variables to be considered as fixed effects, A, B and C, and one grouping factor, group, with 10 levels. The response variable is Y, and there are 15 observations per group. The data can be downloaded from https://github.com/WRobertLong/Stackexchange/blob/master/data/singular.csv; here they are loaded into R into the dataframe dt. We begin by fitting the maximal model, as suggested by Barr et al (2013):

    > library(lme4)
    > m0 <- lmer(y ~ A * B * C + (A * B * C | group), data = dt)
    boundary (singular) fit: see ?isSingular

Note that this is a singular fit. However, if we inspect the VCV matrix we find no correlations near 1 or -1, nor any variance component near zero:

    > VarCorr(m0)
     Groups   Name        Variance Std.Dev. Corr
     group    (Intercept) 3.710561 1.9263
              A           4.054078 2.0135    0.01
              B           7.092127 2.6631   -0.01 -0.03
              C           4.867372 2.2062   -0.05 -0.02 -0.22
              A:B         0.047535 0.2180   -0.05 -0.47 -0.83 -0.03
              A:C         0.049629 0.2228   -0.24 -0.51  0.47 -0.74  0.01
              B:C         0.048732 0.2208   -0.17  0.08 -0.40 -0.77  0.50  0.44
              A:B:C       0.000569 0.0239    0.24  0.43  0.37  0.65 -0.72 -0.63 -0.86
     Residual             3.905752 1.9763
    Number of obs: 150, groups:  group, 10

Now we apply PCA using the rePCA function in lme4:

    > summary(rePCA(m0))
    $`group`
    Importance of components:
                            [,1]  [,2]  [,3]  [,4]    [,5]     [,6]       [,7] [,8]
    Standard deviation     1.406 1.069 1.014 0.968 0.02364 0.000853 0.00000322    0
    Proportion of Variance 0.389 0.225 0.202 0.184 0.00011 0.000000 0.00000000    0
    Cumulative Proportion  0.389 0.613 0.816 1.000 1.00000 1.000000 1.00000000    1

This shows that the VCV matrix has 8 columns but is rank deficient, because the first 4 principal components explain 100% of the variance. Hence the singular fit: the model is over-fitted and we can remove parts of the random structure.
So next we fit a "zero correlation parameter" model:

    > m1 <- lmer(y ~ A * B * C + (A * B * C || group), data = dt)
    boundary (singular) fit: see ?isSingular

As we can see, this is also singular; however, we can immediately see that several variance components are now very near zero:

    > VarCorr(m1)
     Groups  Name        Variance     Std.Dev.
     group   (Intercept) 3.2349037958 1.7985838
     group.1 A           0.9148149412 0.9564596
     group.2 B           0.4766785339 0.6904191
     group.3 C           1.0714133159 1.0350910
     group.4 A:B         0.0000000032 0.0000565
     group.5 A:C         0.0000000229 0.0001513
     group.6 B:C         0.0013923672 0.0373144
     group.7 A:B:C       0.0000000000 0.0000000
     Residual            4.4741626418 2.1152217

These happen to be all of the interaction terms. Moreover, running PCA again, we find again that 4 components are superfluous:

    > summary(rePCA(m1))
    $`group`
    Importance of components:
                             [,1]   [,2]   [,3]    [,4]    [,5]      [,6]      [,7] [,8]
    Standard deviation     0.8503 0.4894 0.4522 0.32641 0.01764 7.152e-05 2.672e-05    0
    Proportion of Variance 0.5676 0.1880 0.1605 0.08364 0.00024 0.000e+00 0.000e+00    0
    Cumulative Proportion  0.5676 0.7556 0.9161 0.99976 1.00000 1.000e+00 1.000e+00    1

So now we remove the interactions from the random structure:

    > m2 <- lmer(y ~ A * B * C + (A + B + C || group), data = dt)

The model now converges without warning, and PCA shows that the VCV matrix is of full rank:

    > summary(rePCA(m2))
    $`group`
    Importance of components:
                             [,1]    [,2]    [,3]    [,4]
    Standard deviation     1.5436 0.50663 0.45275 0.35898
    Proportion of Variance 0.8014 0.08633 0.06894 0.04334
    Cumulative Proportion  0.8014 0.88772 0.95666 1.00000

So we now re-introduce correlations:

    > m3 <- lmer(y ~ A * B * C + (A + B + C | group), data = dt)
    boundary (singular) fit: see ?isSingular

...and now the fit is singular again, meaning that at least one of the correlations is not needed.
We could then proceed to further models with fewer correlations, but the previous PCA indicated that 4 components were not needed, so in this instance we will settle on the model with no interactions in the random structure:

    Random effects:
     Groups  Name        Variance Std.Dev.
     group   (Intercept) 10.697   3.271
     group.1 A            0.920   0.959
     group.2 B            0.579   0.761
     group.3 C            1.152   1.073
     Residual             4.489   2.119
    Fixed effects:
                Estimate Std. Error t value
    (Intercept) -44.2911    30.3388   -1.46
    A            12.9875     2.9378    4.42
    B            13.6100     3.0910    4.40
    C            13.3305     3.1316    4.26
    A:B          -0.3998     0.2999   -1.33
    A:C          -0.2964     0.2957   -1.00
    B:C          -0.3023     0.3143   -0.96
    A:B:C         0.0349     0.0302    1.16

We can also observe from the fixed effects estimates that the interaction terms have quite large standard errors, so in this instance we will also remove those, producing the final model:

    > m4 <- lmer(y ~ A + B + C + (A + B + C || group), data = dt)
    > summary(m4)
    Random effects:
     Groups  Name        Variance Std.Dev.
     group   (Intercept) 4.794    2.189
     group.1 A           0.794    0.891
     group.2 B           0.553    0.744
     group.3 C           1.131    1.064
     Residual            4.599    2.145
    Number of obs: 150, groups:  group, 10
    Fixed effects:
                Estimate Std. Error t value
    (Intercept)  -14.000      1.868    -7.5
    A              9.512      0.301    31.6
    B             10.082      0.255    39.5
    C             10.815      0.351    30.8

I would also point out that I simulated this dataset with standard deviations of 2 for the residual error and the random intercept, 1 for all the random slopes, no correlations between the slopes, -10 for the fixed intercept, 10 for each of the fixed effects, and no interactions. So in this case, we have settled upon a model that has estimated all the parameters adequately.

References:

Bates, D., Kliegl, R., Vasishth, S. and Baayen, H., 2015. Parsimonious mixed models. arXiv preprint arXiv:1506.04967. https://arxiv.org/pdf/1506.04967.pdf

Barr, D.J., Levy, R., Scheepers, C. and Tily, H.J., 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), pp.255-278.
How to simplify a singular random structure when reported correlations are not near +1/-1
A good approach to this kind of problem is outlined in Bates et al (2015). But first a bit of background. Bates et al (2015) re-analysed several sets of experimental data where a maximal random struct
How to simplify a singular random structure when reported correlations are not near +1/-1 A good approach to this kind of problem is outlined in Bates et al (2015). But first a bit of background. Bates et al (2015) re-analysed several sets of experimental data where a maximal random structure was adopted. In particular they have re-analysed the dataset used by Barr et al (2013) that was used as an example of “keeping it maximal” and found that the model was severely overfitted. In Barr et al (2013) the authors fit a model with crossed random effects and random slopes for 8 fixed effects across both grouping factors. This means 8 variance components and 28 correlations between them, for /each/ grouping factor, that is a total of 72 parameters. Bearing in mind that the data had only 56 subjects who responded to 32 items, common sense should suggest that such a model would be severely overfitted. Bates, rather diplomatically assessed the idea that the data would support such a complex random structurel as "optimistic" ! However the model actually did converge without warnings, using lme4 in R, although as noted by Bates this was rather "unfortunate", as they went on to show that it was indeed overfitted, and they used principal components analysis to identify this. More recent versions of lme4 actually use very same PCA procedure explained below to determine whether the model has converged with a “singular fit” and produces a warning. Very often this is also accompanied by estimated correlations between the random effects of +1 or -1, and/or variance components estimated at zero, however when the random structure is complex (typically of dimension 3 or higher) then these "symptoms" can be absent. In lme4, a Cholesky decomposition of the variance covariance (VCV) matrix is used during estimated. 
If the Cholesky factor (a lower triangular matrix) contains one or more columns of zero values, then it is rank deficient, which means there is no variability in one or more of the random effects. This is equivalent to having variance components with no variability. PCA is a dimensionality reduction procedure, and when applied to the estimated VCV matrix of random effects, will immediately indicate whether this matrix is of full rank. If we can reduce the dimensionality of the VCV matrix, that is, if the number of principal components that account for 100% of the variance is less than the number of columns in the VCV matrix, then we have prima facie evidence that the random effects structure is too complex to be supported by the data and can therefore be reduced. Thus Bates suggests the following iterative procedure: Apply PCA to the VCV matrix to determine whether the model is overfitted (singular). Fit a “zero correlation parameter” (ZCP) which will identify random effects with zero, or very small, variance Remove these random effects from the model and fit a newly reduced model and check for any other near-zero random effects. Repeat as needed. Re-introduce correlations among the remaining random effects, and if a non-singular fit is obtained use a likelihood ratio test to compare this model with the previous one. If there is still a singular fit then go back to 2. At this point it is worth noting that lme4 now incorporates step 1 above during the fitting procedure and will produce a warning that the fit is singular. In models where the random structure is simple, such as random intercepts with a single random slope it is usually obvious where the problem lies and removing the random slope will usually cure the problem. It is important to note that this does not mean that there is no random slope in the population, only that the current data do not support it. 
However, things can be a little confusing when lme4 reports that the fit is singular, but there are no correlations of +/- 1 or variance components of zero. But applying the above procedure can usually result in a more parsimonious model that is not singular. A worked example can demonstrate this: This dataset has 3 variables to be considered as fixed effects: A, B and C, and one grouping factor group with 10 levels. The response variable is Y and there are 15 observations per group. We begin by fitting the maximal model, as suggested by Barr et al (2013). > library(lme4) The data can be downloaded from: https://github.com/WRobertLong/Stackexchange/blob/master/data/singular.csv Here they are loaded into R into the dataframe dt. > m0 <- lmer(y ~ A * B * C + (A * B * C | group), data = dt) boundary (singular) fit: see ?isSingular Note that this is a singular fit. However, if we inspect the VCV matrix we find no correlations near 1 or -1, nor any variance component near zeroL > VarCorr(m0) Groups Name Variance Std.Dev. Corr group (Intercept) 3.710561 1.9263 A 4.054078 2.0135 0.01 B 7.092127 2.6631 -0.01 -0.03 C 4.867372 2.2062 -0.05 -0.02 -0.22 A:B 0.047535 0.2180 -0.05 -0.47 -0.83 -0.03 A:C 0.049629 0.2228 -0.24 -0.51 0.47 -0.74 0.01 B:C 0.048732 0.2208 -0.17 0.08 -0.40 -0.77 0.50 0.44 A:B:C 0.000569 0.0239 0.24 0.43 0.37 0.65 -0.72 -0.63 -0.86 Residual 3.905752 1.9763 Number of obs: 150, groups: group, 10 Now we apply PCA using the rePCA function in lme4: > summary(rePCA(m0)) $`group` Importance of components: [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] Standard deviation 1.406 1.069 1.014 0.968 0.02364 0.000853 0.00000322 0 Proportion of Variance 0.389 0.225 0.202 0.184 0.00011 0.000000 0.00000000 0 Cumulative Proportion 0.389 0.613 0.816 1.000 1.00000 1.000000 1.00000000 1 This shows that the VCV matrix has 8 columns, but is rank deficent, because the first 4 principal components explain 100% of the variance. 
Hence the singular fit: the model is over-fitted and we can remove parts of the random structure. So next we fit a "zero correlation parameter" model:

> m1 <- lmer(y ~ A * B * C + (A * B * C || group), data = dt)
boundary (singular) fit: see ?isSingular

As we can see, this is also singular. However, we can immediately see that several variance components are now very near zero:

> VarCorr(m1)
 Groups  Name        Variance     Std.Dev.
 group   (Intercept) 3.2349037958 1.7985838
 group.1 A           0.9148149412 0.9564596
 group.2 B           0.4766785339 0.6904191
 group.3 C           1.0714133159 1.0350910
 group.4 A:B         0.0000000032 0.0000565
 group.5 A:C         0.0000000229 0.0001513
 group.6 B:C         0.0013923672 0.0373144
 group.7 A:B:C       0.0000000000 0.0000000
 Residual            4.4741626418 2.1152217

These happen to be all of the interaction terms. Moreover, running PCA again, we find again that 4 components are superfluous:

> summary(rePCA(m1))
$`group`
Importance of components:
                         [,1]   [,2]   [,3]    [,4]    [,5]      [,6]      [,7] [,8]
Standard deviation     0.8503 0.4894 0.4522 0.32641 0.01764 7.152e-05 2.672e-05    0
Proportion of Variance 0.5676 0.1880 0.1605 0.08364 0.00024 0.000e+00 0.000e+00    0
Cumulative Proportion  0.5676 0.7556 0.9161 0.99976 1.00000 1.000e+00 1.000e+00    1

So now we remove the interactions from the random structure:

> m2 <- lmer(y ~ A * B * C + (A + B + C || group), data = dt)

The model now converges without warning, and PCA shows that the VCV matrix is of full rank:

> summary(rePCA(m2))
$`group`
Importance of components:
                         [,1]    [,2]    [,3]    [,4]
Standard deviation     1.5436 0.50663 0.45275 0.35898
Proportion of Variance 0.8014 0.08633 0.06894 0.04334
Cumulative Proportion  0.8014 0.88772 0.95666 1.00000

So we now re-introduce correlations:

> m3 <- lmer(y ~ A * B * C + (A + B + C | group), data = dt)
boundary (singular) fit: see ?isSingular

...and now the fit is singular again, meaning that at least one of the correlations is not needed.
We could then proceed to further models with fewer correlations, but the previous PCA indicated that 4 components were not needed, so in this instance we will settle on the model with no interactions in the random structure:

Random effects:
 Groups  Name        Variance Std.Dev.
 group   (Intercept) 10.697   3.271
 group.1 A            0.920   0.959
 group.2 B            0.579   0.761
 group.3 C            1.152   1.073
 Residual             4.489   2.119

Fixed effects:
            Estimate Std. Error t value
(Intercept) -44.2911    30.3388   -1.46
A            12.9875     2.9378    4.42
B            13.6100     3.0910    4.40
C            13.3305     3.1316    4.26
A:B          -0.3998     0.2999   -1.33
A:C          -0.2964     0.2957   -1.00
B:C          -0.3023     0.3143   -0.96
A:B:C         0.0349     0.0302    1.16

We can also observe from the fixed effects estimates that the interaction terms have quite large standard errors, so in this instance we will also remove those, producing the final model:

> m4 <- lmer(y ~ A + B + C + (A + B + C || group), data = dt)
> summary(m4)
Random effects:
 Groups  Name        Variance Std.Dev.
 group   (Intercept) 4.794    2.189
 group.1 A           0.794    0.891
 group.2 B           0.553    0.744
 group.3 C           1.131    1.064
 Residual            4.599    2.145
Number of obs: 150, groups:  group, 10

Fixed effects:
            Estimate Std. Error t value
(Intercept)  -14.000      1.868    -7.5
A              9.512      0.301    31.6
B             10.082      0.255    39.5
C             10.815      0.351    30.8

I would also point out that I simulated this dataset with standard deviations of 2 for the residual error and random intercept, 1 for all the random slopes, no correlations between the slopes, -10 for the fixed intercept and 10 for each of the fixed effects, and no interactions. So in this case, we have settled upon a model that has estimated all the parameters adequately.

References:
Bates, D., Kliegl, R., Vasishth, S. and Baayen, H., 2015. Parsimonious mixed models. arXiv preprint arXiv:1506.04967. https://arxiv.org/pdf/1506.04967.pdf
Barr, D.J., Levy, R., Scheepers, C. and Tily, H.J., 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), pp.255-278.
25,404
Divergent transitions in Stan
A divergent transition in Stan tells you that the region of the posterior distribution around that divergent transition is geometrically difficult to explore. For example, here is a quote from the manual:

"The primary cause of divergent transitions in Euclidean HMC (other than bugs in the code) is highly varying posterior curvature, for which small step sizes are too inefficient in some regions and diverge in other regions. If the step size is too small, the sampler becomes inefficient and halts before making a U-turn (hits the maximum tree depth in NUTS); if the step size is too large, the Hamiltonian simulation diverges."

https://mc-stan.org/docs/2_19/reference-manual/divergent-transitions.html

Basically it means that the Hamiltonian trajectory that Stan proposed is different from what it should be following. So the expected value of, say, the log density that it predicted it should have at a point in the parameter space is different from what it actually is at that point. When Stan detects this problem, it knows something has gone wrong, rejects that transition and basically "tries again". This is demonstrated graphically here: https://dev.to/martinmodrak/taming-divergences-in-stan-models-5762

The reason that you have multiple divergent transitions is that, since Stan has rejected that particular transition, it will try new ones, and those may or may not result in a divergence. Now, the reason that you can't just stop the sampling when encountering the first divergence is that divergences are not always a problem. For example, if you fit a model with, say, 10 out of 10,000 transitions diverging and they are randomly distributed across the parameter space, then likely there isn't a problem. If, however, you end up with these 10 divergent transitions concentrated in a certain part of parameter space (or you have a lot more of them), then it's likely that your model parameters are not estimated accurately by Stan. In that case you should consider reformulating your model.
Basically, divergences are a guide to help make your model better, but the existence of a single one doesn't have to be fatal. For example, page 46 of Betancourt's Conceptual Introduction to Hamiltonian Monte Carlo (https://arxiv.org/pdf/1701.02434.pdf) shows how divergences can be localized to one part of the parameter space, and thus ignoring them (or stopping when you get to them) would at best bias your inference (because you're not including that challenging region).
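To make the "trajectory differs from what it should be" idea concrete, here is a Python sketch I am adding (not Stan's actual implementation, and a deliberately minimal 1D toy) of a leapfrog integrator on a standard-normal target. Exact Hamiltonian dynamics conserve H(q, p) = U(q) + p^2/2; a divergence is flagged when the numerical energy error becomes large, which is what happens when the step size is too coarse for the local curvature.

```python
def leapfrog_energy_error(q, p, step, n_steps):
    """Leapfrog integration for a standard normal target:
    U(q) = q^2 / 2, so grad U(q) = q. Returns |H_end - H_start|."""
    def H(q, p):
        return q * q / 2 + p * p / 2

    h0 = H(q, p)
    for _ in range(n_steps):
        p -= step / 2 * q   # half step for momentum
        q += step * p       # full step for position
        p -= step / 2 * q   # half step for momentum
    return abs(H(q, p) - h0)

small = leapfrog_energy_error(1.0, 1.0, step=0.1, n_steps=50)
large = leapfrog_energy_error(1.0, 1.0, step=2.5, n_steps=50)
print(small, large)  # the coarse step size accumulates a huge energy error
```

With a small step size the energy error stays tiny (the trajectory follows the level set it should); past the stability limit the error explodes, which is the numerical signature Stan's divergence check looks for.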
25,405
Why do lots of people want to transform skewed data into normal distributed data for machine learning applications?
As @user2974951 says in a comment, it may be superstition that a Normal distribution is somehow better. Perhaps they have the mistaken idea that since Normal data is the result of many additive errors, if they force their data to be Normal, they can then treat the resulting numbers as having additive error. Or the first stats technique they learned was OLS regression and something about Normal was an assumption...

Normality is in general not a requirement. But whether it's helpful depends on what the model does with the data. For example:

- Financial data is often lognormal, i.e. it has a multiplicative (percentage) error.
- Variational Autoencoders use a Normal distribution at the bottleneck to force smoothness and simplicity.
- Sigmoid functions work most naturally with Normal data.
- Mixture models often use a mixture of Normals. (If you can assume it's Normal, you only need two parameters to completely define it, and those parameters are fairly intuitive in their meaning.)

It could also be that we want a unimodal, symmetric distribution for our modeling, and the Normal is that. (And transformations to "Normal" are often not strictly Normal, just more symmetrical.) Normality may simplify some math for you, and it may align with your conception of the process generating your data: most of your data is in the middle with relatively rarer low or high values, which are of interest. But my impression is that it's Cargo Cult in nature.
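As an illustration of the kind of transform people apply (a sketch I am adding, using only Python's standard library rather than any particular ML toolkit): a log transform takes right-skewed lognormal data to exactly Normal data, and the sample skewness drops accordingly.

```python
import math
import random

def skewness(xs):
    """Sample skewness: the third standardized moment."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)

random.seed(0)
# Lognormal sample: exp(N(0, 1)) is strongly right-skewed
raw = [random.lognormvariate(0.0, 1.0) for _ in range(20_000)]
logged = [math.log(x) for x in raw]  # back to N(0, 1), symmetric

print(skewness(raw), skewness(logged))
```

The raw sample has large positive skewness, while the logged sample's skewness is near zero. Whether the downstream model actually benefits from this is the real question, per the answer above.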
25,406
Why do lots of people want to transform skewed data into normal distributed data for machine learning applications?
The answer above really nails it. I'd just like to add that it is worth separating the idea of wanting "normality" vs. wanting to scale all features to a similar range (even if they have different distributions). Both of these transformations have their pros and cons, and sometimes they are necessary to avoid numerical quirks in the optimization step or to avoid systemic biases in these algorithms. Also, it depends on what type of "machine learning" you're referring to (e.g., SVMs, tree-based models, neural nets, etc.), as these all behave differently and may have different numerical issues. As mentioned above, there are benefits in certain situations, but the idea that normalizing skewed data will lead to better performance is not a bullet-proof strategy. In general, justifying each "pre-processing" or "data manipulation/transformation" step tends to be the more robust alternative.
25,407
Difference between pooling and subsampling
I don't think there is any difference: the pooling operation performs sub-sampling of the image. You can find that people refer to sub-sampling as an operation performed by a pooling layer, and in fact the paper describes the sub-sampling layers as performing a form of pooling. You can check Yann LeCun's paper, Gradient-Based Learning Applied to Document Recognition.
25,408
Difference between pooling and subsampling
There are different types of pooling, including MaxPooling and AveragePooling. MaxPooling captures the maximum pixel value in a grid (say z x z) of the image and writes that to the output image. AveragePooling likewise calculates the average of the grid and writes that to the output image. On the other hand, subsampling chooses a single pixel in the grid and represents the whole grid by that pixel value in the output image. The output images from these operations might look similar (and might cause no major change in accuracy in the neural network); however, they are not exactly the same.
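A minimal sketch of the distinction, in Python with plain lists (an illustration I am adding, not any framework's API): 2x2 max pooling, average pooling, and subsampling (here, keeping the top-left pixel of each grid) applied to the same 4x4 "image".

```python
def blocks(img, z):
    """Yield the z x z grids of a 2D list whose sides are multiples of z."""
    for i in range(0, len(img), z):
        for j in range(0, len(img[0]), z):
            yield [img[i + a][j + b] for a in range(z) for b in range(z)]

def pool(img, z, reduce_fn):
    """Apply reduce_fn to each z x z grid and reassemble the output image."""
    n = len(img) // z
    vals = [reduce_fn(b) for b in blocks(img, z)]
    return [vals[r * n:(r + 1) * n] for r in range(n)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]

max_pooled = pool(img, 2, max)                        # [[6, 8], [14, 16]]
avg_pooled = pool(img, 2, lambda b: sum(b) / len(b))  # [[3.5, 5.5], [11.5, 13.5]]
subsampled = pool(img, 2, lambda b: b[0])             # [[1, 3], [9, 11]]
print(max_pooled, avg_pooled, subsampled)
```

All three halve each spatial dimension, but they propagate different pixel values, which is why the outputs "might look similar" yet are not identical.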
25,409
Can we reject a null hypothesis with confidence intervals produced via sampling rather than the null hypothesis?
A simple problem, by way of example, is given by testing for the mean of a normal population with known variance $\sigma^2=1$. Then, a pivot (a quantity whose distribution does not depend on the parameter) is given by $\bar{X}-\mu\sim N(0,1/n)$. Critical values $z_{\alpha/2}$ satisfy, in this symmetric case, $\Phi(-z_{\alpha/2})=\alpha/2$ and $\Phi(z_{\alpha/2})=1-\alpha/2$. Hence, \begin{eqnarray*} 1-\alpha&=&\Pr\{(\bar{X}-\mu)/(1/\sqrt{n})\in(-z_{\alpha/2},z_{\alpha/2})\}\\ &=&\Pr\{-z_{\alpha/2}\leqslant(\bar{X}-\mu)\sqrt{n}\leqslant z_{\alpha/2}\}\\ &=&\Pr\{z_{\alpha/2}\geqslant(\mu-\bar{X})\sqrt{n}\geqslant -z_{\alpha/2}\}\\ &=&\Pr\{-z_{\alpha/2}/\sqrt{n}\leqslant\mu-\bar{X}\leqslant z_{\alpha/2}/\sqrt{n}\}\\ &=&\Pr\{\bar{X}-z_{\alpha/2}/\sqrt{n}\leqslant\mu\leqslant \bar{X}+z_{\alpha/2}/\sqrt{n}\}\\ &=&\Pr\{(\bar{X}-z_{\alpha/2}/\sqrt{n},\bar{X}+z_{\alpha/2}/\sqrt{n})\ni\mu\} \end{eqnarray*} so that $$ (\bar{X}-z_{\alpha/2}/\sqrt{n},\bar{X}+z_{\alpha/2}/\sqrt{n})$$ is a confidence interval of level $1-\alpha$. At the same time, the event in the first line of the display is precisely the event that the null hypothesis is not rejected for this $\mu$. Since the rest just contains equivalent reformulations, the confidence interval indeed contains all $\mu$ for which the null is not rejected, and no reference to "under the null" is needed. Here is a plot analogous to Martijn's (+1) visualization, aiming to show what is known as the duality between confidence intervals and tests. $C$ denotes the confidence interval belonging to some $\bar{x}^*$ and $A(\mu_0)$ the acceptance region belonging to some hypothesis $\mu=\mu_0$.
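The duality can also be checked numerically. The following Python sketch (an illustration I am adding, not from the original answer) simulates the known-variance z-test at level $\alpha = 0.05$ and verifies that, for every simulated sample, the hypothesized $\mu_0$ lies inside the confidence interval exactly when the z-test fails to reject.

```python
import random

Z = 1.959963984540054   # z_{alpha/2} for alpha = 0.05
MU0, N = 3.0, 25        # hypothesized mean and sample size (sigma = 1)

random.seed(42)
trials = 2000
agree = 0
for _ in range(trials):
    xbar = sum(random.gauss(MU0, 1.0) for _ in range(N)) / N
    reject = abs((xbar - MU0) * N ** 0.5) > Z       # z-test decision
    half = Z / N ** 0.5
    in_ci = xbar - half <= MU0 <= xbar + half       # CI decision
    agree += (reject == (not in_ci))
print(agree == trials)  # True: the two decisions coincide on every draw
```

The agreement is not a simulation accident: "reject" and "outside the CI" are the same inequality $|\bar{x}-\mu_0|\sqrt{n} > z_{\alpha/2}$ rearranged, which is exactly the algebra in the display above.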
25,410
Can we reject a null hypothesis with confidence intervals produced via sampling rather than the null hypothesis?
Yes, you can replace a hypothesis test (comparing the sample with a hypothetical distribution of test outcomes) by a comparison with a confidence interval calculated from the sample. But indirectly a confidence interval is already a sort of hypothesis test, namely: you might see the confidence interval as being constructed as the range of values for which an $\alpha$-level hypothesis test would succeed, while outside the range an $\alpha$-level hypothesis test would fail. The consequence of constructing such a range is that the range only fails a fraction $\alpha$ of the time.

Example
I am using an image from an answer to the question below: Confidence Intervals: how to formally deal with $P(L(\textbf{X}) \leq \theta, U(\textbf{X})\geq\theta) = 1-\alpha$
It is a variation of a graph from Clopper-Pearson. Imagine the case of 100 Bernoulli trials where the probability of success is $\theta$ and we observe the total number of successes $X$. Note that:

- In the vertical direction you see hypothesis testing. E.g. for a given hypothesized value $\theta$ you reject the hypothesis if the measured $X$ is above or below the red or green dotted lines.
- In the horizontal direction you see Clopper-Pearson confidence intervals. If for any given observation $X$ you use these confidence intervals, then you will be wrong only 5% of the time (because you will only observe an $X$ on which you base a "wrong" interval 5% of the time).
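The "range of $\theta$ values the test does not reject" construction can be sketched directly. The following pure-Python grid search (an illustration I am adding, not a production implementation) runs the exact binomial tail test at level $\alpha = 0.05$ for each candidate $\theta$ and keeps $\theta$ if neither tail probability drops below $\alpha/2$; the kept set is, up to grid resolution, the Clopper-Pearson interval.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def ci_by_test_inversion(x, n, alpha=0.05, grid=2000):
    """Keep every theta that the two one-sided exact tests do not reject."""
    kept = []
    for j in range(1, grid):
        theta = j / grid
        upper_tail = 1 - binom_cdf(x - 1, n, theta)  # P(X >= x)
        lower_tail = binom_cdf(x, n, theta)          # P(X <= x)
        if upper_tail >= alpha / 2 and lower_tail >= alpha / 2:
            kept.append(theta)
    return min(kept), max(kept)

lo, hi = ci_by_test_inversion(x=30, n=100)
print(lo, hi)  # roughly (0.21, 0.40), bracketing the point estimate 0.30
```

This is the horizontal/vertical duality in the plot: reading off, for a fixed observed $X$, which hypothesized $\theta$ values fall between the dotted rejection lines.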
25,411
If $E(|X|)$ is finite, is $\lim_{n\to\infty} nP(|X|>n)=0$?
Look at the sequence of random variables $\{Y_n\}$ defined by retaining only large values of $|X|$: $$Y_n:=|X|I(|X|>n).$$ It's clear that $Y_n\ge nI(|X|>n)$, so $$E(Y_n)\ge nP(|X|>n).\tag1$$ Note that $Y_n\to0$ and $|Y_n|\le |X|$ for each $n$. So the LHS of (1) tends to zero by dominated convergence, and hence so does the nonnegative right-hand side $nP(|X|>n)$.
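A quick numeric check of the result (a Python sketch I am adding; the two distributions are my choice, not from the answer): for an Exponential(1) variable, which has finite mean, $nP(X>n) = n e^{-n} \to 0$; for a Pareto-type variable with tail $P(X>x) = 1/x$ for $x \ge 1$, which has infinite mean, $nP(X>n) = 1$ for all $n \ge 1$, so the conclusion fails exactly when the hypothesis does.

```python
import math

def tail_exp(n):
    """Exponential(1): P(X > n) = e^{-n}; E[X] = 1 < infinity."""
    return math.exp(-n)

def tail_pareto(n):
    """Pareto-type tail P(X > x) = 1/x for x >= 1; E[X] = infinity."""
    return 1.0 / n

for n in (1, 10, 50):
    print(n, n * tail_exp(n), n * tail_pareto(n))
# n * P(X > n) shrinks toward 0 in the finite-mean case but stays at 1 otherwise
```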
25,412
If $E(|X|)$ is finite, is $\lim_{n\to\infty} nP(|X|>n)=0$?
I can provide an answer for a continuous random variable (there is surely a more general answer). Let $Y=|X|$:
$$\mathbb{E}[Y]=\int_0^\infty yf_Y(y)\,\text{d}y=\int_0^n yf_Y(y)\,\text{d}y+\int_n^\infty yf_Y(y)\,\text{d}y\ge\int_0^n yf_Y(y)\,\text{d}y+n\int_n^\infty f_Y(y)\,\text{d}y=\int_0^n yf_Y(y)\,\text{d}y+n\left(F_Y(\infty)-F_Y(n)\right)=\int_0^n yf_Y(y)\,\text{d}y+n\left(1-F_Y(n)\right)=\int_0^n yf_Y(y)\,\text{d}y+nP(Y\gt n)$$
Thus
$$0\leq nP(Y\gt n)\le\mathbb{E}[Y]-\int_0^n yf_Y(y)\,\text{d}y$$
Now, since by hypothesis $\mathbb{E}[Y]$ is finite, we have that
$$\lim_{n\to \infty}\left(\mathbb{E}[Y]-\int_0^n yf_Y(y)\,\text{d}y\right)=\mathbb{E}[Y]-\lim_{n\to \infty}\int_0^n yf_Y(y)\,\text{d}y=\mathbb{E}[Y]-\mathbb{E}[Y]=0$$
Then
$$\lim_{n\to \infty}nP(Y\gt n)=0$$
by the sandwich theorem.
25,413
If $E(|X|)$ is finite, is $\lim_{n\to\infty} nP(|X|>n)=0$?
$E\left | X \right |< \infty \Leftrightarrow E\left | X \right |\mathbb{I}_{\left | X \right |>n}\rightarrow 0$ (uniform integrability of a single integrable variable)
$E\left | X \right |=E\left | X \right |\mathbb{I}_{\left | X \right |>n}+E\left | X \right |\mathbb{I}_{\left | X \right |\leq n}$
$E\left | X \right |\mathbb{I}_{\left | X \right |>n}\leq E\left | X \right |< \infty $
$E\left | X \right |\mathbb{I}_{\left | X \right |>n}\geq nE\,\mathbb{I}_{\left | X \right |>n}=nP\left ( \left | X \right |>n\right )$
$E\left | X \right |\mathbb{I}_{\left | X \right |>n} \rightarrow 0 \Rightarrow nP\left ( \left | X \right |>n\right )\rightarrow 0$, i.e. $ \underset{n\rightarrow \infty}{\lim}\, nP\left ( \left | X \right |>n\right )=0$
25,414
ARIMA forecast straight line?
Often a flat forecast is in fact better than non-trivial ARIMA, just to mention this. However, your data certainly aren't such a case.

One problem is that you haven't told R that your data are a time series with a frequency of 365. In this case, R can't "on its own" decide that there is seasonality. After all, a long string of data could have all kinds of seasonalities, e.g., with cycle lengths of 7 (daily data with weekly seasonality), 365.25 (daily data with yearly seasonality), 30 (daily data with monthly seasonality), 60, 3600, 24 (I'll let you guess), 11 (yearly sunspot data), etc. You can't just "let the algorithm decide". Always specify the frequency parameter if your time series might be seasonal.

And even if you have specified the frequency, ARIMA has major problems in detecting seasonality with few long cycles in the data - even if the seasonality is "obvious" for a human.

library(forecast)
set.seed(1)
temps <- 20+10*sin(2*pi*(1:856)/365)+arima.sim(list(ar=0.8),856,sd=2)
plot(forecast(auto.arima(temps),h=365))
plot(forecast(auto.arima(ts(temps,frequency=365)),h=365))

The last two commands actually produce the very same plot, because auto.arima() simply doesn't detect the seasonality. The solution is to force auto.arima() to use a seasonal model, by specifying D=1:

plot(forecast(auto.arima(ts(temps,frequency=365),D=1),h=365))

See also this earlier question. This hopefully addresses one of your questions.

Your other question is, to be honest, unclear to me. How do you expect to get a different forecast each time you run your modeling (assuming you re-run it on the same data each time)? ARIMA does not involve any randomization; it is deterministic. However, you do already get predictive distributions and prediction intervals - see the fan plots for the forecasts. Maybe this earlier question is helpful: How to incorporate uncertainty of actual historical data into forecast prediction intervals?
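For readers without R, the gap between a flat forecast and a seasonality-aware one can be sketched with standard-library Python on a series like the one above (here with iid noise instead of the AR(1) noise in the R code, so this is only an illustrative approximation, not a reproduction of auto.arima()):

```python
import math, random

random.seed(1)
# Synthetic daily temperatures with a yearly cycle, loosely mirroring the R example.
n = 856
temps = [20 + 10 * math.sin(2 * math.pi * t / 365) + random.gauss(0, 2)
         for t in range(1, n + 1)]
train, test = temps[:730], temps[730:]  # two full years for training, rest held out

# "Flat" forecast: the training mean, ignoring seasonality entirely.
mean_fc = sum(train) / len(train)
mae_flat = sum(abs(y - mean_fc) for y in test) / len(test)

# Seasonal-naive forecast: the value observed 365 days earlier.
mae_snaive = sum(abs(temps[730 + i] - temps[730 + i - 365])
                 for i in range(len(test))) / len(test)
```

The seasonal-naive forecast (last year's value) beats the flat training-mean forecast by a wide margin on this series, which is exactly the structure that declaring the frequency lets a seasonal model exploit.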
25,415
Difference between Naive Bayes vs Recurrent Neural Network (LSTM)
On the difference between Naive Bayes and Recurrent Neural Networks

First of all, let's start off by saying they're both classifiers, meant to solve a problem called statistical classification. This means you have lots of data (in your case articles) split into two or more categories (in your case positive/negative sentiment). The classifier's goal is to learn how the articles are split into those two categories and then be able to classify new articles on its own.

Two models that can solve this task are the Naive Bayes classifier and Recurrent Neural Networks.

Naive Bayes

In order to use this classifier for text analysis, you usually pre-process the text (bag of words + tf-idf) so that you can transform it into vectors containing numerical values. These vectors serve as input to the NB model. This classifier assumes that your features (the attributes of the vectors we produced) are independent of one another. When this assumption holds, it is a very strong classifier that requires very little data to work.

Recurrent Neural Networks

These are networks that read your data sequentially, while keeping a "memory" of what they have read previously. They are really useful when dealing with text because of the correlation words have between them.

The two models (NB and RNN) differ greatly in the way they attempt to perform this classification:

NB belongs to a category of models called generative. This means that during training (the procedure where the algorithm learns to classify), NB tries to find out how the data was generated in the first place. It essentially tries to figure out the underlying distribution that produced the examples you input to the model.

On the other hand, RNN is a discriminative model. It tries to figure out what the differences are between your positive and negative examples, in order to perform the classification.

I suggest querying "discriminative vs generative algorithms" if you want to learn more.

While NB has been popular for decades, RNNs have only begun to find applications over the past decade because of their need for high computational resources. RNNs are most of the time trained on dedicated GPUs (which compute a lot faster than CPUs).

tl;dr: they are two very different ways of solving the same task.

Libraries

Because the two algorithms are very popular, they have implementations in many libraries. I'll name a few python libraries since you mentioned it:

For NB:

scikit-learn: a very easy to use python library containing implementations of several machine learning algorithms, including Naive Bayes.
NaiveBayes: haven't used it, but I guess it's relevant judging by the name.

Because RNNs are considered a deep learning algorithm, they have implementations in all major deep learning libraries:

TensorFlow: the most popular DL library at the moment. Published and maintained by Google.
theano: similar library to tf, older, published by the University of Montreal.
keras: wrapper for tf and theano. Much easier. What I suggest you use if you ever want to implement RNNs.
caffe: DL library published by UC Berkeley. Has a python API.

All the above offer GPU support if you have a CUDA-enabled NVIDIA GPU.

Python's NLTK is a library mainly for Natural Language Processing (stemming, tokenizing, part-of-speech tagging). While it has a sentiment package, it's not the focus point. I'm pretty sure NLTK uses NB for sentiment analysis.
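To make the "generative" description of NB concrete, here is a from-scratch toy sketch of multinomial Naive Bayes with add-one (Laplace) smoothing. The four training documents are invented purely for illustration; real use would go through scikit-learn or NLTK:

```python
from collections import Counter
import math

# Toy training set: (document, label) pairs -- purely illustrative data.
train = [
    ("good great fun", "pos"), ("great acting good plot", "pos"),
    ("boring bad plot", "neg"), ("bad awful boring", "neg"),
]

def fit(docs):
    """Estimate word counts per class, class counts, and the vocabulary."""
    word_counts = {"pos": Counter(), "neg": Counter()}
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        words = text.split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict(text, word_counts, label_counts, vocab):
    """Pick the class maximizing log P(label) + sum log P(word | label)."""
    total_docs = sum(label_counts.values())
    best_label, best_score = None, -math.inf
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)  # add-one smoothing
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, lc, vocab = fit(train)
label = predict("good fun plot", wc, lc, vocab)
```

The "generative" part is visible in `fit`: the model estimates how each class generates words, and classification just asks which class makes the new document most probable.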
25,416
What do you do if your degrees of freedom goes past the end of your tables?
F tables: The easiest way of all -- if you can -- is to use a statistics package or other program to give you the critical value. So for example, in R, we can do this:

qf(.95,5,6744)
[1] 2.215425

(but you can as easily calculate an exact p-value for your F).

Usually F tables come with an "infinity" degrees of freedom at the end of the table, but a few don't. If you have a really large d.f. (for example, 6744 is really large), you can use the infinity ($\infty$) entry in its place. So you might have tables for $\nu_1=5$ that give 120 df and $\infty$ df:

 ...     5    ...
  ⁞
 120   2.2899
  ∞    2.2141

The $\infty$ d.f. row there will work for any really large $\nu_2$ (denominator d.f.). If we use that we have 2.2141 instead of the exact 2.2154, but that's not too bad.

If you don't have an infinity degrees of freedom entry, you can work one out from a chi-square table, using the critical value for the numerator d.f. divided by those d.f. So for example, for an $F_{5,\infty}$ critical value, take a $\chi^2_5$ critical value and divide by $5$. The 5% critical value for a $\chi^2_5$ is $11.0705$. If we divide by $5$ that's $2.2141$, which is the $\infty$ row from the table above.

If your degrees of freedom are a bit too small to use the "infinity" entry (but still a lot bigger than 120 or whatever your table goes up to), you can use inverse interpolation between the highest finite d.f. and the infinity entry. Let's say we want to calculate a critical value for $F_{5, 674}$ d.f.

   F      df    120/df
------   ----   -------
2.2899    120   1
  C       674   0.17804
2.2141     ∞    0

Then we compute the unknown critical value $C$ as
$C \approx 2.2141 + (2.2899-2.2141) \times (0.17804-0)/(1-0) \approx 2.2276$
(The exact value is $2.2274$, so that works pretty well.) More details on interpolation and inverse interpolation are given at that linked post.

Chi-squared tables: If your chi-squared d.f. are really large you can use normal tables to get an approximation. For large d.f. $\nu$ the chi-squared distribution is approximately normal with mean $\nu$ and variance $2\nu$. To get the upper 5% value, take the one-tailed 5% critical value for a standard normal ($1.645$), multiply by $\sqrt{2\nu}$ and add $\nu$. For example, imagine we needed an upper 5% critical value for a $\chi^2_{6744}$. We would calculate $1.645 \times \sqrt{2 \times 6744} + 6744 \approx 6935$. The exact answer (to $5$ significant figures) is $6936.2$.

If the degrees of freedom are smaller, we can use the fact that if $X$ is $\chi^2_\nu$ then $\sqrt{2X}\,\dot\sim\, N(\sqrt{2\nu-1},1)$. So for example, if we had $674$ d.f. we might use this approximation. The exact upper 5% critical value for a chi-square with 674 d.f. is (to 5 figures) $735.51$. With this approximation, we would calculate as follows: take the upper (one tailed) 5% critical value for a standard normal (1.645), add $\sqrt{2\nu-1}$, square the total and divide by 2. In this case: $(1.645+\sqrt{2\times 674-1})^2/2 \approx 735.2$. As we see, this is quite close.

For considerably smaller degrees of freedom, the Wilson-Hilferty transformation could be used -- it works well down to only a few degrees of freedom -- but the tables should cover that. This approximation is that $(\frac{X}{\nu})^{\frac13}\,\dot\sim\, N(1-\frac{2}{9\nu},\frac{2}{9\nu})$.
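The worked arithmetic above is easy to script. A small Python check of the inverse interpolation for $F_{5,674}$ and of the two chi-squared approximations, with the numbers taken from the examples in the answer:

```python
import math

# Inverse interpolation on the 120/df scale between the df=120 row (value 1)
# and the infinity row (value 0) of an F table with nu1 = 5.
f_120, f_inf = 2.2899, 2.2141
t = 120 / 674                         # position of df=674 on the 120/df scale
f_674 = f_inf + (f_120 - f_inf) * t   # interpolated critical value

# Normal approximation to a chi-square upper 5% critical value, df = 6744:
# z * sqrt(2*nu) + nu, with z = 1.645.
chi2_6744 = 1.645 * math.sqrt(2 * 6744) + 6744

# sqrt(2X) ~ N(sqrt(2*nu - 1), 1) approximation for df = 674:
chi2_674 = (1.645 + math.sqrt(2 * 674 - 1)) ** 2 / 2
```

All three reproduce the answer's figures: about 2.2276 (exact 2.2274), about 6935 (exact 6936.2), and about 735.2 (exact 735.51).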
25,417
Chance that bootstrap sample is exactly the same as the original sample
Note that at each observation position ($i=1, 2, \ldots, n$) we can choose any of the $n$ observations, so there are $n^n$ possible resamples (keeping the order in which they are drawn), of which $n!$ are the "same sample" (i.e. contain all $n$ original observations with no repeats; this accounts for all the ways of ordering the sample we started with).

For example, with three observations a, b and c, you have 27 possible samples:

aaa aab aac aba abb abc aca acb acc
baa bab bac bba bbb bbc bca bcb bcc
caa cab cac cba cbb cbc cca ccb ccc

Six of those contain one each of a, b and c. So $n!/n^n$ is the probability of getting the original sample back.

Aside - a quick approximation of the probability: Consider that
$${\displaystyle {\sqrt {2\pi }}\ n^{n+{\frac {1}{2}}}e^{-n}\leq n!\leq e\ n^{n+{\frac {1}{2}}}e^{-n}}$$
so
$${\displaystyle {\sqrt {2\pi }}\ n^{{\frac {1}{2}}}e^{-n}\leq n!/n^n \leq e\ n^{{\frac {1}{2}}}e^{-n}},$$
with the lower bound being the usual one given for the Stirling approximation (which has low relative error for large $n$). [Gosper has suggested using $n! \approx \sqrt{(2n+\frac13)\,\pi}\,n^ne^{-n}$, which would yield the approximation $\sqrt{(2n+\frac13)\pi}\,e^{-n}$ for this probability; this works reasonably well down to $n=3$, or even down to $n=1$, depending on how stringent your criteria are.]

(Response to comment:) The probability of not getting a particular observation in a given resample is $(1-\frac{1}{n})^n$, which for large $n$ is approximately $e^{-1}$. For details see: Why on average does each bootstrap sample contain roughly two thirds of observations?
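A quick numeric check of the exact probability and of Gosper's approximation (a sketch, not part of the original answer):

```python
import math

def p_same_sample(n):
    """Probability a bootstrap resample contains each of the n original points once."""
    return math.factorial(n) / n ** n

def gosper_approx(n):
    """Gosper's refinement of Stirling applied to n!/n^n."""
    return math.sqrt((2 * n + 1 / 3) * math.pi) * math.exp(-n)

exact_3 = p_same_sample(3)    # the 6/27 case from the example
approx_3 = gosper_approx(3)
```

For $n=3$ the exact value is $6/27 \approx 0.2222$ and Gosper's approximation agrees to within a fraction of a percent, consistent with the claim that it works well down to $n=3$.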
25,418
What is the probability of drawing a four of a kind when 20 cards are drawn from a deck of 52?
There are 13 kinds, so we can solve the problem for a single kind and then move forward from there. The question then is: what is the probability of drawing 4 successes (say, kings) in 20 samples, without replacement, from a population of 4 successes (kings) and 48 failures? The hypergeometric distribution (wikipedia) gives us the answer to this question, and it is 1.8%.

If one friend bets on getting 4 kings, and another bets on getting 4 queens, they both have a 1.8% chance of winning. We need to know how much the two bets overlap in order to say what the probability is of at least one of them winning. The overlap of both winning is similar to the first question, namely: what is the probability of drawing 8 successes (kings and queens) in 20 samples from a population of 8 successes (kings and queens) and 44 failures, without replacement? The answer is again hypergeometric, and by my calculation it's 0.017%. So the probability of at least one of the two friends winning is 1.8% + 1.8% - 0.017% = 3.6%.

In continuing this line of reasoning, the easy part is summing the probabilities for individual kinds (13 × 1.8% = 23.4%), and the difficult part is to figure out how much all of these 13 scenarios overlap. The probability of getting either 4 kings or 4 queens or 4 aces is the sum of getting each four-of-a-kind minus the overlap of them. The overlap consists of getting 4 kings and 4 queens (but not 4 aces), getting 4 kings and 4 aces (but not 4 queens), getting 4 queens and 4 aces (but not 4 kings), and getting 4 kings and 4 queens and 4 aces. This is where it gets too hairy for me to continue, but proceeding this way with the hypergeometric formula on wikipedia, you can go ahead and write it all out. Maybe somebody can help us reduce the problem?
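The two hypergeometric probabilities quoted above can be verified directly with Python's math.comb:

```python
from math import comb

# P(all four kings appear among 20 cards drawn from 52): hypergeometric with
# 4 "successes", all of which must be drawn.
p_one_kind = comb(4, 4) * comb(48, 16) / comb(52, 20)

# P(all four kings AND all four queens): 8 required cards out of 52.
p_two_kinds = comb(8, 8) * comb(44, 12) / comb(52, 20)

# P(four kings or four queens), by inclusion-exclusion over the two bets.
p_either = 2 * p_one_kind - p_two_kinds
```

This reproduces the 1.8%, 0.017% and 3.6% figures in the answer.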
25,419
What is the probability of drawing a four of a kind when 20 cards are drawn from a deck of 52?
To draw at least $k$ specified four-of-a-kinds, we must draw all $4k$ required cards. This is a hypergeometric distribution, where we must draw all $4k$ successes from a population of size $52.$ There are $\binom{13}{k}$ such sets of four-of-a-kinds. Therefore, the chance of getting at least $k$ four-of-a-kinds is
$$\binom{13}{k} \frac{\binom{4k}{4k}\binom{52-4k}{20-4k}}{\binom{52}{20}} = \binom{52}{20}^{-1} \binom{13}{k} \binom{52-4k}{20-4k},$$
for $0\leq k\leq5.$

By the inclusion-exclusion principle, the probability of drawing at least one four-of-a-kind is therefore equal to
$$\binom{52}{20}^{-1} \sum_{k=1}^5 (-1)^{k+1} \binom{13}{k} \binom{52-4k}{20-4k} = -\binom{52}{20}^{-1} \sum_{k=1}^5 (-1)^k \binom{13}{k} \binom{4(13-k)}{4\times 8}.$$
This can be calculated numerically to be about $0.2197706.$

The above sum has the form $\sum_{k=0}^n (-1)^k \binom{n}{k} \binom{r(n-k)}{rm}$, if we subtract the $k=0$ term afterwards, since the terms for $5<k\leq 13$ are equal to zero. I wonder if there's a way to simplify that kind of sum.
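The inclusion-exclusion sum is short enough to evaluate directly; a Python sketch:

```python
from math import comb

# P(at least one four-of-a-kind in 20 cards) via inclusion-exclusion over the
# 13 ranks; terms with k > 5 vanish since 4k would then exceed 20 cards.
total = comb(52, 20)
p = sum((-1) ** (k + 1) * comb(13, k) * comb(52 - 4 * k, 20 - 4 * k)
        for k in range(1, 6)) / total
```

The result matches the quoted value of about 0.2198, and (as a sanity check) sits below the Bonferroni upper bound given by the $k=1$ term alone.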
25,420
Why is the confidence interval considered a random interval?
Why is the confidence interval considered random? You just stated a reason why in your question! You quoted this: "A confidence interval is a random variable because x-bar (its center) is a random variable." (In this case, it's presumably an interval for the mean, but the reasoning carries over to other confidence intervals.) The sample mean is a statistic -- a quantity you calculate from the sample. Because random samples from some population are, well, random, things calculated from them are also going to be random. Consider: If you drew a second sample from the same population would you have the same observations? Would the sample mean be the same in both samples? Would the sample standard deviation be the same in both samples? The largest observation? The lower quartile? No, they vary from sample to sample; indeed they're also random. A confidence interval is also based on the random sample, so it, too, is a statistic (e.g. define it in terms of its endpoints) and it, too, is random. If it's truly random then why bother with confidence intervals at all? Am I missing something here? Well presumably you'd like to use the data to calculate your interval. After all, it's the thing we have that tells us something about the population we drew the sample from. If you're using the data - a random sample of your population - then useful quantities you calculate from it will also be random, including confidence intervals. Random doesn't mean "ignores your data" -- for example a sample mean tells us about our population mean, and our sample standard deviation can be used to help us work out how far the sample mean will tend to be from the population mean. In fact, we rely on the randomness - we exploit it to get the best possible use of information from our sample. Without random sampling, our intervals wouldn't necessarily tell us much of anything. 
[You might like to ponder whether there might be a way to get an interval for a population quantity that is simultaneously reasonably informative and not random.]
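The point is easy to see in a quick simulation. This is a hypothetical sketch (the population, sample size, and seed are arbitrary choices, not part of the answer): draw many random samples from a population with a known mean, build a 95% interval from each, and count how often the random interval covers the fixed parameter.

```python
import math
import random
import statistics

random.seed(0)

TRUE_MEAN, TRUE_SD, N, TRIALS = 10.0, 2.0, 30, 2000
covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    xbar = statistics.mean(sample)                 # random: changes every sample
    se = statistics.stdev(sample) / math.sqrt(N)   # random too
    lo, hi = xbar - 1.96 * se, xbar + 1.96 * se    # so the interval is random...
    covered += (lo <= TRUE_MEAN <= hi)             # ...while the parameter is fixed

print(covered / TRIALS)  # close to 0.95
```

Each run produces a different interval, yet in the long run roughly 95% of them contain the true mean, which is exactly the sense in which the interval is "random" but still informative.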
25,421
Why is the confidence interval considered a random interval?
Several tentative approximations: Random variables are not random. They are deterministic functions from the outcome to the real line, $X: \Omega \rightarrow \mathbb R$. So you run a random experiment (the experiment, say tossing a coin, is random in the sense that we don't have a formula to return the outcome a priori), and get an outcome; run it again, and get another outcome. Soon you have a sample, and you happen to be interested in a parameter, say the proportion of heads, $p$: you are mapping something like $\small \{H,T,H,H,H,T,H,T,T,T\}$ to the interval $[0,1]$ to get an estimate of the parameter $p$ based on your sample, using the simple formula, $\frac{\text{no.heads}}{\text{total}}$, a deterministic formula. You may label this estimate, $\hat p$. Confidence interval: From this point estimate, you can calculate the CI with some formula, such as, $\hat p\,\pm\,1.96\,\sqrt{\frac{\hat p\,(1-\hat p)}{n}}$. Again deterministic, meaning (crazy nomenclature), a random variable... or two: one for the lower bound, and the other for the upper bound. So effectively you have unfolded the point estimate into two point estimates, based on some underlying distributional assumptions (normal approximation), completely unrelated to the specific realization that your sample represents. This interval can contain $p$ or not. Again, think about the point estimate - it can fall very far from the true parameter, $p$, and affect the CI accordingly. But there is one saving grace, which is at the same time a painful yoga position: If you were to repeat this process time and time again, and get many $\hat p$ estimates with their respective confidence intervals, the true parameter $p$ would be contained in $95\%$ of them. The confidence interval does not tell you that with $95\%$ probability the true proportion is contained between its bounds, which is mind boggling. It is, instead, nothing more than "an elaboration" on the sample based on things like the CLT. 
As such it is "random" (wink, wink). If you want the probability that the parameter $p$ is contained within an certain interval, you have to change party affiliation, and look up credible intervals under the apparently more satisfying Bayesian paradigm.
25,422
Why is the confidence interval considered a random interval?
Classical probability treats a parameter as fixed, but typically not precisely known. An interval can be developed that contains the parameter with a certain probability P under repeated sampling; this P is the probability that the random interval contains the fixed parameter, and such an interval is called a "confidence interval". For a specific sample, a specific confidence interval can be calculated; the parameter is either in or not in this specific interval, so it is incorrect to state that the parameter has probability P of being in it. What is correct is that the procedure that produced the interval yields, over repeated sampling, intervals that contain the parameter with probability P.
25,423
Definition of validity of an instrumental variable
Requirements for Z to be a valid instrument for X are:

Relevance: Z needs to be highly correlated with X.
Exogeneity: Z is correlated with Y solely through its correlation with X; so Z is uncorrelated with the error in the outcome equation.

The main idea behind IV is that when Z changes, it should also alter X, but not the troublesome part of X that is correlated with the error. To get the effect of X on Y we are only using part of the variation in X, the part that's driven by variation in Z.
25,424
Definition of validity of an instrumental variable
Following Hernán and Robins' Causal Inference, Chapter 16: Instrumental variable estimation, instrumental variables have four assumptions/requirements:

1. $Z$ must be associated with $X$.
2. $Z$ must causally affect $Y$ only through $X$.
3. There must not be any prior causes of both $Y$ and $Z$.
4. The effect of $X$ on $Y$ must be homogeneous. This assumption/requirement has two forms, weak and strong:
   Weak homogeneity of the effect of $X$ on $Y$: the effect of $X$ on $Y$ does not vary by the levels of $Z$ (i.e. $Z$ cannot modify the effect of $X$ on $Y$).
   Strong homogeneity of the effect of $X$ on $Y$: the effect of $X$ on $Y$ is constant across all individuals (or whatever your unit of analysis is).

Instruments that do not meet these assumptions are generally invalid. (2) and (3) are generally difficult to provide strong evidence for (hence assumptions). The strong version of condition (4) may be a very unreasonable assumption to make depending on the nature of the phenomena being studied (e.g. the effects of drugs on individuals' health generally vary from individual to individual). The weak version of condition (4) may require the use of atypical IV estimators, depending on the circumstance. The weakness of the effect of $Z$ on $X$ does not really have a formal definition. Certainly IV estimation produces biased results when the effect of $Z$ on $X$ is small relative to the effect of $U$ (unmeasured confounder) on $X$, but there's no hard and fast point, and the bias depends on sample size. Hernán and Robins are (respectfully and constructively) critical of the utility of IV regression relative to estimates based on the formal causal reasoning of their approach (that is, the formal causal reasoning approach of the counterfactual causality folks like Pearl, etc.).

Hernán, M. A. and Robins, J. M. (2017). Causal Inference. Chapman & Hall/CRC.
25,425
Definition of validity of an instrumental variable
Both assumptions can be seen by looking at the system of equations: \begin{align} x=&\gamma_1+\gamma_2 z+\epsilon\\ y=&\beta_1+\beta_2 x+\gamma_3 z+u \end{align} The strength of the instrument relates to the coefficient $\gamma_2\neq 0$ and to the $R^2$ of this equation (both should be high enough) The validity relates to the assumption that $\gamma_3=0$, i.e. $z$ has no direct effect on $y$. Note that we cannot test $\gamma_3=0$, only assume it, which explains why it is called an identifying (=untestable) assumption.
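A quick simulation makes both points in the system above concrete. This is a hypothetical sketch (the coefficients and the simple covariance-ratio IV estimator are my own choices for illustration): $x$ is confounded by an unobserved $u$, while $z$ shifts $x$ but enters $y$ only through $x$ (i.e. $\gamma_3 = 0$).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta2 = 2.0                        # true causal effect of x on y

z = rng.normal(size=n)             # instrument
u = rng.normal(size=n)             # unobserved confounder
x = 1.0 + 0.8 * z + u + rng.normal(size=n)
y = 0.5 + beta2 * x + 2.0 * u + rng.normal(size=n)

# Naive OLS slope: biased because x is correlated with the error (via u)
ols = np.polyfit(x, y, 1)[0]

# IV (covariance-ratio) estimator: uses only the z-driven variation in x,
# so the confounded part of x drops out
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(round(ols, 2))  # noticeably above 2.0
print(round(iv, 2))   # close to 2.0
```

Note that the simulation can only demonstrate relevance empirically (the $z$–$x$ covariance is visibly nonzero); the exogeneity condition $\gamma_3 = 0$ is built into the data-generating process by assumption, mirroring the fact that it is untestable in practice.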
25,426
How to compute bits per character (BPC)?
From my understanding, the BPC is just the average cross-entropy (used with log base 2). In the case of Alex Graves' papers, the aim of the model is to approximate the probability distribution of the next character given past characters. At each time step $t$, let's call this (approximate) distribution $\hat{P}_t$ and let $P_t$ be the true distribution. These discrete probability distributions can be represented by a vector of size $n$, where n is the number of possible characters in your alphabet. So the BPC or average cross-entropy can be calculated as follows: \begin{align} bpc(string) = \frac{1}{T}\sum_{t=1}^T H(P_t, \hat{P}_t) &= -\frac{1}{T}\sum_{t=1}^T \sum_{c=1}^n P_t(c) \log_2 \hat{P}_t(c), \\ & = -\frac{1}{T}\sum_{t=1}^T \log_2 \hat{P}_t(x_t). \end{align} Where $T$ is the length of your input string. The equality in the second line comes from the fact that the true distribution $P_t$ is zero everywhere except at the index corresponding to the true character $x_t$ in the input string at location $t$. Two things to note: When you use an RNN, $\hat{P}_t$ can be obtained by applying a softmax to the RNN's output at time step $t$ (The number of output units in your RNN should be equal to $n$ - the number of characters in your alphabet). In the equation above, the average cross-entropy is calculated over one input string of size T. In practice, you may have more than one string in your batch. Therefore, you should average over all of them (i.e. $bpc = mean_{strings} bpc(string)$).
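A toy numerical sketch of the formula above (the alphabet, string, and per-step probabilities are made up for illustration, as if taken from a softmax output at each time step):

```python
import numpy as np

alphabet = {"a": 0, "b": 1, "c": 2}   # n = 3 possible characters
string = "abca"                       # input string, T = 4

# p_hat[t] = model's predicted distribution over the alphabet at step t
p_hat = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.2, 0.5],
    [0.6, 0.3, 0.1],
])

idx = [alphabet[c] for c in string]   # indices of the true characters x_t
T = len(string)

# bpc = -(1/T) * sum_t log2 p_hat_t(x_t): only the probability assigned to
# the true character at each step survives the inner sum
bpc = -np.mean(np.log2(p_hat[np.arange(T), idx]))

print(round(bpc, 3))  # about 0.643
```

With a batch of strings one would, as the answer notes, average this quantity over all strings in the batch.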
25,427
How to compute bits per character (BPC)?
bpc is just -log2(likelihood) / number-of-tokens. This is used to compare likelihoods across segments of different lengths, since a longer sequence usually has a lower likelihood, and dividing by its length counteracts this trend.
25,428
Explaining Mean, Median, Mode in Layman's Terms
Thank you for this simple-yet-profound question about the fundamental statistical concepts of mean, median, and mode. There are some wonderful methods/demonstrations available for explaining and grasping an intuitive -- rather than arithmetic -- understanding of these concepts, but unfortunately they are not widely known (or taught in school, to my knowledge). Mean: 1. Balance Point: Mean as the fulcrum The best way to understand the concept of the mean is to think of it as the balance point on a uniform rod. Imagine a series of data points, such as {1,1,1,3,3,6,7,10}. If each of these points is marked on a uniform rod and equal weights are placed at each point, then the fulcrum must be placed at the mean of the data for the rod to balance. This visual demonstration also leads to an arithmetic interpretation: in order for the fulcrum to balance, the total negative deviation from the mean (on the left side of the fulcrum) must equal the total positive deviation from the mean (on the right side). Hence, the mean acts as the balancing point of a distribution. This visual allows an immediate understanding of the mean as it relates to the distribution of the data points. Another property of the mean that becomes readily apparent from this demonstration is the fact that the mean will always be between the min and the max values in the distribution. Also, the effect of outliers can be easily understood – the presence of outliers would shift the balancing point, and hence, impact the mean. 2. Redistribution (fair share) value Another interesting way to understand the mean is to think of it as a redistribution value. This interpretation does require some understanding of the arithmetic behind the calculation of the mean, but it utilizes an anthropomorphic quality – namely, the socialist concept of redistribution – to intuitively grasp the concept of the mean. 
The calculation of the mean involves summing up all values in a distribution (set of values) and dividing the sum by the number of data points in the distribution. $$ \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i $$ One way to understand the rationale behind this calculation is to think of each data point as apples (or some other fungible item). Using the same example as before, we have eight people in our sample: {1,1,1,3,3,6,7,10}. The first person has one apple, the second person has one apple, and so on. Now, if one wants to redistribute the number of apples such that it is “fair” to everyone, you can use the mean of the distribution to do this. In other words, you can give four apples (i.e., the mean value) to everyone for the distribution to be fair/equal. This demonstration provides an intuitive explanation for the formula above: dividing the sum of a distribution by the number of data points is equivalent to partitioning the whole of the distribution equally among all of the data points. 3. Visual Mnemonics The following visual mnemonics present the interpretation of the mean in a unique way: This is a mnemonic for the leveling value interpretation of the mean. The height of the A's crossbar is the mean of the heights of the four letters. And this is another mnemonic for the balance point interpretation of the mean. The position of the fulcrum is roughly the mean of the positions of the M, E, and doubled N. Median Once the interpretation of mean as the balancing point on a rod is understood, the median can be demonstrated by an extension of the same idea: the balancing point on a necklace. Replace the rod with a string, but keep the data markings and weights. Then at the ends, attach a second string, longer than the first, to form a loop [like a necklace], and drape the loop over a well-lubricated pulley. Suppose, initially, that the weights are distinct. The pulley and loop balance when the same number of weights are to each side. 
In other words, the loop ‘balances’ when the median is the lowest point. Notice that if one of the weights is slid way up the loop creating an outlier, the loop doesn’t move. This demonstrates, physically, the principle that the median is unaffected by outliers. Mode The mode is probably the easiest concept to understand as it involves the most basic mathematical operation: counting. The fact that it’s equal to the most frequently occurring data point leads to an acronym: “Most-often Occurring Data Element”. The mode can also be thought of as the most typical value in a set. (Although, a deeper understanding of ‘typical’ would lead to the representative, or average value. However, it’s appropriate to equate ‘typical’ with the mode based on the very literal meaning of the word ‘typical’.) Sources: The Median is a balance point -- Lynch, The College Mathematics Journal (2009) Making Statistics Memorable: New Mnemonics and Motivations -- Lesser, Statistical Education, JSM (2011) On the Use of Mnemonics for Teaching Statistics -- Lesser, Model Assisted Statistics and Applications, 6(2), 151-160 (2011) What does the mean mean? – Watier, Lamontagne and Chartier, Journal of Statistics Education, Volume 19, Number 2 (2011) Typical? Children's and Teachers' Ideas About Average – Russell and Mokros, ICOTS 3 (1990) OVERALL REFERENCE: http://jse.amstat.org/v22n3/lesser.pdf
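For completeness, the running example {1,1,1,3,3,6,7,10} can be checked with Python's standard statistics module:

```python
import statistics

# The running example from the answer: eight data points
data = [1, 1, 1, 3, 3, 6, 7, 10]

print(statistics.mean(data))    # 4   -- the balance point / fair-share value
print(statistics.median(data))  # 3.0 -- middle of the sorted data
print(statistics.mode(data))    # 1   -- most often occurring element
```

The mean of 4 is exactly the "four apples each" fair-share value from the redistribution demonstration above.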
Explaining Mean, Median, Mode in Layman's Terms
Thank you for this simple-yet-profound question about the fundamental statistical concepts of mean, median, and mode. There are some wonderful methods /demonstrations available for explaining and gras
Explaining Mean, Median, Mode in Layman's Terms Thank you for this simple-yet-profound question about the fundamental statistical concepts of mean, median, and mode. There are some wonderful methods /demonstrations available for explaining and grasping an intuitive -- rather than arithmetic -- understanding of these concepts, but unfortunately they are not widely known (or taught in school, to my knowledge). Mean: 1. Balance Point: Mean as the fulcrum The best way to understand the concept of mean it to think of it as the balance point on a uniform rod. Imagine a series of data points, such as {1,1,1,3,3,6,7,10}. If each of these points are marked on a uniform rod and equal weights are placed at each point (as shown below) then the fulcrum must be placed at the mean of the data for the rod to balance. This visual demonstration also leads to an arithmetic interpretation. The arithmetic rationale for this is that in order for the fulcrum to balance, the total negative deviation from the mean (on the left side of the fulcrum) must equal to the total positive deviation from the mean (on the right side). Hence, the mean acts as the balancing point in a distribution. This visual allows an immediate understanding of the mean as it relates to the distribution of the data points. Other property of the mean that becomes readily apparent from this demonstration is the fact that the mean will always be between the min and the max values in the distribution. Also, the effect of outliers can be easily understood – that a presence of outliers would shift the balancing point, and hence, impact the mean. 2. Redistribution (fair share) value Another interesting way to understand the mean is to think of it as a redistribution value. This interpretation does require some understanding of the arithmetic behind the calculation of the mean, but it utilizes an anthropomorphic quality – namely, the socialist concept of redistribution – to intuitively grasp the concept of the mean. 
The calculation of the mean involves summing up all values in a distribution (set of values) and dividing the sum by the number of data points in the distribution. $$ \bar{x} = (\sum_{i=1}^n{x_i})/n $$ One way to understand the rationale behind this calculation is to think of each data point as apples (or some other fungible item). Using the same example as before, we have eight people in our sample: {1,1,1,3,3,6,7,10}. The first person has one apple, the second person has one apple, and so on. Now, if one wants to redistribute the number of apples such that it is “fair” to everyone, you can use the mean of the distribution to do this. In other words, you can give four apples (i.e., the mean value) to everyone for the distribution to be fair/equal. This demonstration provides an intuitive explanation for the formula above: dividing the sum of a distribution by the number of data points is equivalent to partitioning the whole of the distribution equally to all of the data points. 3. Visual Mnemonics These following visual mnemonics provide the interpretation of the mean in a unique way: This is a mnemonic for the leveling value interpretation of the mean. The height of the A's crossbar is the mean of the heights of the four letters. And this is another mnemonic for the balance point interpretation of the mean. The position of the fulcrum is roughly the mean of the positions of the M, E, and doubled N. Median Once the interpretation of mean as the balancing point on a rod is understood, the median can be demonstrated by an extension of the same idea: the balancing point on a necklace. Replace the rod with a string, but keep the data markings and weights. Then at the ends, attach a second string, longer than the first, to form a loop [like a necklace], and drape the loop over a well-lubricated pulley. Suppose, initially, that the weights are distinct. The pulley and loop balance when the same number of weights are to each side. 
In other words, the loop ‘balances’ when the median is the lowest point. Notice that if one of the weights is slid way up the loop creating an outlier, the loop doesn’t move. This demonstrates, physically, the principle that the median is unaffected by outliers. Mode The mode is probably the easiest concept to understand as it involves the most basic mathematical operation: counting. The fact that it’s equal to the most frequently occurring data point leads to an acronym: “Most-often Occurring Data Element”. The mode can also be thought of the most typical value in a set. (Although, a deeper understanding of ‘typical’ would lead to the representative, or average value. However, it’s appropriate to equate ‘typical’ with the mode based on the very literal meaning of the word ‘typical’.) Sources: The Median is a balance point -- Lynch, The College Mathematics Journal (2009) Making Statistics Memorable: New Mnemonics and Motivations -- Lesser, Statistical Education, JSM (2011) On the Use of Mnemonics for Teaching Statistics -- Lesser, Model Assisted Statistics and Applications, 6(2), 151-160 (2011) What does the mean mean? – Watier, Lamontagne and Chartier, Journal of Statistics Education, Volume 19, Number 2 (2011) Typical? Children's and Teachers' Ideas About Average – Russell and Mokros, ICOTS 3 (1990) OVERALL REFERENCE: http://jse.amstat.org/v22n3/lesser.pdf
25,429
Explaining Mean, Median, Mode in Layman's Terms
I have to wonder whether your criteria are achievable, as you seem to want maximal effectiveness and explanatory power with minimal materials. But a simple example such as

1 1 2 2 2 3 3 4 5 6 15

allows immediate calculation of the mode (2), the median (3) and the mean (44/11 = 4), and thus shows that they can be different. You could then explain that the ideas of the most common value, the value in the middle and the mean are different. And introduce complications by:

- changing values to show the mode can be ambiguous;
- using an example with an even number of values to explain the convention for calculating the median;
- varying values in the tails to emphasise what happens to the mean, and why and why not that may be desirable;
- using simpler examples in which two or three of mean, median and mode coincide.

I have not mentioned central tendency in my teaching except to say that it's a term in various literatures. I prefer to talk about level and how it may be quantified. Conversely, I don't think any serious data analysis is possible unless people have a minimal feeling for skewness as more usual than symmetry.
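A Python sketch of this example, including the tail-variation point (the modified value 150 below is an illustrative choice, not from the answer):

```python
from statistics import mean, median, mode

x = [1, 1, 2, 2, 2, 3, 3, 4, 5, 6, 15]
assert mode(x) == 2    # most common value
assert median(x) == 3  # middle of the 11 sorted values
assert mean(x) == 4    # 44/11

# Varying a value in the tail moves the mean but not the median
y = [1, 1, 2, 2, 2, 3, 3, 4, 5, 6, 150]
assert median(y) == 3
assert mean(y) > mean(x)
```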
25,430
Explaining Mean, Median, Mode in Layman's Terms
This is how I explain them:

The (arithmetic) mean is the point that takes the entire data set into account and settles somewhere "in the middle." Have them think of a point cloud, or a blob, in space: the mean is the center of mass of that point cloud.

The median is the point that has "the same number of points on all sides" (where obviously the concept of a "side" isn't well-defined in 2+ dimensions). This represents another kind of "middle," and in fact a more intuitive kind in some sense. Thinking of that same blob in space, it is clear that if the blob is lopsided then the mean will be shifted. But this lopsidedness can be achieved in one of two ways: either you add more points in one area, or you increase the dispersion of points in that area. If you increase the dispersion of points in one area without increasing the number of points, then the median still has the same number of points "on all sides" and will not shift commensurate with the mean. You can demonstrate this with two very trivial "blobs": $y = (1, 2, 3, 4, 5)$ and $y' = (1, 2, 3, 4, 99)$. $\operatorname{mean}(y) = \operatorname{median}(y)$, whereas $\operatorname{mean}(y') > \operatorname{median}(y')$.

But I recommend starting with the geometric/visual "blob-based" explanation first: in my experience it's easier to start with a hand-waving graphical demonstration, then move to concrete toy examples. I find that most people (myself included) aren't naturally number-oriented, and starting with a numerical explanation is a recipe for confusion. You can always go back and teach more precise definitions later.

The mode is the point that, if points are randomly sampled from that blob, is most likely to appear (recognizing that this is a fudge for continuous data). This can be, but doesn't have to be, located near the mean or median.

Once you've explained these concepts, you can move on to a more "statistical-looking" demo. The solid line is the mean, the dashed line is the median, and the dotted line is the mode. The mean reflects the positions of the data points along the x axis, while the median reflects only the number of data points on either side. The mode is just the point of greatest probability, which is different from both the mean and the median.

R code:

set.seed(47730)
y <- rgamma(100, 2, 2)
d <- density(y)
plot(d)
rug(y)
abline(v = mean(y), lty = 1)               # mean
abline(v = median(y), lty = 2)             # median
abline(v = d$x[which.max(d$y)], lty = 3)   # mode (density peak)
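The two trivial "blobs" above can be verified directly; a Python sketch for readers who want to check the numbers (the answer itself uses R):

```python
from statistics import mean, median

y  = [1, 2, 3, 4, 5]
y2 = [1, 2, 3, 4, 99]  # same count, but one point dispersed far right

assert mean(y) == median(y) == 3  # symmetric blob: both "middles" agree
assert median(y2) == 3            # still two points on each side
assert mean(y2) > median(y2)      # the mean shifts with the lopsidedness
```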
25,431
Explaining Mean, Median, Mode in Layman's Terms
The "mean", "median" and "mode" are measures of "central tendency", aka the "most likely outcome", in different domains. They are all "best bets" in different "games".

Probability and statistics is a field that was, in part, built by gamblers (link, link). When you go to the horse races, or the poker table, you want to know some science that helps you win. They did too, and wrote about it, so you don't have to invent it yourself.

In a horse race, you want to pick a winner. You don't have future information, but you do know some past information: how fast each horse ran in the past few races. If you want to estimate how fast they are likely to run in their next race, you can compute and compare the mean, aka the average, race times.

Another central tendency is the "median", which is the center of a sorted list. What if I put a horrible typo in your list of race times, and the value was 1000x longer than all the others? It would mess up your estimate, and you might not bet on the winning horse. How do you address that? You could manually look for that one value, or you could use the "median".

What if you are playing cards, like blackjack, and you are trying to figure out whether you need another card given the previous cards? The card you are looking for is not a 3.14, because card values are integers. How do you figure out your best bet when the average or median is not meaningful? In this case, you want to bet on the "mode": the most likely card to come out of the dealer's stack.

In all three cases, the central tendency is just another way of saying "best bet". If you want to account for more than central tendency in your betting, that is to say, if you want to bet so that you reduce the impact of a loss while maximizing winnings, then you must look at "tendencies of variation". Things like the standard deviation, inter-quartile ranges, or alternative modes and their frequencies are all used to minimize the maximum losses while maximizing the likely winnings.
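The "bet on the mode" idea is just frequency counting; a minimal Python sketch (the remaining card values are made up for illustration, with face cards counted as 10 as in blackjack):

```python
from collections import Counter

# Hypothetical card values left in the dealer's stack
remaining = [10, 10, 10, 10, 9, 7, 7, 5, 3, 2]

best_bet, freq = Counter(remaining).most_common(1)[0]
assert best_bet == 10  # the mode: the most likely next card
assert freq == 4       # it appears 4 times out of 10
```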
25,432
Explaining Mean, Median, Mode in Layman's Terms
I think it's useful to explain this concept by considering multiple means, medians, and modes; these values don't exist by themselves in a vacuum. For example, here's how I would explain the mean.

Let's say you have two crates of watermelons (crates 1 and 2). Each is sealed, so you can't see the watermelons inside and thus don't know their sizes. However, you do know the total weight of the watermelons in each crate, and each crate contains the same number of watermelons. From that, you can compute the mean weight of the watermelons in each crate (M1 and M2).

Now that you have two different mean values M1 and M2, you can make a rough comparison of the individual contents: if M1 > M2, then a randomly selected watermelon from crate 1 will probably be heavier than one picked from crate 2.

Of course, I would love comments on this perspective.
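The crate comparison is just total-over-count arithmetic; a tiny Python sketch with made-up weights and counts:

```python
# Hypothetical crates: total weight (kg) and count are known,
# but the individual watermelons are not visible
total_1, total_2 = 96.0, 84.0
n = 12  # same number of watermelons in each crate

m1 = total_1 / n  # mean weight, crate 1
m2 = total_2 / n  # mean weight, crate 2

assert m1 == 8.0 and m2 == 7.0
assert m1 > m2  # a random melon from crate 1 is probably heavier
```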
25,433
Repeated k-fold cross-validation vs. repeated holdout cross-validation: which approach is more reasonable?
Which method is more reasonable depends on exactly what conclusion you want to draw.

Actually, there is a 3rd possibility, which differs from your version 2 by choosing the training data with replacement. This is closely related to out-of-bootstrap validation (it differs only in the number of training samples you draw). Drawing with replacement is sometimes preferred over the cross-validation methods as it is closer to reality (drawing a sample in practice does not diminish the chance to draw another sample of the same characteristics again - at least as long as only a very small fraction of the true population is sampled).

I'd prefer such an out-of-bootstrap validation if I want to conclude on the model performance that can be achieved if the given algorithm is trained with $n_{train}$ cases of the given problem. (Though the caveat of Bengio, Y. and Grandvalet, Y.: "No Unbiased Estimator of the Variance of K-Fold Cross-Validation", Journal of Machine Learning Research, 2004, 5, 1089-1105, also applies here: you try to extrapolate from one given data set onto other training data sets as well, and within your data set there is no way to measure how representative that data set actually is.)

If, on the other hand, you want to estimate (approximately) how well the model you built on the whole data set performs on unknown data (otherwise of the same characteristics as your training data), then I'd prefer approach 1 (iterated/repeated cross validation). Its surrogate models are a closer approximation to the model whose performance you actually want to know - so less randomness in the training data is on purpose here. The surrogate models of iterated cross validation can be seen as versions of each other perturbed by exchanging a small fraction of the training cases. Thus, changes you see for the same test case can directly be attributed to model instability.

Note that whatever scheme you choose for your cross- or out-of-bootstrap validation, you only ever test as many as $n$ cases. The uncertainty caused by a finite number of test cases cannot decrease further, however many bootstrap runs, set-validation runs (your approach 2), or iterations of cross validation you do. The part of the variance that does decrease with more iterations/runs is the variance caused by model instability.

In practice, we've found only small differences in total error between 200 runs of out-of-bootstrap and 40 iterations of $5$-fold cross validation for our type of data: Beleites et al.: "Variance reduction in estimating classification error using sparse datasets", Chemom Intell Lab Syst, 79, 91-100 (2005). Note that for our high-dimensional data, the resubstitution/autoprediction/training error easily becomes 0, so the .632 bootstrap is not an option and there is essentially no difference between out-of-bootstrap and .632+ out-of-bootstrap.

For a study that includes repeated hold-out (similar to your approach 2), see Kim: "Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap", Computational Statistics & Data Analysis, 2009, 53, 3735-3745.
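The three resampling schemes differ only in how the train/test splits are drawn. A minimal, model-free sketch in Python (the fold counts and sample sizes below are illustrative, not from the answer):

```python
import random
from collections import Counter

def repeated_kfold(n, k, iters, seed=0):
    """Approach 1: each iteration partitions all n cases into k folds,
    so every case lands in a test set exactly once per iteration."""
    rng = random.Random(seed)
    for _ in range(iters):
        idx = list(range(n))
        rng.shuffle(idx)
        for f in range(k):
            test = idx[f::k]
            test_set = set(test)
            yield [i for i in idx if i not in test_set], test

def repeated_holdout(n, n_test, iters, seed=0):
    """Approach 2: an independent random split each run; a given case
    may be tested many times or never."""
    rng = random.Random(seed)
    for _ in range(iters):
        idx = list(range(n))
        rng.shuffle(idx)
        yield idx[n_test:], idx[:n_test]

def out_of_bootstrap(n, iters, seed=0):
    """3rd possibility: draw n training cases WITH replacement; test on
    the cases never drawn (about 36.8% of them, on average)."""
    rng = random.Random(seed)
    for _ in range(iters):
        train = [rng.randrange(n) for _ in range(n)]
        drawn = set(train)
        yield train, [i for i in range(n) if i not in drawn]

# With iterated k-fold CV, test-set coverage is exactly balanced:
counts = Counter(i for _, test in repeated_kfold(30, 5, 4) for i in test)
assert all(counts[i] == 4 for i in range(30))  # tested once per iteration
```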
25,434
Repeated k-fold cross-validation vs. repeated holdout cross-validation: which approach is more reasonable?
First you have to understand that the underlying aim of cross validation is to predict the performance of a method that is built to classify any future data, i.e., in a way, to compare the efficiency of two or more methods (if any). As we do not know any future data, we have to make the method efficient in such a way that it gives maximum efficiency for any random future data. So, in a way, we have to implement maximum randomness in our cross validation.

Logically speaking, the second one is more reasonable: if you take a fixed 5 sets as the testing sets, then practically you are cutting down randomness to a great extent, as you are fixing which set of data is to be the basis of the model (the training set) and which is not. However, if you use a random process to select your training set, then true randomness is implemented. So, in a way, a "truer" application of cross validation will be implemented by your second process.

However, everything that has pros also has cons. Keep in mind that cross validation has to be done on the entire data set, which is why you do cross validation 20 times: to exhaust the entire data set. Repeating the second process 100 times will not (or only with very low probability) exhaust the entire data set. So this is a con of too much randomness.
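To put a number on "very low probability": assuming each repetition holds out a random 10% test set (an illustrative choice, not stated in the answer), a quick calculation shows how unlikely a case is to be missed by all 100 runs:

```python
# A given case misses one random 10% test set with probability 0.9,
# so it misses all 100 independent runs with probability 0.9 ** 100.
p_never_tested = 0.9 ** 100
assert p_never_tested < 1e-4  # roughly 2.7e-5: low, but not zero

# Expected number of never-tested cases in a data set of 1000:
expected_missed = 1000 * p_never_tested
assert expected_missed < 0.1
```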
25,435
Confidence interval from R's prop.test() differs from hand calculation and result from SAS
The method is not stated verbosely in the details section of ?prop.test, but suitable references are given. Wilson's score method is used; see:

Wilson EB (1927). "Probable Inference, the Law of Succession, and Statistical Inference." Journal of the American Statistical Association, 22, 209-212.

This is found by Newcombe (1998) - also referenced on ?prop.test - to have much better coverage than the traditional Wald-type interval. See:

Newcombe RG (1998). "Two-Sided Confidence Intervals for the Single Proportion: Comparison of Seven Methods." Statistics in Medicine, 17, 857-872.

There it is called method 3 and method 4 (without and with continuity correction, respectively). Thus, you can replicate the confidence interval

prop.test(319, 1100, conf.level = 0.99, correct = FALSE)$conf.int
## [1] 0.2561013 0.3264169
## attr(,"conf.level")
## [1] 0.99

with

p <- 319/1100
n <- 1100
z <- qnorm(0.995)
(2 * n * p + z^2 + c(-1, 1) * z * sqrt(z^2 + 4 * n * p * (1 - p))) / (2 * (n + z^2))
## [1] 0.2561013 0.3264169

Of course, the "exact" binomial (Clopper & Pearson) interval, discussed as method 5 in Newcombe (1998), is also available via binom.test:

binom.test(319, 1100, conf.level = 0.99)$conf.int
## [1] 0.2552831 0.3265614
## attr(,"conf.level")
## [1] 0.99
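The same Wilson formula is easy to transcribe outside R; a Python sketch using only the standard library reproduces prop.test's interval (the function name is mine, for illustration):

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(x, n, conf=0.99):
    """Wilson score interval without continuity correction
    (Newcombe's method 3) -- what prop.test(correct = FALSE) computes."""
    p = x / n
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # qnorm(0.995) for 99%
    half = z * sqrt(z**2 + 4 * n * p * (1 - p))
    lo = (2 * n * p + z**2 - half) / (2 * (n + z**2))
    hi = (2 * n * p + z**2 + half) / (2 * (n + z**2))
    return lo, hi

lo, hi = wilson_ci(319, 1100, conf=0.99)
assert abs(lo - 0.2561013) < 1e-6  # matches prop.test's lower bound
assert abs(hi - 0.3264169) < 1e-6  # matches prop.test's upper bound
```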
25,436
Confidence interval from R's prop.test() differs from hand calculation and result from SAS
The accepted answer is right: the 1-sample prop.test() interval is calculated using the Wilson score. It can be checked with:

> binom::binom.confint(319, 1100, conf.level = 0.99)
          method   x    n      mean     lower     upper
1  agresti-coull 319 1100 0.2900000 0.2560789 0.3264393
2     asymptotic 319 1100 0.2900000 0.2547589 0.3252411  # Wald's (SAS)
3          bayes 319 1100 0.2901907 0.2554718 0.3258328
4        cloglog 319 1100 0.2900000 0.2552377 0.3255863
5          exact 319 1100 0.2900000 0.2552831 0.3265614
6          logit 319 1100 0.2900000 0.2560616 0.3264627
7         probit 319 1100 0.2900000 0.2558036 0.3261994
8        profile 319 1100 0.2900000 0.2556501 0.3260360
9            lrt 319 1100 0.2900000 0.2556607 0.3260543
10     prop.test 319 1100 0.2900000 0.2635118 0.3179745
11        wilson 319 1100 0.2900000 0.2561013 0.3264169  # Wilson

For the 2-sample case it's Wald's:

> (ppt <- prop.test(x = c(11, 8), n = c(16, 21), correct = FALSE))

        2-sample test for equality of proportions without continuity correction

data:  c(11, 8) out of c(16, 21)
X-squared = 3.4159, df = 1, p-value = 0.06457
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.001220547  0.614315785
sample estimates:
   prop 1    prop 2
0.6875000 0.3809524

which agrees with logistic regression followed by the marginal effect:

data <- data.frame(Status = c(rep(TRUE, 11), rep(FALSE, 16-11), rep(TRUE, 8), rep(FALSE, 21-8)),
                   Group = c(rep("Gr1", 16), rep("Gr2", 21)))
> m <- glm(Status ~ Group, family = binomial(), data = data)
> margins::margins_summary(m)
   factor     AME     SE       z      p   lower  upper
 GroupGr2 -0.3065 0.1570 -1.9522 0.0509 -0.6143 0.0012

which agrees with

> PropCIs::wald2ci(11, 16, 8, 21, conf.level = 0.95, adjust = "Wald")

data:
95 percent confidence interval:
 -0.001220547  0.614315785
sample estimates:
[1] 0.3065476

while the reported p-value comes from the Rao score test:

> anova(m, test = "Rao")
Analysis of Deviance Table

Model: binomial, link: logit
Response: Status
Terms added sequentially (first to last)

      Df Deviance Resid. Df Resid. Dev    Rao Pr(>Chi)
NULL                     36     51.266
Group  1   3.4809        35     47.785 3.4159  0.06457 .
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
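The 2-sample Wald interval above is just the difference of proportions plus/minus a normal quantile times the pooled-free standard error; a Python sketch reproduces it from the standard library:

```python
from math import sqrt
from statistics import NormalDist

# Wald interval for p1 - p2: the formula behind the 2-sample
# prop.test(..., correct = FALSE) confidence interval
x1, n1, x2, n2 = 11, 16, 8, 21
p1, p2 = x1 / n1, x2 / n2
z = NormalDist().inv_cdf(0.975)  # qnorm(0.975) for a 95% interval
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

lo, hi = (p1 - p2) - z * se, (p1 - p2) + z * se
assert abs(lo - (-0.001220547)) < 1e-6  # matches prop.test
assert abs(hi - 0.614315785) < 1e-6
```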
Confidence interval from R's prop.test() differs from hand calculation and result from SAS
The accepted answer is right: the 1-sample prop.test() is calculated using the Wilson score. It can be checked with: > binom::binom.confint(319, 1100, conf.level = 0.99) method x n
Confidence interval from R's prop.test() differs from hand calculation and result from SAS

The accepted answer is right: the one-sample prop.test() interval is the Wilson score interval. It can be checked with:

> binom::binom.confint(319, 1100, conf.level = 0.99)
          method   x    n      mean     lower     upper
1  agresti-coull 319 1100 0.2900000 0.2560789 0.3264393
2     asymptotic 319 1100 0.2900000 0.2547589 0.3252411  # Wald's (SAS)
3          bayes 319 1100 0.2901907 0.2554718 0.3258328
4        cloglog 319 1100 0.2900000 0.2552377 0.3255863
5          exact 319 1100 0.2900000 0.2552831 0.3265614
6          logit 319 1100 0.2900000 0.2560616 0.3264627
7         probit 319 1100 0.2900000 0.2558036 0.3261994
8        profile 319 1100 0.2900000 0.2556501 0.3260360
9            lrt 319 1100 0.2900000 0.2556607 0.3260543
10     prop.test 319 1100 0.2900000 0.2635118 0.3179745
11        wilson 319 1100 0.2900000 0.2561013 0.3264169  # Wilson

For the two-sample case it's Wald's:

> (ppt <- prop.test(x = c(11, 8), n = c(16, 21), correct = FALSE))

        2-sample test for equality of proportions without continuity correction

data:  c(11, 8) out of c(16, 21)
X-squared = 3.4159, df = 1, p-value = 0.06457
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.001220547  0.614315785
sample estimates:
   prop 1    prop 2
0.6875000 0.3809524

which agrees with a logistic regression followed by the marginal effect:

data <- data.frame(Status = c(rep(TRUE, 11), rep(FALSE, 16-11),
                              rep(TRUE, 8),  rep(FALSE, 21-8)),
                   Group  = c(rep("Gr1", 16), rep("Gr2", 21)))
> m <- glm(Status ~ Group, family = binomial(), data = data)
> margins::margins_summary(m)
   factor     AME     SE       z      p   lower  upper
 GroupGr2 -0.3065 0.1570 -1.9522 0.0509 -0.6143 0.0012

which agrees with

> PropCIs::wald2ci(11, 16, 8, 21, conf.level = 0.95, adjust = "Wald")
data:
95 percent confidence interval:
 -0.001220547  0.614315785
sample estimates:
[1] 0.3065476

while the reported p-value comes from the Rao score test:

> anova(m, test = "Rao")
Analysis of Deviance Table

Model: binomial, link: logit
Response: Status
Terms added sequentially (first to last)

      Df Deviance Resid. Df Resid. Dev    Rao Pr(>Chi)
NULL                     36     51.266
Group  1   3.4809        35     47.785 3.4159  0.06457 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
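To see where the Wald ("asymptotic") and Wilson rows come from, both intervals can be reproduced by hand. The following is a minimal Python sketch of the two textbook formulas (not of R's internals); the 99% normal quantile is hard-coded as 2.5758293 rather than computed.

```python
import math

def wald_interval(x, n, z):
    # Wald: p +/- z * sqrt(p(1-p)/n)
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_interval(x, n, z):
    # Wilson score: obtained by inverting the score test for one proportion
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

z99 = 2.5758293  # 99.5th percentile of the standard normal
print(wald_interval(319, 1100, z99))    # ~ (0.2548, 0.3252), the "asymptotic" row
print(wilson_interval(319, 1100, z99))  # ~ (0.2561, 0.3264), the "wilson" row
```

Both results match the binom.confint() table above to four decimals, which confirms which formula each method label corresponds to.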
Confidence interval from R's prop.test() differs from hand calculation and result from SAS
Just a note: when you do prop.test(c(56,48), c(70,80)) it does not use the Wilson (score) method. The accepted answer is correct that it uses Wilson (score) for a single proportion, but for two proportions it produces a standard asymptotic Wald interval.
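This is easy to verify against the two-sample example worked through in another answer on this page, prop.test(x = c(11, 8), n = c(16, 21), correct = FALSE), whose reported interval is (-0.001220547, 0.614315785). A small Python sketch of the plain Wald difference-of-proportions formula reproduces it:

```python
import math

def wald_two_sample(x1, n1, x2, n2, z=1.9599640):
    # Wald CI for p1 - p2: (p1 - p2) +/- z * sqrt(p1(1-p1)/n1 + p2(1-p2)/n2)
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1 - p2 - z * se, p1 - p2 + z * se

print(wald_two_sample(11, 16, 8, 21))  # ~ (-0.00122, 0.61432), as reported by prop.test
```

So the uncorrected two-sample prop.test() interval is indeed the textbook Wald interval for a difference of two proportions.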
Confidence interval from R's prop.test() differs from hand calculation and result from SAS
Following the answer by nzcoops, note that you can get the asymptotic (Wald) interval this way (say for 56 successes out of 70, at the 0.9 confidence level):

prop.test(c(56, 0), c(70, 70), correct = FALSE, conf.level = 0.9)$conf

(instead of prop.test(56, 70, correct = FALSE, conf.level = 0.9)$conf, which gives the Wilson interval). This works because the second sample has zero successes, so it contributes nothing to the Wald standard error and the two-sample interval collapses to the one-sample asymptotic interval. You can change the second 70 to anything, though R will give an error message if it is very small.
Difference between density and probability [duplicate]
?dnorm is the density function for the normal distribution. If you enter a quantile (i.e., a value for X), and the mean and standard deviation of the normal distribution in question, it will output the probability density at that point. ?pnorm is the distribution function for the normal distribution. If you enter a quantile, and the mean and standard deviation of the normal distribution in question, it will output the probability of drawing a random variate from that distribution less than that quantile (you also have the option to specify greater than instead). You can also use these functions to plot the PDF and CDF of the specified distribution.
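The same two quantities are easy to compute outside R as well; here is a small Python sketch of the standard-normal case using the error function (the names dnorm/pnorm are kept to mirror R, but these are hand-rolled stdlib versions, not R's implementations):

```python
import math

def dnorm(x, mean=0.0, sd=1.0):
    # normal probability density (what R's dnorm returns)
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

def pnorm(q, mean=0.0, sd=1.0):
    # normal cumulative distribution function (what R's pnorm returns)
    return 0.5 * (1 + math.erf((q - mean) / (sd * math.sqrt(2))))

print(pnorm(0))  # 0.5: half the mass lies below the mean
print(dnorm(0))  # ~0.3989: the height of the density, not a probability
```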
Difference between density and probability [duplicate]
I think what you said was all correct. dnorm is the density function and pnorm is the distribution function. Thus, pnorm is $F(x) = P(X \le x) = \int_{-\infty}^{x}f(t)\,dt$ and dnorm is $f(x)$. Since $f(x) = \frac{dF(x)}{dx} = \lim_{\Delta x \to 0} \frac{F(x+ \Delta x) - F(x)}{\Delta x}$, the value $f(x)\,\Delta x$ approximates the probability that the random variable falls in a small interval of width $\Delta x$ around $x$. (You can treat $f(x)$ as constant over a small interval.)
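The claim that $f(x)\,\Delta x$ approximates an interval probability can be checked numerically; a quick Python sketch for the standard normal at $x = 0$, with the CDF built from math.erf:

```python
import math

f0 = 1 / math.sqrt(2 * math.pi)                         # f(0) for the standard normal
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # F(x), the CDF

dx = 1e-3
exact = Phi(dx) - Phi(0)   # P(0 < X <= dx), from the distribution function
approx = f0 * dx           # f(0) * dx, treating the density as constant
print(exact, approx)       # nearly identical for small dx
```

For dx = 0.001 the two values agree to about seven significant digits, which is exactly the sense in which a density, multiplied by an interval width, yields a probability.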
Difference between density and probability [duplicate]
Everything you write is correct. As far as I understand your question, you see why pnorm(0) = 0.5, but you ask what the value dnorm(0) = 0.3989423, i.e. the height of the density function, means if it cannot be a probability:

> pnorm(0)
[1] 0.5
> dnorm(0)
[1] 0.3989423

Since I cannot say it better, I refer to https://math.stackexchange.com/a/23401
Should the standard deviation be corrected in a Student's T test?
1) No, it isn't.

2) Because the calculation of the distribution of the test statistic relies on using the square root of the ordinary Bessel-corrected variance as the estimate of the standard deviation. If the correction were included, it would only scale each t-statistic - and hence its distribution - by a factor (a different one at each d.f.); that would then scale the critical values by the same factor. So you could, if you like, construct a new set of "t"-tables with $s^*=s/c_4$ used in the formula for a new statistic, $t^*=\frac{\overline{X}-\mu_0}{s^*/\sqrt{n}}=c_4(n)\,t_{n-1}$, then multiply all the tabulated values for $t_\nu$ by the corresponding $c_4(\nu+1)$ to get tables for the new statistic. But we could as readily base our tests on ML estimates of $\sigma$, which would be simpler in several ways, but also wouldn't change anything substantive about testing.

Making the estimate of the population standard deviation unbiased would only make the calculation more complicated, and wouldn't save anything anywhere else (the same $\bar{x}$, $\overline{x^2}$ and $n$ would still ultimately lead to the same rejection or non-rejection). [To what end? Why not instead choose MLE or minimum MSE or any number of other ways of getting estimators of $\sigma$?] There's nothing especially valuable about having an unbiased estimate of $\sigma$ for this purpose (unbiasedness is a nice thing to have, other things being equal, but other things are rarely equal). Given that people are used to using Bessel-corrected variances and hence the corresponding standard deviation, and the resulting null distributions are reasonably straightforward, there's little - if anything at all - to gain by using some other definition.
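For concreteness, the unbiasing constant in question is $c_4(n) = \sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$ (so that $E[s] = c_4(n)\,\sigma$ for a normal sample), and it approaches 1 quickly. A small Python sketch, computing it via log-gamma for numerical stability:

```python
import math

def c4(n):
    # E[s] = c4(n) * sigma for a normal sample of size n
    return math.sqrt(2 / (n - 1)) * math.exp(
        math.lgamma(n / 2) - math.lgamma((n - 1) / 2))

for n in (2, 5, 10, 30, 100):
    print(n, c4(n))  # rises toward 1 as n grows
```

Dividing both the statistic and the tabulated critical value by $c_4(n)$ would leave every accept/reject decision unchanged, which is exactly why the correction buys nothing for testing.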
Distinguish between short run and long run effects
Suppose you have a model $$y_t=\alpha+\beta y_{t-1}+\gamma x_t+\varepsilon_t.$$ $\gamma$ measures the instantaneous effect (or short-run effect) of $x_t$ on $y$. Note that $y_{t-1}$ is included in the model. Since $x_t$ has an effect on $y_t$, $x_t$ will also have an effect on $y_{t+1}$ through the lagged dependent variable, and the size of this effect will be $\beta \gamma x_t$. The story does not end here. The effect of $x_t$ on $y_{t+2}$ will be $\beta^2 \gamma x_t$, the effect of $x_t$ on $y_{t+3}$ will be $\beta^3 \gamma x_t$, and so forth. If you sum up the instantaneous effect and all the delayed effects all the way into the infinite future, you obtain the cumulative effect of $x_t$ on $y$, which is $\frac{\gamma}{1-\beta} x_t$ (using the formula for the infinite sum of a decaying geometric series, see Wikipedia). That is what is called the long-run effect. The model above can be generalized to more complex lag structures, but the idea remains the same: lagged dependent variables perpetuate an effect into the infinite future.
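The geometric-series bookkeeping is easy to verify directly. A tiny Python sketch with purely illustrative coefficients ($\beta = 0.5$, $\gamma = 2$, so the long-run effect of a unit change in $x$ should be $\gamma/(1-\beta) = 4$):

```python
beta, gamma = 0.5, 2.0   # hypothetical coefficients, |beta| < 1 for stability

# effect of a one-unit change in x_t on y_{t+k} is beta^k * gamma
effects = [beta**k * gamma for k in range(200)]

short_run = effects[0]           # instantaneous (short-run) effect: gamma
long_run = sum(effects)          # cumulative effect over the future
closed_form = gamma / (1 - beta) # geometric-series closed form
print(short_run, long_run, closed_form)  # 2.0, ~4.0, 4.0
```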
Formula for Bayesian A/B Testing doesn't make any sense
On the site you quote there is a notice:

  The beta function produces very large numbers, so if you're getting infinite values in your program, be sure to work with logarithms, as in the code above. Your standard library's log-beta function will come in handy here.

so your implementation is wrong. Below I provide the corrected code:

a_A <- 78 + 1
b_A <- 1000 - 78 + 1
a_B <- 100 + 1
b_B <- 1000 - 100 + 1

total <- 0
for (i in 0:(a_B-1)) {
  total <- total + exp(lbeta(a_A+i, b_B+b_A) - log(b_B+i)
                       - lbeta(1+i, b_B) - lbeta(a_A, b_A))
}

It outputs total = 0.9576921, that is, the "odds that B will beat A in the long run" (quoting your link), which sounds valid since B in your example has the greater proportion. So it is not a p-value but rather the probability that B is greater than A (you do not expect it to be < 0.05). You can run simple simulations to check the result:

set.seed(123)
n <- 1000

# do Binomial distributions with proportions
# from your data give similar estimates?
mean(rbinom(n, 1000, a_B/1000) > rbinom(n, 1000, a_A/1000))

# and do values simulated in a similar fashion to
# the model yield similar results?
fun2 <- function(n = 1000) {
  pA <- rbeta(1, a_A, b_A)
  pB <- rbeta(1, a_B, b_B)
  mean(rbinom(n, 1000, pB) > rbinom(n, 1000, pA))
}
summary(replicate(1000, fun2(1000)))

In both cases the answer is yes. As for the code, notice that the for loop is unnecessary, and loops generally make things slower in R, so you can alternatively use vapply for cleaner and slightly faster code:

fun <- function(i) exp(lbeta(a_A+i, b_B+b_A) - log(b_B+i)
                       - lbeta(1+i, b_B) - lbeta(a_A, b_A))
sum(vapply(0:(a_B-1), fun, numeric(1)))
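The same closed-form probability can be reproduced outside R. Here is a direct Python port of the loop above, building the log-beta function from math.lgamma (same data, same Beta(x+1, n-x+1) posteriors):

```python
import math

def lbeta(a, b):
    # log of the Beta function, mirroring R's lbeta()
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def prob_b_beats_a(a_A, b_A, a_B, b_B):
    # exact P(p_B > p_A) for p_A ~ Beta(a_A, b_A), p_B ~ Beta(a_B, b_B)
    total = 0.0
    for i in range(a_B):
        total += math.exp(lbeta(a_A + i, b_B + b_A) - math.log(b_B + i)
                          - lbeta(1 + i, b_B) - lbeta(a_A, b_A))
    return total

# 78/1000 conversions for A, 100/1000 for B, uniform Beta(1, 1) priors
print(prob_b_beats_a(78 + 1, 1000 - 78 + 1, 100 + 1, 1000 - 100 + 1))  # ~0.9577
```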
What is the difference between variable and random variable?
A variable is a symbol that represents some quantity. A variable is useful in mathematics because you can prove something without assuming the value of a variable, and hence make a general statement over a range of values for that variable. A random variable is a value that follows some probability distribution. In other words, it's a value that is subject to some randomness or chance. In linear regression, $X$ may be viewed either as a random variable that is observed, or as a predetermined fixed value which, as LEP already discussed, the investigator chooses. As you've pointed out, we usually assume the latter (whether or not this assumption is correct is another story). However, the OLS estimator is unbiased whether or not you treat $X$ as random, and the estimate of the variance of the OLS estimator is unbiased for the variance of $\hat{\beta}_{OLS}$ whether or not you treat $X$ as random. These are a couple of reasons people don't get too caught up in whether or not to assume $X$ is random in regression.

If you treat $X$ as random, the OLS estimator is still unbiased. Let $X$ be a random variable and let $\hat{\beta}_{OLS} = (X^{T}X)^{-1} X^{T} Y$. Then
$$E(\hat{\beta}_{OLS})=E[E[\hat{\beta}_{OLS}|X]]=E[E[(X^{T}X)^{-1} X^{T} Y|X]]=E[(X^{T}X)^{-1} X^{T}E[Y|X]]=E[(X^{T}X)^{-1} X^{T}X\beta]=E[\beta]=\beta.$$
If you treat $X$ as random, the estimate of the variance of $\hat{\beta}_{OLS}$ is unbiased for the unconditional variance, since
$$Var(\hat{\beta}_{OLS})=Var(E(\hat{\beta}_{OLS}|X)) + E(Var(\hat{\beta}_{OLS}|X))=Var(\beta)+E(Var(\hat{\beta}_{OLS}|X))=E(Var(\hat{\beta}_{OLS}|X))=E(\sigma^{2}(X^{T}X)^{-1}).$$
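The unbiasedness claim with random $X$ is also easy to check by simulation. A minimal Python sketch for simple linear regression, with illustrative values $\beta_0 = 1$, $\beta_1 = 2$ (standard library only; $X$ is redrawn in every replication, so it is genuinely random):

```python
import random

random.seed(42)
b0, b1 = 1.0, 2.0  # hypothetical true coefficients

def ols_slope(n=50):
    # draw X randomly each replication, then Y = b0 + b1*X + noise
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [b0 + b1 * xi + random.gauss(0, 1) for xi in x]
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx  # OLS slope estimate

estimates = [ols_slope() for _ in range(2000)]
print(sum(estimates) / len(estimates))  # close to the true slope 2
```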
What is the difference between variable and random variable?
When you wrote down your equation, you did not list the assumptions: $$Y=\beta_0+\beta_1X+\epsilon$$

Why is X not a random variable? Yes, it is often assumed (for simplicity of exposition in intro statistics textbooks) that $X$ is fixed, or as you put it, non-random. It is fixed (non-random) in controlled experiments, i.e. mostly in natural sciences such as physics and biology. You can set the parameter $X$ at the level you're interested in, and measure the response $Y$. In this case you make a set of assumptions such as those of the Gauss-Markov theorem. For instance, feed the mice 1 mg of ascorbic acid and measure their hair loss. You control how much of the substance to administer.

However, it can be random, and it usually is random in observational studies, i.e. 99% of all economics and social sciences alike. I can't set the Dow-Jones Index (DJIA) at an arbitrary level and measure the response in GDP (gross domestic product). I can only observe both, and whatever the DJIA is on the day of my observation, that's my $X$. That's why $X$ is random. In this case I have to use a different set of assumptions than in the controlled experiment above. Imagine now how difficult it is to establish causality between the DJIA and GDP, unlike the case with the mice, where I decided how much of what to feed.

Here's additional reading:

The Gauss-Markov Theorem and Random Regressors. Juliet Popper Shaffer. The American Statistician, Vol. 45, No. 4 (Nov., 1991), pp. 269-273

"Gauss–Markov Assumptions for Observational Research (Arbitrary x)" in Encyclopedia of Research Design, p. 532:

  A parallel but stricter set of Gauss–Markov assumptions is typically applied in practice in the case of observational data, where the researcher cannot assume that x is fixed in repeated samples.
What is the difference between variable and random variable?
Theoretically speaking, the outcomes of an experiment (experiment = a random procedure) can be numerical (e.g. rolling a die) or can be mapped to numbers by the designer (e.g. flipping a coin, with outcomes 1 = head and 0 = tail). This numerical representation of the outcomes defines a random variable. In these examples we can tell that there is some chance involved; in other words, we don't control the experiment 100%. So a random variable is linked to observations in the real world where uncertainty is involved, and that's where the "randomness" comes from. Most importantly, as others have already pointed out, a random variable X (which is either discrete or continuous) is characterized by a probability distribution (a probability mass function in the discrete case, a probability density function in the continuous case). So we say that random variables have distributions. Now, in the case of linear regression, you ALREADY know the values of X and from them you try to figure out the values of Y; in other words, Y is the random variable (as you still don't have its values and they will depend on the values of the Xs).

Resources:
[1] Can someone help to explain the difference between independent and random?
[2] https://amsi.org.au/ESA_Senior_Years/SeniorTopic4/4_md/SeniorTopic4c.html
[3] Independent variable = Random variable?
Compare poisson and negative binomial regression with LR test
The Poisson and negative binomial (NB) models are nested: Poisson is the special case with theta = infinity. So a likelihood ratio test comparing the two models tests the null hypothesis "theta = infinity" against the alternative "theta < infinity". Here the two models have the following log-likelihoods:

R> logLik(m3)
'log Lik.' -1328.642 (df=4)
R> logLik(m1)
'log Lik.' -865.6289 (df=5)

Thus, the fitted log-likelihood in the NB model is much larger/better using just one additional parameter (4 regression coefficients plus 1 theta). The value of m1$theta is 1.032713, i.e., essentially corresponding to a geometric distribution (theta = 1). So clearly there is overdispersion compared to the Poisson distribution, with a theta much lower than infinity. (My personal rule of thumb is that everything beyond 10 is already rather close to infinity.)

The likelihood ratio test statistic is then twice the absolute difference of the log-likelihoods (2 * (1328.642 - 865.6289) = 926.0262, up to rounding of the displayed log-likelihoods) and has to be compared with a chi-squared distribution with degrees of freedom (df) equal to the difference in df between the two models (5 - 4 = 1). This is what the code above does. However, it may be somewhat confusing that the difference of logLik() objects retains the "logLik" class and hence is displayed with a label and df. Applying as.numeric() to drop the class might clarify this:

R> 2 * (logLik(m1) - logLik(m3))
'log Lik.' 926.0272 (df=5)
R> as.numeric(2 * (logLik(m1) - logLik(m3)))
[1] 926.0272

And then you could compute the p-value by hand:

R> stat <- as.numeric(2 * (logLik(m1) - logLik(m3)))
R> pchisq(stat, df = 5 - 4, lower.tail = FALSE)
[1] 2.157298e-203

There are also functions that carry out such likelihood ratio tests in a generic way, e.g., lrtest() in lmtest (among others):

R> library("lmtest")
R> lrtest(m3, m1)
Likelihood ratio test

Model 1: daysabs ~ math + prog
Model 2: daysabs ~ math + prog
  #Df   LogLik Df  Chisq Pr(>Chisq)
1   4 -1328.64
2   5  -865.63  1 926.03  < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

As a final detail: the likelihood ratio test of Poisson vs. NB is non-standard because the parameter theta = infinity is on the boundary of the parameter space. Therefore, the p-value from the chi-squared distribution has to be halved (similar to a one-sided test). Of course, if you already have a significant result without halving, it's still significant afterwards...
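The hand calculation can be replicated without R. A Python sketch using the log-likelihoods reported above; for df = 1 the chi-squared upper tail is erfc(sqrt(x/2)), so nothing beyond the standard library is needed:

```python
import math

ll_pois = -1328.642   # logLik(m3), Poisson, df = 4
ll_nb   = -865.6289   # logLik(m1), negative binomial, df = 5

stat = 2 * (ll_nb - ll_pois)        # LR statistic, 1 df
p = math.erfc(math.sqrt(stat / 2))  # chi-squared(1) upper-tail probability
p_boundary = p / 2                  # halved: theta = Inf sits on the boundary

print(stat)        # ~926.03
print(p)           # ~2e-203, matching pchisq(stat, 1, lower.tail = FALSE)
print(p_boundary)
```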
25,449
Equivalence between a repeated measures anova model and a mixed model: lmer vs lme, and compound symmetry
As Ben Bolker already mentioned in the comments, the problem is as you suspect: The lmer() model gets tripped up because it attempts to estimate a variance component model, with the variance component estimates constrained to be non-negative. What I will try to do is give a somewhat intuitive understanding of what it is about your dataset that leads to this, and why this causes a problem for variance component models. Here is a plot of your dataset. The white dots are the actual observations and the black dots are the subject means. To make things more simple, but without changing the spirit of the problem, I will subtract out the fixed effects (i.e., the FITNESS and TEST effects, as well as the grand mean) and deal with the residual data as a one-way random effects problem. So here's what the new dataset looks like: Look hard at the patterns in this plot. Think about how observations taken from the same subject differ from observations taken from different subjects. Specifically, notice the following pattern: As one of the observations for a subject is higher (or lower) above (or below) the subject mean, the other observations from that subject tend to be on the opposite side of the subject mean. And the further that observation is from the subject mean, the further the other observations tend to be from the subject mean on the opposite side. This indicates a negative intra-class correlation. Two observations taken from the same subject actually tend to be less similar, on average, than two observations drawn purely at random from the dataset. Another way to think about this pattern is in terms of the relative magnitudes of the between-subject and within-subject variance. It appears that there is substantially greater within-subject variance compared to between-subject variance. Of course, we expect this to happen to some extent. 
After all, the within-subject variance is based on variation in the individual data points, while the between-subject variance is based on variation in means of the individual data points (i.e., the subject means), and we know that the variance of a mean will tend to decrease as the number of things being averaged increases. But in this dataset the difference is quite striking: There is way more within-subject than between-subject variation. Actually this difference is exactly the reason why a negative intra-class correlation emerges. Okay, so here is the problem. The variance component model assumes that each data point is the sum of a subject effect and an error: $y_{ij}=u_j+e_{ij}$, where $u_j$ is the effect of the $j$th subject. So let's think about what would happen if there were truly 0 variance in the subject effects -- in other words, if the true between-subjects variance component were 0. Given an actual dataset generated under this model, if we were to compute sample means for each subject's observed data, those sample means would still have some non-zero variance, but they would reflect only error variance, and not any "true" subject variance (because we have assumed there is none). So how variable would we expect these subject means to be? Well, basically each estimated subject effect is a mean, and we know the formula for the variance of a mean: $\text{var}(\bar{X})=\text{var}(X_i)/n$, where $n$ is the number of things being averaged. Now let's apply this formula to your dataset and see how much variance we would expect to see in the estimated subject effects if the true between-subjects variance component were exactly 0. The within-subject variance works out to be $348$, and each subject effect is computed as the mean of 3 observations. So the expected standard deviation in the subject means -- assuming the true between-subject variance is 0 -- works out to be about $10.8$. 
Now compare this to the standard deviation in subject means that we actually observed: $4.3$! The observed variation is substantially less than the expected variation when we assumed 0 between-subject variance. For a variance component model, the only way that the observed variation could be expected to be as low as what we actually observed is if the true between-subject variance were somehow negative. And therein lies the problem. The data imply that there is somehow a negative variance component, but the software (sensibly) will not allow negative estimates of variance components, since a variance can in fact never be negative. The other models that you fit avoid this problem by directly estimating the intra-class correlation rather than assuming a simple variance component model. If you want to see how you could actually get the negative variance component estimate implied by your dataset, you can use the procedure that I illustrate (with accompanying R code) in this other recent answer of mine. That procedure is not totally trivial, but not too hard either (for a balanced design such as this one).
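The arithmetic in the last two paragraphs can be checked directly. This is a Python sketch using only the summary numbers quoted in the answer; the method-of-moments difference at the end is one way to see the implied negative variance component (the full procedure is in the linked answer).

```python
import math

# Numbers quoted in the answer: within-subject variance 348,
# three observations per subject, observed SD of subject means 4.3
within_var = 348
n_per_subject = 3
observed_sd = 4.3

# Expected SD of subject means if the true between-subject variance were 0
expected_sd = math.sqrt(within_var / n_per_subject)
print(round(expected_sd, 1))   # 10.8, as stated in the answer

# Method-of-moments estimate of the between-subject variance component:
# observed variance of the subject means minus the error contribution
var_between = observed_sd**2 - within_var / n_per_subject
print(round(var_between, 1))   # negative -- the "impossible" component
```

A constrained estimator like lmer() truncates this negative estimate at 0, which is exactly where the discrepancy with the compound-symmetry models comes from.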
25,450
Is Hurlbert 1984 the best introductory overview to pseudoreplication?
Well thanks for the nice comments, folks. Since I'm still kicking, I can let you know about several other papers of mine that deal with pseudoreplication, as well as ones that deal with other important statistical issues (multiple comparisons, pseudofactorialism, collapse of the Neyman-Pearson framework, terminological confusion promoted by statisticians, misuse of one-tailed tests) by referring you to my website at http://www.bio.sdsu.edu/pub/stuart/stuart.html On topic, Hurlbert & White (1993) clarified the definitions of the different types of pseudoreplication, and Hurlbert ('Ancient black art', 2009; 'Affirmation', 2013) reviews the topic and related terminological issues. Pdfs of all these papers are available on the website. Keep calm and carry on! s.h. I should add, in response to Emilie's Sept 25 comment, that pseudoreplication can never be "valid" as by definition it is a statistical analysis (and interpretation) that does not accord with the study design. It is not defined as conducting an experiment with only a single experimental unit per treatment or simply taking multiple samples from a single experimental unit.
25,451
Is Hurlbert 1984 the best introductory overview to pseudoreplication?
As nobody dared answer, I will expand my commentary. I personally think that Hurlbert 1984 is an essential article for all experimenters. Hurlbert coined the word, but also wrote a detailed article explaining the problem, with examples, potential sources of confusion and a literature survey. Can we ask for more? It is indeed a long article. I am sure we can find good introductions to pseudoreplication in books about statistics or experiments. But Hurlbert is widely accessible and even though not published in open access, I'm pretty sure anyone can find a copy (another advantage). If someone wants something shorter or wants to go further in the subject, I suggest looking at Hurlbert 2004, which summarizes misconceptions about pseudoreplication. In the last paragraph of the first page, Hurlbert cites many articles discussing the subject and even suggests a good book. The main problem of the 2004 article (or the fun in it) is that it is a bit acrid in tone, as it's an answer to Oksanen 2001. I would still recommend it. References Hurlbert, S. H. (1984). Pseudoreplication and the design of ecological field experiments. Ecological Monographs 54: 187-211. Hurlbert, S. H. (2004). On misinterpretations of pseudoreplication and related matters: a reply to Oksanen. Oikos 104: 591-597. Oksanen, L. (2001). Logic of experiments in ecology: is pseudoreplication a pseudoissue? Oikos 94: 27-38.
25,452
How to assess the proportional hazards assumption for a continuous variable
If you have not assumed linearity for the continuous variables, or if linearity truly holds, then a next logical step is to assess proportionality of hazards using smoothed scaled Schoenfeld residual plots as implemented in the R survival package's cox.zph function. These plots show the estimated regression coefficient for a binary or continuous variable as a function of time. You hope for flatness in this relationship if PH holds. The function also provides a formal hypothesis test, which can be too sensitive in that it may flag minor departures from PH.
25,453
Standardizing features when using LDA as a pre-processing step
The credit for this answer goes to @ttnphns who explained everything in the comments above. Still, I would like to provide an extended answer. To your question: Are the LDA results on standardized and non-standardized features going to be exactly the same? --- the answer is Yes. I will first give an informal argument, and then proceed with some math. Imagine a 2D dataset shown as a scatter plot on one side of a balloon (original balloon picture taken from here): Here red dots are one class, green dots are another class, and black line is LDA class boundary. Now rescaling of $x$ or $y$ axes corresponds to stretching the balloon horizontally or vertically. It is intuitively clear that even though the slope of the black line will change after such stretching, the classes will be exactly as separable as before, and the relative position of the black line will not change. Each test observation will be assigned to the same class as before the stretching. So one can say that stretching does not influence the results of LDA. Now, mathematically, LDA finds a set of discriminant axes by computing eigenvectors of $\mathbf{W}^{-1} \mathbf{B}$, where $\mathbf{W}$ and $\mathbf{B}$ are within- and between-class scatter matrices. Equivalently, these are generalized eigenvectors of the generalized eigenvalue problem $\mathbf{B}\mathbf{v}=\lambda\mathbf{W}\mathbf{v}$. Consider a centred data matrix $\mathbf{X}$ with variables in columns and data points in rows, so that the total scatter matrix is given by $\mathbf{T}=\mathbf{X}^\top\mathbf{X}$. Standardizing the data amounts to scaling each column of $\mathbf{X}$ by a certain number, i.e. replacing it with $\mathbf{X}_\mathrm{new}= \mathbf{X}\boldsymbol\Lambda$, where $\boldsymbol\Lambda$ is a diagonal matrix with scaling coefficients (inverses of the standard deviations of each column) on the diagonal. 
After such a rescaling, the scatter matrix will change as follows: $\mathbf{T}_\mathrm{new} = \boldsymbol\Lambda\mathbf{T}\boldsymbol\Lambda$, and the same transformation will happen with $\mathbf{W}_\mathrm{new}$ and $\mathbf{B}_\mathrm{new}$. Let $\mathbf{v}$ be an eigenvector of the original problem, i.e. $$\mathbf{B}\mathbf{v}=\lambda\mathbf{W}\mathbf{v}.$$ If we multiply this equation with $\boldsymbol\Lambda$ on the left, and insert $\boldsymbol\Lambda\boldsymbol\Lambda^{-1}$ on both sides before $\mathbf{v}$, we obtain $$\boldsymbol\Lambda\mathbf{B}\boldsymbol\Lambda\boldsymbol\Lambda^{-1}\mathbf{v}=\lambda\boldsymbol\Lambda\mathbf{W}\boldsymbol\Lambda\boldsymbol\Lambda^{-1}\mathbf{v},$$ i.e. $$\mathbf{B}_\mathrm{new}\boldsymbol\Lambda^{-1}\mathbf{v}=\lambda\mathbf{W}_\mathrm{new}\boldsymbol\Lambda^{-1}\mathbf{v},$$ which means that $\boldsymbol\Lambda^{-1}\mathbf{v}$ is an eigenvector after rescaling with exactly the same eigenvalue $\lambda$ as before. So discriminant axis (given by the eigenvector) will change, but its eigenvalue, that shows how much the classes are separated, will stay exactly the same. Moreover, projection on this axis, that was originally given by $\mathbf{X}\mathbf{v}$, will now be given by $ \mathbf{X}\boldsymbol\Lambda (\boldsymbol\Lambda^{-1}\mathbf{v})= \mathbf{X}\mathbf{v}$, i.e. will also stay exactly the same (maybe up to a scaling factor).
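This invariance is easy to verify numerically. Below is a Python/numpy sketch on made-up two-class data (the dataset and the helper function are illustrative, not from the question): the eigenvalues of $\mathbf{W}^{-1}\mathbf{B}$ come out identical whether or not the columns are rescaled.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two classes in 2D with different means and unequal column scales
XA = rng.normal([0, 0], [1, 5], size=(50, 2))
XB = rng.normal([2, 3], [1, 5], size=(50, 2))

def lda_eigvals(XA, XB):
    X = np.vstack([XA, XB])
    m = X.mean(axis=0)
    mA, mB = XA.mean(axis=0), XB.mean(axis=0)
    # Within-class scatter W and between-class scatter B
    W = (XA - mA).T @ (XA - mA) + (XB - mB).T @ (XB - mB)
    B = (len(XA) * np.outer(mA - m, mA - m)
         + len(XB) * np.outer(mB - m, mB - m))
    # Eigenvalues of W^{-1} B (solve avoids forming the inverse)
    return np.sort(np.linalg.eigvals(np.linalg.solve(W, B)).real)

# Rescale each column by its inverse standard deviation (the Lambda above)
s = np.vstack([XA, XB]).std(axis=0)
ev_raw = lda_eigvals(XA, XB)
ev_std = lda_eigvals(XA / s, XB / s)
print(np.allclose(ev_raw, ev_std))  # True: separability is unchanged
```

The discriminant vector itself changes (it gets multiplied by $\boldsymbol\Lambda^{-1}$), but the eigenvalues, and hence the class separability and the projections, do not.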
25,454
Standardizing features when using LDA as a pre-processing step
In addition to the great answer of @amoeba, I would add that since the "1D-Fisher" score is insensitive to scaling and corresponds to the eigenvalues of the eigendecomposition of the matrix $S_w^{-1}S_b$ (or $W^{-1}B$ using @amoeba's notation), you can see why the separability (i.e., the score) will be the same - whether you scale the data or not. In other words, the Fisher score is immune to linear transformation - this is basically what the balloon example above shows (which I found brilliant, by the way). Indeed, since the mean operator is linear and the variance operator is such that $Var(aX+b) = a^2Var(X)$, you can show (using basic arithmetic) that the Fisher score for the unscaled data (say, for 2 classes A and B): $$J = \frac{(\bar{x_A} - \bar{x_B})^2}{\sigma_A^2 + \sigma_B^2}$$ becomes after scaling the $x$ vector to $x' = \frac{x-\bar{x}}{\sigma_x}$: $$J' = \frac{(\bar{x'_A} - \bar{x'_B})^2}{\sigma_{A'}^2+\sigma_{B'}^2} = \frac{(\frac{\bar{x_A} - \bar{x}}{\sigma_x} -\frac{\bar{x_B} - \bar{x}}{\sigma_x} )^2}{\frac{\sigma_A^2}{\sigma_x^2} + \frac{\sigma_B^2}{\sigma_x^2}} = \frac{(\bar{x_A} - \bar{x_B})^2}{\sigma_A^2 + \sigma_B^2} = J$$
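The algebra above can be checked on simulated data. This is a Python/numpy sketch (the two samples are made up for illustration): the Fisher score is computed before and after standardizing with the pooled mean and standard deviation, and the two values agree.

```python
import numpy as np

rng = np.random.default_rng(1)
xA = rng.normal(0.0, 1.0, 200)   # class A
xB = rng.normal(1.5, 2.0, 200)   # class B

def fisher_score(a, b):
    # J = (mean_A - mean_B)^2 / (var_A + var_B), as in the answer
    return (a.mean() - b.mean())**2 / (a.var() + b.var())

# Standardize both classes with the pooled mean and SD: x' = (x - xbar) / sigma_x
x = np.concatenate([xA, xB])
mu, sd = x.mean(), x.std()

J  = fisher_score(xA, xB)
Jp = fisher_score((xA - mu) / sd, (xB - mu) / sd)
print(np.isclose(J, Jp))  # True: J is invariant under the affine rescaling
```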
25,455
Closed form expression for the distribution of the sample kurtosis of Gaussian distribution
The exact sampling distribution is tricky to derive; what the literature offers are the first few moments (dating back to 1929), various approximations (dating back to the early 1960s), and tables of percentage points, often based on simulation (dating back to the 1960s). To be more specific: Fisher (1929) gives moments of the sampling distribution of the skewness and kurtosis in normal samples, and Pearson (1930) also gives the first four moments of the sampling distribution of the skewness and kurtosis and proposes tests based on them. So for example$^*$:

$E(b_2)=\frac{3(n-1)}{n+1}$

$\text{Var}(b_2)=\frac{24n(n-2)(n-3)}{(n+1)^2(n+3)(n+5)}$

The skewness of $b_2$ is $\frac{216}{n}(1-\frac{29}{n}+\frac{519}{n^2}-\frac{7637}{n^3}+\ldots)$

The excess kurtosis of $b_2$ is $\frac{540}{n}-\frac{20196}{n^2}+\frac{470412}{n^3}+\ldots$

* Beware - the values for the moments and so on depend on the exact definition of the sample kurtosis being used. If you see a different formula for $E(b_2)$ or $\text{Var}(b_2)$, for example, it will generally be because of a slightly different definition of sample kurtosis. The formulas above apply to $b_2=n\frac{\sum_i (X_i-\bar X)^4}{(\sum_i (X_i-\bar X)^2)^2}$.

Pearson (1963) discusses approximating the sampling distribution of kurtosis in normal samples by a Pearson type IV or a Johnson $S_U$ distribution (doubtless the reason the first four moments were given three decades earlier was in large part to make use of the Pearson family possible). Pearson (1965) gives tables of percentiles of kurtosis for some values of $n$. D'Agostino and Tietjen (1971) give more extensive tables of percentiles for kurtosis. D'Agostino and Pearson (1973) give graphs of percentage points of kurtosis covering a more extensive range of cases again.

Fisher, R. A. (1929), "Moments and Product Moments of Sampling Distributions," Proceedings of the London Mathematical Society, Series 2, 30: 199-238.
Pearson, E.S. (1930), "A further development of tests for normality," Biometrika, 22 (1-2), 239-249.
Pearson, E.S. (1963), "Some problems arising in approximating to probability distributions, using moments," Biometrika, 50, 95-112.
Pearson, E.S. (1965), "Tables of percentage points of $\sqrt{b_1}$ and $b_2$ in normal samples: A rounding off," Biometrika, 52, 282-285.
D'Agostino, R.B. and Tietjen, G.L. (1971), "Simulation probability points of $b_2$ for small samples," Biometrika, 58, 669-672.
D'Agostino, R.B. and Pearson, E.S. (1973), "Tests for departure from normality. Empirical results for the distribution of $b_2$ and $\sqrt{b_1}$," Biometrika, 60, 613-622.
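As an illustration (a Monte Carlo sketch, not from the original references; the sample size and replication count are arbitrary), the exact-mean formula $E(b_2)=3(n-1)/(n+1)$ can be checked by simulation for a small normal sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 200_000

x = rng.normal(size=(reps, n))
d = x - x.mean(axis=1, keepdims=True)
# b2 = n * sum((x - xbar)^4) / (sum((x - xbar)^2))^2
b2 = n * (d ** 4).sum(axis=1) / (d ** 2).sum(axis=1) ** 2

mean_b2  = b2.mean()
expected = 3 * (n - 1) / (n + 1)   # = 27/11 for n = 10
```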
25,456
Closed form expression for the distribution of the sample kurtosis of Gaussian distribution
The sample excess kurtosis from a normal sample is approximately distributed as a zero-mean normal with variance $\approx 24/n$, where $n$ is the sample size (naturally, the larger $n$, the better the approximation; more complicated expressions for the variance can be found on the Wikipedia page). For Gaussian samples of small size (<40), percentiles have been derived in this paper: Lacher, D. A. (1989). Sampling distribution of skewness and kurtosis. Clinical Chemistry, 35(2), 330-331.
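The $24/n$ approximation is easy to probe by simulation (a sketch; the sample size and replication count are arbitrary choices, and at moderate $n$ the empirical variance sits a bit below $24/n$, consistent with the exact finite-sample formula):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 20_000

x = rng.normal(size=(reps, n))
d = x - x.mean(axis=1, keepdims=True)
# sample excess kurtosis g2 = b2 - 3
g2 = n * (d ** 4).sum(axis=1) / (d ** 2).sum(axis=1) ** 2 - 3.0

var_g2 = g2.var()   # close to 24/n = 0.12
```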
25,457
Does ICA require to run PCA first?
The fastICA approach does require a pre-whitening step: the data are first transformed using PCA, which leads to a diagonal covariance matrix, and then each dimension is normalized such that the covariance matrix equals the identity matrix (whitening). There are infinitely many transformations of the data that result in an identity covariance matrix, and if your sources were Gaussian you would stop there (for Gaussian multivariate distributions, mean and covariance are sufficient statistics). In the presence of non-Gaussian sources you can minimize some measure of dependence on the whitened data, so you look for a rotation of the whitened data that maximizes independence. FastICA achieves this using information-theoretic measures and a fixed-point iteration scheme. I would recommend the work of Hyvärinen to get a deeper understanding of the problem: A. Hyvärinen. Fast and Robust Fixed-Point Algorithms for Independent Component Analysis. IEEE Transactions on Neural Networks 10(3):626-634, 1999. A. Hyvärinen, J. Karhunen, E. Oja, Independent Component Analysis, Wiley & Sons, 2001. Please note that doing PCA and doing dimension reduction are not exactly the same thing: when you have more observations (per signal) than signals, you can perform a PCA retaining 100% of the explained variance, and then continue with whitening and the fixed-point iteration to obtain an estimate of the independent components. Whether you should perform dimension reduction or not is highly context dependent; it is based on your modeling assumptions and data distribution.
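The pre-whitening step can be sketched in a few lines (an illustration with synthetic correlated data; fastICA implementations do this internally, and the data dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated data: 1000 observations of 4 linearly mixed signals plus small noise
X = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 4)) + 0.1 * rng.normal(size=(1000, 4))

Xc = X - X.mean(axis=0)                  # center
cov = np.cov(Xc, rowvar=False)
vals, vecs = np.linalg.eigh(cov)         # PCA: eigendecomposition of the covariance
Xw = Xc @ vecs / np.sqrt(vals)           # rotate, then rescale each component

cov_w = np.cov(Xw, rowvar=False)         # identity up to floating point
```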
25,458
Does ICA require to run PCA first?
Applying PCA to your data has the sole effect of rotating the original coordinate axes. It is a linear transformation, exactly like, for example, the Fourier transform; as such it cannot by itself change the information in your data. However, data represented in the new PCA space have some interesting properties. Following coordinate rotation with PCA, you may discard some dimensions based on established criteria such as the percentage of total variance explained by the new axes. Depending on your signal, you may achieve a considerable amount of dimension reduction by this method, and this would definitely increase the performance of the following ICA. Doing an ICA without discarding any of the PCA components will have no impact on the result of the following ICA. Furthermore, one can also easily whiten the data in the PCA space due to the orthogonality of the coordinate axes. Whitening has the effect of equalizing variances across all dimensions. I would argue that this is necessary for an ICA to work properly; otherwise only a few PCA components with the largest variances would dominate the ICA results. I don't really see any drawbacks to PCA-based preprocessing before an ICA. Giancarlo already cites the best reference for ICA...
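The explained-variance criterion can be sketched as follows (synthetic data where the signal lives in two directions; the 99% threshold is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# 10-D data whose signal lives in coordinates 0 and 1, plus tiny isotropic noise
Z = rng.normal(size=(500, 2))
A = np.zeros((2, 10)); A[0, 0], A[1, 1] = 3.0, 2.0
X = Z @ A + 0.01 * rng.normal(size=(500, 10))

Xc = X - X.mean(axis=0)
vals = np.sort(np.linalg.eigvalsh(np.cov(Xc, rowvar=False)))[::-1]
explained = np.cumsum(vals) / vals.sum()

k = int(np.searchsorted(explained, 0.99) + 1)   # components needed for 99% variance
```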
25,459
Does ICA require to run PCA first?
The derivation of the fastICA algorithm only requires whitening for a single step. First, you pick the direction of the step (as in gradient descent), and this does not require whitened data. Then you have to pick the step size, which depends on the inverse of the Hessian. If the data are whitened, this Hessian is diagonal and invertible. So is whitening required? If you just fixed the step size to a constant (thereby not requiring whitening) you would have standard gradient descent. Gradient descent with a fixed small step size will typically converge, but possibly much more slowly than the original method. On the other hand, if you have a large data matrix then the whitening can be quite expensive, so you might be better off even with the slower convergence you get without whitening. I was surprised not to see this mentioned in the literature. One paper discusses the problem: New Fast-ICA Algorithms for Blind Source Separation without Prewhitening by Jimin Ye and Ting Huang. They suggest a somewhat cheaper option than whitening. I wish they had included the obvious comparison of just running ICA without whitening as a baseline, but they didn't. As one further data point, I have tried running fastICA without whitening on toy problems and it worked fine. Update: another nice reference addressing whitening is here: Robust Independent Component Analysis, Zarzoso and Comon. They provide algorithms that do not require whitening.
25,460
Random Intercept model vs. GEE
GEE and mixed-model coefficients are not usually thought of as the same. An effective notation is to denote GEE coefficient vectors as $\beta^{(m)}$ (the marginal effects) and mixed-model coefficient vectors as $\beta^{(c)}$ (the conditional effects). These effects are obviously going to differ for non-collapsible link functions, since the GEE averages several instances of the conditional link across iterations. The standard errors for the marginal and conditional effects are also obviously going to differ. A third and oft-overlooked issue is that of model misspecification. GEE gives you tremendous insurance against departures from model assumptions: because of robust error estimation, GEE linear coefficients using the identity link can always be interpreted as an averaged first-order trend. Mixed models give you something similar, but they will differ when the model is misspecified.
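The non-collapsibility of the logit link can be illustrated numerically (a Monte Carlo sketch, not a GEE or mixed-model fit; the random-intercept SD and slope are arbitrary choices). Averaging the conditional model over the random intercept attenuates the log odds ratio toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)
beta_c  = 1.0    # conditional (subject-specific) log odds ratio
sigma_u = 2.0    # SD of the random intercept

u = rng.normal(0.0, sigma_u, size=1_000_000)
expit = lambda z: 1.0 / (1.0 + np.exp(-z))

# marginal success probabilities: average the conditional model over u
p0 = expit(0.0 + u).mean()      # marginal P(Y=1 | x=0)
p1 = expit(beta_c + u).mean()   # marginal P(Y=1 | x=1)

logit = lambda p: np.log(p / (1.0 - p))
beta_m = logit(p1) - logit(p0)  # marginal log odds ratio, smaller than beta_c
```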
25,461
Random Intercept model vs. GEE
GEE estimates the average population effects. Random intercept models estimate the variability of these effects. If $\alpha_j=\gamma_0+\eta_j$, $\eta_j\sim\mathcal{N}(0,\sigma^2_\alpha)$, random intercept models estimate both $\gamma_0$ (which is the average population intercept and, in normal linear models, equals the one estimated by GEE) and $\sigma^2_\alpha$. If the intercept is modeled by second-level predictors, e.g. $\alpha_j=\gamma_0+\gamma_1 w_j+\eta_j$, a random intercept model can estimate how the intercepts vary at the individual level, i.e. according to economic, demographic, family, etc. factors - to the 'group' to which a specific individual belongs.
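A small simulation (a method-of-moments sketch for illustration, not how mixed-model software actually fits) shows the two quantities a random intercept model targets, $\gamma_0$ and $\sigma^2_\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)
J, n = 500, 50                          # groups and observations per group
gamma0, sigma_a, sigma_e = 5.0, 1.0, 1.0

eta = rng.normal(0.0, sigma_a, J)       # group-level random intercepts
y = gamma0 + eta[:, None] + rng.normal(0.0, sigma_e, (J, n))

gamma0_hat = y.mean()                   # average population intercept
s2_within  = y.var(axis=1, ddof=1).mean()          # pooled within-group variance
# between-group variance of the group means, minus their sampling noise
sigma_a2_hat = y.mean(axis=1).var(ddof=1) - s2_within / n
```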
25,462
Naive Bayes: Continuous and Categorical Predictors
You can use any kind of predictor in a naive Bayes classifier, as long as you can specify a conditional probability $p(x|y)$ of the predictor value $x$ given the class $y$. Since naive Bayes assumes predictors are conditionally independent given the class, you can mix and match different likelihood models for each predictor according to any prior knowledge you have about it. For example, you might know that $p(x|y)$ for some continuous predictor is normally distributed. Simply estimate the mean and variance for this variable under each class in the training set; then use the PDF of the normal distribution to estimate $p(x|y)$ for new unlabeled instances. Similarly, you can use the sufficient statistics and PDF of any other continuous distribution as appropriate. If some other predictor in the classifier is categorical, that's fine. Simply estimate $p(x|y)$ using a Bernoulli or multinomial event model as you normally would, and multiply the two conditional probabilities together in the final prediction (since they are assumed to be independent anyway). Side note: it isn't strictly the case that SVMs and other discriminative linear models take a mixture of categorical and continuous predictors. You can interpret SVMs as taking only continuous predictors, with values in {0,1} for categorical variables as a special case.
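A minimal sketch of such a mixed naive Bayes classifier (one Gaussian feature plus one Bernoulli feature on simulated data; the class-conditional parameters are arbitrary, and Laplace smoothing on the Bernoulli estimate is an added safeguard):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
y = np.repeat([0, 1], n)
x_cont = np.concatenate([rng.normal(0, 1, n), rng.normal(5, 1, n)])         # Gaussian per class
x_bin  = np.concatenate([rng.binomial(1, 0.1, n), rng.binomial(1, 0.9, n)]) # Bernoulli per class

params = {}
for c in (0, 1):
    m = y == c
    p_bern = (x_bin[m].sum() + 1) / (m.sum() + 2)   # Laplace-smoothed Bernoulli parameter
    params[c] = (x_cont[m].mean(), x_cont[m].std(ddof=1), p_bern, m.mean())

def predict(xc, xb):
    """argmax_c of log p(c) + log N(xc | mu_c, sd_c) + log Bern(xb | p_c)."""
    scores = []
    for c in (0, 1):
        mu, sd, p, prior = params[c]
        ll = np.log(prior)
        ll += -0.5 * np.log(2 * np.pi * sd ** 2) - (xc - mu) ** 2 / (2 * sd ** 2)
        ll += xb * np.log(p) + (1 - xb) * np.log(1 - p)
        scores.append(ll)
    return int(np.argmax(scores))
```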
25,463
Naive Bayes: Continuous and Categorical Predictors
Another simple approach to handling continuous predictors is to "bin" your continuous variables: a common example is to split time of day (continuous, numeric) into AM and PM. You can potentially capture more information by increasing the number of bins (e.g. split 24 hours into four 6-hour periods); however, this also increases your model's sensitivity to noisy data, so you need to be careful. Based on my experience, I'd recommend this approach if you have one or a few continuous predictors among many categorical predictors.
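The time-of-day example can be sketched with four six-hour bins (the bin edges and labels are an arbitrary illustrative choice):

```python
import numpy as np

hours = np.array([0.5, 6.2, 13.0, 23.9, 11.5])
edges = np.array([6, 12, 18])           # 4 bins: [0,6), [6,12), [12,18), [18,24)

bin_ids = np.digitize(hours, edges)     # 0=night, 1=morning, 2=afternoon, 3=evening
labels = np.array(["night", "morning", "afternoon", "evening"])[bin_ids]
```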
25,464
Naive Bayes: Continuous and Categorical Predictors
The way to do this, as sketched above ("another simple approach to handling continuous predictors is to bin your continuous variables"), is available in a ready-to-use webservice. Real-valued numeric variables are 'binned' while maximizing the retained discriminative performance with respect to the classifier outcomes to predict. After this preprocessing step, the classifier is built 'on the fly' and its generalization ability tested with N-fold cross validation. You can try this webservice yourself at Insight classifiers. When you want real insight into how classification takes place in your domain, you need to substitute continuous-valued classifiers such as neural networks and support-vector machines with discrete classifiers. Any mapping of a multivariate mixture distribution that comprises discrete and continuous predictive variables to the classification outcomes involves complex probability integrals (Egmont-Petersen et al.), and in most classification domains these probability densities are not Gaussian. So performance-retaining discretization of the predictive variables ensures a distribution-free (non-parametric) classifier, which is also a white box. This means that you can comprehend the classifier and thereby the underlying domain.
25,465
In k-fold cross validation does the training subsample include test set?
They are both correct in their own context; they describe two different ways of doing model selection in different situations.

In general, when you are doing model selection and testing, your data are divided into three parts: a training set, a validation set, and a testing set. You use your training set to train different models, estimate their performance on your validation set, then select the model with optimal performance and test it on your testing set.

On the other hand, if you are using K-fold cross-validation to estimate the performance of a model, your data are divided into K folds; you loop through the K folds, each time using one fold as the testing (or validation) set and the remaining (K-1) folds as the training set. Then you average across all folds to get the estimated testing performance of your model. This is what the Wikipedia page is referring to.

But keep in mind that this is for testing a specific model. If you have multiple candidate models and want to do model selection as well, you have to select a model using only your training set to avoid a subtle circular-logic fallacy. So you further divide your (K-1) folds of 'training data' into two parts, one for training and one for validation. This means you first do an extra 'cross-validation' to select the optimal model within the (K-1) folds, and then you test this optimal model on your testing fold. In other words, you are doing a two-level cross-validation: one is the K-fold cross-validation in general, and within each cross-validation loop there is an extra (K-1)-fold cross-validation for model selection. Then you have what you stated in your question: 'Of the k subsamples one subsample is retained as the validation data, one other subsample is retained as the test data, and k-2 subsamples are used as training data.'
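The index bookkeeping for this two-level scheme can be sketched as follows (pure index arithmetic, no model fitting; the data size and fold counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 20, 5
idx = rng.permutation(n)
outer = np.array_split(idx, K)           # K outer folds

splits = []                              # (train, validation, test) index triples
for i in range(K):
    test = outer[i]
    rest = np.concatenate([outer[j] for j in range(K) if j != i])
    inner = np.array_split(rest, K - 1)  # inner CV on the remaining K-1 folds
    for k in range(K - 1):
        val = inner[k]
        train = np.concatenate([inner[m] for m in range(K - 1) if m != k])
        splits.append((train, val, test))
```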
25,466
In k-fold cross validation does the training subsample include test set?
Here I am re-stating what I gathered from the answer of @Yuanning and the comments of @cbeleites in pseudocode form. This may be helpful for people like me.

To measure the performance of a given model we need only training and test sets:

function measure_performance(model, full_test_set, k_performance):
    subset_list <- divide full_test_set into k_performance subsets
    performances <- empty array
    for each sub_set in subset_list:
        test_set <- sub_set
        training_set <- the rest of the full_test_set
        model <- train model with training_set
        performance <- test model with test_set
        append performance to performances
    end for each
    return mean of the values in performances
end function

But if we need to do model selection as well, we should do this:

function select_model(data, k_select, k_performance):
    subset_list <- divide data into k_select subsets
    performances <- empty array
    for each sub_set in subset_list:
        validation_set <- assume that this sub_set is the validation set
        test_set <- one other random sub_set (Question: How to select test_set)
        training_set <- assume the remaining sub_sets form the training set
        model <- get a model with the help of training_set and validation_set
        performance <- measure_performance(model, test_set, k_performance)
    end for each
    return the model with the best performance (for this, performances will be scanned)
end function
25,467
How to interpret 2-way and 3-way interaction in lmer?
First of all, the default contrasts for categorical variables in R are treatment contrasts. In treatment contrasts, all levels of a factor are compared to the base level (reference category). The base levels do not appear in the output. In your example, the base levels are:

animal: lion
color: white
sex: female

Note that all effects are estimated with respect to the base levels. Let's have a look at the effects; your interpretation is correct.

The intercept is the mean of the dependent variable at the three base levels.
rat is the difference between rat and lion (with respect to the dependent variable). Note that this is not a global difference, but a difference with respect to the other base levels: the effect of rat is estimated for data where color = white and sex = female.
sexmale is the difference between males and females (where animal = lion and color = white).
colorred is the difference between red and white (where animal = lion and sex = female).
coloryellow is the difference between yellow and white (where animal = lion and sex = female).
rat:sexmale: the difference between lions and rats is higher for males than for females (where color = white).
rat:colorred: the difference between lions and rats is higher for red than for white (where sex = female).
rat:coloryellow: the difference between lions and rats is higher for yellow than for white (where sex = female).
sexmale:colorred: the difference between males and females is higher for red than for white (where animal = lion).
sexmale:coloryellow: the difference between males and females is higher for yellow than for white (where animal = lion).
rat:sexmale:colorred: three-factor interaction. The effect rat:sexmale is different for red compared to white.
rat:sexmale:coloryellow: three-factor interaction. The effect rat:sexmale is different for yellow compared to white.

To test further contrasts, you have to run another analysis.
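The mapping from treatment-contrast coefficients to cell means can be checked numerically. Below is a hypothetical two-factor toy example in Python (plain least squares, not lmer; the cell means are made up): with base levels animal = lion and sex = female, the intercept recovers the base cell mean and each coefficient recovers the corresponding difference.

```python
import numpy as np

# One observation per cell, with noise-free cell means for clarity (made up)
cells = {("lion", "female"): 10.0, ("rat", "female"): 12.0,
         ("lion", "male"):   11.0, ("rat", "male"):   16.0}

# Treatment (dummy) coding: base levels are animal = lion, sex = female
rows, y = [], []
for (animal, sex), mean in cells.items():
    rat  = 1.0 if animal == "rat" else 0.0
    male = 1.0 if sex == "male" else 0.0
    rows.append([1.0, rat, male, rat * male])   # intercept, rat, male, rat:male
    y.append(mean)

beta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
# beta = [10, 2, 1, 3]:
#   intercept  = mean of the base cell (lion, female)
#   'rat'      = rat - lion difference among females only
#   'male'     = male - female difference among lions only
#   'rat:male' = how the rat - lion difference changes for males: (16-11)-(12-10)
```

The same arithmetic extends to the three-factor model in the answer, just with more base-level conditioning.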
25,468
In a one sample t-test, what happens if in the variance estimator the sample mean is replaced by $\mu_0$?
There was a problem with the original simulation in this post, which is hopefully now fixed.

While the estimate of the sample standard deviation tends to grow along with the numerator as the mean deviates from $\mu_0$, this turns out not to have all that big an effect on power at "typical" significance levels, because in medium to large samples the statistic still tends to be large enough to reject. In smaller samples it may have some effect, though, and at very small significance levels this could become very important, because it places an upper bound on the power that is less than 1.

A second issue, possibly more important at 'common' significance levels, is that the numerator and denominator of the test statistic are no longer independent under the null (the square of $\bar x-\mu_0$ is correlated with the variance estimate). This means the test no longer has a t-distribution under the null. It's not a fatal flaw, but it means you can't just use t-tables and get the significance level you want (as we will see in a minute); that is, the test becomes conservative, and this impacts the power. As $n$ becomes large, this dependence becomes less of an issue (not least because you can invoke the CLT for the numerator and use Slutsky's theorem to say that there's an asymptotic normal distribution for the modified statistic).

Here's the power curve for an ordinary one-sample t-test (purple curve, two-tailed) and for the test using the null value $\mu_0$ in the calculation of $s$ (blue dots, obtained via simulation, using t-tables), as the population mean moves away from the hypothesized value, for $n=10$:

[power curve figure, n = 10]

You can see the power curve is lower (it gets much worse at smaller sample sizes), but much of that seems to be because the dependence between numerator and denominator has lowered the significance level. If you adjust the critical values appropriately, there would be little between them even at n = 10.

And here's the power curve again, but now for $n=30$:

[power curve figure, n = 30]

This suggests that at non-small sample sizes there's not all that much between them, as long as you don't need to use very small significance levels.
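The mechanism behind the loss of power can be seen in a small numeric check (a Python sketch, not the simulation that produced the power curves): the mean squared deviation about $\mu_0$ decomposes as the mean squared deviation about $\bar x$ plus $(\bar x - \mu_0)^2$, so the modified statistic can never exceed the usual (n-divisor) one in magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=20)   # true mean 0.5, testing mu0 = 0
mu0, n = 0.0, len(x)

xbar = x.mean()
s2_mle  = np.mean((x - xbar) ** 2)            # variance about the sample mean
s2_star = np.mean((x - mu0) ** 2)             # variance about the null value

# The key identity: s*^2 = s^2 + (xbar - mu0)^2, so s* >= s always
assert np.isclose(s2_star, s2_mle + (xbar - mu0) ** 2)

t_usual = np.sqrt(n) * (xbar - mu0) / np.sqrt(s2_mle)
t_star  = np.sqrt(n) * (xbar - mu0) / np.sqrt(s2_star)
# |t_star| <= |t_usual|: the modified denominator grows with the numerator,
# which caps the power and makes the table-based test conservative
```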
25,469
In a one sample t-test, what happens if in the variance estimator the sample mean is replaced by $\mu_0$?
When the null hypothesis is true, your statistic should be similar to the regular t-test statistic (though in computing the standard deviation you should probably divide by $n$ instead of $n-1$, because you are not spending a degree of freedom to estimate the mean). I would expect it to have similar properties (proper size, similar power) when the null hypothesis is true (the population mean is $\mu_0$).

But now consider what happens when the null hypothesis is not true. In calculating the standard error you are then subtracting a value that is not the true mean, or even an estimate of the true mean; in fact you could be subtracting a value that does not even lie within the range of the x values. This will make your standard deviation larger ($\bar{x}$ is guaranteed to minimize the standard deviation) as $\mu_0$ moves away from the true mean. So when the null is false you will be increasing both the numerator and the denominator of the statistic, which will reduce your chances of rejecting the null hypothesis (and the statistic will not follow a t-distribution).

So when the null is true either approach will probably work, but when the null is false, using $\bar{x}$ gives better power (and probably better properties in other respects as well), so it is preferred.
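The parenthetical claim that $\bar{x}$ minimizes the standard deviation is easy to verify numerically. A small Python check, with illustrative data only:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=3.0, scale=2.0, size=30)

def sd_about(c):
    """Root mean squared deviation of x about an arbitrary centre c."""
    return np.sqrt(np.mean((x - c) ** 2))

# sd_about is minimised at the sample mean and grows as c moves away from it,
# which is exactly why centring on mu0 inflates the denominator under a false null
centres = np.linspace(x.min() - 1, x.max() + 1, 201)
assert all(sd_about(x.mean()) <= sd_about(c) for c in centres)
```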
25,470
Investigating robustness of logistic regression against violation of linearity of logit
The linearity assumption is so commonly violated in regression that it should be called a surprise rather than an assumption. Like other regression models, the logistic model is not robust to nonlinearity when you falsely assume linearity. Rather than detecting nonlinearity using residuals or omnibus goodness-of-fit tests, it is better to use direct tests: for example, expand continuous predictors using regression splines and do a composite test of all the nonlinear terms. Better still, don't test the terms and just expect nonlinearity. This approach is much better than trying different single-slope transformations such as square root, log, etc., because statistical inferences arising after such analyses will be incorrect, as they do not have large enough numerator degrees of freedom. Here's an example in R.

require(rms)
f <- lrm(y ~ rcs(age,4) + rcs(blood.pressure,5) + sex + rcs(height,4))
# Fits restricted cubic splines in 3 variables with default knots
# 4, 5, 4 knots = 2, 3, 2 nonlinear terms
Function(f)  # display algebraic form of fit
anova(f)     # obtain individual + combined linearity tests
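For readers without the rms package, here is a rough Python sketch of the same idea: a composite likelihood-ratio test of the nonlinear terms, using a polynomial expansion as a crude stand-in for the restricted cubic spline basis. Everything below (the simulated data, the coefficients, the hand-rolled Newton-Raphson fitter) is an illustrative assumption, not rms internals.

```python
import numpy as np

def fit_logit(X, y, n_iter=30):
    """Logistic regression by Newton-Raphson; returns (coefficients, log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    p = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-12, 1 - 1e-12)
    return beta, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-2, 2, size=n)
logit = -1.0 + 1.2 * x**2                      # the truth is nonlinear in x
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

X_lin  = np.column_stack([np.ones(n), x])               # linearity assumed
X_full = np.column_stack([np.ones(n), x, x**2, x**3])   # + 2 nonlinear terms

_, ll_lin  = fit_logit(X_lin, y)
_, ll_full = fit_logit(X_full, y)
lr = 2 * (ll_full - ll_lin)   # composite test of the nonlinear terms, df = 2
# compare lr to the chi-square critical value: chi2(0.95, df = 2) ~= 5.99
```

With a genuinely nonlinear logit, as here, the composite statistic should comfortably exceed the critical value; rms's `anova(f)` reports the analogous spline-based tests directly.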
25,471
how to interpret the interaction term in lm formula in R?
The standard way to write the prediction equation for your model is: $\hat y = b_0 + b_1*x_1 + b_2*x_2 + b_{12} * x_1 *x_2$ But understanding the interaction is a little easier if we factor this differently: $\hat y = (b_0 + b_2*x_2) + (b_1 + b_{12}*x_2) * x_1$ With this factoring we can see that for a given value of $x_2$ the y-intercept for $x_1$ is $b_0 + b_2*x_2$ and the slope on $x_1$ is $(b_1 + b_{12}*x_2)$. So the relationship between $y$ and $x_1$ depends on $x_2$. Another way to understand this is by plotting the predicted lines between $y$ and $x_1$ for different values of $x_2$ (or the other way around). The Predict.Plot and TkPredict functions in the TeachingDemos package for R were designed to help with these types of plots.
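Here is a tiny numeric illustration of the factored form, with made-up coefficients: the slope on $x_1$ shifts by $b_{12}$ for every unit change in $x_2$.

```python
# Made-up coefficients, for illustration only
b0, b1, b2, b12 = 2.0, 1.5, -0.5, 3.0

def yhat(x1, x2):
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2

def slope_on_x1(x2):
    """Change in yhat per unit of x1, holding x2 fixed."""
    return yhat(1.0, x2) - yhat(0.0, x2)

slope_on_x1(0.0)   # -> 1.5   (= b1)
slope_on_x1(2.0)   # -> 7.5   (= b1 + b12 * 2)
```

Plotting `yhat` against `x1` for a few values of `x2` reproduces the fan of lines that Predict.Plot draws.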
25,472
how to interpret the interaction term in lm formula in R?
It is easiest to think about interactions in terms of discrete variables. Perhaps you have studied two-way ANOVAs, where we have two grouping variables (e.g. gender and age category, with three levels for age) and are looking at how they pertain to some continuous measure (our dependent variable, e.g. IQ). The x1 * x2 term, if significant, can be understood (in this trivial, made-up example) as IQ behaving differently across the levels of age for the different genders. For example, maybe IQ is stable for males across the three age groups, but young females start below young males and have an upward trajectory (with the old female group having a higher mean than the old male group). In a means plot, this would imply a horizontal line for males in the middle of the graph, and perhaps a 45-degree line for females that starts below males but ends above males. The gist is that as you move along the levels of one variable (or "holding X1 constant"), what is going on in the other variable changes. This interpretation also works with continuous predictor variables, but is not so easy to illustrate concretely. In that case, you might want to take particular values of X1 and X2 and see what happens to Y.
25,473
how to interpret the interaction term in lm formula in R?
Suppose you get point estimates of 4 for $x_1$, 2 for $x_2$ and 1.5 for the interaction. Then, the equation is saying that the lm fit is $y = 4x_1 + 2x_2 + 1.5x_1x_2$ Is that what you wanted?
25,474
how to interpret the interaction term in lm formula in R?
Based on @Greg Snow's answer, I just wanted to add a simulation showing this:

set.seed(6); library(viridis)
n = 100
x.lm1 = rnorm(n = n, mean = 5, sd = 1)
x.lm2 = rnorm(n = n, mean = 2, sd = 1)
# Note that this doesn't have to be normally distributed.
# This could be a uniform distribution or from a binomial.
beta0 = 2.5
beta1 = 1.5
beta2 = 2
beta3 = 3
err.lm = rnorm(n = n, mean = 0, sd = 1)
y.lm = beta0 + beta1*x.lm1 + beta2*x.lm2 + beta3*x.lm1*x.lm2 + err.lm
df.lm = data.frame(x1 = x.lm1, x2 = x.lm2, y = y.lm)
lm.out = lm(y ~ x1*x2, data = df.lm)

# Make a new range of x2 values on which we will test the effect of x1
x2r = range(x.lm2)
x2.sim = seq(x2r[1], x2r[2], by = .5)

# This is the effect of x1 at different values of x2 (which moderates the effect of x1)
eff.x1 <- coef(lm.out)["x1"] + coef(lm.out)["x1:x2"] * x2.sim            # the slopes
eff.x1.int <- coef(lm.out)["(Intercept)"] + coef(lm.out)["x2"] * x2.sim  # the intercepts
eff.dat <- data.frame(x2.sim, eff.x1, eff.x1.int)

virPal <- viridis::viridis(length(x2.sim), alpha = .8)
eff.dat$x2.col <- virPal[as.numeric(cut(eff.dat$x2.sim, breaks = length(x2.sim)))]
df.lm$x2.col <- virPal[as.numeric(cut(df.lm$x2, breaks = length(x2.sim)))]

par(mfrow = c(1,1), mar = c(4,4,1,1))
plot(x = df.lm$x1, y = df.lm$y, bg = df.lm$x2.col, pch = 21, xlab = "x1", ylab = "y")
# apply() coerces the data frame to a character matrix, so convert back to numeric
apply(eff.dat, 1, function(x) abline(a = as.numeric(x[3]), b = as.numeric(x[2]),
                                     col = x[4], lwd = 2))
abline(h = 0, v = 0, lty = 3)
legend("topleft", title = "x2", legend = round(eff.dat$x2.sim, 1),
       lty = 1, lwd = 3, col = eff.dat$x2.col, bg = scales::alpha("white", .5))
25,475
Choosing clusters for k-means: the 1 cluster case
The gap statistic is a great way of doing this; Tibshirani, Hastie & Walther (2001). http://stat.ethz.ch/R-manual/R-devel/library/cluster/html/clusGap.html - the relevant R package. The idea is that it performs a sequential hypothesis test of clustering your data for K=1,2,3,... vs a null hypothesis of random noise, which is equivalent to one cluster. Its particular strength is that it gives you a reliable indication of whether K=1, i.e. whether there are no clusters. Here's an example; I was inspecting some astronomy data a few days ago as it happens - namely from a transiting exoplanet survey. I wanted to know what evidence there is for (convex) clusters. My data is 'transit':

library(cluster)
kmax <- 10  # e.g., the maximum number of clusters to consider
cgap <- clusGap(transit, FUN = kmeans, K.max = kmax, B = 100)
for (k in 1:(kmax-1)) {
  if (cgap$Tab[k,3] > cgap$Tab[(k+1),3] - cgap$Tab[(k+1),4]) {
    print(k)
    break
  }
}

With the gap statistic you're looking for the first value of K where the test 'fails', i.e. where the gap statistic significantly dips. The loop above will print such a k, however simply plotting cgap gives you the following figure: See how there's a significant dip in the Gap from k=1 to k=2; that signifies there are in fact no clusters (i.e. 1 cluster).
25,476
Choosing clusters for k-means: the 1 cluster case
You may also try a more recent method: A. Kalogeratos and A. Likas, "Dip-means: an incremental clustering method for estimating the number of clusters", NIPS 2012. The idea is to use statistical hypothesis testing for unimodality on vectors containing the similarity/distance between one point and the rest of the points of the set. The testing is done using the Hartigan-Hartigan dip test (Ann. Statist. 13(1):70-84). The method starts with the whole dataset as one cluster and incrementally splits it as long as the unimodality hypothesis is rejected (i.e. more than one cluster is present). So this method would indicate whether there is more than one cluster in the data (your question), but it may also provide the final clustering. Here you can find some code in Matlab.
25,477
Choosing clusters for k-means: the 1 cluster case
Suppose I am considering the same example:

library(cluster)
cgap <- clusGap(transit, FUN = kmeans, K.max = kmax, B = 100)
for (k in 1:(kmax-1)) {
  if (cgap$Tab[k,3] > cgap$Tab[(k+1),3] - cgap$Tab[(k+1),4]) {
    print(k)
    break
  }
}

How can I subset the elements of the clusters corresponding to the best clustering solution based on the gap statistic, so that I can use them for further analysis on each of the clusters? I know there is a command called subset. There are no issues using this command when we have given the number of clusters we want. But how to subset when we want to subset based on the optimal k obtained using the gap statistic (in short, subsetting elements of clusters inside a loop)?
25,478
What is a good distribution to model average sales
Sales are often assumed to be Poisson distributed, based on the Poisson's properties of modeling "the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event", to quote from Wikipedia - which one could argue applies to people buying stuff. A little more thinking leads us to think about pantry loading, stocking up or hoarding, which would lead to overdispersed data, which can be modeled using the negative binomial distribution. However, sales are often also seasonal or driven by promotions and/or price changes, so you should really think about including such factors in your model, which is when you will end up with the regression variants of the above, i.e., Poisson regression or negative binomial regression. And you are right in that this is a common scenario. People make their living doing this. Like yours truly ;-)
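To see the overdispersion point concretely: for a Poisson, the variance equals the mean, while a gamma-mixed Poisson (which is exactly a negative binomial, and a crude stand-in for stocking-up behaviour) has variance larger than the mean. A quick Python sketch with simulated, entirely hypothetical daily sales (the rates and shapes below are arbitrary choices for illustration):

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Draw one Poisson(lam) variate (Knuth's multiplication method)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

mean_rate, n_days = 5.0, 10_000

# Plain Poisson sales: every day has the same underlying rate.
plain = [poisson(mean_rate) for _ in range(n_days)]

# Negative-binomial-style sales: the daily rate itself varies
# (gamma with shape 2, scale 2.5, so the mean is still 5) --
# mimicking promotions / pantry loading.
mixed = [poisson(random.gammavariate(2.0, 2.5)) for _ in range(n_days)]

def dispersion(xs):
    """Variance-to-mean ratio (the dispersion index)."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return v / m

# dispersion(plain) comes out close to 1; dispersion(mixed) well above 1,
# which is the signature that calls for negative binomial regression.
```

The dispersion index is a cheap first diagnostic before committing to Poisson vs. negative binomial regression.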
25,479
Interpreting seasonality with ACF and PACF
First, here is your intuition illustrated in a simplified time series where the weekend is readily apparent in the ACF: However, this expected ACF pattern can be masked when the data have some trend: A solution (if this is a problem) is to estimate and control for the trend when determining the seasonality. R code that produced these plots follows:

# fourteen repeating 'weeks' of five zeroes and two ones
weekendeffect <- rep(c(rep(0,5), 1, 1), times=14)
plot(weekendeffect, main="Weekly pattern of five zeroes & two ones",
     xlab="Time", ylab="Value")
acf(weekendeffect, main="ACF")

# add steady trend
dailydrift <- 0.05
drift <- seq(from=dailydrift, to=length(weekendeffect)*dailydrift, by=dailydrift)
driftingtimeseries <- drift + weekendeffect
plot(driftingtimeseries, main=c("Weekly pattern with daily drift of", dailydrift),
     xlab="Time", ylab="Value")
acf(driftingtimeseries, main=c("ACF with daily drift of", dailydrift))

# add larger trend
dailydrift <- 0.1
drift <- seq(from=dailydrift, to=length(weekendeffect)*dailydrift, by=dailydrift)
driftingtimeseries <- drift + weekendeffect
plot(driftingtimeseries, main=c("Weekly pattern with daily drift of", dailydrift),
     xlab="Time", ylab="Value")
acf(driftingtimeseries, main=c("ACF with daily drift of", dailydrift))
25,480
Interpreting seasonality with ACF and PACF
Have you used a differencing technique to make your data stationary? Your ACF plot suggests that maybe you have not done this step. Once you have a stationary series, it will be easier to interpret the plots. I add two university sources that might assist you with differencing and interpretation: The Pennsylvania State University; Duke University.
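To see why differencing helps here, note that a linear trend contributes only a constant to the first differences, so differencing a trending series leaves just the stationary seasonal pattern plus noise. A small Python illustration on a hypothetical weekly 0/1 series like the one in the other answer (the drift of 0.05 per day is an arbitrary choice):

```python
# Five weekdays of 0 and a weekend of 1, repeated for 14 weeks,
# plus a linear upward drift of 0.05 per day.
pattern = ([0] * 5 + [1] * 2) * 14
series = [p + 0.05 * t for t, p in enumerate(pattern)]

# First differencing: y'_t = y_t - y_{t-1}
diffed = [b - a for a, b in zip(series, series[1:])]

# The drift adds the same 0.05 to every difference, so only three
# values remain: 0.05 (flat days), 1.05 (step up into the weekend)
# and -0.95 (step back down) -- the weekly cycle is now plainly
# visible in the ACF of the differenced series.
distinct = sorted(set(round(d, 10) for d in diffed))
```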
25,481
Power analysis for binomial data when the null hypothesis is that $p = 0$
You have a one-sided, exact alternative hypothesis $p_{1} > p_{0}$ where $p_{1} = 0.001$ and $p_{0} = 0$. The first step is to identify a threshold $c$ for the number of successes such that the probability to get at least $c$ successes in a sample of size $n$ is very low under the null hypothesis (conventionally $\alpha = 0.05$). In your case, $c=1$, regardless of your particular choice for $n \geqslant 1$ and $\alpha > 0$. The second step is to find out the probability to get at least $c$ successes in a sample of size $n$ under the alternative hypothesis - this is your power. Here, you need a fixed $n$ such that the Binomial distribution $\mathcal{B}(n, p_{1})$ is fully specified.

The second step in R with $n = 500$:

> n  <- 500    # sample size
> p1 <- 0.001  # success probability under alternative hypothesis
> cc <- 1      # threshold
> sum(dbinom(cc:n, n, p1))  # power: probability for cc or more successes given p1
[1] 0.3936211

To get an idea how the power changes with sample size, you can draw a power function:

nn   <- 10:2000                    # sample sizes
pow  <- 1 - pbinom(cc-1, nn, p1)   # corresponding power
tStr <- expression(paste("Power for ", X>0, " given ", p[1]==0.001))
plot(nn, pow, type="l", xaxs="i", xlab="sample size", ylab="power",
     lwd=2, col="blue", main=tStr, cex.lab=1.4, cex.main=1.4)

If you want to know what sample size you need to achieve at least a pre-specified power, you can use the power values calculated above. Say you want a power of at least $0.5$.

> powMin <- 0.5
> idx <- which.min(abs(pow - powMin))  # index for value closest to 0.5
> nn[idx]   # sample size for that index
[1] 693
> pow[idx]  # power for that sample size
[1] 0.5000998

So you need a sample size of at least $693$ to achieve a power of $0.5$.
25,482
Power analysis for binomial data when the null hypothesis is that $p = 0$
You can answer this question easily with the pwr package in R. You will need to define a significance level, power, and effect size. Typically, significance level is set to 0.05 and power is set to 0.8. Higher power will require more observations. Lower significance level will decrease power. The effect size for proportions used in this package is Cohen's h. The cutoff for a small h is often taken to be 0.20. The actual cutoff varies by application, and might be smaller in your case. Smaller h means more observations will be required. You said your alternative is $p = 0.001$. That is very small:

> ES.h(.001, 0)
[1] 0.0632561

But we can still proceed.

> pwr.p.test(sig.level=0.05, power=.8, h = ES.h(.001, 0), alt="greater", n = NULL)

     proportion power calculation for binomial distribution (arcsine transformation)

              h = 0.0632561
              n = 1545.124
      sig.level = 0.05
          power = 0.8
    alternative = greater

Using these values, you need at least 1546 observations.
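The arcsine effect size and the resulting n can be reproduced outside R. A quick Python check of the same normal-approximation formula (a sketch assuming the one-sided setup above; the closed form $n = ((z_{1-\alpha} + z_{\text{power}})/h)^2$ is the standard arcsine-transformation approximation, not code from the pwr package itself):

```python
from math import asin, sqrt, ceil
from statistics import NormalDist

p1, p0 = 0.001, 0.0
alpha, power = 0.05, 0.8

# Cohen's h: difference of arcsine-transformed proportions
h = 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p0))

# Normal-approximation sample size for a one-sided test:
# n = ((z_{1-alpha} + z_{power}) / h)^2
z = NormalDist().inv_cdf
n = ((z(1 - alpha) + z(power)) / h) ** 2
n_required = ceil(n)
```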
25,483
Power analysis for binomial data when the null hypothesis is that $p = 0$
In your specific case there is a simple exact solution: Under the particular null hypothesis $H_0: p=0$ you should never observe a success, so as soon as you observe one success you can be sure that $p\neq0$. Under the alternative $H_1: p=0.001$, the number of trials required to observe the first success follows a geometric distribution. So to obtain the minimum sample size $k$ that achieves a power of $1-\beta$, you need to find the smallest $k$ such that $$1-\beta \leq 1-(1-p)^{k}.$$ So with $p=0.001$, to get 80% power you would need at least 1609 samples.
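The bound is easy to check numerically. A short Python sketch (assuming $p = 0.001$ and a target power of $0.8$) computes the minimal sample size both in closed form and by brute force:

```python
import math

p, target_power = 0.001, 0.8

# Power with n samples is P(at least one success) = 1 - (1 - p)^n.
# Closed form: smallest n with (1 - p)^n <= 1 - target_power.
n_closed = math.ceil(math.log(1 - target_power) / math.log(1 - p))

# Brute force over n for confirmation.
n = 1
while 1 - (1 - p) ** n < target_power:
    n += 1

assert n == n_closed  # both routes agree on the minimal sample size
```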
25,484
How to 'intelligently' bin a collection of sorted data?
I think what you want to do is called clustering: you want to group together your "Value"s such that similar values are collected in the same bin and the total number of bins is preset. You can solve this problem using the k-means clustering algorithm. In MATLAB, you can do this by:

bin_ids = kmeans(Values, 3);

The above call will group the values in Values into three groups such that the within-group variance is minimal.
25,485
How to 'intelligently' bin a collection of sorted data?
k-means is an option, but it is not very sensible for one-dimensional data. In one-dimensional data you have one enormous benefit: the data can be fully sorted. Have a look at natural breaks optimization instead: http://en.wikipedia.org/wiki/Jenks_natural_breaks_optimization
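For intuition: because sorted 1-D data can only be split at segment boundaries, the optimal grouping can be found exactly by dynamic programming, which is the idea behind the Fisher-Jenks natural breaks algorithm. Below is a minimal illustrative Python sketch (my own naive O(k·n²) implementation, not production code; real implementations cache segment costs and use better asymptotics):

```python
def natural_breaks(values, k):
    """Split 1-D data into k groups of consecutive sorted values,
    minimising total within-group sum of squared deviations."""
    xs = sorted(values)
    n = len(xs)
    # Prefix sums give O(1) within-segment SSE for xs[i..j].
    s = [0.0] * (n + 1)
    s2 = [0.0] * (n + 1)
    for i, x in enumerate(xs):
        s[i + 1] = s[i] + x
        s2[i + 1] = s2[i] + x * x

    def cost(i, j):  # within-group SSE of xs[i..j] (inclusive)
        m = j - i + 1
        seg = s[j + 1] - s[i]
        return (s2[j + 1] - s2[i]) - seg * seg / m

    INF = float("inf")
    # best[g][j]: min cost of splitting xs[0..j] into g groups
    best = [[INF] * n for _ in range(k + 1)]
    back = [[0] * n for _ in range(k + 1)]
    for j in range(n):
        best[1][j] = cost(0, j)
    for g in range(2, k + 1):
        for j in range(g - 1, n):
            for i in range(g - 1, j + 1):  # i = first index of the last group
                c = best[g - 1][i - 1] + cost(i, j)
                if c < best[g][j]:
                    best[g][j] = c
                    back[g][j] = i
    # Recover the groups by walking the backpointers.
    groups, j = [], n - 1
    for g in range(k, 0, -1):
        i = back[g][j] if g > 1 else 0
        groups.append(xs[i:j + 1])
        j = i - 1
    return groups[::-1]

groups = natural_breaks([7, 8, 9, 50, 51, 52, 100, 101, 102], 3)
# groups holds the three tight runs of values
```

Unlike Lloyd's k-means, this never depends on a random initialisation, which is exactly the benefit of being able to sort the data first.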
25,486
Why not perform meta-analysis on partially simulated data?
There already exist approaches that aim at synthesizing individual and aggregate person data. The Sutton et al. (2008) paper applies a Bayesian approach which (IMHO) has some similarities to your idea. Riley, R. D., Lambert, P. C., Staessen, J. A., Wang, J., Gueyffier, F., Thijs, L., & Boutitie, F. (2007). Meta-analysis of continuous outcomes combining individual patient data and aggregate data. Statistics in Medicine, 27(11), 1870–1893. doi:10.1002/sim.3165 PDF Riley, R. D., & Steyerberg, E. W. (2010). Meta‐analysis of a binary outcome using individual participant data and aggregate data. Research Synthesis Methods, 1(1), 2–19. doi:10.1002/jrsm.4 Sutton, A. J., Kendrick, D., & Coupland, C. A. C. (2008). Meta-analysis of individual- and aggregate-level data. Statistics in Medicine, 27(5), 651–669.
25,487
Why not perform meta-analysis on partially simulated data?
I thank @Bernd for pointing me in the right direction. Here are some notes on the references he mentioned in his answer, as well as some of the references mentioned in these articles.

Sutton et al (2008)
Sutton et al use within a health context the terms individual patient data versus aggregate data. They note that analysis of individual patient data is often considered to be the gold standard for meta-analysis, citing Stewart and Clarke (1995). It is particularly useful for assessing data quality and performing analyses on values not reported in existing reports (e.g., particular subgroup analyses). Naturally, they note problems, such as the impossibility in some cases of obtaining all individual patient data and the additional costs of processing such data. They also observe that for simple models where the summary statistics are available, results will often be similar or the same. They also observe the infrequency of individual patient meta-analysis, citing a review by Simmonds et al (2005), and mention the review article on meta-analysis combining individual patient data with aggregate data by Riley, Simmonds, et al (2007).

Riley Lambert Abo-Zaid (2010)
In this article Riley et al describe more about meta-analysis of individual participant data. They outline the advantages of meta-analysis of individual participant data (e.g., consistent data processing, modelling of missing data, verification of originally reported results, more analysis options, etc.).

Stewart & Tierney (2002)
Stewart and Tierney review the pros and cons of individual patient data meta-analysis, focusing particularly on practical issues.

Riley Lambert et al (2007)
They describe methods of combining individual patient data with aggregate data in terms of one-step and two-step approaches.

Cooper & Patall (2009)
Cooper and Patall wrote an article as part of a special issue on meta-analysis of individual-level data in Psychological Methods (see Shrout, 2009 for a summary). Cooper and Patall describe research synthesis as one in a second stage of transition:

The first transition is from the narrative research review—in which opaque rules of cognitive algebra are used to synthesize the results of studies—to meta-analysis of [aggregated data]. The second stage involves the transition from meta-analysis of [aggregated data] to the accumulation of [individual participant-level data].

to be continued...

References
Cooper, H., & Patall, E. A. (2009). The relative benefits of meta-analysis conducted with individual participant data versus aggregated data. Psychological Methods, 14(2), 165–176. doi:10.1037/a0015565
Riley, R. D., Lambert, P. C., Staessen, J. A., Wang, J., Gueyffier, F., Thijs, L., & Boutitie, F. (2007). Meta-analysis of continuous outcomes combining individual patient data and aggregate data. Statistics in Medicine, 27(11), 1870–1893. doi:10.1002/sim.3165. PDF: http://www.staessen.net/publications/2006-2010/08-21-P.pdf
Riley, R. D., Lambert, P. C., & Abo-Zaid, G. (2010). Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ, 340, 221.
Riley, R. D., Simmonds, M. C., & Look, M. P. (2007). Evidence synthesis combining individual patient data and aggregate data: a systematic review identified current practice and possible methods. Journal of Clinical Epidemiology, in press and early view.
Riley, R. D., & Steyerberg, E. W. (2010). Meta-analysis of a binary outcome using individual participant data and aggregate data. Research Synthesis Methods, 1(1), 2–19. doi:10.1002/jrsm.4
Shrout, P. E. (2009). Short and long views of integrative data analysis: Comments on contributions to the special issue. Psychological Methods, 14, 177.
Simmonds, M. C., Higgins, J. P. T., Stewart, L. A., Tierney, J. F., Clarke, M. J., & Thompson, S. G. (2005). Meta-analysis of individual patient data from randomized trials: a review of methods used in practice. Clinical Trials, 2, 209–217.
Stewart, L. A., & Clarke, M. J. (1995). Practical methodology of meta-analyses (overviews) using updated individual patient data. Cochrane Working Group. Statistics in Medicine, 14, 2057–2079.
Stewart, L. A., & Tierney, J. F. (2002). To IPD or not to IPD? Advantages and disadvantages of systematic reviews using individual patient data. Evaluation & the Health Professions, 25, 76–97.
Sutton, A. J., Kendrick, D., & Coupland, C. A. C. (2008). Meta-analysis of individual- and aggregate-level data. Statistics in Medicine, 27(5), 651–669.
25,488
Linear model Heteroscedasticity
What is your goal? We know that heteroskedasticity does not bias our coefficient estimates; it only makes our standard errors incorrect. Hence, if you only care about the fit of the model, then heteroskedasticity doesn't matter. You can get more efficient estimates (i.e., ones with smaller standard errors) if you use weighted least squares. In this case, you need to estimate the variance for each observation and weight each observation by the inverse of that observation-specific variance (via the weights argument to lm in R). This estimation procedure changes your estimates. Alternatively, to correct the standard errors for heteroskedasticity without changing your estimates, you can use robust standard errors. For an R implementation, see the sandwich package. Using a log transformation can be a good approach to correct for heteroskedasticity, but only if all your values are positive and the new model provides a reasonable interpretation relative to the question that you are asking.
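To make the two options concrete, here is a minimal sketch in R (assuming the sandwich and lmtest packages are installed; the simulated data are made up for illustration):

```r
## Simulate data whose error standard deviation grows with x,
## i.e., a heteroskedastic linear model
set.seed(1)
x <- runif(200, 1, 10)
y <- 2 + 3 * x + rnorm(200, sd = x)

## OLS: coefficient estimates are unbiased, but the default
## standard errors are wrong under heteroskedasticity
ols <- lm(y ~ x)

## Option 1: keep the OLS estimates, fix the standard errors
library(sandwich)   # vcovHC() for robust covariance matrices
library(lmtest)     # coeftest() for tests with a supplied vcov
coeftest(ols, vcov = vcovHC(ols, type = "HC3"))

## Option 2: weighted least squares, weighting each observation
## by the inverse of its (here known) error variance
wls <- lm(y ~ x, weights = 1 / x^2)
summary(wls)
```

Note that option 1 leaves the coefficients unchanged and only adjusts inference, while option 2 produces different (more efficient) estimates.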
25,489
Linear model Heteroscedasticity
You would want to try the Box-Cox transformation. It is a version of a power transformation: $$ y \mapsto \left\{ \begin{eqnarray} \frac{y^\lambda-1}{\lambda (\dot y)^{\lambda-1}}, & \lambda \neq 0 \\ \dot y \ln y, & \lambda = 0 \end{eqnarray} \right. $$ where $\dot y$ is the geometric mean of the data. When used as a transformation of the response variable, its nominal role is to make the data closer to the normal distribution, and skewness is the leading reason why data may look non-normal. My gut feeling from your scatterplot is that it needs to be applied to (some of) the explanatory variables as well as the response variable. Some earlier discussions include What other normalizing transformations are commonly used beyond the common ones like square root, log, etc.? and How should I transform non-negative data including zeros?. You can find R code following How to search for a statistical procedure in R? Econometricians stopped worrying about heteroskedasticity after the seminal work of Halbert White (1980) on setting up inferential procedures robust to heteroskedasticity (which in fact retold an earlier story by the statistician F. Eicker (1967)). See the Wikipedia page that I just rewrote.
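As a sketch, MASS::boxcox profiles the log-likelihood over a grid of $\lambda$ values (the simulated response below is made up; with log-normal errors the selected $\lambda$ should land near 0, i.e., a log transformation):

```r
library(MASS)  # boxcox()

## A positive, right-skewed response with multiplicative errors
set.seed(42)
x <- runif(200, 1, 10)
y <- exp(0.5 + 0.3 * x + rnorm(200, sd = 0.4))

## Profile the Box-Cox log-likelihood; the maximizing lambda
## suggests which power transformation to use
bc <- boxcox(lm(y ~ x), lambda = seq(-1, 1, by = 0.05), plotit = FALSE)
lambda.hat <- bc$x[which.max(bc$y)]

## Apply the chosen transformation (log when lambda is 0)
y.t <- if (abs(lambda.hat) < 1e-8) log(y) else (y^lambda.hat - 1) / lambda.hat
fit <- lm(y.t ~ x)
```

Remember that boxcox requires a strictly positive response.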
25,490
Linear model Heteroscedasticity
There is a very simple solution to the heteroskedasticity issue associated with dependent variables in time series data. I don't know if this is applicable to your dependent variable. Assuming it is, instead of using the nominal Y, change it to the % change in Y from the prior period to the current period. For instance, say your nominal Y is GDP of $14 trillion in the most recent period. Instead, compute the change in GDP over the most recent period (say 2.5%). A nominal time series always grows and is always heteroskedastic (the variance of the error grows over time because the values grow). A % change series is typically homoskedastic because the dependent variable is pretty much stationary.
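For instance, the conversion to percent changes is a one-liner in R (the GDP figures below are made up for illustration):

```r
## Nominal series in levels: variance tends to grow with the level
gdp <- c(13.5, 13.8, 14.0, 14.3, 14.6)  # hypothetical GDP, $ trillions

## Percent change from the prior period: 100 * (y_t - y_{t-1}) / y_{t-1}
pct.change <- 100 * diff(gdp) / head(gdp, -1)
round(pct.change, 2)  # 2.22 1.45 2.14 2.10
```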
25,491
How are piecewise cubic spline bases constructed?
Look at a simpler problem: construct a basis for the space of piecewise constant functions whose values are allowed to break at the knots. With two knots, that's three intervals. One basis would consist of (a) the function that equals $1$ for all arguments less than or equal to $\xi_1$ and otherwise is $0$, (b) the function equal to $1$ for all arguments from $\xi_1$ through $\xi_2$ and otherwise is $0$, and (c) the function equal to $1$ for all arguments greater than $\xi_2$ but otherwise is $0$.

However, there's another way. The idea is to let the basis elements encode the jumps that occur at the knots. The first basis element therefore is a constant function, say $1$, regardless of the knots. The second basis element encodes a jump at $\xi_1$. It's convenient to take it to equal $0$ for values less than or equal to $\xi_1$ and to equal $1$ for larger values. Let's call this function $H_{\xi_1}$. The third basis element can be taken to be $H_{\xi_2}$. For example, the piecewise constant function that jumps from $48$ to $-120$ and then to $240$ at knots $\xi_1 = 2$ and $\xi_2 = 4$ can be written as $48 - 168H_2 + 360H_4$: in this form, it reveals itself explicitly as a jump of $-168$ at $\xi_1=2$ followed by a jump of $+360$ at $\xi_2=4$, after starting from a baseline value of $48$.

[Figure: a piecewise constant spline with two knots. It is determined by its three levels or, equivalently, by a "baseline" level and two jumps.]

It should be clear that although the space of constant functions has dimension $1$, the space of piecewise constant functions with $k\ge 0$ knots has dimension $k+1$: one for a "baseline" constant plus $k$ more dimensions, one for each possible jump.

Cubic splines are obtained by integrating piecewise constant functions three times. This introduces three constants of integration. We can absorb them into the integral of the constant function. This gives a "baseline" cubic spanned by $1$, $x$, $x^2$, and $x^3$. Modulo these constants of integration, the third integral of $H_{\xi}$ is $\frac{1}{3!}(x-\xi)_+^3$: its third derivative jumps by $1$ at the value $\xi$ and otherwise is constant (equal to $0$ to the left of $\xi$ and $1$ to the right of $\xi$). The basis named in the quotation merely rescales these functions by $3!$.

[Figure: a third integral of the preceding piecewise constant function.]

Notice that no cubic polynomial could possibly behave this way (it cannot have two flat or nearly-flat sections). Splines are inherently more flexible than polynomials of the same degree; they span a higher-dimensional space of functions.

It should now be obvious how to extend this formulation to any number of knots and to any degree of splines. Understanding the procedure can be useful when you need non-standard splines for specific problems. For instance, I recently had to develop circular quadratic splines for a regression that involved an angular independent variable (an orientation in the plane modulo $180$ degrees).
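To make this concrete, here is a short R sketch (the names are mine, not from the quoted text) that builds the truncated power basis $1, x, x^2, x^3, (x-\xi_1)_+^3, (x-\xi_2)_+^3$ for knots at $2$ and $4$ and fits a cubic spline by ordinary least squares:

```r
## Truncated power basis for a cubic spline with the given knots
spline.basis <- function(x, knots) {
  X <- cbind(1, x, x^2, x^3,
             sapply(knots, function(k) pmax(x - k, 0)^3))
  colnames(X) <- c("1", "x", "x^2", "x^3", paste0("(x-", knots, ")+^3"))
  X
}

x <- seq(0, 6, by = 0.1)
X <- spline.basis(x, knots = c(2, 4))
ncol(X)  # 4 + number of knots = 6, the dimension of the spline space

## Fit a cubic spline to noisy data by ordinary least squares
set.seed(1)
y <- sin(x) + rnorm(length(x), sd = 0.1)
fit <- lm(y ~ X - 1)  # "- 1": the basis already contains the constant column
```

Each truncated power column is identically $0$ to the left of its knot, which is exactly the "jump in the third derivative" described above.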
25,492
Is there a way to use cross validation to do variable/feature selection in R?
I believe what you describe is already implemented in the caret package. Look at the rfe function or the vignette here: http://cran.r-project.org/web/packages/caret/vignettes/caretSelection.pdf Now, having said that, why do you need to reduce the number of features? From 70 to 20 isn't really an order-of-magnitude decrease. I would think you'd need more than 70 features before you had a firm prior belief that some of the features really and truly don't matter. But then again, that's where a subjective prior comes in, I suppose.
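For reference, a hedged sketch of what an rfe call might look like (details such as sizes and the helper-function set are choices you would adapt to your data; see the vignette linked above):

```r
library(caret)

## A simulated stand-in for your data: 100 observations, 70 predictors
set.seed(1)
x <- as.data.frame(matrix(rnorm(100 * 70), nrow = 100))
y <- rnorm(100)

## Recursive feature elimination with 10-fold cross-validation, using
## the linear-model helper functions that ship with caret (lmFuncs)
ctrl <- rfeControl(functions = lmFuncs, method = "cv", number = 10)
profile <- rfe(x, y, sizes = c(10, 20, 30), rfeControl = ctrl)

predictors(profile)  # names of the retained variables
```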
25,493
Is there a way to use cross validation to do variable/feature selection in R?
There is no reason why variable-selection frequency provides any information that you do not already get from the apparent importance of the variables in the initial model. This is essentially a replay of the initial statistical significance. You are also adding a new level of arbitrariness when trying to decide on a cutoff for selection frequency. Resampling-based variable selection is badly damaged by collinearity, in addition to the other problems.
25,494
Is there a way to use cross validation to do variable/feature selection in R?
I have revised my answer from earlier today. I have now generated some example data on which to run the code. Others have rightly suggested that you look into using the caret package, which I agree with. In some instances, however, you may find it necessary to write your own code. Below I have attempted to demonstrate how to use the sample() function in R to randomly assign observations to cross-validation folds. I also use for loops to perform variable pre-selection (using univariate linear regression with a lenient p value cutoff of 0.1) and model building (using stepwise regression) on the ten training sets. You can then write your own code to apply the resultant models to the validation folds. Hope this helps!

    ################################################################################
    ## Load the MASS library, which contains the "stepAIC" function for performing
    ## stepwise regression, to be used later in this script
    library(MASS)

    ################################################################################
    ## Generate example data, with 100 observations (rows), 70 variables (columns 1
    ## to 70), and a continuous dependent variable (column 71)
    Data <- as.data.frame(NULL)
    for (i in 1:71) {
      for (j in 1:100) {
        Data[j, i] <- rnorm(1)
      }
    }
    names(Data)[71] <- "Dependent"

    ################################################################################
    ## Create ten folds for cross-validation. Each observation in your data will
    ## randomly be assigned to one of ten folds, and each fold will have the same
    ## number of observations assigned to it. You can double-check this by typing
    ## table(Data$Fold).
    Data$Fold <- sample(rep(1:10, 10))

    ## Note: If you had 105 observations instead of 100, you could instead write:
    ## Data$Fold <- sample(c(rep(1:10, 10), rep(1:5, 1)))

    ################################################################################
    ## I like to use a "for loop" for cross-validation. Here, prior to beginning
    ## the loop, I define the container objects I plan to use in it. You have to
    ## do this first or R will give you an error.
    fit <- NULL
    stepw <- NULL
    training <- NULL
    testing <- NULL
    Preselection <- NULL
    Selected <- NULL
    variables <- NULL

    ################################################################################
    ## Now we can begin the ten-fold cross-validation.
    for (CV in 1:10) {

      ## Define the training and testing folds. I like to store these data in a
      ## list, so that at the end of the script I can go back and look at the
      ## observations in each individual fold.
      training[[CV]] <- Data[which(Data$Fold != CV), ]
      testing[[CV]]  <- Data[which(Data$Fold == CV), ]

      ## Pre-select variables by analyzing each variable separately using
      ## univariate linear regression and ranking them by p value. We run a
      ## separate regression for each of the 70 variables and store the variable
      ## index and the coefficient p value in "Preselection".
      Preselection[[CV]] <- data.frame()
      for (i in 1:70) {
        Preselection[[CV]][i, 1] <- i
        Preselection[[CV]][i, 2] <- summary(lm(Dependent ~ training[[CV]][, i],
                                               data = training[[CV]]))$coefficients[2, 4]
      }
      rm(i)
      names(Preselection[[CV]]) <- c("Variable", "pValue")

      ## Make note of those variables whose p values were 0.1 or less
      Selected[[CV]] <- Preselection[[CV]][which(Preselection[[CV]]$pValue <= 0.1), ]
      row.names(Selected[[CV]]) <- NULL

      ## Build a model formula from the pre-selected variables. Using plain
      ## column names (V1, V2, ...) rather than training[[CV]]$V... means that
      ## predict() can later be used with newdata = testing[[CV]].
      temp <- NULL
      for (k in 1:length(Selected[[CV]]$Variable)) {
        temp[k] <- paste0("V", Selected[[CV]]$Variable[k], " + ")
      }
      variables[[CV]] <- paste(temp, collapse = "")
      variables[[CV]] <- substr(variables[[CV]], 1, nchar(variables[[CV]]) - 3)
      form <- as.formula(paste("Dependent ~", variables[[CV]]))

      ## Build a model using all of the pre-selected variables, then reduce it
      ## by stepwise selection using the MASS package
      fit[[CV]] <- lm(form, data = training[[CV]])
      stepw[[CV]] <- stepAIC(fit[[CV]], direction = "both")

    ## End for loop
    }

    ## Now you have your ten training and validation sets saved as training[[CV]]
    ## and testing[[CV]]. You also have results from your univariate pre-selection
    ## analyses saved as Preselection[[CV]]. Those variables that had p values of
    ## 0.1 or less are saved in Selected[[CV]]. Models built using these variables
    ## are saved in fit[[CV]]. Reduced versions of these models (by stepwise
    ## selection) are saved in stepw[[CV]].

    ## Now you might consider using the predict.lm function from the stats package
    ## to apply your ten models to their corresponding validation folds. You could
    ## then look at the performance of the ten models and average their performance
    ## statistics together to get an overall idea of how well your data predict
    ## the outcome.
    ################################################################################

Before performing cross-validation, it is important that you read about its proper use. These two references offer excellent discussions of cross-validation:

Simon, R. M., Subramanian, J., Li, M. C., & Menezes, S. (2011). Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data. Briefings in Bioinformatics, 12(3), 203–214. http://bib.oxfordjournals.org/content/12/3/203.long

Simon, R., Radmacher, M. D., Dobbin, K., & McShane, L. M. (2003). Pitfalls in the use of DNA microarray data for diagnostic and prognostic classification. JNCI: Journal of the National Cancer Institute, 95(1), 14–18. http://jnci.oxfordjournals.org/content/95/1/14.long

These papers are geared toward biostatisticians, but would be useful for anyone. Also, always keep in mind that using stepwise regression is dangerous (although using cross-validation should help to alleviate overfitting). A good discussion of stepwise regression is available here: http://www.stata.com/support/faqs/stat/stepwise.html. Let me know if you have any additional questions!
Is there a way to use cross validation to do variable/feature selection in R?
I just found something nice over here: http://cran.r-project.org/web/packages/Causata/vignettes/Causata-vignette.pdf

Try this maybe when using the glmnet package:

# extract nonzero coefficients
coefs.all <- as.matrix(coef(cv.glmnet.obj, s="lambda.min"))
idx <- as.vector(abs(coefs.all) > 0)
coefs.nonzero <- as.matrix(coefs.all[idx])
rownames(coefs.nonzero) <- rownames(coefs.all)[idx]
Why is running split tests until statistically significant a "bad thing"? (Or is it?)
It's the "best two out of three" phenomenon. You know the joke:

"Let's flip for it." "OK, go!" "Oops, I lost. How about flipping two more times, with the winner being the best of the three total times?"

Significance testing is exactly like coin flipping (but with biased coins, usually). If you run a short test and it's not significant, maybe you can achieve significance (partly through luck) by prolonging the testing.

The converse of this (I'm tempted to say the "flip side" of this :-)) is that if you plan to conduct a certain number of tests and happen to see a "significant" result early, that's also not dispositive. It's analogous to the reverse of our first contest:

"Let's flip for it. Best two out of three?" "OK, go!" "Ha, I won the first flip, so I win!"

Having said that, note that there are versions of testing which allow you to monitor the (nominal) significance as you go along. These work like ending a contest early when it gets too one-sided, so-called mercy rules. If, in the early going, it becomes extremely obvious that a difference is real, you can save time and effort by ending the testing. These are called sequential hypothesis testing procedures. A good case could be made that these should be your standard way of conducting A-B tests, because in the long run you will spend less time and effort overall.
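The inflation from "testing until significant" is easy to see by simulation. Below is a minimal sketch (Python, not any particular A/B platform; the 10% conversion rate, ten interim looks, and helper names like `z_pvalue` are all invented for illustration). Both arms are identical by construction, so every rejection is a false positive:

```python
import math
import random

random.seed(0)

def z_pvalue(xa, na, xb, nb):
    """Two-sided two-proportion z-test p-value (pooled variance)."""
    p = (xa + xb) / (na + nb)
    se = math.sqrt(p * (1 - p) * (1 / na + 1 / nb))
    if se == 0:
        return 1.0
    z = (xa / na - xb / nb) / se
    return math.erfc(abs(z) / math.sqrt(2))

def run_experiment(n_max=1000, looks=10, rate=0.1, alpha=0.05):
    """Both arms convert at the SAME rate, so the null is true by design.
    Test at `looks` evenly spaced interim analyses."""
    ca = cb = 0
    step = n_max // looks
    pvals = []
    for k in range(1, looks + 1):
        for _ in range(step):
            ca += random.random() < rate
            cb += random.random() < rate
        n = step * k
        pvals.append(z_pvalue(ca, n, cb, n))
    # (rejected at some interim look, rejected at the single final look)
    return min(pvals) < alpha, pvals[-1] < alpha

n_sim = 1000
any_look = final_only = 0
for _ in range(n_sim):
    early, final = run_experiment()
    any_look += early
    final_only += final

print(f"stop-at-first-significance error rate: {any_look / n_sim:.3f}")
print(f"single fixed-sample error rate:        {final_only / n_sim:.3f}")
```

The fixed-sample test rejects at roughly its nominal 5% rate, while stopping at the first significant look rejects substantially more often — the "best two out of three" effect in numbers. (Proper sequential procedures adjust the interim thresholds so the overall error rate stays at 5%.)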
Kalman filter vs. smoothing splines
Regarding your question on the equivalence, fitting a univariate local linear trend model using a Kalman filter is equivalent to fitting a cubic spline; see Time Series Analysis by State Space Methods, Section 3.11 for instance. I think you are right in pointing out that the Kalman filter and smoother are sometimes neglected when they could be put to good use. In particular, I find that the Kalman smoother is much more convenient with irregularly spaced and/or missing data.
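For concreteness, here is a minimal sketch of filtering the local linear trend model, written in plain Python/NumPy rather than any particular state-space package; the noise variances and the vague initialisation are arbitrary illustrative choices. Missing observations (NaNs) are handled simply by skipping the update step, which is the convenience with gappy or irregular data:

```python
import numpy as np

def local_linear_trend_filter(y, var_eps=0.25, var_level=0.01, var_slope=0.001):
    """Kalman filter for the local linear trend model
        y_t     = level_t + eps_t
        level_t = level_{t-1} + slope_{t-1} + xi_t
        slope_t = slope_{t-1} + zeta_t
    NaN entries are treated as missing: the update step is skipped, and the
    state simply continues to be propagated through the gap."""
    T = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    Z = np.array([[1.0, 0.0]])               # observation matrix
    Q = np.diag([var_level, var_slope])      # state disturbance covariance
    a = np.zeros(2)                          # state mean (level, slope)
    P = np.eye(2) * 1e6                      # vague initial covariance
    filtered_level = []
    for obs in y:
        a = T @ a                            # predict
        P = T @ P @ T.T + Q
        if not np.isnan(obs):                # update (skipped when missing)
            F = Z @ P @ Z.T + var_eps        # innovation variance, 1x1
            K = P @ Z.T / F                  # Kalman gain, 2x1
            a = a + (K * (obs - Z @ a)).ravel()
            P = P - K @ Z @ P
        filtered_level.append(a[0])
    return np.array(filtered_level)

# noisy linear ramp with a stretch of missing values in the middle
rng = np.random.default_rng(1)
t = np.arange(50, dtype=float)
y = 0.5 * t + rng.normal(scale=0.5, size=50)
y[20:25] = np.nan
level = local_linear_trend_filter(y)
```

The filtered level tracks the ramp and passes smoothly through the missing stretch — no imputation or resampling needed, which is the point of the state-space formulation.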
Appropriateness of Wilcoxon signed rank test
Wikipedia has misled you in stating "...if both x and y are given and paired is TRUE, a Wilcoxon signed rank test of the null that the distribution ... of x - y (in the paired two sample case) is symmetric about mu is performed." The test determines whether the RANK-TRANSFORMED values of $z_i = x_i - y_i$ are symmetric around the median you specify in your null hypothesis (I assume you'd use zero). Skewness is not a problem, since the signed-rank test, like most nonparametric tests, is "distribution free." The price you pay for these tests is often reduced power, but it looks like you have a large enough sample to overcome that.

A "what the hell" alternative to the signed-rank test might be to try a simple transformation like $\ln(x_i)$ and $\ln(y_i)$ on the off chance that these measurements might roughly follow a lognormal distribution--so the logged values should look "bell curvish". Then you could use a t test and convince yourself (and your boss who only took Business Stats) that the signed-rank test is working. If this works, there's a bonus: the t test on means for lognormal data is a comparison of medians for the original, untransformed, measurements.

Me? I'd do both, and anything else I could cook up (likelihood ratio test on Poisson counts by firm size?). Hypothesis testing is all about determining whether evidence is convincing, and some folks take a heap of convincin'.
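The log-then-t-test idea can be sketched as follows (Python with SciPy rather than whatever software the question used; the lognormal data and the ~15% median shift are made up purely for illustration — as the answer says, only go this route if the logged values really do look "bell curvish"):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 400

# hypothetical paired measurements that are roughly lognormal (an assumption)
x = rng.lognormal(mean=1.0, sigma=0.8, size=n)
y = x * rng.lognormal(mean=0.15, sigma=0.3, size=n)  # ~15% shift in medians

# nonparametric route: Wilcoxon signed-rank on the paired values
w_stat, w_p = stats.wilcoxon(x, y)

# parametric route after logging: a paired t test on log(x) vs log(y), which
# for lognormal data amounts to comparing the medians of the raw measurements
t_stat, t_p = stats.ttest_rel(np.log(x), np.log(y))

print(f"signed-rank p = {w_p:.2e}, paired t on logs p = {t_p:.2e}")
```

When the lognormal assumption is roughly right, the two routes agree, which is exactly the cross-check being recommended.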
Appropriateness of Wilcoxon signed rank test
Both Wikipedia and the R help page are kind of correct and are trying to state the same thing, they just phrase it differently.

The Wikipedia article states the hypotheses as (median = 0) vs (median != 0), and says that you can conclude this from the test if the differences have a symmetric distribution (+ the other assumptions).

The R help page is more specific, it states the hypotheses as (median = 0 and the differences have a symmetric distribution) vs (at least one of those is false). So it has moved an assumption into the null hypothesis. I think they have done this to emphasize the need for symmetry: with skewed differences the signed-rank test will reject the null hypothesis even if the median is dead on.

If you read a textbook, it might also tell you that the null hypothesis being tested is P(X>Y) = 0.5 - the rest in fact just follows from this.

In terms of application, the question is of course whether you care specifically about the median (and then skewness is a problem, and the median test is a possible alternative), or whether you care about the entire distribution, and then P(X>Y) != 0.5 is evidence of changes.
How to calculate regularization parameter in ridge regression given degrees of freedom and input matrix?
A Newton-Raphson/Fisher-scoring/Taylor-series algorithm would be suited to this. You have the equation to solve for $\lambda$:

$$h(\lambda)=\sum_{i=1}^{p}\frac{d_{i}^{2}}{d_{i}^{2}+\lambda}-df=0$$

with derivative

$$\frac{\partial h}{\partial \lambda}=-\sum_{i=1}^{p}\frac{d_{i}^{2}}{(d_{i}^{2}+\lambda)^{2}}$$

You then get:

$$h(\lambda)\approx h(\lambda^{(0)})+(\lambda-\lambda^{(0)})\frac{\partial h}{\partial \lambda}\Big|_{\lambda=\lambda^{(0)}}=0$$

Re-arranging for $\lambda$ you get:

$$\lambda=\lambda^{(0)}-\left[\frac{\partial h}{\partial \lambda}\Big|_{\lambda=\lambda^{(0)}}\right]^{-1}h(\lambda^{(0)})$$

This sets up the iterative search. For an initial starting value, assume $d^{2}_{i}=1$ in the summation; then you get $\lambda^{(0)}=\frac{p-df}{df}$.

$$\lambda^{(j+1)}=\lambda^{(j)}+\left[\sum_{i=1}^{p}\frac{d_{i}^{2}}{(d_{i}^{2}+\lambda^{(j)})^{2}}\right]^{-1}\left[\sum_{i=1}^{p}\frac{d_{i}^{2}}{d_{i}^{2}+\lambda^{(j)}}-df\right]$$

This "goes" in the right direction (increase $\lambda$ when the summation is too big, decrease it when too small), and typically takes only a few iterations to solve. Further, the function is monotonic (an increase/decrease in $\lambda$ will always decrease/increase the summation), so it will converge uniquely (no local maxima).
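A direct transcription of this iteration into code might look like the sketch below (Python/NumPy; the random singular values and the target df = 8 in the demo are arbitrary):

```python
import numpy as np

def ridge_lambda_for_df(d, df, tol=1e-10, max_iter=100):
    """Newton-Raphson search for the ridge penalty lambda satisfying
    sum_i d_i^2 / (d_i^2 + lambda) = df, using the update rule above.
    `d` holds the singular values of the design matrix."""
    d2 = np.asarray(d, dtype=float) ** 2
    lam = (len(d2) - df) / df                    # start from d_i^2 = 1
    for _ in range(max_iter):
        h = np.sum(d2 / (d2 + lam)) - df         # function to drive to zero
        if abs(h) < tol:
            break
        slope = np.sum(d2 / (d2 + lam) ** 2)     # equals -dh/dlambda
        lam = lam + h / slope                    # Newton step
    return lam

# sanity check on made-up singular values: plugging the recovered lambda
# back into the effective-df formula should reproduce the requested df
rng = np.random.default_rng(3)
d = rng.uniform(0.5, 3.0, size=20)
lam = ridge_lambda_for_df(d, df=8.0)
df_check = np.sum(d**2 / (d**2 + lam))
print(f"lambda = {lam:.6f}, effective df = {df_check:.6f}")
```

As the answer notes, the monotone (and convex) shape of $h$ means the iteration homes in on the unique root in a handful of steps.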