46,801
How to construct an interaction plot
I'm not sure that I completely understand your supervisor's suggestion, but the principle that I use when choosing how to create a graph is to make sure that the graph represents the analysis that I'm reporting in my paper. Based on this principle, I would use whatever model you're reporting in your paper to create your graph. Thus, if you are reporting the following model: $y = var1 + var2 + var1 * var2$, then I would use this model to obtain the predicted values that you plot on your graph. On the other hand, if you are reporting the following model: $y = var1 + var2 + var3 + var4 + var5 + var6 + var7 + var8 + var9 + var1 * var2$, then I would plot the $var1 * var2$ interaction from this model, mean-centering var3 through var9 when you obtain the predicted values for your graph. Assuming that the model with your control variables is the one that you are reporting in your paper, I have included below some R code that simulates data and creates a graph from them. You may want to consider plotting your $y$ points marginalized over your various control variables; if you do not know how to do this, I describe how to accomplish it here.

# Set the seed
set.seed(2314)

# Create the data
dat <- matrix(NA, nrow = 200, ncol = 9)
colnames(dat) <- paste0("var", 1:9)
dat <- data.frame(dat)
for(i in 1:9) {
  dat[, paste0("var", i)] <- rnorm(200, sd = 1)
}
dat$y <- .5 * dat$var1 + .5 * dat$var2 + .5 * dat$var1 * dat$var2 + rnorm(200, sd = 1)

# Fit the model
mod <- lm(y ~ var1 * var2 + var3 + var4 + var5 + var6 + var7 + var8 + var9, data = dat)

# Create a matrix of desired predicted values for the model.
# I am holding the control variables constant at their means
pX <- expand.grid(var1 = seq(min(dat$var1), max(dat$var1), by = .1),
                  var2 = c(mean(dat$var2) - sd(dat$var2), mean(dat$var2) + sd(dat$var2)),
                  var3 = mean(dat$var3), var4 = mean(dat$var4), var5 = mean(dat$var5),
                  var6 = mean(dat$var6), var7 = mean(dat$var7), var8 = mean(dat$var8),
                  var9 = mean(dat$var9))

# Get the predicted values
pY <- predict(mod, pX)

# Create a plotting space
plot(dat$var1, dat$y, frame = F, type = "n", xlab = "var1", ylab = "y")

# Plot the points. Points for var1 below the median on var2 are plotted in red,
# points for var1 above the median on var2 are plotted in blue
points(dat[dat$var2 < median(dat$var2), "var1"], dat[dat$var2 < median(dat$var2), "y"],
       pch = 16, cex = .5, col = "red")
points(dat[dat$var2 >= median(dat$var2), "var1"], dat[dat$var2 >= median(dat$var2), "y"],
       pch = 16, cex = .5, col = "blue")

# Plot the lines. Lines are colored to be consistent with the points
lines(pX[pX$var2 == mean(dat$var2) - sd(dat$var2), "var1"],
      pY[pX$var2 == mean(dat$var2) - sd(dat$var2)], col = "red", lwd = 2)
lines(pX[pX$var2 == mean(dat$var2) + sd(dat$var2), "var1"],
      pY[pX$var2 == mean(dat$var2) + sd(dat$var2)], col = "blue", lwd = 2)
46,802
How to construct an interaction plot
Might be worth saying explicitly what's wrong with your proposal & why you should follow the advice given in @Patrick's answer: First, if the model you're using involves other predictors besides the two involved in the interaction, you clearly need to specify values for all of them to make a prediction using the model. Second, even if you're only interested in showing the form of the expected response $\operatorname{E} Y$ against two predictors, $x_1$ & $x_2$, consider what happens when the full model is $$\operatorname{E} Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \ldots + \beta_8 x_8 + \beta_{12} x_1x_2$$ & you fit a reduced model $$\operatorname{E} Y = \beta_0^* + \beta_1^* x_1 + \beta_2^* x_2 + \beta_{12}^*x_1x_2$$ Does $\beta_1=\beta_1^*$, $\beta_2=\beta_2^*$, & $\beta_{12}=\beta_{12}^*$? Answer: not in general; they are equal only if you have taken pains in the experimental design to ensure orthogonality. So the interaction plot could look completely different for the two models.
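The non-equivalence is easy to see in a quick simulation: when an omitted covariate is correlated with the interaction term, the reduced model's interaction coefficient absorbs its effect. A minimal numpy sketch (all coefficients and the correlation structure below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# x3 is deliberately correlated with the x1*x2 product, so it is NOT
# orthogonal to the interaction term (hypothetical structure)
x3 = x1 * x2 + rng.normal(scale=0.5, size=n)
y = 1 + 0.5 * x1 + 0.5 * x2 + 1.0 * x3 + 0.7 * x1 * x2 + rng.normal(size=n)

ones = np.ones(n)
X_full = np.column_stack([ones, x1, x2, x3, x1 * x2])
X_red = np.column_stack([ones, x1, x2, x1 * x2])

b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
b_red, *_ = np.linalg.lstsq(X_red, y, rcond=None)

# full model recovers the true interaction (0.7); the reduced model's
# interaction coefficient absorbs x3's effect and lands near 1.7
print("full:   %.2f" % b_full[4])
print("reduced: %.2f" % b_red[3])
```

With an orthogonal design (x3 independent of x1*x2) the two estimates would agree in expectation, which is exactly the point above.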
46,803
A Reference for PAC-Bayesian?
Here are a few quick Google hits:

- PAC-Bayes Analysis: Background and Applications
- Probably Approximately Correct Learning and Vapnik-Chervonenkis Dimension
- Probably approximately correct learning on Wikipedia
- Overview of the Probably Approximately Correct (PAC) Learning Framework

From this last one, a quote: "A more refined, Bayesian extension of the PAC model is explored in [26]. Using the Bayesian approach involves assuming a prior distribution over possible target concepts as well as training instances. Given these distributions, the average error of the hypothesis as a function of training sample size, and even as a function of the particular training sample, can be defined. Also, $1 - \delta$ confidence intervals like those in the PAC model can be defined as well."

[26] $=$ W. Buntine, A Theory of Learning Classification Rules. PhD thesis, University of Technology, Sydney, 1990.
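For a feel of what the $1 - \delta$ guarantees in these references look like quantitatively, here is the standard realizable-case sample-complexity bound for a consistent learner over a finite hypothesis class (a textbook PAC result, not taken from the links above):

```python
import math

def pac_sample_size(h_size, eps, delta):
    """Number of samples m such that a consistent learner over a finite
    hypothesis class of size h_size has true error <= eps with
    probability >= 1 - delta:  m >= (ln|H| + ln(1/delta)) / eps."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

# e.g. 1024 hypotheses, 10% error tolerance, 95% confidence
print(pac_sample_size(1024, eps=0.10, delta=0.05))  # -> 100
```

PAC-Bayes bounds refine this by replacing the $\ln|H|$ term with a KL divergence between a posterior and a prior over hypotheses.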
46,804
A Reference for PAC-Bayesian?
This paper is a good way to start: https://arxiv.org/pdf/1901.05353.pdf
46,805
A Reference for PAC-Bayesian?
A more recent elementary introduction to PAC-Bayes is "User-friendly introduction to PAC-Bayes bounds" by Pierre Alquier, an 80-page study of the topic.
46,806
Classification problem using imbalanced dataset
Removing samples from the majority class may cause the classifier to miss important concepts/features pertaining to the majority class. One strategy, called informed undersampling, has demonstrated good results: an unsupervised learning algorithm performs independent random sampling from the majority class, and multiple classifiers are built from the combination of each majority-class subset with the minority-class data. Another example of informed undersampling uses the K-nearest neighbor (KNN) classifier to achieve undersampling. The most straightforward of the four KNN-based methods, NearMiss-3, selects a given number of the closest majority samples for each minority sample, guaranteeing that every minority sample is surrounded by some majority samples. However, another method, NearMiss-2, in which majority-class samples are selected if their average distance to the three farthest minority-class samples is smallest, has proved the most competitive approach in imbalanced learning.

The profit (cost) matrix can be considered a numerical representation of the penalty of classifying samples from one class as another. In decision trees:

(1) Cost-sensitive adjustments can be applied to the decision threshold. An ROC curve is used to plot the range of performance values as the decision threshold is moved from the point where the total misclassifications on the majority class are maximally costly to the point where the total misclassifications on the minority class are maximally costly. The most dominant point on the ROC curve corresponds to the final decision threshold. Read this paper for more details.

(2) Cost-sensitive considerations can be given to the split criteria at each node. This is achieved by fitting an impurity function, and the split with maximum fitting accuracy at each node is selected. This tutorial generalizes the effects of decision tree growth for any choice of split criteria.

(3) Cost-sensitive pruning schemes can be applied to the tree. Pruning improves generalization by removing leaves with class probability estimates below a specified threshold. A Laplace smoothing method for the pruning technique is described in the same tutorial to reduce the probability that pruning removes leaves on the minority class.
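The NearMiss-3 idea described above can be sketched in a few lines: for each minority sample, keep its nearest majority-class neighbours. This is a brute-force Python illustration on invented toy data, not the reference implementation:

```python
import numpy as np

def nearmiss3(X_maj, X_min, k=3):
    """NearMiss-3-style undersampling sketch: keep, for each minority
    sample, its k nearest majority samples (union over minority samples),
    so every minority point is surrounded by some retained majority points."""
    keep = set()
    for x in X_min:
        d = np.linalg.norm(X_maj - x, axis=1)   # brute-force Euclidean distances
        keep.update(np.argsort(d)[:k].tolist())
    return np.array(sorted(keep))

rng = np.random.default_rng(1)
X_maj = rng.normal(0, 1, size=(200, 2))    # toy majority class
X_min = rng.normal(2, 0.5, size=(10, 2))   # toy minority class
idx = nearmiss3(X_maj, X_min, k=3)
print(len(idx), "of", len(X_maj), "majority samples kept")
```

With 10 minority points and k = 3, at most 30 majority samples survive, so the retained set is concentrated near the minority region.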
46,807
Classification problem using imbalanced dataset
First of all, you don't need to down-sample unless you don't have enough computing power to fit the model to the full dataset. An alternative approach is to assign target observations a weight of 99 and non-target observations a weight of 1. This means the model considers 1 target misclassification equal to 99 non-target misclassifications, and it will bias the model towards the smaller class without the need for down-sampling. Basically, when down-sampling, you are throwing out information, which reduces the precision of your classifier. Since the positive class is usually more interesting than the negative class, this usually isn't a big problem, but if you can use all the data, you should! Adjusting your weights is another way to tell the model what is important and what is not. Finally, regardless of your approach, you can look at how your model performs on a test set. Calculate an ROC curve, which will allow you to see what the tradeoff is between true positives and false positives for your model and determine a decision threshold. You can also use your profit matrix in this step.
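The 99:1 weighting and the decision-threshold step are two views of the same thing: with a cost matrix, the expected-cost-minimising rule is to predict the target class whenever its predicted probability exceeds a threshold derived from the costs. A tiny Python sketch of that conversion (the 99:1 costs mirror the example above):

```python
def cost_threshold(cost_fn, cost_fp):
    """Decision threshold on P(target | x) that minimises expected cost:
    predicting target costs cost_fp when wrong, predicting non-target
    costs cost_fn when wrong, so predict target when
    (1 - p) * cost_fp <= p * cost_fn, i.e. p >= cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)

# weighting a missed target 99x as costly as a false alarm
t = cost_threshold(cost_fn=99, cost_fp=1)
print(t)  # -> 0.01
```

So instead of re-weighting or down-sampling, you can fit on the full data and simply move the classification threshold down to 0.01, which is what reading the threshold off the ROC curve with your profit matrix accomplishes.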
46,808
How to handle leverage values?
I'm going to stress that, in the absence of a well-defined analysis plan or protocol for handling such values, the answer is: you leave them in. You report unadulterated results as a primary analysis: the one in which the p-value is viewed as answering the main question. If it is necessary and instructive to discuss results from excluding high-leverage points, this is considered a secondary or post-hoc analysis and carries a significantly lesser weight of evidence, more of a "hypothesis generating" result than a "hypothesis confirming" one. The reason for not excluding such values is that you compromise the interpretation of the results and the reproducibility of your analysis. When you make ad hoc decisions about which values are and are not worth leaving in, you cannot trust that another statistician would do the same. The practice of throwing observations out is very bad science. Doing so, you actually revise your hypothesis (because you've defined your population differently than originally stated), and the new "population" is paradoxically defined by what you've observed. The p-value, then, doesn't mean what people think it means, and is, in a way, a falsified result. This brings into question the role of diagnostic statistics. It may sound like I'm advocating never using them. It's quite the opposite. Running diagnostics is good only insofar as it helps you understand the assumptions inherent in the model. As Box said, "All models are wrong, some models are useful." Even with non-linear trends, sometimes the linear relationship is close enough to give us "rules of thumb" that are worthwhile for guiding decision making. Take the relationship between lead exposure at birth and adulthood IQ. Very few, if any, children have 0 exposure to lead. Virtually all of us have been exposed in such a way that our IQ has been significantly diminished from what it could have been otherwise.
When sampling individuals in such a fashion, you would almost certainly find one or more highly influential individuals who have low lead exposure and high IQ. Think about the difference in hypotheses that are ultimately tested when such individuals are either excluded from or maintained in the primary analysis. When diagnostics indicate problematic observations, you need to address a number of issues:

Are there unknown sources of variation or covariation present within subgroups? E.g., correlation between household members, or a wave of lab assays run by a contracted lab that has poorly calibrated equipment?

Does the mean model hold approximately? Is the hypothesis more accurately tested by using a more flexible modeling approach, such as smoothing splines or even higher-order polynomial effects?

Is variance weighting sufficiently accounted for? In LS modeling, this means standard errors are calculated from homoscedastic data or else robust standard errors are used. GLMs automatically reweight such data according to probability models for outcomes. In that case, is the probability model correct?
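For concreteness, leverage itself is just the diagonal of the hat matrix $H = X(X^\top X)^{-1}X^\top$, and a common screening rule flags points with $h_{ii} > 2p/n$. A small numpy sketch (the design and the extreme value are invented for illustration) meant for diagnosis, not for deciding what to exclude:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
X[0, 1] = 8.0   # hypothetical extreme predictor value -> high leverage

# leverage = diagonal of the hat matrix H = X (X'X)^-1 X'
h = np.einsum('ij,ji->i', X, np.linalg.solve(X.T @ X, X.T))

flag = h > 2 * p / n          # common 2p/n rule-of-thumb cutoff
print("flagged points:", np.flatnonzero(flag))
```

Note the invariant sum(h) = p (the trace of the hat matrix equals the number of parameters), which is a handy sanity check on any leverage computation.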
46,809
Limiting Distribution of $W_n=\frac{Z_n}{n^2}$ , $Z_n \sim \chi ^2 (n)$
Your calculation is correct. You simply need to interpret it. Which distribution has an MGF identically equal to 1? Alternatively, your problem can be approached without using MGFs. Recall that $\chi^2(n)$ has the distribution of a sum of $n$ squares of $N(0,1)$ random variables. What can you say about the limiting distribution of $$\frac{1}{n}\sum_{k=1}^n X_k^2,$$ if $X_k\sim N(0,1)$?
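For readers who want the MGF route spelled out: since $Z_n \sim \chi^2(n)$ has MGF $(1-2t)^{-n/2}$, the MGF of $W_n = Z_n/n^2$ is

```latex
M_{W_n}(t) = \mathbb{E}\!\left[e^{t Z_n / n^2}\right]
           = \left(1 - \frac{2t}{n^2}\right)^{-n/2},
\qquad t < \frac{n^2}{2},
\quad\text{so}\quad
\log M_{W_n}(t) = -\frac{n}{2}\log\!\left(1 - \frac{2t}{n^2}\right)
                = \frac{t}{n} + O\!\left(n^{-3}\right) \to 0 .
```

Hence $M_{W_n}(t) \to 1$ for every fixed $t$, which is the MGF of the distribution placing all its mass at $0$.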
46,810
Limiting Distribution of $W_n=\frac{Z_n}{n^2}$ , $Z_n \sim \chi ^2 (n)$
The other answer here gives a useful hint as to what happens. I'm going to show you another aspect of the problem. Using the moment generating functions, it is simple to show that: $$n W_n = \frac{Z_n}{n} \sim \text{Ga} \bigg( \text{Shape} = \frac{n}{2}, \ \text{Rate} = \frac{n}{2} \bigg).$$ This random variable has mean and variance: $$\mathbb{E}(n W_n) = 1 \quad \quad \quad \mathbb{V}(n W_n) = \frac{2}{n},$$ and so asymptotically, we have $n W_n \rightarrow 1$ as $n \rightarrow \infty$. Given that this is true, what do you think happens asymptotically to $W_n$?
46,811
Limiting Distribution of $W_n=\frac{Z_n}{n^2}$ , $Z_n \sim \chi ^2 (n)$
The limit should be degenerate at 0.

Proof: $Z_n/n^2 = (Z_n/n)(1/n)$. Now $Z_n/n \to 1$ in probability and $1/n \to 0$, so $(Z_n/n)(1/n) \to 0$ in probability, which implies $(Z_n/n)(1/n) \to 0$ in distribution. Hence the limiting distribution of $Z_n/n^2$ is degenerate at 0.
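A quick Monte Carlo check of the degeneracy (Python sketch; the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
for n in (10, 100, 1000):
    # draws of W_n = Z_n / n^2 with Z_n ~ chi-square(n)
    w = rng.chisquare(n, size=20000) / n**2
    print(n, w.mean())   # theoretical mean is n/n^2 = 1/n, shrinking to 0
```

The sample means track $1/n$ and collapse toward 0, consistent with the point-mass limit.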
46,812
Metropolis algorithm, what is the target distribution and how to compose it?
MCMC is a strategy for generating samples $x^{(i)}$ while exploring the state space $X$ using a Markov chain mechanism. These are irreducible and aperiodic Markov chains that have $P_{target}(\theta)$ as the invariant distribution. This mechanism is constructed so that the chain spends more time in the most important regions. In particular, it is constructed so that the samples $x^{(i)}$ mimic samples drawn from the target distribution $P_{target}(\theta)$. The answer to your question is: MCMC is used when we cannot draw samples from $P_{target}(\theta)$ directly, but can evaluate $P_{target}(\theta)$ up to a constant of proportionality. To clarify this, let us denote $P_{target}(\theta) = P(\theta | D)$, where $D$ is the data and $P(\theta | D)$ is our posterior target distribution. Normally, calculating the exact $P(\theta | D)$ requires: $ P(\theta | D) = \frac{P(D|\theta) P(\theta)}{P(D)} $ As you can see, our target distribution satisfies $ P(\theta | D) \propto P(D|\theta) P(\theta)$ up to a constant of proportionality. We use this product (of the likelihood and the prior) as the target distribution in a Metropolis algorithm. The acceptance criterion of the algorithm only needs the relative posterior probabilities in the target distribution and not the absolute posterior probabilities, so we can use an unnormalised prior or unnormalised posterior when generating sample values of $\theta$. Section 2 of this paper gives examples of situations in which sampling from the posterior $P_{target}(\theta)$ is tricky. Four scenarios, in short: 1) Bayesian inference and learning (see my comment to another answer on the page), 2) statistical mechanics, 3) optimisation, 4) penalised likelihood model selection.
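To make the "unnormalised target" point concrete, here is a minimal random-walk Metropolis sampler in Python (the function name `metropolis` and the toy target are my own illustration, not from the paper cited above). Note the acceptance step uses only the difference of log targets, so any normalising constant cancels:

```python
import numpy as np

# A minimal random-walk Metropolis sampler. The target is deliberately
# unnormalised: log_target is log of p(D|theta)*p(theta) up to an
# additive constant.
def metropolis(log_target, theta0, n_samples, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = theta0
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal()
        # Acceptance uses only the *ratio* of target values, so the
        # normalising constant P(D) cancels and is never needed.
        if np.log(rng.random()) < log_target(prop) - log_target(theta):
            theta = prop
        out[i] = theta
    return out

# Unnormalised standard normal: exp(-theta^2/2) without 1/sqrt(2*pi).
draws = metropolis(lambda t: -0.5 * t * t, theta0=0.0, n_samples=50_000)
print(draws.mean(), draws.std())   # ~0 and ~1
```

Even though the sampler never sees the normalising constant, the draws match the properly normalised standard normal.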
46,813
Metropolis algorithm, what is the target distribution and how to compose it?
I guess that the missing "concept" is that of the "curse of dimensionality" (http://en.wikipedia.org/wiki/Curse_of_dimensionality), which would make your attempt to investigate your posterior by brute-force gridding irrelevant unless the dimension of your posterior is very small.
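The gridding blow-up is just arithmetic, sketched here in a couple of lines of Python:

```python
# The "curse of dimensionality" in one line: a grid with 100 points per
# axis needs 100**d evaluations of the posterior in d dimensions.
for d in (1, 2, 5, 10):
    print(d, 100 ** d)
# With d = 10 that is already 10^20 grid points -- hopeless to evaluate,
# which is exactly when MCMC becomes attractive.
```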
46,814
Metropolis algorithm, what is the target distribution and how to compose it?
My problem is, we should know $P_{target}(\theta)$ before doing this Metropolis process, right?
Yes. The whole purpose of MCMC is to sample from the (known) target distribution, because handling it with other methods is difficult. For example, the target distribution might be multi-dimensional and maybe you only need the marginal distribution of one variable, and integrating the target distribution is unfeasible or very difficult to do (especially for hierarchical models, for example, in which every unknown parameter depends on other unknown parameters and so on).
Then what is this target distribution? Does it have to do with my prior belief about $\theta$?
As @Zhubarb answered, by Bayes' theorem, if we call $p(\theta)$ your prior belief on $\theta$, then your target distribution, a.k.a. the posterior distribution, is $$p(\theta |\textrm{Data})=\frac{p(\textrm{Data}|\theta)p(\theta)}{p(\textrm{Data})}$$ So yes, your prior belief has to do with your target distribution: in fact, the target distribution is a function of it.
If I already know it (the target distribution), why bother doing this Metropolis sampling? Can't we just use grid approximation?
Yes, you could just use a grid approximation if you know the posterior. This might seem easy to do in one-dimensional problems, but in multi-dimensional problems it's a mess. For example: how would you go about choosing your grid when you have a 10-dimensional parameter vector $\theta$? Where is the maximum or minimum of the distribution? Not in all settings will you have an easy target distribution to play with, and it is in these kinds of settings that MCMC is very useful, because it allows you to draw samples from the target distribution.
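For the one-dimensional case where grid approximation genuinely works, here is a short Python sketch (a toy Bernoulli example with numbers of my own choosing): discretise $\theta$, evaluate likelihood times prior on the grid, and normalise by the sum.

```python
import numpy as np

# Grid approximation of a 1-d posterior.
# Toy setup: Bernoulli likelihood, 6 heads in 9 flips, uniform prior.
theta = np.linspace(0.001, 0.999, 1000)
unnorm = theta**6 * (1 - theta)**3 * 1.0       # likelihood * flat prior
post = unnorm / unnorm.sum()                    # normalised over the grid
print(post.sum())                               # 1.0 by construction
print(theta[post.argmax()])                     # mode near 6/9 = 0.667
```

This is trivial in one dimension; the point of the answer is that the same recipe needs exponentially many grid points as the dimension of $\theta$ grows.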
46,815
How many zeros in an independent variable are too many for regression?
Regression methods do not make assumptions about the distribution of your independent variable. Strictly speaking, you would have too many zeros for linear regression when all of your data are zeros. Instead the issue here is lower statistical power and reduced ability to check your assumptions. Although it is discussed in terms of the t-test, you might be able to get the main idea from my answer here: How should one interpret the comparison of means from different sample sizes?
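A small simulation makes the point that zeros in an independent variable don't bias the fit (Python/NumPy sketch; the 80% zero rate and the coefficients are arbitrary choices of mine):

```python
import numpy as np

# An independent variable that is mostly zeros poses no problem in
# principle for OLS -- the coefficients are still recovered; what
# suffers is precision, because the x-values carry little variation.
rng = np.random.default_rng(42)
n = 5_000
x = np.where(rng.random(n) < 0.8, 0.0, rng.exponential(1.0, n))  # 80% zeros
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # close to the true [1, 2]
```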
46,816
How many zeros in an independent variable are too many for regression?
(1) By simple regression I assume you mean linear regression. (2) Zero values are not, to my mind, an issue for independent variables (in the way they are for DV count data). (3) What is an issue is that you're unlikely to have a linear relationship between IV and DV. There is a whole section in most regression textbooks on how to approach this issue. I usually categorize the IV into categories that seem reasonable. Many statisticians dislike this approach - it is data-driven, arbitrary and wastes statistical power - and would instead recommend: (a) transforming the variable, (b) adding cubic/quadratic terms, (c) using restricted splines, (d) or something even more complex. It depends on what the relationship between IV and DV looks like.
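As a concrete illustration of option (b), the Python sketch below compares a straight-line fit with a quadratic fit on deliberately curved data (the data-generating numbers are arbitrary):

```python
import numpy as np

# Adding a quadratic term when the IV-DV relation is curved: compare
# the residual sum of squares of a straight line vs a quadratic.
rng = np.random.default_rng(7)
x = rng.uniform(-2, 2, 1_000)
y = 1 + x - 1.5 * x**2 + rng.normal(0, 0.5, 1_000)   # truly curved

lin = np.polyfit(x, y, 1)
quad = np.polyfit(x, y, 2)
rss_lin = np.sum((y - np.polyval(lin, x)) ** 2)
rss_quad = np.sum((y - np.polyval(quad, x)) ** 2)
print(rss_lin, rss_quad)   # the quadratic fit is dramatically better
```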
46,817
Statistical comparison of 2 independent Cohen's ds
If $d$ is the observed Cohen's d value, then the sampling variance of $d$ is approximately equal to: $$v = \frac{1}{n_1} + \frac{1}{n_2} + \frac{d^2}{2(n_1+n_2)}.$$ So, to test $H_0: \delta_1 = \delta_2$ (where $\delta_1$ and $\delta_2$ denote the true d values of the two studies), compute: $$z = \frac{d_1 - d_2}{\sqrt{v_1 + v_2}},$$ which follows approximately a standard normal distribution under $H_0$. So, if $|z| \ge 1.96$, you can reject $H_0$ at $\alpha = .05$ (two-sided). As mentioned by gung, you could consider applying the bias-correction first, but unless sample sizes are small, the impact on $z$ will be negligible.
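The test is only a few lines of code. The sketch below (Python; the helper names `var_d` and `compare_ds` are mine) implements the two formulas directly:

```python
import math

# Variable names follow the formulas above: d is the observed Cohen's d,
# n1/n2 are the two group sizes within a study.
def var_d(d, n1, n2):
    return 1 / n1 + 1 / n2 + d**2 / (2 * (n1 + n2))

def compare_ds(d1, n1a, n1b, d2, n2a, n2b):
    # z-statistic for H0: delta_1 = delta_2 across two independent studies
    return (d1 - d2) / math.sqrt(var_d(d1, n1a, n1b) + var_d(d2, n2a, n2b))

# Example: d = 0.5 vs d = 0.1, each from two groups of 50.
z = compare_ds(0.5, 50, 50, 0.1, 50, 50)
print(round(z, 3))   # 1.403, so |z| < 1.96: no significant difference at alpha = .05
```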
46,818
Statistical comparison of 2 independent Cohen's ds
I think I have found the source of the formula in the book by Borenstein et al. (2009), Introduction to Meta-Analysis, John Wiley & Sons Ltd. It is on p. 156 and is used to compare two subgroups. Another option could be to take a look at the other heterogeneity metrics described in the book. Good luck.
46,819
Hazard Function - Survival Analysis
Before obtaining the hazard function of $T=\min\{T_1,...,T_n\}$, let's first derive its distribution and its density function, i.e. the CDF and PDF of the first-order statistic from a sample of independently but not identically distributed random variables. The distribution of the minimum of $n$ independent random variables is $$F_T(t) = 1-\prod_{i=1}^n[1-F_i(t)]$$ (see the reasoning in this CV post, if you don't know it already). We differentiate to obtain its density function: $$f_T(t) =\frac {\partial}{\partial t}F_T(t) = f_1(t)\prod_{i\neq 1}[1-F_i(t)]+...+f_n(t)\prod_{i\neq n}[1-F_i(t)]$$ Using $h_i(t) = \frac {f_i(t)}{1-F_i(t)} \Rightarrow f_i(t) = h_i(t)[1-F_i(t)]$ and substituting in $f_T(t)$, we have $$f_T(t) = h_1(t)[1-F_1(t)]\prod_{i\neq 1}[1-F_i(t)]+...+h_n(t)[1-F_n(t)]\prod_{i\neq n}[1-F_i(t)]$$ $$=\left(\prod_{i=1}^n[1-F_i(t)]\right)\sum_{i=1}^nh_i(t),\;\;\; h_i(t) = \frac {f_i(t)}{1-F_i(t)} \tag{1}$$ which is the density function of the minimum of $n$ independent but not identically distributed random variables. Then the hazard rate of $T$ is $$h_T(t) = \frac {f_T(t)}{1-F_T(t)} = \frac {\left(\prod_{i=1}^n[1-F_i(t)]\right)\sum_{i=1}^nh_i(t)}{\prod_{i=1}^n[1-F_i(t)]} = \sum_{i=1}^nh_i(t) \tag{2}$$
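Equation (2) is easy to sanity-check for constant hazards: exponential lifetimes $T_i$ have $h_i(t) = \lambda_i$, so the minimum should have constant hazard $\sum_i \lambda_i$, i.e. be exponential with mean $1/\sum_i \lambda_i$. A quick Python/NumPy simulation (the rates are arbitrary illustrative values):

```python
import numpy as np

# If T_i ~ Exp(rate_i) then h_i(t) = rate_i, so T = min(T_1,...,T_n)
# should be Exp(sum(rate_i)) with mean 1/sum(rate_i).
rng = np.random.default_rng(0)
rates = np.array([0.5, 1.0, 2.5])
t = rng.exponential(1 / rates, size=(200_000, 3)).min(axis=1)
print(t.mean(), 1 / rates.sum())   # both ~0.25
```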
46,820
Hazard Function - Survival Analysis
Here is an informal way of looking at the matter. Let $h(t)$ denote the hazard rate of a system. Then, $h(T)\Delta T$ is (approximately) the conditional probability that the system fails in the time interval $(T, T+\Delta T]$ given that the system is working at time $T$. Hence $1-h(T)\Delta T$ is (approximately) the probability that a system working at time $T$ is still functioning at time $T+\Delta T$. These approximations improve in accuracy as $\Delta T \to 0$. Now suppose that a system with hazard rate $h(t)$ is actually composed of $n$ subsystems with hazard rates $h_i(t), 1 \leq i \leq n$, and the system fails as soon as (at least) one subsystem fails. The subsystem failures are independent. Consider the event that the system is still working at time $T + \Delta T$ given the event that the system is working at time $T$. But this means that all $n$ subsystems were functional at time $T$ and continue to remain functional at time $T+\Delta T$. Independence of the lack of failures thus gives $$\begin{align} 1 - h(T)\Delta T &\approx \prod_{i=1}^n [1 - h_i(T)\Delta T]\\ &= 1 - \sum_{i=1}^n h_i(T)\Delta T + \sum_{i, j: i\neq j} h_i(T)h_j(T)(\Delta T)^2 - \cdots \\ &\approx 1 - \sum_{i=1}^n h_i(T)\Delta T \quad \text{as}~ \Delta T \to 0 \end{align}$$ That is, $\displaystyle h(t) = \sum_{i=1}^n h_i(t)$. If $A_i$ denotes the event that the $i$-th subsystem fails in the interval $(T,T+\Delta T]$, the probability that the system fails during $(T, T+\Delta T]$ is just $P(A_1\cup A_2\cup \cdots \cup A_n)$, the probability that at least one subsystem fails. But this probability is bounded above by $\sum_i P(A_i)$, and the claim is, in effect, that this union bound is tight and becomes an equality in the limit as $\Delta T \to 0$.
46,821
Hazard Function - Survival Analysis
Since $T=\min(T_1,\ldots,T_n)$ and $T_1$,...,$T_n$ are independent, the survivor function $S(t)=P(T>t)$ of $T$ is $$ \begin{align} S(t) &= P(\min(T_1,\ldots,T_n)>t) \\ &=P(T_1>t,\ldots,T_n>t) \\ &=P(T_1>t)\cdots P(T_n>t) \\ &=S_1(t)\cdots S_n(t), \end{align}$$ where $S_i(t)=P(T_i>t)$ is the survivor function of $T_i$. Now, since $S_i(t)=\exp(-\int_0^t h_i(s)ds)$, we have that $$ S(t) = \prod_i^n \exp\left(-\int_0^t h_i(s)ds\right)=\exp\left(-\int_0^t\sum_{i=1}^{n}h_i(s)ds\right).$$ Finally, since the hazard function of $T$ is linked to its survivor function by the relation $h(t)=-\frac{d\log S(t)}{dt}$, it follows that $$h(t)=\sum_{i=1}^{n}h_i(t)$$ by the fundamental theorem of calculus.
46,822
How to generate random data that conforms to a given mean and upper / lower endpoints?
If you want the distribution on the range min to max and with a given population mean: One common solution when trying to generate a distribution with specified mean and endpoints is to use a location-scale family beta distribution. The usual beta is on the range 0-1 and has two parameters, $\alpha$ and $\beta$. The mean of that distribution is $\frac{\alpha}{\alpha+\beta}$. If you multiply by $\text{max}-\text{min}$ and add $\text{min}$, you have something between $\text{max}$ and $\text{min}$ with mean $\text{min}+\frac{\alpha}{\alpha+\beta}(\text{max}-\text{min})$. This suggests you should take $\beta/\alpha = \frac{\text{max} - \text{mean}}{\text{mean}-\text{min}}$ Or $\alpha/\beta = \frac{\text{mean} - \text{min}}{\text{max}-\text{mean}}$ This leaves you with a free parameter (you can choose $\alpha$ or $\beta$ freely and the other is determined). You could choose the smaller of them to be "1". Or you could choose it to satisfy some other condition, if you have one (such as a specified standard deviation). Larger $\alpha$ and $\beta$ will look more 'bell-shaped'. In Minitab, Calc $\to$ Random Data $\to$ Beta. -- Alternatively, you could generate from a triangular distribution rather than a beta distribution. (Or any number of other choices!) The triangular distribution is usually defined in terms of its min, max and mode, and its mean is the average of the min, max and mode. The triangular distribution is reasonably easy to generate from even if you don't have specialized routines for it. To get the mode from a given mean, use mode = 3$\,\times\,$mean - min - max. However, the mean is restricted to lie in the middle third of the range (which is easy to see from the fact that the mean is the average of the mode and the two endpoints). 
Below is a plot of the density functions for a beta (specifically, $\text{beta}(2,3)$) and a triangular distribution, both with mean 40% of the way between the min and the max. On the other hand, if you want the sample to have a smallest value of min and a largest value of max and a given sample mean, that's quite a different exercise. There are easy ways to do that, though some of them may look a bit odd. One simple method is as follows. Let $p=\frac{\text{mean} - \text{min}}{\text{max}-\text{min}}$. Place $b=\lfloor p(n-1)\rfloor$ points at 1, and $n-1-b$ points at 0, giving an average of $b/(n-1)$ and a sum of $b$. To get the right average, we need the sum to be $np$, so we place the remaining point at $np-b$, and then multiply all the observations by ${\text{max}-\text{min}}$ and add $\text{min}$. E.g. consider $n$ = 12, min = 10, max = 60, mean = 30, so $p$ = 0.4, and $b$ = 4. With seven (12-1-4) points at 0 and four at 1, the sum is 4. If we place the remaining point at 12$\,\times\,$0.4$\,$-$\,$4 = 0.8, the average is 0.4 ($p$). We then multiply all the values by ${\text{max}-\text{min}}$ (50) and add $\text{min}$ (10), giving a mean of 30. Then randomly sample the whole set of $n$ without replacement (or equivalently, just randomly order them). You now have a random sample with the required mean and extremes, albeit one from a discrete distribution.
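Both recipes are short to implement. The Python sketch below (mirroring the worked example with $n$ = 12, min = 10, max = 60, mean = 30) first builds the scaled beta, then the exact-sample construction:

```python
import numpy as np

# Recipe 1: scaled beta. Pick alpha freely, set beta from the ratio
# beta/alpha = (max - mean)/(mean - min) so the population mean lands
# exactly on target.
lo, hi, target = 10.0, 60.0, 30.0
alpha = 2.0
beta = alpha * (hi - target) / (target - lo)          # here beta = 3
rng = np.random.default_rng(0)
sample = lo + (hi - lo) * rng.beta(alpha, beta, size=100_000)
print(sample.mean())        # close to 30 (the population mean is exactly 30)

# Recipe 2: a sample whose *observed* mean, min and max are exact.
n = 12
p = (target - lo) / (hi - lo)                         # 0.4
b = int(np.floor(p * (n - 1)))                        # 4
vals = np.array([1.0] * b + [0.0] * (n - 1 - b) + [n * p - b])
vals = lo + (hi - lo) * vals
rng.shuffle(vals)                                     # random order
print(vals.mean(), vals.min(), vals.max())            # mean 30, min 10, max 60
```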
How to generate random data that conforms to a given mean and upper / lower endpoints?
If you want the distribution on the range min to max and with a given population mean: One common solution when trying to generate a distribution with specified mean and endpoints is to use a location
How to generate random data that conforms to a given mean and upper / lower endpoints? If you want the distribution on the range min to max and with a given population mean: One common solution when trying to generate a distribution with specified mean and endpoints is to use a location-scale family beta distribution. The usual beta is on the range 0-1 and has two parameters, $\alpha$ and $\beta$. The mean of that distribution is $\frac{\alpha}{\alpha+\beta}$. If you multiply by $\text{max}-\text{min}$ and add $\text{min}$, you have something between $\text{max}$ and $\text{min}$ with mean $\text{min}+\frac{\alpha}{\alpha+\beta}(\text{max}-\text{min})$. This suggests you should take $\beta/\alpha = \frac{\text{max} - \text{mean}}{\text{mean}-\text{min}}$ Or $\alpha/\beta = \frac{\text{mean} - \text{min}}{\text{max}-\text{mean}}$ This leaves you with a free parameter (you can choose $\alpha$ or $\beta$ freely and the other is determined). You could choose the smaller of them to be "1". Or you could choose it to satisfy some other condition, if you have one (such as a specified standard deviation). Larger $\alpha$ and $\beta$ will look more 'bell-shaped'. In Minitab, Calc $\to$ Random Data $\to$ Beta. -- Alternatively, you could generate from a triangular distribution rather than a beta distribution. (Or any number of other choices!) The triangular distribution is usually defined in terms of its min, max and mode, and its mean is the average of the min, max and mode. The triangular distribution is reasonably easy to generate from even if you don't have specialized routines for it. To get the mode from a given mean, use mode = 3$\,\times\,$mean - min - max. However, the mean is restricted to lie in the middle third of the range (which is easy to see from the fact that the mean is the average of the mode and the two endpoints). 
Below is a plot of the density functions for a beta (specifically, $\text{beta}(2,3)$) and a triangular distribution, both with mean 40% of the way between the min and the max: One the other hand, if you want the sample to have a smallest value of min and a largest value of max and a given sample mean, that's quite a different exercise. There are easy ways to do that, though some of them may look a bit odd. One simple method is as follows. Let $p=\frac{\text{mean} - \text{min}}{\text{max}-\text{min}}$. Place $b=\lfloor p(n-1)\rfloor$ points at 1, and $n-1-b$ points at 0, giving an average of $b/(n-1)$ and a sum of $b$. To get the right average, we need the sum to be $np$, so we place the remaining point at $np-b$, and then multiply all the observations by ${\text{max}-\text{min}}$ and add $\text{min}$. e.g. consider $n$ = 12, min = 10, max = 60, mean = 30, so $p$ = 0.4, and $b$ = 4. With seven (12-1-4) points at 0 and four at 1, the sum is 4. If we place the remaining point at 12$\,\times\,$0.4$\,$-$\,$4 = 0.8, the average is 0.4 ($p$). We then multiply all the values by ${\text{max}-\text{min}}$ (50) and add $\text{min}$ (10) giving a mean of 30. Then randomly sample the whole set of $n$ without replacement, (or equivalently, just randomly order them). You now have a random sample with the required mean and extremes, albeit one from a discrete distribution.
46,823
Binary Logistic Regression Multicollinearity Tests
I'm glad you like my answer :-) It's not that there is no valid method of detecting collinearity in logistic regression: Since collinearity is a relationship among the independent variables, the dependent variable doesn't matter. What is problematic is figuring out how much collinearity is too much for logistic regression. David Belslely did extensive work with condition indexes. He found that indexes over 30 with substantial variance accounted for in more than one variable was indicative of collinearity that would cause severe problems in OLS regression. However, "severe" is always a judgment call. Perhaps the easiest way to see the problems of collinearity is to show that small changes in the data make big changes in the results. [this paper http://www.medicine.mcgill.ca/epidemiology/joseph/courses/epib-621/logconfound.pdf] offers examples of collinearity in logistic regression. It even shows that R detects exact collinearity, and, in fact, some cases of approximate collinearity will cause the same warning: Warning message: glm.fit: fitted probabilities numerically 0 or 1 occurred Nevertheless, we can ignore this warning and run set.seed(1234) x1 <- rnorm(100) x2 <- rnorm(100) x3 <- x1 + x2 + rnorm(100, 0, 1) y <- x1 + 2*x2 + 3*x3 + rnorm(100) ylog <- cut(y, 2, c(1,0)) m1<- glm(ylog~x1+x2+x3, family = binomial) coef(m1) which yields -2.55, 1.97, 5.60 and 12.54 We can then slightly perturb x1 and x2, add them for a new x3 and run again: x1a <- x1+rnorm(100,0,.01) x2a <- x2+rnorm(100,0, .01) x3a <- x1a + x2a + rnorm(100, 0, 1) ya <- x1a + 2*x2a + 3*x3a + rnorm(100) yloga <- cut(ya, 2, c(1,0)) m2<- glm(ylog~x1a+x2a+x3a, family = binomial) coef(m2) this yields wildly different coefficients: 0.003, 3.012, 3.51 and -0.41 and yet, this set of independent variables does not have a high condition index: library(perturb) colldiag(m1) says the maximum condition index is 3.54. 
I am unaware if anyone has done any Monte Carlo studies of this; if not, it seems a good area for research
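To underline the point that the dependent variable doesn't matter, one can compute variance inflation factors purely from the independent variables. This is a base-R sketch (not part of the original answer) reusing `x1`, `x2`, `x3` from the simulation above:

```r
# VIF for each predictor: regress it on the other predictors, then VIF = 1/(1 - R^2).
# Note that y never enters the computation -- collinearity lives entirely in X.
X <- data.frame(x1, x2, x3)   # predictors from the simulation above
vif <- sapply(names(X), function(v) {
  others <- setdiff(names(X), v)
  r2 <- summary(lm(reformulate(others, response = v), data = X))$r.squared
  1 / (1 - r2)
})
vif   # x3 = x1 + x2 + noise, so all three VIFs are noticeably above 1
```

As with condition indexes, how large a VIF is "too large" for logistic regression remains a judgment call.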
46,824
How to calculate the p-value for a binomial test using pbinom?
If you do not multiply by 2, you will be evaluating the probability of having scores ranging from 18 to 25 (one-sided test). Multiplying by 2, you are evaluating the probability of having scores ranging from 0 to 7 and 18 to 25 (two-sided test). Your command results in an answer similar to this one: binom.test(18, 25, 0.5, alternative="two.sided")
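Concretely, with 18 successes out of 25 trials under $p = 0.5$ as in the question:

```r
# One-sided: P(X >= 18) = 1 - P(X <= 17)
pbinom(17, size = 25, prob = 0.5, lower.tail = FALSE)

# Two-sided: double it (valid here because p = 0.5 makes the
# binomial distribution symmetric, so P(X <= 7) = P(X >= 18))
2 * pbinom(17, size = 25, prob = 0.5, lower.tail = FALSE)   # approx 0.043

# Agrees with:
binom.test(18, 25, 0.5, alternative = "two.sided")$p.value
```

Note the argument to `pbinom` is 17, not 18: `pbinom(q, ...)` gives $P(X \leq q)$, so the upper tail starting at 18 is obtained via `lower.tail = FALSE` at 17.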
46,825
Expected value of a random variable differing from arithmetic mean
In the discrete case the expected value is a weighted sum, where the possible values of the variable are weighted by their probability of occurring (the probability mass function), $EX=\sum_{i=1}^nx_iP(X=x_i)$. Since all weights are non-negative, smaller than unity, and their sum equals unity, the expected value of a discrete random variable is also a specific convex combination of its possible values. In the continuous case the expected value is a weighted integral, where the possible values of the variable are weighted by the probability density function $EY=\int_{-\infty}^{\infty}yf_Y(y)dy$. What happens is that the arithmetic (i.e. unweighted) mean from the realization of a collection of identically distributed random variables (i.e. the "sample mean") is shown to be an unbiased and consistent estimator of the expected value, although the latter is a weighted mean.
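As a concrete instance of the discrete formula, the expected value of a fair six-sided die in R:

```r
x <- 1:6          # possible values
p <- rep(1/6, 6)  # probability mass function (the weights)
sum(x * p)        # weighted sum E[X] = 3.5
```

Here the weights happen to be equal, so the weighted sum coincides with the unweighted mean of the values; with unequal probabilities the two would differ.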
46,826
Expected value of a random variable differing from arithmetic mean
I think that the arithmetic mean approaches the expected value as the number of samples increases. Say, you have a die which you have rolled 10 times and the outcomes are {5,6,4,5,3,2,1,2,4,6}. The mean of the above values is 3.8. But the expected value when a die is rolled 10 times (for that matter, any number of times) is constant and is 1*1/6+2*1/6+...+6*1/6 = 3.5. Hence, we see that the mean and expected values are different. If you take a large number of samples, the mean of the sample means will approach the expected value (the population mean).
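A quick R simulation of the die example shows the sample mean drifting toward the expected value 3.5 as the number of rolls grows:

```r
set.seed(1)
mean(sample(1:6, 10,  replace = TRUE))   # small sample: can be well away from 3.5
mean(sample(1:6, 1e6, replace = TRUE))   # large sample: very close to 3.5
```

This is the law of large numbers at work: the realized mean is random, while the expected value is a fixed property of the distribution.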
46,827
What is fitted in a GARCH: residual or log-return?
If you use the log returns, you're essentially making the assumption that there is no conditional variation in the mean. In some circumstances you may want to explicitly model both, but other times it may be sufficient to assume a constant mean and focus on the conditional variance. Depends on what you're trying to do. In addition, if you fit a GARCH model with raw log returns, then you're also implicitly assuming the mean is zero. Centering the data may be important if the mean is large (i.e. especially in lower frequency data).
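The two choices can be sketched with the rugarch package (a sketch, not part of the original answer; `r` is a placeholder for your log-return series):

```r
library(rugarch)

# (a) GARCH(1,1) on the raw log returns, implicitly assuming a zero mean:
spec0 <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                    mean.model = list(armaOrder = c(0, 0), include.mean = FALSE))

# (b) constant estimated mean mu; the GARCH recursion is applied to r - mu:
spec1 <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                    mean.model = list(armaOrder = c(0, 0), include.mean = TRUE))

fit0 <- ugarchfit(spec0, data = r)
fit1 <- ugarchfit(spec1, data = r)
```

With daily data the mean is typically tiny and the two fits are nearly identical; with lower-frequency data, leaving the mean unmodeled can matter.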
46,828
Main effects and interaction in multivariate meta-analysis (network meta-analysis) in R
Given that there is a single effect size estimate from each study (see comments above), the analysis can be carried out with regular meta-regression methods. You can carry out such an analysis with the metafor package. The "trick" is to code variables that indicate what treatments have been compared within a particular study: library(metafor) my_data$A1 <- ifelse(my_data$treat1 == "A1", 1, 0) my_data$A2 <- ifelse(my_data$treat1 == "A2", 1, 0) my_data$B1 <- ifelse(my_data$treat2 == "B1", -1, 0) my_data$B2 <- ifelse(my_data$treat2 == "B2", -1, 0) res <- rma(TE, sei=seTE, mods = ~ A1 + A2 + B1 - 1, data=my_data) res yields: Mixed-Effects Model (k = 38; tau^2 estimator: REML) tau^2 (estimated amount of residual heterogeneity): 0.2898 (SE = 0.1578) tau (square root of estimated tau^2 value): 0.5384 I^2 (residual heterogeneity / unaccounted variability): 59.02% H^2 (unaccounted variability / sampling variability): 2.44 Test for Residual Heterogeneity: QE(df = 35) = 93.5215, p-val < .0001 Test of Moderators (coefficient(s) 1,2,3): QM(df = 3) = 435.5223, p-val < .0001 Model Results: estimate se zval pval ci.lb ci.ub A1 2.2446 0.2837 7.9123 <.0001 1.6886 2.8006 *** A2 0.9060 0.3387 2.6751 0.0075 0.2422 1.5699 ** B1 -2.2983 0.3467 -6.6294 <.0001 -2.9778 -1.6188 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Since variable B2 has been left out, this becomes the "reference" treatment. So, the coefficient for A1 is the estimated average effect when comparing treatment A1 against B2. The coefficient for A2 is the estimated average effect when comparing treatment A2 against B2. And the coefficient for B1 is the estimated average effect when comparing treatment B1 against B2. The network that is analyzed here looks like this: A1 A2 |\ /| | \ / | | X | | / \ | |/ \| B1 B2 So, the comparison between B1 and B2 is based purely on indirect evidence. There are 3 more comparisons that can be obtained here besides the ones above (i.e., A1 vs A2, A1 vs B1, and A2 vs B1). 
You can obtain those by changing the "reference" treatment. An assumption made here is that the amount of heterogeneity is the same regardless of the comparison. This may or may not be true. An article that describes this type of analysis is: Salanti et al. (2008). Evaluation of networks of randomized trials. Statistical Methods in Medical Research, 17, 279-301. Edit: To test whether the effect of the first factor (A & B) depends on the second factor (1 & 2), that is, whether (A1 vs B1) = (A2 vs B2) or not, first note that: (A1-B2) - (A2-B2) - (B1-B2) = (A1-A2) - (B1-B2) = (A1-B1) - (A2-B2). So, you just have to test whether b1 - b2 - b3 = 0. You can do this with: predict(res, newmods=c(1,-1,-1)) or install/load the multcomp package and use: summary(glht(res, linfct=rbind(c(1,-1,-1))), test=adjusted("none")) which yields: Simultaneous Tests for General Linear Hypotheses Linear Hypotheses: Estimate Std. Error z value Pr(>|z|) 1 == 0 3.6369 0.5779 6.293 3.12e-10 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Adjusted p values reported -- none method)
46,829
Show that $\mathbb{E}(X)$ is finite?
Please, fill in the details marked with a $\star$. First of all, remember that to prove that $\mathrm{E}[X]$ is finite it is enough $\star$ to check that $\mathrm{E}[|X|]$ is finite. Symmetry $\star$ shows that $$ \mathrm{E}[|X|] = \int_{-\infty}^\infty \frac{|x|\, e^x}{(1+e^x)^2} \, dx = 2 \int_0^\infty \frac{x\, e^x}{(1+e^x)^2} \, dx \, . $$ For $x>0$, we have $\star$ $$ \frac{x\,e^x}{(1+e^x)^2} < \frac{x\,e^x}{e^{2x}} = x\,e^{-x} \, . $$ Therefore, if we let $Y$ be a r.v. with $\mathrm{Exp}(1)$ distribution, it follows $\star$ that $\mathrm{E}[|X|]<2\, \mathrm{E}[Y] = 2$. (Now, taking @cardinal's advice, speak out loudly and proudly: Success!)
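A numerical sanity check of the bound in R (the density in the question is the standard logistic, which base R supplies as `dlogis`):

```r
# E|X| for the standard logistic density; the proof above bounds it by 2
integrate(function(x) abs(x) * dlogis(x), -Inf, Inf)
# approx 1.386 (= 2*log(2)): finite, and below the bound of 2
```

This doesn't replace the proof, of course, but it confirms the inequality $\mathrm{E}[|X|] < 2$ is not tight by accident of algebra.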
46,830
how to robustly identify a floor trendline ignoring outliers?
I think linear quantile regression would be close to what you want. This fits a line so that the predicted value for each x value is close to the chosen quantile of the response conditional on x. Here's an R package: http://cran.r-project.org/web/packages/quantreg/index.html For example, you could try a 1% quantile, and see if that avoids the outliers. You can adjust the quantile you choose until it looks about right. If you want to be more principled about deciding where the outliers start, I think you'll need to make some more assumptions about your data distribution.
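A minimal sketch with the quantreg package (`x` and `y` are placeholders for your data):

```r
library(quantreg)

# Fit the 1st-percentile regression line: roughly 1% of the points
# will fall below it, which pushes the line under the outliers' noise floor
fit <- rq(y ~ x, tau = 0.01)

plot(x, y)
abline(fit, col = "red", lwd = 2)

# Compare a few candidate quantiles and pick by eye
for (tau in c(0.01, 0.05, 0.10)) abline(rq(y ~ x, tau = tau), lty = 2)
```

Adjusting `tau` trades off how many low outliers the floor line ignores against how tightly it hugs the bulk of the data.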
46,831
how to robustly identify a floor trendline ignoring outliers?
Is there only 1 trend line? Probably not. You have outlying points below the "visual trend line at the bottom". How will these be "ignored" so as to capture the dominant floor trend line that the eye sees and not be influenced by them? To detect them and reduce their influence, one would want to simultaneously detect BOTH the trend line(s) and pulses that are inconsistent with the trend(s). If you can reduce your xy observations as you said and then post the reduced set, I might take a shot at this using the only piece of commercially available software that I know of that deals with trend detection while considering ARIMA structure and pulses. As far as I know nothing free is available, and of course the heuristics to duplicate the human eye are not disclosable.
46,832
how to robustly identify a floor trendline ignoring outliers?
perhaps try running this custom-built code in the R language..if you've never used R, check http://twotorials.com/ to get started ;)

# create eight sets of a thousand random x values
x1 <- rnorm( 1000 , mean = 1 )
x2 <- rnorm( 1000 , mean = 2 )
x3 <- rnorm( 1000 , mean = 3 )
x4 <- rnorm( 1000 , mean = 4 )
x5 <- rnorm( 1000 , mean = 5 )
x6 <- rnorm( 1000 , mean = 1 )
x7 <- rnorm( 1000 , mean = 1 )
x8 <- rnorm( 1000 , mean = 3 )

# create eight sets of a thousand random y values
y1 <- rnorm( 1000 , mean = 1 )
y2 <- rnorm( 1000 , mean = 2 )
y3 <- rnorm( 1000 , mean = 3 )
y4 <- rnorm( 1000 , mean = 4 )
y5 <- rnorm( 1000 , mean = 5 )
y6 <- rnorm( 1000 , mean = 5 )
y7 <- rnorm( 1000 , mean = 3 )
y8 <- rnorm( 1000 , mean = 5 )

# combine all of these values into two vectors
x <- c( x1 , x2 , x3 , x4 , x5 , x6 , x7 , x8 )
y <- c( y1 , y2 , y3 , y4 , y5 , y6 , y7 , y8 )

# this distribution looks like your example distribution
plot( x , y )

# along the x axis, figure out some reasonable intervals to "bin" the data
# let's say you want one bin per 100 points..
num.bins <- length( x ) / 100

# figure out what quantiles to cut your bins at
quantile.probs <- seq( 0 , 1 , length.out = num.bins )

# slice up your `x` data into that many equal bins
bin.cutpoints <- quantile( x , quantile.probs )

# now let's look at just the first bin:
# positions within the first bin
first.bin <- which( bin.cutpoints[ 1 ] <= x & x < bin.cutpoints[ 2 ] )

# x midpoint between the first two cutpoints
first.midpoint <- as.numeric( bin.cutpoints[ 1 ] + ( bin.cutpoints[ 2 ] - bin.cutpoints[ 1 ] ) / 2 )
first.midpoint

# since you wanted to discard 1% of all points, choose the 1% quantile cutoff point
one.percent.cutoff <- round( quantile( 1:length( y[ first.bin ] ) , 0.01 ) )

# find the point at the edge of the first percentile within this bin
first.percentile <- sort( y[ first.bin ] )[ one.percent.cutoff ]

# and there's your `y` value
first.percentile

# create two empty vectors to start storing values
low.x <- NULL
low.y <- NULL

# repeat this process for all bins:
for ( i in 2:length( bin.cutpoints ) ){

  this.bin <- which( bin.cutpoints[ i - 1 ] <= x & x < bin.cutpoints[ i ] )

  # x midpoint of the current bin: previous cutpoint plus half the bin width
  this.midpoint <- as.numeric( bin.cutpoints[ i - 1 ] + ( bin.cutpoints[ i ] - bin.cutpoints[ i - 1 ] ) / 2 )

  low.x <- c( low.x , this.midpoint )

  # since you wanted to discard 1% of all points, choose the 1% quantile cutoff point
  one.percent.cutoff <- round( quantile( 1:length( y[ this.bin ] ) , 0.01 ) )

  # find the point at the edge of the first percentile within this bin
  first.percentile <- sort( y[ this.bin ] )[ one.percent.cutoff ]

  # and there's your `y` value
  low.y <- c( low.y , first.percentile )
}

# plot your original points
plot( x , y , main = 'one bin per hundred points' )

# RE-plot the points at the 1% quantile cutoff within each "bin"
# so you can see exactly what line you're best-fitting
points( low.x , low.y , col = "red" , pch = 19 )

# draw your line of best fit
abline( lm( low.y ~ low.x ) )

here's the result..
notice it is sensitive to the size of each bin..
46,833
Two player dice game probability
For a particular die roll the cumulative probability is $ P(X_i \leq x ) = x/6 $, for $x=1,...,6$. So, if the die rolls are independent, $$ P(\max \{ X_1, ..., X_n \} \leq m) = P(X_1 \leq m, ..., X_n \leq m) = \prod_{i=1}^{n} P(X_i \leq m ) = \left( \frac{m}{6} \right)^n $$ for $m=1,...,6$. When $m > 6$ this probability is clearly $1$ and $0$ if $m < 1$. From this it's simple to deduce that $$P(\max \{ X_1, ..., X_n \} = m) = \frac{m^n - (m-1)^n}{6^n} $$ (I've suppressed the indicator that $m \in \{1,...,6\}$). Note that to generalize this to a $k$-sided die, just replace $6$ everywhere with $k$. Suppose players $A$ and $B$ throw the die $n_A$,$n_B$ times with maximum rolls $M_A, M_B$, respectively. By the description above, player $A$ wins if $M_A > M_B$. Using the law of total probability, \begin{align*} P(M_A > M_B) &= E_{m} \Big( P(M_A > M_B | M_B = m) \Big) \\ &= E_{m} \Big(1 - \Big( \frac{m}{6} \Big)^{n_A} \Big) \\ &= \frac{1}{6^{n_A + n_B}} \sum_{m=1}^{6} \left(6^{n_A} - m^{n_A} \right) \left(m^{n_B} - (m-1)^{n_B} \right) \end{align*} If $n_A = n_B = n$, this simplifies to $$ \frac{1}{6^{2n}}\sum_{m=1}^{6} \left(6^n - m^n \right) \left(m^n - (m-1)^n \right) $$ Below this is plotted as a function of $n$. In this example, it's intuitive that the probability of $A$ winning quickly goes to zero as $n$ increases since their maximums become increasingly likely to both be six, in which case $B$ wins.
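The closed form can be checked against simulation in R (illustrated here with $n_A = n_B = 2$):

```r
set.seed(42)
n <- 2

# Monte Carlo estimate of P(M_A > M_B): each player rolls n dice, A wins on a strictly larger max
sims <- replicate(1e5, max(sample(1:6, n, replace = TRUE)) >
                       max(sample(1:6, n, replace = TRUE)))
mean(sims)

# Closed form for n_A = n_B = n from the derivation above
m <- 1:6
sum((6^n - m^n) * (m^n - (m - 1)^n)) / 6^(2 * n)
```

The two numbers should agree to within Monte Carlo error; increasing `n` shows the probability shrinking as both maxima pile up at six.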
46,834
How to do external validation of regression models
Regarding part of #1, perhaps a better and more formal way to proceed is to put in a variable that is the logit of a published model, and add to it all of its component variables less one term. Do a chunk likelihood ratio $\chi^2$ test for the added value of all the components. That is a test of lack of fit of the published model. Regarding comparing performance on an external dataset, you can build a model with the logits of the two models and see if each one adds predictive information to the other using Wald or likelihood ratio $\chi^2$ tests. Then there are the methods of Pencina and others (the improveProb function in the R Hmisc package).
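For the 1-df case (e.g., testing whether one model's logit adds anything on top of the other's), the likelihood-ratio $\chi^2$ and its p-value follow directly from the two fitted log-likelihoods; a sketch, with made-up log-likelihood values for illustration:

```python
import math

def lr_test_1df(ll_restricted, ll_full):
    """Likelihood-ratio test with 1 degree of freedom.
    Assumes nested models, so ll_full >= ll_restricted.
    The chi-square(1) survival function is erfc(sqrt(x/2)),
    since a chi-square(1) variable is a squared standard normal."""
    stat = 2.0 * (ll_full - ll_restricted)
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# e.g., the model without the other model's logit vs. with it:
stat, p = lr_test_1df(ll_restricted=-105.2, ll_full=-103.1)
print(stat, p)  # chi-square of 4.2 on 1 df
```

A chunk test of several components at once has more than 1 df and needs the general chi-square survival function (e.g., scipy.stats.chi2.sf); the erfc shortcut above only covers df = 1.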
46,835
How to do external validation of regression models
Regarding 1. There may be, but I would guess that a formal method isn't very useful (although some journal editor or pointy haired boss may want one). Rather ask "Do the models look the same? Would anyone care about the differences?" Regarding 2. I don't understand the question. Regarding 3. Well.... what's wrong with writing down your model, writing down theirs, and looking? It sounds like you may be re-asking Q1. If the models are substantively different, you can then start to ask why they are. E.g. Different data sets, different models, different methods (e.g. you used splines and they used polynomials) etc.
46,836
Is likelihood ratio test the only way to build hypothesis tests?
No, the likelihood ratio is not the only way to construct hypothesis tests, but it often is optimal. In one flavour of the frequentist paradigm you can construct a hypothesis test from any arbitrary test statistic that can generate a p value, i.e. the probability of observing data at least as extreme as those observed, given the null hypothesis. An alternative hypothesis does not need to be formally stated (other than "not null") and hence a likelihood ratio cannot be constructed. Even when we do have a formal alternative hypothesis there are multiple ways of constructing tests, but the Neyman-Pearson lemma shows that in many situations the likelihood ratio will be the most powerful. We are often seeking the most powerful test; or the "uniformly most powerful test" if the alternative hypothesis is composite (e.g. takes in multiple possible parameter values); or the uniformly most powerful unbiased test if there is no clear uniformly most powerful test. So we often end up with a likelihood ratio test. There are situations where likelihoods simply don't work: no density exists in the model, for example. The Bayesian paradigm gives an entirely different approach again, usually involving the calculation of a "Bayes factor" rather than a likelihood ratio.
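As a toy illustration of the Neyman-Pearson setup: for two simple hypotheses $N(0,1)$ vs $N(1,1)$, the likelihood ratio is monotone in the sample sum, so the most powerful test is just a threshold on the sample mean. A sketch (data values invented):

```python
import math

def log_likelihood_ratio(xs, mu0=0.0, mu1=1.0, sigma=1.0):
    """log [ L(mu1) / L(mu0) ] for iid normal data with known sigma."""
    def loglik(mu):
        return sum(-0.5 * math.log(2 * math.pi * sigma**2)
                   - (x - mu)**2 / (2 * sigma**2) for x in xs)
    return loglik(mu1) - loglik(mu0)

xs = [0.3, 1.1, -0.4, 0.9, 0.2]
n = len(xs)
# For mu0 = 0, mu1 = 1, sigma = 1 the log-ratio collapses to sum(x) - n/2,
# so rejecting for a large likelihood ratio is the same as rejecting
# for a large sample mean.
print(log_likelihood_ratio(xs), sum(xs) - n / 2)
```

The algebraic collapse is what the Neyman-Pearson lemma exploits: the rejection region of the most powerful test can be expressed through a much simpler statistic.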
46,837
Intraclass correlation coefficient interpretation
"I am struggling to find anything online which deals with interpreting this" The output you present is from the SPSS Reliability Analysis procedure. Here you had some variables (items) which are raters or judges for you, and 17 subjects or objects which were rated. Your focus was to assess inter-rater agreement by means of the intraclass correlation coefficient. In the 1st example you tested p=7 raters, and in the 2nd you tested p=9. More importantly, your two outputs differ in how the raters are considered. In the 1st example, the raters are a fixed factor, which means they are the population of raters for you: you infer about only these specific raters. In the 2nd example, the raters are a random factor, which means they are a random sample of raters for you, while you want to infer about the population of all possible raters which those 9 represent. The 17 subjects that were rated constitute a random sample of the population of subjects. And, since each rater rated all 17 subjects, both models are complete two-way (two-factor) models: one is fixed+random = mixed model, the other is random+random = random model. Also, in both instances you requested to assess the consistency between raters, that is, how well their ratings correlate, rather than the absolute agreement between them, i.e. how identical their scores are. With measuring consistency, Average measures ICC (see the tables) is identical to Cronbach's alpha. Average measures ICC tells you how reliably the/a group of p raters agree. Single measures ICC tells you how reliable it is for you to use just one rater; because, if you know the agreement is high, you might choose to inquire from just one rater for that sort of task. If you tested the same number of the same raters (and the same subjects) under both models you'd see that the estimates in the table are the same under both models.
However, as I've said, the interpretation differs in that you can generalize the conclusion about the agreement onto the whole population of raters only with the two-way random model. You can also see a footnote saying that the mixed model assumes there is no rater-subject interaction; to put it more clearly, it means that the raters lack individual partialities to subjects' characteristics not relevant to the rated task (e.g. to the hair colour of an examinee). The SPSS Reliability Analysis procedure assumes additivity of scores (which logically implies interval or dichotomous but not ordinal level of data) and bivariate normality between items/raters. However, the F test is quite robust.
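The claim that the average-measures consistency ICC equals Cronbach's alpha can be checked directly from the two-way ANOVA mean squares; a sketch in Python (ratings matrix invented for illustration):

```python
from statistics import mean, variance

def icc_consistency(ratings):
    """ratings: list of subjects (rows), each rated by the same k raters (cols).
    Returns (single-measures ICC(C,1), average-measures ICC(C,k))
    from the two-way ANOVA mean squares."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(x for row in ratings for x in row)
    row_means = [mean(row) for row in ratings]
    col_means = [mean(row[j] for row in ratings) for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    single = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    average = (ms_rows - ms_err) / ms_rows
    return single, average

ratings = [[3, 4, 3], [2, 2, 1], [5, 4, 5], [4, 5, 4], [1, 2, 2]]
single, average = icc_consistency(ratings)

# Cronbach's alpha computed the usual way, from item and total-score variances:
k = 3
item_vars = [variance([row[j] for row in ratings]) for j in range(k)]
total_var = variance([sum(row) for row in ratings])
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(average, alpha)  # identical up to rounding
```

The equality `average == alpha` is exact, which is why SPSS reports the same number in both places for the consistency definition.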
46,838
Intraclass correlation coefficient interpretation
You might want to read the article by LeBreton and Senter (2007). It's a fairly accessible overview of how to interpret ICC and related indicators of inter-rater agreement. LeBreton, J. M., & Senter, J. L. (2007). Answers to 20 questions about interrater reliability and interrater agreement. Organizational Research Methods.
46,839
Intraclass correlation coefficient interpretation
Let me provide a response for the first situation that you analysed because the second situation essentially parallels it, except that you have two more items in the second situation and you chose a different model (more about that below). In providing this response, in some places I have a different interpretation from the extended explanation that has been provided elsewhere in these posts. As I understand it, you had 17 raters (participants), each of whom provided a rating on 5-point scales to seven different items AND you are wanting to see whether there is much agreement between the 17 raters in how they rated those 7 items. I think that, in order to do this (which is surely a pretty unusual situation; usually there are not as many as 17 raters involved in assessing something), you should have selected ABSOLUTE (not consistent) measures in SPSS, and, if your participants are the only raters of interest in this situation (I assume they are, and that you are not wanting to generalize your results to other participants / raters) you should indeed have chosen Model 3 (i.e., 2-way mixed, NOT Model 2 as you did in your second setup), which is the FIRST model offered in SPSS. So, in essence, you have made a basic mistake in selecting the kind of ICC that provides a consistency solution SPSS. (Sorry to give you the bad news.) Next, when you choose an ICC from the output you should choose the ICC from the row titled "Single measures" (i.e., .133) because each of your participants made a single rating for each of the 7 items (and I assume you entered 17 scores into the ICC analysis for each item). If you had averaged all of your 17 participants' ratings on each item BEFORE entering the data into the ICC analysis, it would be appropriate for you to report the ICC that pertains to the Averaged measures (.519). But, from your description, you didn't average the ratings that were made by your participants. 
If you had chosen Absolute rather than Consistency for your first analysis, an ICC as low as .133 would indicate that your 17 participants / raters exhibited EXTREMELY little agreement among themselves in terms of how they rated the 7 items. An article in 2016 by Trevethan in the journal Health Services and Outcomes Research Methodology provides the background for this answer as well as a lot of other information concerning the selection and interpretation of ICCs. Finally, the small number of items (7 in the first situation) might create some problems statistically. I am sorry, but I am not able to provide advice about that. Maybe it's OK in your situation, but it might be advisable to consult a friendly statistician. References Trevethan, R. (2016). "Intraclass correlation coefficients: Clearing the air, extending some cautions, and making some requests." Health Services and Outcomes Research Methodology. DOI 10.1007/s10742-016-0156-6. (Online publication available until volume, issue, and page numbers have been assigned.)
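The Single measures and Average measures values in the output are linked by the Spearman-Brown step-up formula. Assuming the relevant k here is the 7 items/columns, the .133 single-measures ICC quoted above steps up to roughly the .519 average-measures value; a quick check:

```python
def spearman_brown(single_icc, k):
    """Reliability of the average of k measurements, given the
    single-measurement ICC (Spearman-Brown prophecy formula)."""
    return k * single_icc / (1 + (k - 1) * single_icc)

print(spearman_brown(0.133, 7))  # approximately 0.518
```

So the two rows of the SPSS table are not independent pieces of information; reporting one (with k) determines the other.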
46,840
Intraclass correlation coefficient interpretation
I have traced the answer in the new Stata 13 documentation on ICC. The question remains whether the F test can be used in this case, given that the data do not meet the normality assumption.
46,841
How do I use the standard regression assumptions to prove that $\hat{\sigma}^2$ is an unbiased estimator of $\sigma^2$?
I don't work directly through your derivation, but provide a more general formulation below. Let your regression model be $Y = X\beta + \epsilon$, $P_X = X(X^\prime X)^{-1} X^\prime$, and $M = I_N - P_X$ ($I_N$ is an $N\times N$ identity matrix). $X$ is $N\times K$ and of full column rank. We assume homoskedasticity and no serial correlation. We show that $\hat{\sigma}^2$ is unbiased: $$\begin{align*} \mathbb{E}\left[\frac{\hat{\epsilon}^\prime \hat{\epsilon}}{N - K}\mid X\right] &= \mathbb{E}\left[\frac{\epsilon^\prime M^\prime M \epsilon}{N - K}\mid X\right] \\ &= \mathbb{E}\left[\frac{\epsilon^\prime M \epsilon}{N - K}\mid X\right] \\ &= \frac{\sum_{i=1}^N{\sum_{j=1}^N{m_{ij}\mathbb{E}[\epsilon_i\epsilon_j\mid X]}}}{N - K} \\ &= \frac{\sum_{i=1}^N{m_{ii}\sigma^2}}{N - K} \\ &= \frac{\sigma^2\mathop{\text{tr}}(M)}{N - K} \\ \end{align*}$$ $$\begin{align*} \text{tr}(M) &= \text{tr}(I_N - P_X) \\ &= \text{tr}(I_N) - \text{tr}(P_X) \\ &= N - \text{tr}\left(X\left(X^\prime X\right)^{-1}X^\prime\right) \\ &= N - \text{tr}\left(\left(X^\prime X\right)^{-1}X^\prime X\right) \\ &= N - \text{tr}(I_{K}) = N - K \\ \Longrightarrow \mathbb{E}\left[\frac{\hat{\epsilon}^\prime \hat{\epsilon}}{N - K}\mid X\right] &= \frac{\sigma^2 (N-K)}{(N-K)} = \sigma^2. \end{align*}$$
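The unbiasedness result can also be checked by simulation; a sketch for the simple-regression special case ($K = 2$, so $\hat{\sigma}^2 = \text{RSS}/(N-2)$), with an invented design:

```python
import random

def sigma2_hat(x, y):
    """RSS / (n - 2) from a simple OLS fit of y on x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    rss = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    return rss / (n - 2)

random.seed(1)
n, sigma2, reps = 20, 1.0, 4000
x = [random.gauss(0, 1) for _ in range(n)]  # design held fixed, as in E[. | X]
estimates = []
for _ in range(reps):
    y = [2.0 + 0.5 * xi + random.gauss(0, sigma2 ** 0.5) for xi in x]
    estimates.append(sigma2_hat(x, y))
print(sum(estimates) / reps)  # hovers near the true sigma^2 = 1
```

Dividing by $N$ instead of $N - K$ would show a visible downward bias in the same simulation, which is exactly what the $\text{tr}(M) = N - K$ step corrects.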
46,842
How do I use the standard regression assumptions to prove that $\hat{\sigma}^2$ is an unbiased estimator of $\sigma^2$?
I think I figured out the version of the proof I was doing even though Charlie's proof is much better (more general I assume). First term: \begin{align} E\left[ \displaystyle\sum\limits_{i=1}^n (u_{i}-\bar{u})^{2} \right] &= E\left[ \displaystyle\sum\limits_{i=1}^n u_{i}^2-n(\bar{u})^{2} \right] \\ &=E(u_1^2) + \cdots + E(u_n^2) - nE(\bar{u}^2) \\ &= Var(u_1) + E(u_1)E(u_1) + \cdots + Var(u_n) + E(u_n)E(u_n) - n\left(Var(\bar{u}) + E(\bar{u}) E(\bar{u})\right) \\ &= n\sigma^2 - \frac{1}{n}Var(u_1 + \cdots + u_n) \\ &= n\sigma^2 - \frac{1}{n}[Var(u_1) + \cdots + Var(u_n) ]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{because the $u_i$ are iid}\\ &= n\sigma^2 - \frac{1}{n}[n\sigma^2]\\ &= (n-1)\sigma^2 \end{align} Second term: \begin{align} E\left[(\hat{\beta}_{1}-\beta_{1})^{2} \displaystyle\sum\limits_{i=1}^n (x_{i} - \bar{x})^2\right] &= E\left[(\hat{\beta}_{1}-\beta_{1})^{2} s^2_x\right] \\ &= s^2_x E\left[(\hat{\beta}_{1}-\beta_{1})^{2} \right] \\ &= s^2_x \left( Var(\hat{\beta}_{1}-\beta_{1}) + E(\hat{\beta}_{1}-\beta_{1})E(\hat{\beta}_{1}-\beta_{1})\right) \\ &= s^2_x \left( Var(\hat{\beta}_{1}) + 0\right) \\ &= s^2_x \frac{\sigma^2}{s^2_x} \\ &= \sigma^2 \end{align} I think this works because 1) $E(\hat{\beta}_{1}-\beta_{1}) = 0$ because $\hat{\beta_1}$ is an unbiased estimator of $\beta_1$, and 2) I already proved (when I worked on this question) that $Var(\hat{\beta_1}) = \sigma^2 / s^2_x$. Third term: \begin{align} E\left[2 (\hat{\beta}_{1}-\beta _{1}) \displaystyle\sum\limits_{i=1}^n (u_{i}-\bar{u})(x_{i}-\bar{x})\right] &= 2 E\left[(\hat{\beta}_{1}-\beta _{1}) (\hat{\beta}_{1}-\beta _{1}) s^2_x \right] \\ &= 2 s^2_x E\left[(\hat{\beta}_{1}-\beta _{1})^2 \right] \\ &= 2 s^2_x \frac{\sigma^2}{s^2_x} \\ &= 2 \sigma^2 \end{align} I think this works because it basically just uses the formula I used in this question: \begin{align} \hat{\beta}_1 - \beta_1 &= \frac{1}{s^2_x} \displaystyle\sum\limits_{i=1}^n (x_i - \bar{x}) u_i \\ \end{align}
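The key identity behind the second and third terms, $\hat{\beta}_1 - \beta_1 = \frac{1}{s^2_x}\sum_{i=1}^n (x_i - \bar{x}) u_i$, holds exactly in-sample and is easy to verify numerically; a sketch with simulated data:

```python
import random

random.seed(7)
n, beta0, beta1 = 50, 1.5, 0.8
x = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
y = [beta0 + beta1 * xi + ui for xi, ui in zip(x, u)]

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)  # s_x^2 in the notation above
b1_hat = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx

# beta1_hat - beta1 should equal sum((x - xbar) * u) / s_x^2 exactly
lhs = b1_hat - beta1
rhs = sum((xi - xbar) * ui for xi, ui in zip(x, u)) / sxx
print(lhs, rhs)  # agree to floating-point precision
```

This is an algebraic identity (no expectations involved), which is why the proof can substitute it freely inside the second and third terms.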
46,843
Logistic Regression/Naive Bayes with highly correlated data
I disagree with discretizing to get rid of collinearity. It doesn't get rid of it, it just pushes it under a rug where it can cause problems while being less visible. "Number of guards" seems like a mediating variable. There is a lot of recent work on mediators, much of it by MacKinnon and his colleagues. E.g. this book but he has also written articles and has a website (Googling will find lots of things).
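A minimal sketch of the product-of-coefficients idea from the mediation literature (the a*b indirect effect): regress the mediator on the predictor to get a, regress the outcome on both to get b, and multiply. All effect sizes and variable roles below are invented for illustration:

```python
import random

def slope(x, y):
    """OLS slope of y on a single predictor x (with intercept)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))

def two_predictor_slopes(x, m, y):
    """OLS slopes of y on (x, m) with intercept, via centered normal equations."""
    n = len(x)
    cx = [v - sum(x) / n for v in x]
    cm = [v - sum(m) / n for v in m]
    cy = [v - sum(y) / n for v in y]
    sxx = sum(v * v for v in cx)
    smm = sum(v * v for v in cm)
    sxm = sum(a * b for a, b in zip(cx, cm))
    sxy = sum(a * b for a, b in zip(cx, cy))
    smy = sum(a * b for a, b in zip(cm, cy))
    det = sxx * smm - sxm * sxm
    return (smm * sxy - sxm * smy) / det, (sxx * smy - sxm * sxy) / det

random.seed(3)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]               # e.g., prison size
m = [0.5 * xi + random.gauss(0, 1) for xi in x]          # mediator, e.g., guards
y = [0.3 * xi + 0.7 * mi + random.gauss(0, 1)
     for xi, mi in zip(x, m)]

a = slope(x, m)                        # predictor -> mediator path
_, b = two_predictor_slopes(x, m, y)   # mediator -> outcome path, controlling x
print(a * b)  # indirect effect, close to the true 0.5 * 0.7 = 0.35
```

A real analysis would follow MacKinnon's recommendations for inference on a*b (e.g., bootstrap confidence intervals) rather than just the point estimate.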
46,844
Logistic Regression/Naive Bayes with highly correlated data
Well, what about: a) building a model to predict the number of guards, n_act, and calling its output n_est; then b) building a model to predict violence based on the inputs plus the actual-minus-estimated guards, n_act - n_est?
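A minimal sketch of this two-stage idea in R, with simulated data and hypothetical variable names (x1 and x2 stand in for whatever inputs drive staffing):

```r
set.seed(2)
n <- 300
x1 <- rnorm(n)   # hypothetical input, e.g. prison size
x2 <- rnorm(n)   # hypothetical input, e.g. security level
n_act <- 1.5 * x1 - x2 + rnorm(n)         # actual number of guards
p <- plogis(x1 + 0.5 * x2 + 0.8 * n_act)
y <- rbinom(n, 1, p)                      # violence indicator

# a) model predicting the number of guards from the other inputs
guard_fit <- lm(n_act ~ x1 + x2)
n_est <- fitted(guard_fit)

# b) violence model on the inputs plus the guard "surprise" (n_act - n_est),
#    which is uncorrelated with x1 and x2 by construction of OLS residuals
viol_fit <- glm(y ~ x1 + x2 + I(n_act - n_est), family = binomial)
```

Because the residual n_act - n_est is orthogonal to the other inputs, the collinearity between guards and inputs no longer inflates the second model's standard errors.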
46,845
Student's t vs Mann-Whitney U for small equal samples
How I would approach this: I would not go for a t-test if the requirements for applying it are not (or not known to be) satisfied. This seems obvious to me and good practice for any hypothesis testing. Mere familiarity with one test over the other one(s) is not a justification. Can you give details of where you found that a test for a difference in centrality between two small samples from unknown and unverifiable population distributions would be robust against non-normality? I find this hard to accept. Nonparametric tests like M-W compare the difference of population medians (as opposed to mean values), because the median is more robust. This is especially true in your case, where you cannot test for the population distribution. Do you have any prior information or evidence that the two samples were drawn from the same population? Perhaps a description of the experiment might help to judge. You may be pushing the boundaries of what hypothesis testing can do for you here. With small samples (and no repeat experiments available, I presume?) and without any information on the population, I would not want to invoke & quantify statistical concepts like power, significance, CI, etc. All you can do really is list some descriptives for the two samples (quote the two medians or their difference, relative to their sum; the ranges & maximum range of their difference, etc.). However, if you could repeat by drawing many samples of size n=5, that would change the picture a lot. Is that an option for you?
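If repeated draws were possible, the behaviour of both tests at n = 5 can be checked directly by simulation; here is an illustrative sketch under a normal null (an assumption for the sketch only, since the real populations are unknown):

```r
set.seed(3)
reps <- 2000
# p-values under the null (both samples from the same normal population)
p_t <- replicate(reps, t.test(rnorm(5), rnorm(5))$p.value)
p_w <- replicate(reps, wilcox.test(rnorm(5), rnorm(5))$p.value)

size_t <- mean(p_t < 0.05)   # realised type I error of the t-test
size_w <- mean(p_w < 0.05)   # realised type I error of Mann-Whitney
```

Note that with n = 5 per group the exact Mann-Whitney p-values are very granular (the smallest two-sided p is 2/252), so its attainable size sits below the nominal 5%.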
46,846
Student's t vs Mann-Whitney U for small equal samples
You should definitely consider a permutation test, as you can use the mean as the test statistic and it needs a lot fewer assumptions. You will find a lot of information if you google permutation test or search it here. An implementation in R (taken from this great answer to one of my questions):

a <- rnorm(5)
b <- rnorm(5, 0.5)
DV <- c(a, b)
ids <- seq(along = DV)          # indices to permute
idx <- combn(ids, length(a))    # all possibilities for different groups

# function to calculate difference in group means given index vector for group A
getDiffM <- function(x) {
    mean(DV[x]) - mean(DV[!(ids %in% x)])
}

resDM <- apply(idx, 2, getDiffM)   # difference in means for all permutations
diffM <- mean(a) - mean(b)         # empirical difference in group means

# p-value: proportion of group means at least as extreme as observed one
(pVal <- sum(resDM >= diffM) / length(resDM))
46,847
Conditional logistic regression vs GLMM in R
The conditional logistic regression applies fixed effects (in the context of econometrics), $$ logit(p_{ij})=\boldsymbol x_{ij}^{'}\boldsymbol\beta+u_i,$$ where each pair of subjects has an individual intercept ($u_i$). It can be implemented with clogit() of package survival or clogistic() of package Epi. Generalized linear mixed models (GLMM) for binary data can adopt link functions like logit, probit and cloglog. The mixed logistic regression is $$ logit(p_{ij})=\boldsymbol x_{ij}^{'}\boldsymbol\beta+\boldsymbol z_{ij}^{'}\boldsymbol u_i$$ where the $\boldsymbol u_i$ are random variables and can be given a distributional assumption (e.g. a normal distribution). Of course you can use a random intercept model, i.e. $\boldsymbol z_{ij}^{'}=1$ and $\boldsymbol u_i$ is a scalar. You can estimate a GLMM using glmer() of package lme4. As to the choice between conditional logistic regression and GLMM for binary data, some people are in favor of conditional (fixed-effects) logistic regression and GLMM with a probit link, but against fixed-effects probit or GLMM with a logit link. The reason may be that some of the consistency properties break down, especially with small within-cluster sample size ($n_i=2$ for your case). You can find the clarification of fixed effects and random effects (and marginal models) in different contexts here.
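As an aside not in the original answer: for the 1:1 matched-pair case, the conditional likelihood reduces to an ordinary logistic regression of an all-ones response, with no intercept, on the within-pair covariate differences, so the fixed-effects fit can be sketched in base R without clogit(). The identity is standard, but treat this as an illustration on toy data rather than a replacement for the survival package:

```r
set.seed(4)
npair <- 200
x1 <- rnorm(npair)   # covariate for member 1 of each pair
x2 <- rnorm(npair)   # covariate for member 2
beta <- 1

# Given exactly one case per pair, the pair-specific intercept u_i cancels:
# P(member 1 is the case) = plogis(beta * (x1 - x2))
case_is_1 <- runif(npair) < plogis(beta * (x1 - x2))
d <- ifelse(case_is_1, x1 - x2, x2 - x1)   # case-minus-control difference

# Conditional likelihood for 1:1 pairs = logistic regression of an
# all-ones response on d with no intercept
fit <- glm(rep(1, npair) ~ d - 1, family = binomial)
beta_hat <- coef(fit)[["d"]]   # should recover beta approximately
```

With 200 pairs the estimate lands close to the true value of 1, matching what clogit() would give for the same data.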
46,848
Can covariates be categorial variables?
ANOVA, ANCOVA and OLS regression are all the same model. In matrix notation they are all $Y = Xb + e$, where Y is a vector of values on the DV, X is a matrix of values on the IVs, b is a vector of parameters to be estimated and e is error. The main reason these are treated so differently is, I think, historical: ANOVA and regression developed separately. The usual terminology is that ANOVA is used when all the IVs are categorical, ANCOVA when some are categorical and some continuous. Regression can easily be used with any sort of IV.
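This equivalence is easy to verify in R: fitting the same formula through lm() and aov() gives identical estimates (a small simulated example, not from the original answer):

```r
set.seed(5)
grp <- factor(rep(c("a", "b", "c"), each = 20))   # categorical IV
cov1 <- rnorm(60)                                 # continuous IV
y <- as.numeric(grp) + 0.5 * cov1 + rnorm(60)

fit_lm  <- lm(y ~ grp + cov1)    # "ANCOVA" written as a regression
fit_aov <- aov(y ~ grp + cov1)   # the same model via the ANOVA interface
```

Both calls produce the same coefficients and fitted values; only the default summaries differ (a coefficient table for lm(), an ANOVA table for aov()).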
46,849
Can covariates be categorial variables?
As a minor addition, but as I don't have enough points yet to comment, it comes as an answer. Reading up on ANCOVA and how and when to use covariates, I had the same question. If you needed a citation for being able to use a categorical covariate: Howell (2016) p593. In addition, the use of covariates also depends on whether it is a between-subjects (independent) or within-subjects (repeated-measures) design and what the variables of interest are (Baguley, 2012).
46,850
Weighted least squares regression on random data, giving large t-statistics more often than "expected"
I think the problem is that you are generating the weights at random, uncorrelated with the y values. In a real weighted regression the points with lower variance will have higher weights. Since under the true relationship y has mean 0, points furthest from 0 would be consistent with higher variances and therefore lower weights, but you don't give them lower weights: they get random weights, which could be high or low, giving some more extreme values than expected. If you do the simulation more realistically by generating a set of weights, then generating Y with variances based on the weights, then analyzing (you could use the same set of x's, or randomly generate the x's as well), I would expect the t-values to behave more properly. Here is a quick example:

tstats <- replicate(1000, {
    x <- rnorm(100)
    w <- abs(rnorm(100, 1))
    y <- rnorm(100, 0, sqrt(1/w))
    coef(summary(lm(y ~ x, weights = w)))[2, 3]
})
mean(abs(tstats) > 2)

I saw just under 5% as expected.
46,851
Question about a marginal distribution
The standard way to do the calculation is to expand the argument of the $\exp$ term as a quadratic in $x$, complete the square, and get to the result that $f(y)$ is also a normal density. Or, you can use the fact that you know that $E[Y\mid X=x] = x$ and so $E[Y\mid X]$ is a random variable (it equals $X$) with mean $$E[Y] = E[E[Y\mid X]] = E[X] = \mu_x.$$ Also, $E[Y^2\mid X=x] = x^2 + \sigma_y^2$ so that $$E[Y^2] = E[E[Y^2\mid X]] = E[X^2+\sigma_y^2] = \mu_x^2 + \sigma_x^2 + \sigma_y^2$$ from which we get that $\operatorname{var}(Y) = E[Y^2]-(E[Y])^2 = \sigma_x^2 + \sigma_y^2.$ Now, given the asserted normality of $Y$, we can write down the density of $Y$ without much further ado. From a slightly different viewpoint, consider the model $Y = X + e$ where $X \sim N(\mu_x,\sigma_x^2)$ and $e \sim N(0,\sigma_y^2)$ are independent random variables with $e$ playing the part of "noise" in the measurement of $X$, with said measurement yielding $Y$ instead of the desired $X$. Clearly, given that the value of $X$ is $x$, the conditional distribution of $Y$ is normal with mean $x$ and variance $\sigma_y^2$ which is what you are given. Equally clearly, $Y$, being the sum of two independent normal random variables, is normal with mean $\mu_x$ and variance $\sigma_x^2+\sigma_y^2$. This calculation requires even less algebra than the two suggested above.
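Not part of the original answer, but the moment calculations above are easy to sanity-check by simulation in R (parameter values here are arbitrary):

```r
set.seed(6)
mu_x <- 2; s_x <- 1.5; s_y <- 0.7
x <- rnorm(1e5, mu_x, s_x)
y <- rnorm(1e5, mean = x, sd = s_y)   # Y | X = x ~ N(x, s_y^2)

mean(y)   # should be near mu_x
var(y)    # should be near s_x^2 + s_y^2
```

The sample mean and variance of y land close to $\mu_x$ and $\sigma_x^2+\sigma_y^2$, as derived above.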
46,852
Sharing a model trained on confidential data
You could use the hashing trick. That way, rather than providing a table that maps words to indices, which would reveal information about the words in your training data, you could just provide a hash function.
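A toy sketch of the idea in R, not from the original answer: the hash below is purely illustrative (a production system would use a proper hash such as MurmurHash), but it shows how tokens map to a fixed-size feature vector with no stored vocabulary:

```r
# Toy feature hasher: maps any token to one of n_buckets indices without
# storing a vocabulary (illustrative hash, not a production one)
hash_token <- function(token, n_buckets = 1024) {
  codes <- utf8ToInt(token)                       # character codes of the token
  sum(codes * seq_along(codes)) %% n_buckets + 1  # position-weighted sum mod buckets
}

# Bag-of-words vector for a document via the hashing trick
hash_doc <- function(tokens, n_buckets = 1024) {
  v <- numeric(n_buckets)
  for (tok in tokens) {
    i <- hash_token(tok, n_buckets)
    v[i] <- v[i] + 1
  }
  v
}
```

Collisions are possible (two words sharing a bucket), but nothing about the training vocabulary is disclosed, since the published artifact is only the hash function and the coefficient vector.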
46,853
Sharing a model trained on confidential data
Strictly speaking, this is not a stats question but a question of regulatory compliance. You need to run this past the ethics tsar at your institution, which I assume to be in the health care area. Some tsars will say, "No way, Jose", no matter how anonymized the data. Typically, if you collect data for one purpose, and obtain consent on that basis, you can't simply repurpose the data for something else. How the data may be used, once collected, will depend on your institution and your jurisdiction. If you are from Canada, best of luck, buddy. I once wanted to use confidential data for illustrative purposes, and I suggested to my boss that I would draw random samples from the data (as you would for a bootstrap), so that the distributions would be similar, but none of the data would actually belong to real patients. I had multivariate data, and I was prepared to resample in a way that covariances and marginals were respected. My suggestion was not accepted, largely because my boss did not understand it. But could you do something like that here? Scramble your data, so that sentences or "bags of words", get shuffled around different patients. The idea behind confidentiality is that people should not be able to find the patient or identify that person on the basis of the information they see. You don't want someone seeing the data and thinking, "I know that guy."
46,854
Sharing a model trained on confidential data
You could retrain your model on a completely different set of words and then show it fully disclosed as a proof of concept: replace all the words with the names of animals, for example, and suggest to your intended audience that if they were to repeat your training steps exactly with the more relevant words, they could exactly replicate your model.
46,855
Behavior of $R^2$ in non-linear models
I decided to move my comment to an answer and discuss it To expand on my points a little: Your thought that the way you're calculating $R^2$ isn't sensible is right. A high correlation between residuals and arbitrary fitted values doesn't automatically imply a good fit. Indeed, forget nonlinear regression, and consider linear models. Imagine you have two linear models. For the first model, the Flying Spaghetti Monster comes to you in a dream, touches you with his noodly appendage and tells you the true parameter values: $y_i = 9 + 2\,x_i + e_i$, (and that the errors are independent and identically distributed $N(0,\sigma^2)$) The next day, as you're telling a friend over a pint about your experience, a passing homeopathy salesman suggests that instead $y_i = -1000 + 7.3\times 10^{-14}\,x_i + e_i$ Imagine that on looking at the data, the dream-values appear to be about right (least squares estimates come out very close to them), and that $\sigma^2$ is estimated to be really tiny, so that the residuals from both the dream-values and the LS fit are minuscule. Further, the correlation is almost 1. Now, what's the correlation of the data with the fitted values from the salesman-in-the-pub's model? It should be low, right? The model is completely wrong! Its predictions are no better than the mean. ... in fact, its correlation is exactly the same; almost 1. If that was what $R^2$ was, it would be useless as a way of comparing models. This sounds like it might provide a useful object lesson in exactly why they shouldn't be using that definition of $R^2$ to compare those models. However, since they have different numbers of parameters, an unadorned residual sum of squares isn't exactly a fair comparison either. Even a more correct $R^2$ would not necessarily be a good way of comparing nonlinear models; in the nonlinear realm there's often no good reason to consider a constant-mean model, the 'null' situation. 
Indeed, even when comparing two linear models, $R^2$ is not necessarily the best way to do it, for example, for similar reasons that the (to my mind) marginally more sensible comparison of residual sums of squares I mentioned earlier should be avoided with models with different numbers of parameters. "I thought that the squared correlation between observed Y and fitted Y is in fact the standard $R^2$ in OLS." Well, yes, it's true that $R^2$ for a simple ordinary-least-squares model is the square of the correlation between observed and predicted, but the ability to interpret it the way you want to interpret it is conditioned on it being the result of OLS. If you assert parameter values, for example, you don't change the correlation, but you lose its interpretation as a measure of fit at all. "Coding your example seems to confirm this and also shows that the homeopathy salesman has a lower R2." The computation in R isn't accurate for the salesman because of round-off error and accumulated numerical error; this is a mathematically exact relationship that we have to take care over doing numerically. Observe what happens:

x <- runif(100, 0, 10)
y <- 9 + 2*x + rnorm(100, 0, .005)
cor(y, x)
cor(y, 9 + 2*x)
cor(y, -1000 + 7.3e-4*x)
cor(y, -1000 + 7.3e-7*x)
cor(y, -1000 + 7.3e-10*x)
cor(y, -1000 + 7.3e-14*x)
Behavior of $R^2$ in non-linear models
I decided to move my comment to an answer and discuss it To expand on my points a little: Your thought that the way you're calculating $R^2$ isn't sensible is right. A high correlation between residua
Behavior of $R^2$ in non-linear models I decided to move my comment to an answer and discuss it To expand on my points a little: Your thought that the way you're calculating $R^2$ isn't sensible is right. A high correlation between residuals and arbitrary fitted values doesn't automatically imply a good fit. Indeed, forget nonlinear regression, and consider linear models. Imagine you have two linear models. For the first model, the Flying Spaghetti Monster comes to you in a dream, touches you with his noodly appendage and tells you the true parameter values: $y_i = 9 + 2\,x_i + e_i$, (and that the errors are independent and identically distributed $N(0,\sigma^2)$) The next day, as you're telling a friend over a pint about your experience, a passing homeopathy salesman suggests that instead $y_i = -1000 + 7.3\times 10^{-14}\,x_i + e_i$ Imagine that on looking at the data, the dream-values appear to be about right (least squares estimates come out very close to them), and that $\sigma^2$ is estimated to be really tiny, so that the residuals from both the dream-values and the LS fit are minuscule. Further, the correlation is almost 1. Now, what's the correlation of the data with the fitted values from the salesman-in-the-pub's model? It should be low, right? The model is completely wrong! Its predictions are no better than the mean. ... in fact, its correlation is exactly the same; almost 1. If that was what $R^2$ was, it would be useless as a way of comparing models. This sounds like it might provide a useful object lesson in exactly why they shouldn't be using that definition of $R^2$ to compare those models. However, since they have different numbers of parameters, an unadorned residual sum of squares isn't exactly a fair comparison either. Even a more correct $R^2$ would not necessarily be a good way of comparing nonlinear models; in the nonlinear realm there's often no good reason to consider a constant-mean model, the 'null' situation. 
Indeed, even when comparing two linear models, $R^2$ is not necessarily the best way to do it, for example, for similar reasons that the (to my mind) marginally more sensible comparison of residual sum of squares I mentioned earlier should be avoided with models with different numbers of parameters. I thought that the squared correlation between observed Y and fitted Y is in fact the standard $R^2$ in OLS. Well, yes, it's true that $R^2$ for a simple ordinary-least-squares model, is the square of the correlation between observed and predicted, but the ability to interpret it the way you want to interpret it is conditioned on it being the result of OLS. If you assert parameter values, for example, you don't change the correlation, but you lose its interpretation as a measure of fit at all. Coding your example seems to confirm this and also shows that the homeopathy salesman has a lower R2. The computation in R isn't accurate for the salesman because of round-off error and accumulated numerical error; this is a mathematically exact relationship that we have to take care over doing numerically. Observe what happens: x <- runif(100,0,10) y <- 9 + 2*x + rnorm(100,0,.005) cor(y,x) cor(y,9+2*x) cor(y,-1000+ 7.3e-4*x) cor(y,-1000+ 7.3e-7*x) cor(y,-1000+ 7.3e-10*x) cor(y,-1000+ 7.3e-14*x)
46,856
is there something like Mann-Whitney U test that can control for a continuous variable?
A generalization of the Wilcoxon-Mann-Whitney test is the proportional odds ordinal logistic model, which accepts covariates in addition to the group variable you are mainly testing. Note that the prop. odds model does not need more than one observation at each unique value of $Y$ in order to work well.
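As a sketch of this in R (variable names hypothetical, data simulated), the proportional odds model can be fit with polr from the MASS package, with the covariate simply added to the formula; a likelihood-ratio test of the group term then plays the role of the covariate-adjusted Wilcoxon-Mann-Whitney test:

```r
library(MASS)

set.seed(42)
n <- 120
group <- factor(rep(c("A", "B"), each = n / 2))
age <- rnorm(n, 50, 10)
latent <- 0.8 * (group == "B") + 0.03 * age + rnorm(n)

# Ordered outcome (binned here just to keep polr fast; implementations
# such as rms::orm handle a continuous Y with one observation per value)
y <- cut(latent, quantile(latent, 0:6 / 6), include.lowest = TRUE,
         ordered_result = TRUE)

fit  <- polr(y ~ group + age, Hess = TRUE)   # adjusted model
fit0 <- polr(y ~ age, Hess = TRUE)           # without the group term
anova(fit0, fit)   # LR test of the group effect, controlling for age
```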
46,857
When is it incorrect to compute factor scores by summing (or averaging) raw variable scores?
Here's how I see it. Technically, you are right. Simply adding the scores (or averaging them) weights them all equally and this may not be the optimal solution. However, it does have certain advantages: 1) It is simple. Factor analysis is not. OK, readers of this list probably understand factor analysis; but what about journal editors, dissertation advisers and the general group that will read whatever you write? 2) It is not subject to objections from choosing the wrong options. Factor analysis is, even if you force a single factor (principal components? Maximum likelihood? What priors? etc). If you allow multiple factors, the complexity goes way up as do the number of choices. 3) It often makes relatively little difference. Sums often correlate very highly with factor scores; in many fields, we have so many other sources of error that this may not matter. So, if you are developing a scale that you hope will be published and widely used, and you are doing a full development, it makes sense to go for FA. But if it's just a one-off scale that won't be used again, it may not be.
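Point 3) is easy to check by simulation. A base-R sketch with factanal (one-factor data, simulated; exact numbers will vary):

```r
set.seed(123)
n <- 300
f <- rnorm(n)   # latent factor

# Six items, each loading 0.7 on the factor plus unique noise
items <- sapply(rep(0.7, 6),
                function(l) l * f + rnorm(n, sd = sqrt(1 - l^2)))

sum_score <- rowSums(items)
fa <- factanal(items, factors = 1, scores = "regression")

# Unit-weighted sums typically correlate very highly with factor scores
cor(sum_score, fa$scores[, 1])
```

With roughly equal loadings, as here, the correlation is usually well above .95, which is why the simple sum so often suffices.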
46,858
Poisson Repeated Measures ANOVA
I guess my comments have become so extensive that I should call them an answer. If it's a situation where you want fixed effects, you can do it with a Poisson glm just as you can do ANOVA via lm. If you want a mixed model (glmm), you could use lme4 (such as the function glmer), though there are other suitable packages (see below). If you do want a fixed effects model, like an ANOVA but with Poisson data (and I am not saying that's what you should do, just that it sounds like what you're asking for), for factors you can literally just use exactly the same command in glm as in lm, but with an additional argument of family=poisson. Compare:
summary(lm(count ~ spray, data = InsectSprays))
with
summary(glm(count ~ spray, family = poisson, data = InsectSprays))
The anova command can even be used to compare glms as it is used for lms. If the null is true and the Poisson assumption holds, the deviance for the difference should be chi-square with the indicated d.f., but fully understanding even basic use of glms would require a textbook. For packages that do glmms and their features, here
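To illustrate the deviance comparison just mentioned, here is the analysis-of-deviance table for the same built-in InsectSprays data (base R only):

```r
# Poisson "ANOVA": one-factor model vs the null (intercept-only) model
fit <- glm(count ~ spray, family = poisson, data = InsectSprays)

# Analysis of deviance: under the null and the Poisson assumption, the
# drop in deviance is chi-square with (levels of spray - 1) d.f.
anova(fit, test = "Chisq")
```

The "Pr(>Chi)" column is the chi-square p-value for adding spray to the null model.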
46,859
Utilizing cross-validation with up-sampled data
Yes, the CV results are going to be biased. You could still use them to tune the model. Another option is to use a class weighting scheme that gives asymmetric cost values to different kinds of errors (see the reference below). This is available in some software (e.g. the R kernlab package). I think this is a better approach since it allows you to dial the cost function to meet your sensitivity or specificity needs. It's another tuning parameter and you don't have that "knob to turn" when up-sampling. Max Veropoulos K, Campbell C, Cristianini N (1999). "Controlling the Sensitivity of Support Vector Machines." Proceedings of the International Joint Conference on Artificial Intelligence, 1999, 55–60.
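In kernlab this is the class.weights argument to ksvm. The same idea, sketched in base R with case weights in a logistic regression on simulated imbalanced data (everything here is illustrative, not the SVM machinery itself):

```r
set.seed(7)
n <- 1000
x <- rnorm(n)
# Imbalanced outcome: only a few percent positives
y <- rbinom(n, 1, plogis(-3.5 + 1.5 * x))

# Upweight the rare class instead of up-sampling it:
# each positive counts w times, and w is the tunable cost knob
w <- ifelse(y == 1, 10, 1)
fit_w <- glm(y ~ x, family = binomial, weights = w)
fit_u <- glm(y ~ x, family = binomial)

# Weighting shifts the intercept (and hence the decision threshold),
# trading specificity for sensitivity
coef(fit_u)[["(Intercept)"]]
coef(fit_w)[["(Intercept)"]]
```

Because w enters the fit as an ordinary tuning parameter, it can be chosen by cross-validation without the bias that up-sampling introduces.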
46,860
Utilizing cross-validation with up-sampled data
Here is a related answer that might be of interest of this problem. (Sorry for the auto-citation)
46,861
Utilizing cross-validation with up-sampled data
I don't use SVM, but in Logistic Regression and ANNs I have successfully used k-fold with replicated training data (cut the folds, generate the proper training/test pairs, generate replicas of the less represented class only in training data), sometimes with noise. Biases were removed in the selection of the cutpoints. A very elementary review of techniques available for unbalanced data is found in this paper. My favorite technique so far (PCA plus noise injection) is found in this paper.
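The fold-then-replicate order is what matters: the minority class is up-sampled after the split, so no replica of a test-set case can leak into the training set. A base-R sketch of one such scheme (variable names hypothetical):

```r
set.seed(99)
n <- 200
dat <- data.frame(x = rnorm(n), y = rep(c(0, 1), times = c(180, 20)))

k <- 5
fold <- sample(rep(1:k, length.out = n))   # 1. cut the folds first

for (i in 1:k) {
  train <- dat[fold != i, ]
  test  <- dat[fold == i, ]

  # 2. replicate the minority class in the TRAINING data only
  minority <- train[train$y == 1, ]
  need <- sum(train$y == 0) - sum(train$y == 1)
  extra <- minority[sample(nrow(minority), need, replace = TRUE), ]
  train_up <- rbind(train, extra)

  # 3. fit on the balanced training set, evaluate on the untouched fold
  fit <- glm(y ~ x, family = binomial, data = train_up)
  pred <- predict(fit, newdata = test, type = "response")
  # ...accumulate performance over the k folds here...
}
```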
46,862
Bootstrapped confidence intervals for the parameters of a linear model applied to multiply imputed data
Shao and Sitter 1996 demonstrate that the right approach is:
1) Take a bootstrap sample, respecting the dependencies in the data (see below);
2) Run one imputation on this sample, estimating the imputation model and producing one model + noise replicate;
3) Run a complete-case analysis on this;
4) Repeat 1-3 $B$ times;
5) Combine using the bootstrap rules (not the Rubin rules).
$B$ must be bootstrap-large, not Rubin-large... 5 hundred rather than 5. The biggest issue that comes up with complex survey data, which are the focus of Shao & Sitter's paper, is that there are non-trivial dependencies and independencies present in complex survey data. By design, observations between strata are independent, and imputations that borrow strength across the whole data set violate that independence. By design, observations within the same PSU are correlated. Both of these effects need to be addressed by the bootstrap scheme. For complex surveys, this needs to be the complex survey bootstrap. For time series, this needs to be the block bootstrap. The process proposed by orizon (as clarified by Stef) may be right, and I have been rolling it in my head for some while in the past couple of years, but never had the chance to really review it for statistical soundness.
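For a simple i.i.d. data set (no strata or PSUs), the scheme reads roughly as below in base R. The imputation step here is a deliberately minimal stochastic-regression imputation (fitted model + noise), standing in for whatever imputation model you actually use:

```r
set.seed(2024)
n <- 200
x <- rnorm(n)
y <- 1 + 0.5 * x + rnorm(n)
x[sample(n, 40)] <- NA          # make x 20% missing (MCAR here)

dat <- data.frame(y = y, x = x)
B <- 500                        # bootstrap-large, not Rubin-large
boot_beta <- numeric(B)

for (b in 1:B) {
  # bootstrap sample (i.i.d. case; use a survey/block bootstrap otherwise)
  d <- dat[sample(n, n, replace = TRUE), ]

  # ONE stochastic imputation estimated on this sample: model + noise
  imp <- lm(x ~ y, data = d)
  miss <- is.na(d$x)
  d$x[miss] <- predict(imp, d[miss, ]) + rnorm(sum(miss), sd = sigma(imp))

  # complete-data analysis on the imputed bootstrap sample
  boot_beta[b] <- coef(lm(y ~ x, data = d))[["x"]]
}

# combine with bootstrap rules: e.g. a percentile interval for the slope
quantile(boot_beta, c(.025, .975))
```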
46,863
Bootstrapped confidence intervals for the parameters of a linear model applied to multiply imputed data
Steps 2 and 3 ignore the fact that some of the data have been imputed. Hence the bootstrap estimate of the distribution of $\hat\beta$ will be too narrow. Rubin's pooling rules combine the within and between imputation uncertainty. Though this procedure assumes that $\hat\beta$ is normally distributed around the population value $\beta$, it is actually quite robust against violations of normality.
46,864
What's the difference between a component and a factor in parallel analysis?
You might wish to read Dinno's Gently Clarifying the Application of Horn’s Parallel Analysis to Principal Component Analysis Versus Factor Analysis. Here's a short distillation: Principal component analysis (PCA) involves the eigen-decomposition of the correlation matrix $\mathbf{R}$ (or less commonly, the covariance matrix $\mathbf{\Sigma}$), to give eigenvectors (which are generally what the substantive interpretation of PCA is about), and eigenvalues, $\mathbf{\Lambda}$ (which are what the empirical retention decisions, like parallel analysis, are based on). Common factor analysis (FA) involves the eigen-decomposition of the correlation matrix $\mathbf{R}$ with the diagonal elements replaced with the communalities: $\mathbf{C} = \mathbf{R} - \text{diag}(\mathbf{R}^{+})^{+}$, where $\mathbf{R}^{+}$ indicates the generalized inverse (aka Moore-Penrose inverse, or pseudo-inverse) of matrix $\mathbf{R}$, to also give eigenvectors (which are also generally what the substantive interpretation of FA is about), and eigenvalues, $\mathbf{\Lambda}$ (which, as with PCA, are what the empirical retention decisions, like parallel analysis, are based on). The eigenvalues, $\mathbf{\Lambda} = \{\lambda_{1}, \dots, \lambda_{p}\}$ ($p$ equals the number of variables producing $\mathbf{R}$) are arranged from largest to smallest, and in a PCA based on $\mathbf{R}$ are interpreted as apportioning $p$ units of total variance under an assumption that each observed variable contributes 1 unit to the total variance. When PCA is based on $\mathbf{\Sigma}$, then each eigenvalue, $\lambda$, is interpreted as apportioning $\text{trace}(\mathbf{\Sigma})$ units of total variance under the assumption that each variable contributes the magnitude of its variance to total variance. 
In FA, the eigenvalues are interpreted as apportioning $< p$ units of common variance; this interpretation is problematic because eigenvalues in FA can be negative and it is difficult to know how to interpret such values either in terms of apportionment, or in terms of variance. The parallel analysis procedure involves:
1) Obtaining $\{\lambda_{1}, \dots, \lambda_{p}\}$ for the observed data, $\mathbf{X}$.
2) Obtaining $\{\lambda^{r}_{1}, \dots, \lambda^{r}_{p}\}$ for uncorrelated (random) data of the same $n$ and $p$ as $\mathbf{X}$.
3) Repeating step 2 many times, say $k$ number of times.
4) Averaging each eigenvalue from step 3 over $k$ to produce $\{\overline{\lambda}^{r}_{1}, \dots, \overline{\lambda}^{r}_{p}\}$.
5) Retaining those $q$ components or common factors where $\lambda_{q} > \overline{\lambda}^{r}_{q}$.
Monte Carlo parallel analysis employs a high centile (e.g. the 95$^{\text{th}}$) rather than the mean in step 4.
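The PCA version of the procedure is only a few lines of base R (data simulated here; the FA version would substitute the communality-adjusted matrix described above):

```r
set.seed(11)
n <- 300; p <- 10; k <- 100

# Observed data with real structure: two blocks of five correlated items
f1 <- rnorm(n); f2 <- rnorm(n)
X <- cbind(sapply(1:5, function(i) f1 + rnorm(n)),
           sapply(1:5, function(i) f2 + rnorm(n)))

obs <- eigen(cor(X))$values                  # observed eigenvalues

# Eigenvalues of k random (uncorrelated) data sets of the same n and p
rand <- replicate(k, eigen(cor(matrix(rnorm(n * p), n, p)))$values)
ref <- rowMeans(rand)   # (a high centile here gives Monte Carlo PA)

# Retain leading components whose eigenvalue beats the random reference
retained <- sum(cumprod(obs > ref) > 0)
retained
```

With two genuine blocks in the simulated data, the procedure retains two components.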
46,865
What's the difference between a component and a factor in parallel analysis?
It's talking about principal components. First, it finds the eigenvalues of the correlation matrix which it takes as input. Then it decides how many of those values are "reasonably big" by doing simulations and comparing them with the simulated values. Here is the key part of the code:
valuesx <- eigen(rx)$values
and then later on:
pc.test <- which(!(valuesx > values.sim$mean))[1] - 1
results$nfact <- fa.test
results$ncomp <- pc.test
cat("Parallel analysis suggests that ")
cat("the number of factors = ", fa.test, " and the number of components = ", pc.test, "\n")
The whole function is written in R, so you can read its source code by typing its name in the R terminal. Here is a presentation which compares factor analysis with PCA and hopefully answers your question (see the last slide in particular): http://www.stats.ox.ac.uk/~ripley/MultAnal_HT2007/PC-FA.pdf
46,866
What's the difference between a component and a factor in parallel analysis?
Actually there are two lines: one for the PCA and the other for the minres procedure (the default), unless another is selected. The program uses the fa$values and the eigenvalues fa$e.values. The fa$values are the values from the common factor solution; they are less than the eigenvalues.
46,867
How to store (and analyse) multi-answer multi-choice questionnaire data
The last answer is the best one for your situation. The basic approach is that each check-box should be stored as a 0 (unchecked) or 1 (checked). If you have logic in the questionnaire so some people do not get asked the question, you can have 0 (exposed to question, but unchecked), 1 (checked) and missing/null (not exposed to question). The analysis can be very easy - sum up the values in the column (ie count all the 1s) and divide by the number of responses (count all the 1s and 0s). That's the percentage that checked the box and where you can start. In some situations, when you have a very wide range of possible answers and each respondent has responses on a small set of that range, it may be more efficient to store each record as a combination of the respondent id, the response type, and the value of the response. This helps you avoid having a table with hundreds or thousands of columns which can be unwieldy for storage purposes.
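Both the percentage calculation and the wide-to-long storage are short in R (column names hypothetical):

```r
# One 0/1 column per check-box; NA = respondent never saw the question
dat <- data.frame(id    = 1:5,
                  opt_a = c(1, 0, 1, 1, NA),
                  opt_b = c(0, 0, 1, 0, NA))

# Percentage checked, among those exposed to the question
colMeans(dat[, c("opt_a", "opt_b")], na.rm = TRUE)

# Long format (id, option, checked): efficient when responses are sparse
long <- reshape(dat, direction = "long",
                varying = c("opt_a", "opt_b"), v.names = "checked",
                timevar = "option", times = c("opt_a", "opt_b"))
subset(long, checked == 1)   # store only the checked boxes
```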
46,868
How to store (and analyse) multi-answer multi-choice questionnaire data
Your last option sounds the best to me. Analyzing is just filtering on that column of the data frame.
46,869
Regression in SEM programs vs regression in statistical packages such as SPSS
I think doing regression via SEM is bogus. I mean, it is cute to show that you can express a linear regression as a special case of SEM, just to show how general SEMs are, but doing regression with an SEM is a waste of time, as this approach does not utilize the many advances in regression modeling specific to linear models. This is the right-tool-for-the-job issue: if nothing else is at hand, I would hammer a nail into a drywall with a screwdriver by holding the latter at the sharp end and hitting the nail with the handle, but I won't recommend doing that, in general. In SEM, you model the covariance matrix of everything: the regressors and the dependent variable. The covariance matrix of the regressors has to be unconstrained. The covariances between the dependent variable and the regressors are what generate the coefficient estimates, and the variance of the dependent variable, the $s^2$. So you've utilized all the degrees of freedom (number of covariance matrix entries), and that's why you see a zero. You should still be able to find $R^2$ in your output, but it will be hidden deeply somewhere, not thrown at you as in regression output: from the point of view of SEMs, your dependent variable is nothing striking; you may have a few dozen regressions in your output, and you can get all of their $R^2$s, or reliabilities, somewhere, but you may have to ask for it with some TECH options in Mplus. The missing value stuff is even more bogus. Typically, you have to assume some sort of a distribution, such as multivariate normal, to run full information maximum likelihood. This is very doubtful for most applications, e.g. when you have dummy explanatory variables. The advantage of doing regression properly with R or Stata is that you will have access to all the traditional diagnostics (residuals, leverage, and other measures of influence; collinearity, nonlinearity and other goodness-of-fit issues), as well as additional tools for better inference (the sandwich estimator that can be made robust to heteroskedasticity, cluster correlation or autocorrelation). SEM can offer "robust" standard errors, too, but they don't work well when the model is structurally misspecified (and that's what one of my papers was about).
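As an illustration of one of those extra inference tools, the heteroskedasticity-robust (sandwich) covariance is usually obtained from the sandwich/lmtest packages, but the basic HC0 version is short enough to write out in base R (simulated data):

```r
set.seed(5)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n, sd = abs(x))   # heteroskedastic errors

fit <- lm(y ~ x)
X <- model.matrix(fit)
u <- resid(fit)

# HC0 sandwich: bread %*% meat %*% bread
bread <- solve(crossprod(X))
meat  <- crossprod(X * u)        # equals t(X) %*% diag(u^2) %*% X
V     <- bread %*% meat %*% bread

# Robust vs classical standard errors
sqrt(diag(V))
sqrt(diag(vcov(fit)))
```

With errors whose variance grows in |x|, the robust slope standard error is noticeably larger than the classical one, which wrongly assumes constant variance.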
46,870
Given a historical disease incident rate of x per 100,000, what is the probability of y per 100,000?
Let's say each kid flips a biased coin to determine whether or not they have cancer. If we assume that the probability of heads (cancer) is 1.6/100,000, we can find the distribution of cancer counts we'd expect using a binomial distribution. In R, we can find the distribution with the dbinom command:

dbinom(x = 0:5, size = 38000, prob = 1.6/100000)

Here, x is the number of cases (0:5 means we're looking at the probability of 0 cases, 1 case, etc., up to 5), size is the number of kids, and prob is the baseline probability you cited. After cleaning up the output slightly, we get a table like this:

number_of_cases  probability
0                0.54444
1                0.33102
2                0.10063
3                0.02039
4                0.00310
5                0.00038

So you'd expect to find 3 cases out of 38,000 children only about 2% of the time under this model--and you'd almost never find more than that. In short (assuming the figures are comparable), it does seem on the high side, and might be worth investigating further. But you wouldn't necessarily need to invoke any special factors beyond random chance to explain the difference.

Edited to add: Per EpiGrad's comment, I added an image that shows how these probabilities could change if we were uncertain about the baseline probability of 1.6 cases per 100k. The red points are the values I listed above, and the cloud of points represents what we'd expect if the baseline rate itself were uncertain. For this example, I sampled baselines from a beta distribution using rbeta(1000, 1.6, 100000 - 1.6), which has a mean of 1.6 cases per 100k and some spread on either side but doesn't drop below 0. The amount of spread may or may not be reasonable, depending on what assumptions you'd like to make. My gut feeling is that I included more variation than I should have, but who knows.

As you can see from the plot, if the British figures substantially underestimated the "real" pre-Fukushima rate of cancer incidence in Japan, we might expect to see 3 cases per 38k as often as 20% of the time. Whether you think that's likely depends on other information outside the scope of this problem, including whether I included an appropriate amount of uncertainty from the British estimate.
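The dbinom calculation above can be double-checked outside R. Here is a minimal Python sketch (my construction; it recomputes the same binomial probabilities with only the standard library):

```python
from math import comb

n = 38_000          # number of children
p = 1.6 / 100_000   # assumed baseline cancer probability

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Reproduces dbinom(x = 0:5, size = 38000, prob = 1.6/100000)
for k in range(6):
    print(k, round(binom_pmf(k, n, p), 5))
```

The numbers agree with the table in the answer: about 0.544 for zero cases and about 0.020 for three.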
46,871
What does statistical power mean when we are interested in the probability of correctly not rejecting the null hypothesis?
The probability of "correctly not rejecting the null hypothesis"--i.e., if the null hypothesis is true, we do not reject it--is controlled by the significance level at which we are doing the test. If I choose a significance level of $\alpha = .05$, so that I reject if my $p$-value is less than .05, then my probability of correctly not rejecting the null hypothesis is $1-.05 = .95$. If rejecting the null hypothesis when it is in fact true would have very bad consequences, then we might use a smaller $\alpha$, say .01 or even .001, which gives us a higher probability of correctly not rejecting the null hypothesis. So we already control this probability--and it is much easier to control than the power. Because it's so much easier, there's much less discussion about it, which is probably why you concluded that statisticians aren't interested in it.
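The claim that $\alpha$ directly controls this probability can be checked by simulation. Here is a Python sketch (my construction, not from the answer; it uses a normal approximation to the $t$ reference distribution): repeatedly test a true null hypothesis at $\alpha = .05$ and count how often we correctly fail to reject:

```python
import math
import random
import statistics

random.seed(1)
alpha = 0.05
n_sims, n = 2000, 50
correct_non_rejections = 0

for _ in range(n_sims):
    # Data generated under the null: the mean really is 0
    x = [random.gauss(0, 1) for _ in range(n)]
    t = statistics.mean(x) / (statistics.stdev(x) / math.sqrt(n))
    # Two-sided p-value, normal approximation: 2 * (1 - Phi(|t|))
    p_value = math.erfc(abs(t) / math.sqrt(2))
    if p_value >= alpha:
        correct_non_rejections += 1

# Should be close to 1 - alpha = 0.95
print(correct_non_rejections / n_sims)
```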
46,872
What does statistical power mean when we are interested in the probability of correctly not rejecting the null hypothesis?
I think you are interested in equivalence testing. See this other question on testing a hypothesis of no group differences. There are various approaches that can be adopted to assess whether the null hypothesis is true. In general, the absence of a statistically significant effect is very weak evidence for the truth of the null hypothesis. Three common approaches include (a) looking at confidence intervals; (b) looking at Bayesian posterior densities on the parameter of interest; or (c) setting up two one-sided significance tests.

The confidence interval and Bayesian posterior density approaches are often used to quantify uncertainty about a parameter of interest. The Bayesian approach is arguably more aligned with the question of interest, where the parameter is seen as unknown. Looking at such intervals, you could judge that if the interval includes the null hypothesis and the other plausible values are sufficiently close to zero, then the null hypothesis, or something sufficiently similar, is most likely the truth.

A similar approach is to set up two one-sided significance tests. E.g., when testing whether the means are the same for two groups ($\delta = \frac{\mu_1 - \mu_2}{\sigma}$), you could test whether $\delta$ is significantly less than .1 and significantly more than -.1. In this case you could calculate the statistical power of such tests assuming the null hypothesis is true, given alpha and the sample size. Or, if you wanted to hold power constant, you could assess what sample size would be required. You could also vary the threshold for equivalence and see how your power increases as you expand the width of the equivalence region. This is a common applied problem in the context of equivalence and non-inferiority testing for drugs (e.g., Walker and Nowacki, 2011).

References
Walker, E., & Nowacki, A. S. (2011). Understanding equivalence and noninferiority testing. Journal of General Internal Medicine, 26(2), 192. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3019319/
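The two one-sided tests (TOST) idea can be sketched numerically. The Python example below is my construction, not from the answer: for simplicity it uses known-variance ($\sigma = 1$) one-sample z-tests, and declares equivalence to zero only if the mean is significantly above -.1 and significantly below .1:

```python
import math
import random

random.seed(7)

def tost_equivalence(x, low=-0.1, high=0.1, alpha=0.05):
    """Two one-sided z-tests for low < mean < high, assuming sigma = 1."""
    n = len(x)
    mean = sum(x) / n
    se = 1.0 / math.sqrt(n)
    z_low = (mean - low) / se
    z_high = (mean - high) / se
    p_low = 0.5 * math.erfc(z_low / math.sqrt(2))      # H1: mean > low
    p_high = 0.5 * math.erfc(-z_high / math.sqrt(2))   # H1: mean < high
    # Equivalence is declared only if BOTH one-sided tests reject
    return p_low < alpha and p_high < alpha

# A large sample with true mean 0 should be declared equivalent to zero,
# while the same sample shifted by 0.5 should not
x = [random.gauss(0, 1) for _ in range(5000)]
print(tost_equivalence(x))
print(tost_equivalence([xi + 0.5 for xi in x]))
```

Note the asymmetry with an ordinary significance test: here the burden of proof is on showing the effect is inside the equivalence region, not outside it.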
46,873
How concerned should I be about the appropriateness of my prior?
It's always possible to create a prior that will overwhelm your data, no matter how many observations you have. However, for any fixed prior, as the number of observations grows, the influence of the prior shrinks (except for the 0-mass case that Macro pointed out in his comment). For some prior distributions there's a concept of "prior sample size": if your prior sample size is $n_p$ and you have $n$ observations, then the posterior is in some sense a weighted average of the prior and data, weighted with $n_p$ and $n$ respectively. The easiest place to see this is when the Beta distribution is used as a prior for the Binomial distribution, where the prior sample size is $\alpha+\beta$. If I use a $\operatorname{Beta}(4,1)$ prior, that's sort of like saying that I believe my prior information is as good as 5 observations, and I expect success 80% of the time. If I then observe 5 data points (say 3 successes, 2 failures) my posterior will then be $\operatorname{Beta}(7,3)$--now my posterior is worth 10 observations (5 prior + 5 data), with a mean of .7. The prior is still pretty strongly weighted here. But if I observe 500 observations then my prior is basically irrelevant, because my data sample size is 100 times as large as my prior sample size. On the other hand, I could use a $\operatorname{Beta}(8000,2000)$ prior. In this case, even if I observed 5000 data points, my posterior is still mostly determined by my prior. If you're in a case where it's easy to calculate this sort of "prior sample size" (which also includes common models such as Normal-Normal, InverseGamma-Normal, and Gamma-Poisson), then this can give you an idea of how influential your prior is relative to your data. Otherwise I try to err on the side of diffuse priors, on the basis that it's (usually) better to overestimate your posterior uncertainty than to underestimate it.
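The Beta(4, 1) example can be written out in a few lines of Python (a sketch of the conjugate Beta-Binomial updating described above; the numbers are the ones from the answer):

```python
# Conjugate Beta-Binomial updating, following the Beta(4, 1) example above.
# The "prior sample size" is alpha + beta; the posterior mean is a weighted
# average of the prior mean and the observed proportion.
a_prior, b_prior = 4, 1                      # prior worth 5 observations, mean 0.8
successes, failures = 3, 2                   # 5 observed data points

a_post = a_prior + successes                 # 7
b_post = b_prior + failures                  # 3
posterior_mean = a_post / (a_post + b_post)  # 0.7, as stated in the answer

# The same number, seen as a (prior n, data n)-weighted average:
prior_n = a_prior + b_prior
data_n = successes + failures
prior_mean = a_prior / prior_n
data_mean = successes / data_n
weighted = (prior_n * prior_mean + data_n * data_mean) / (prior_n + data_n)

print(posterior_mean, weighted)
```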
46,874
Regression for poisson process in R
You can still use glm in R, but you include the log of $t$ as an 'offset' to take it into account, something like:

fit <- glm( k ~ 1 + offset(log(t)), data=mydata, family=poisson)

This will fit an intercept that is the estimate of $\log \lambda$ (so $\hat\lambda = e^{\hat\beta_0}$), but you could also include covariates if needed.
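For the intercept-only model with an offset, the fit has a closed form: since $\log E[k_i] = \beta_0 + \log t_i$, the maximum likelihood estimate is $\hat\lambda = \sum_i k_i / \sum_i t_i$, and the glm intercept equals $\log \hat\lambda$. A Python sketch with made-up data (my construction, checking that this value maximizes the Poisson log-likelihood):

```python
import math

# Made-up exposure times and event counts
t = [2.0, 5.0, 1.5, 4.0, 3.0]
k = [1, 4, 0, 3, 2]

# Closed-form MLE of the rate: lambda_hat = sum(k) / sum(t)
lam_hat = sum(k) / sum(t)
beta0_hat = math.log(lam_hat)   # this is what glm's intercept estimates

def loglik(lam):
    """Poisson log-likelihood (dropping the constant log(k!) terms)."""
    return sum(ki * math.log(lam * ti) - lam * ti for ki, ti in zip(k, t))

print(lam_hat, beta0_hat)
```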
46,875
Understanding the intra-class correlation coefficient
Common assumptions are that $$ \textrm{Cov}(\mathbf{u}, \mathbf{e}) = \mathbf{0} $$ $$ \textrm{Cov}(\mathbf{e}) = \sigma^2_e \mathbf{I}. $$ Let $i \neq i'$. On the one hand, we have $$\begin{align*} \textrm{Var}(y_{ij}) & = \textrm{Var}(\beta_0 + u_j + e_{ij}) \\ & = \textrm{Var}(u_j + e_{ij}) \\ & = \textrm{Var}(u_j) + \textrm{Var}(e_{ij}) + 2 \textrm{Cov}(u_j, e_{ij})\\ & = \sigma^2_u + \sigma^2_e. \end{align*}$$ On the other hand, we have $$\begin{align*} \textrm{Cov}(y_{ij}, y_{i'j}) & = \textrm{Cov}(\beta_0 + u_j + e_{ij}, \beta_0 + u_j + e_{i'j}) \\ & = \textrm{Cov}(u_j + e_{ij}, u_j + e_{i'j}) \\ & = \textrm{Cov}(u_j, u_j) + \textrm{Cov}(u_j, e_{i'j}) + \textrm{Cov}(e_{ij}, u_j) + \textrm{Cov}(e_{ij}, e_{i'j}) \\ & = \sigma^2_u. \end{align*}$$ Hence $$\begin{align*} \textrm{Cor}(y_{ij}, y_{i'j}) & = \frac{\textrm{Cov}(y_{ij}, y_{i'j})}{\sqrt{\textrm{Var}(y_{ij})}\sqrt{\textrm{Var}(y_{i'j})}} \\ & = \frac{\sigma^2_u}{\sqrt{\sigma^2_u + \sigma^2_e} \sqrt{\sigma^2_u + \sigma^2_e}} \\ & = \frac{\sigma^2_u}{\sigma^2_u + \sigma^2_e}. \end{align*}$$ The latter is the correlation between measurement $y_{ij}$ and measurement $y_{i'j}$ ($i \neq i'$), i.e., the correlation between "any two responses having the same $j$".
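The derivation can be verified by simulation. A Python sketch (my construction): generate the random-intercept model $y_{ij} = u_j + e_{ij}$ with $\sigma^2_u = \sigma^2_e = 1$ and two measurements per group, so the correlation between $y_{1j}$ and $y_{2j}$ should be close to $1/(1+1) = 0.5$ (the intercept $\beta_0$ is taken as 0 since it does not affect the correlation):

```python
import random

random.seed(42)
sigma_u, sigma_e = 1.0, 1.0
n_groups = 100_000

# Two measurements per group j: y_ij = u_j + e_ij
pairs = []
for _ in range(n_groups):
    u = random.gauss(0, sigma_u)
    pairs.append((u + random.gauss(0, sigma_e), u + random.gauss(0, sigma_e)))

y1 = [a for a, _ in pairs]
y2 = [b for _, b in pairs]
m1 = sum(y1) / n_groups
m2 = sum(y2) / n_groups
cov = sum((a - m1) * (b - m2) for a, b in pairs) / n_groups
var1 = sum((a - m1) ** 2 for a in y1) / n_groups
var2 = sum((b - m2) ** 2 for b in y2) / n_groups
icc = cov / (var1 * var2) ** 0.5

# Should be close to sigma_u^2 / (sigma_u^2 + sigma_e^2) = 0.5
print(icc)
```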
46,876
Looking for a OLS-Equation if one Regressor is correlated with the error
While this is not a situation that arises in practice, this is related to the so-called control function approach to dealing with endogeneity. Let me rewrite your (simple) model $$ Y_i = \beta_0 + \beta_1X_i + U_i $$ together with your assumptions $\mathbb{E}(U_i)=0$ and $\mathbb{E}(U_iX_i)=\rho$. Then $$ \mathbb{E}(X_i(U_i-\frac{\rho}{X_i}))=0 $$ so that if I rewrite my model $$ Y_i = \beta_0 + \beta_1X_i + \frac{\rho}{X_i} + \underbrace{U_i-\frac{\rho}{X_i}}_{\equiv V_i} $$ and estimate this model by OLS, constraining the coefficient of the $\frac{1}{X_i}$ term to be $\rho$, I should get consistent estimates of $\beta_1$. So, consider the following Stata simulations clear* program simcont, rclass drop _all set obs 1000 g x1 = rnormal() g x2 = rnormal() g u = x1 + rnormal() g x = x1 + x2 g y = 2 + 3*x + u // ols reg y x mat mA = e(b) return scalar ols = el(mA, 1, colnumb(mA, "x")) g cont = 1/x constraint define 1 cont = 1 // true correlation between error and regressor cnsreg y x cont, constraints(1) // constrained regression mat mA = e(b) return scalar cont = el(mA, 1, colnumb(mA, "x")) end simulate olsCoeff = r(ols) controlFuncCoeff=r(cont), reps(100): simcont kdensity olsCoeff, xline(3, lcolor(green)) addplot(kdensity controlFuncCoeff) /// legend(label(1 "KDE of OLS coeff. estimates") label(2 "KDE of control function coeff. estimates")) /// xtitle("estimates") ytitle("density") title("Comparison of OLS and control function approaches") which produces the following picture (based on 100 replications) However, I would think very carefully and experiment with more regressors and in general more data configurations before I put this estimation strategy to work on real data. Follow-up questions: The OP has asked for some clarifications in the comments for which I am providing an updated answer. 
Let me rewrite your model $$ Y_i = \beta_0 + \beta_1 Z_{1i} + \beta_2 Z_{2i} + \beta_3 X_i + U_i $$ where $Z_{1i}$ and $Z_{2i}$ are exogenous, and $X_{i}$ is endogenous, that is $\mathbb{E}(X_iU_i) = \rho$. In addition, you want the variable $Z_{2i}$ to be constructed as $$ Z_{2i} = \mathbf{1}_{[i\text{ is odd.}]} $$ You are simulating the OLS estimates of the coefficient on $X_i$, that is $\beta_3$. Here is a small Stata script to do that. clear* program simcont, rclass syntax [, errorVariance(real 1.0)] drop _all set obs 1000 scalar beta0 = 5 scalar beta1 = 1 scalar beta2 = 2 scalar beta3 = 3 scalar rho = 0.1 g z1 = rnormal() g z2 = mod(_n, 2) g u = sqrt(`errorVariance')*rnormal() g x = rho*u/`errorVariance' + rnormal() g y = beta0 + beta1*z1 + beta2*z2 + beta3*x + u reg y z1 z2 x mat mA = e(b) return scalar ols = el(mA, 1, colnumb(mA, "x")) // return the results of the heteroskedasticity test estat hettest, rhs iid return scalar hettestPValues = r(p) end // simulate with error variance = 1 simulate olsCoeff = r(ols) hettestPValues = r(hettestPValues), reps(100): simcont su olsCoeff hettestPValues // p-values have the correct mean; no heteroskedasticity cap mat drop biasBeta forvalues errorVariance = 2(1)6 { simulate olsCoeff = r(ols), reps(100): simcont, errorVariance(`errorVariance') qui su olsCoeff mat biasBeta = (nullmat(biasBeta), r(mean) - 3) local colNames "`colNames' errVar:`errorVariance' " } mat colnames biasBeta = `colNames' mat list biasBeta Now note that there are at least two discrepancies here. The first thing is that there is no heteroskedasticity in the model as you have written it. The mean of the p-values of an LM test of homoskedasticity from the simulations indicates that they are being drawn under the null. . su olsCoeff hettestPValues // p-values have the correct mean; no heteroskedasticity Variable | Obs Mean Std. Dev. 
Min Max -------------+-------------------------------------------------------- olsCoeff | 100 3.094182 .0326048 3.007202 3.179945 hettestPVa~s | 100 .4762952 .2885692 .0039824 .9844939 The next thing to note is that the asymptotic bias is the same for all values of the error variance, as long as the degree of endogeneity is the same. Here are the results of my simulation . mat list biasBeta biasBeta[1,5] errVar: errVar: errVar: errVar: errVar: 2 3 4 5 6 r1 .09913641 .10274499 .0912232 .11309942 .09122764 If you are getting results different than this, then you should show us your code, and we can compare the two. Update to follow-up Let me rewrite your model one more time $$ Y_i = \beta_0 + \beta_1Z_{1i} + \beta_2 Z_{2i} + \beta_3 X_i + U_{1i} $$ where as before, $Z_{1i}$ and $Z_{2i}$ are exogenous and $X_i$ is endogenous. You assume the data generating process $$ \begin{align} Z_{2i} &= \mathbf{1}_{[i\text{ is odd}]}\\ X_i &= \alpha_0 + \alpha_1Z_{1i} + U_{2i} \end{align} $$ You introduce endogeneity by assuming that $U_{1i}$ and $U_{2i}$ are correlated. $$ \mathbb{C}(U_{1i}, U_{2i}) = \rho $$ Also, you assume that the errors in the reduced form equation are heteroskedastic: $$ \begin{align} \mathbb{V}(U_{2i}\mid Z_{2i} = 0) &= 1 \\ \mathbb{V}(U_{2i}\mid Z_{2i} = 1) &= \tfrac{1}{q} \\ \end{align} $$ Here is the Stata code to simulate this DGP. 
clear* program simcont, rclass syntax [, Q(real 1.0)] drop _all set obs 1000 scalar beta0 = 5 scalar beta1 = 1 scalar beta2 = 2 scalar beta3 = 3 scalar alpha0 = 1 scalar alpha1 = 4 scalar rho = 0.1 g z1 = rnormal() g z2 = mod(_n, 2) scalar a11 = 1 scalar a12 = rho*sqrt(1)*sqrt(1) scalar a13 = rho*sqrt(1)*sqrt(1/`q') scalar a21 = rho*sqrt(1)*sqrt(1) scalar a22 = 1 scalar a23 = 0 scalar a31 = rho*sqrt(1)*sqrt(1/`q') scalar a32 = 0 scalar a33 = 1/`q' mat corrMatrix= (a11, a12, a13 \ a21, a22, a23 \a31, a32, a33) drawnorm u1 u3 u4, cov(corrMatrix) g u2 = cond(z2, u3, u4) g x = alpha0 + alpha1*z2 + u2 g y = beta0 + beta1*z1 + beta2*z2 + beta3*x + u1 reg y z1 z2 x mat mA = e(b) return scalar ols = el(mA, 1, colnumb(mA, "x")) // return the results of the heteroskedasticity test qui reg x z2 estat hettest, rhs iid return scalar hettestPValues = r(p) end // simulate to check for heteroskedasticity simulate olsCoeff = r(ols) hettestPValues = r(hettestPValues), reps(100): simcont, q(5) su hettestPValues // strong evidence of heteroskedasticity in the reduced form cap mat drop biasBeta forvalues q = 2(1)5 { simulate olsCoeff = r(ols), reps(1000): simcont, q(`q') qui su olsCoeff mat biasBeta = (nullmat(biasBeta), r(mean) - 3) local colNames "`colNames' q:`q' " } mat colnames biasBeta = `colNames' mat list biasBeta Note that the heteroskedasticity is where you put it in the model, and the simulation results are now able to find it. . su hettestPValues // strong evidence of heteroskedasticity in the reduced form Variable | Obs Mean Std. Dev. Min Max -------------+-------------------------------------------------------- hettestPVa~s | 100 1.53e-26 8.12e-26 3.09e-38 6.22e-25 Also, I can confirm that what you claim is actually true, that as the heteroskedasticity in the reduced form equation increases, the bias in the OLS estimator also increases. 
biasBeta[1,4] q: q: q: q: 2 3 4 5 r1 .11242463 .11808594 .12122701 .12210455 A simple explanation of this is that as $q$ increases, the conditional (on $Z_{2i}=1$) and hence the unconditional variance of the reduced form error $U_{2i}$ decreases (check through a variance decomposition) which means that the variance of the endogenous regressor decreases, which means that the $(\mathbf{X}'\mathbf{X})^{-1}$ increases in magnitude, and the overall bias increases (this is a very rough description -- I am sure it can be formalized).
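The endogeneity bias of plain OLS in the first simulation of this answer can also be sketched in Python instead of Stata (my construction, mirroring the DGP x = x1 + x2, u = x1 + noise, y = 2 + 3x + u). Here Cov(x, u) = Var(x1) = 1 and Var(x) = 2, so the OLS slope should converge to 3 + 1/2 = 3.5 rather than 3:

```python
import random

random.seed(0)
n = 200_000

x, y = [], []
for _ in range(n):
    x1 = random.gauss(0, 1)
    x2 = random.gauss(0, 1)
    u = x1 + random.gauss(0, 1)   # error correlated with the regressor
    xi = x1 + x2
    x.append(xi)
    y.append(2 + 3 * xi + u)

# Simple-regression OLS slope: Cov(x, y) / Var(x)
mx = sum(x) / n
my = sum(y) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)

# Probability limit: 3 + Cov(x, u) / Var(x) = 3.5
print(slope)
```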
Looking for a OLS-Equation if one Regressor is correlated with the error
While this is not a situation that arises in practice, this is related to the so-called control function approach to dealing with endogeneity. Let me rewrite your (simple) model $$ Y_i = \beta_0 + \be
Looking for a OLS-Equation if one Regressor is correlated with the error While this is not a situation that arises in practice, this is related to the so-called control function approach to dealing with endogeneity. Let me rewrite your (simple) model $$ Y_i = \beta_0 + \beta_1X_i + U_i $$ together with your assumptions $\mathbb{E}(U_i)=0$ and $\mathbb{E}(U_iX_i)=\rho$. Then $$ \mathbb{E}(X_i(U_i-\frac{\rho}{X_i}))=0 $$ so that if I rewrite my model $$ Y_i = \beta_0 + \beta_1X_i + \frac{\rho}{X_i} + \underbrace{U_i-\frac{\rho}{X_i}}_{\equiv V_i} $$ and estimate this model by OLS, constraining the coefficient of the $\frac{1}{X_i}$ term to be $\rho$, I should get consistent estimates of $\beta_1$. So, consider the following Stata simulations clear* program simcont, rclass drop _all set obs 1000 g x1 = rnormal() g x2 = rnormal() g u = x1 + rnormal() g x = x1 + x2 g y = 2 + 3*x + u // ols reg y x mat mA = e(b) return scalar ols = el(mA, 1, colnumb(mA, "x")) g cont = 1/x constraint define 1 cont = 1 // true correlation between error and regressor cnsreg y x cont, constraints(1) // constrained regression mat mA = e(b) return scalar cont = el(mA, 1, colnumb(mA, "x")) end simulate olsCoeff = r(ols) controlFuncCoeff=r(cont), reps(100): simcont kdensity olsCoeff, xline(3, lcolor(green)) addplot(kdensity controlFuncCoeff) /// legend(label(1 "KDE of OLS coeff. estimates") label(2 "KDE of control function coeff. estimates")) /// xtitle("estimates") ytitle("density") title("Comparison of OLS and control function approaches") which produces the following picture (based on 100 replications) However, I would think very carefully and experiment with more regressors and in general more data configurations before I put this estimation strategy to work on real data. Follow-up questions: The OP has asked for some clarifications in the comments for which I am providing an updated answer. 
Let me rewrite your model
$$ Y_i = \beta_0 + \beta_1 Z_{1i} + \beta_2 Z_{2i} + \beta_3 X_i + U_i $$
where $Z_{1i}$ and $Z_{2i}$ are exogenous, and $X_{i}$ is endogenous, that is, $\mathbb{E}(X_iU_i) = \rho$. In addition, you want the variable $Z_{2i}$ to be constructed as
$$ Z_{2i} = \mathbf{1}_{[i\text{ is odd}]}. $$
You are simulating the OLS estimates of the coefficient on $X_i$, that is, $\beta_3$. Here is a small Stata script to do that.

```stata
clear*
program simcont, rclass
    syntax [, errorVariance(real 1.0)]
    drop _all
    set obs 1000
    scalar beta0 = 5
    scalar beta1 = 1
    scalar beta2 = 2
    scalar beta3 = 3
    scalar rho = 0.1
    g z1 = rnormal()
    g z2 = mod(_n, 2)
    g u = sqrt(`errorVariance')*rnormal()
    g x = rho*u/`errorVariance' + rnormal()
    g y = beta0 + beta1*z1 + beta2*z2 + beta3*x + u
    reg y z1 z2 x
    mat mA = e(b)
    return scalar ols = el(mA, 1, colnumb(mA, "x"))
    // return the results of the heteroskedasticity test
    estat hettest, rhs iid
    return scalar hettestPValues = r(p)
end

// simulate with error variance = 1
simulate olsCoeff = r(ols) hettestPValues = r(hettestPValues), reps(100): simcont
su olsCoeff hettestPValues // p-values have the correct mean; no heteroskedasticity

cap mat drop biasBeta
forvalues errorVariance = 2(1)6 {
    simulate olsCoeff = r(ols), reps(100): simcont, errorVariance(`errorVariance')
    qui su olsCoeff
    mat biasBeta = (nullmat(biasBeta), r(mean) - 3)
    local colNames "`colNames' errVar:`errorVariance'"
}
mat colnames biasBeta = `colNames'
mat list biasBeta
```

Now note that there are at least two discrepancies here. The first is that there is no heteroskedasticity in the model as you have written it. The mean of the p-values of an LM test of homoskedasticity from the simulations indicates that they are being drawn under the null.

```
. su olsCoeff hettestPValues // p-values have the correct mean; no heteroskedasticity

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
    olsCoeff |       100    3.094182    .0326048   3.007202   3.179945
hettestPVa~s |       100    .4762952    .2885692   .0039824   .9844939
```

The next thing to note is that the asymptotic bias is the same for all values of the error variance, as long as the degree of endogeneity is the same. Here are the results of my simulation.

```
. mat list biasBeta

biasBeta[1,5]
       errVar:     errVar:     errVar:     errVar:     errVar:
             2           3           4           5           6
r1   .09913641   .10274499    .0912232   .11309942   .09122764
```

If you are getting results different from this, then you should show us your code, and we can compare the two.

Update to follow-up

Let me rewrite your model one more time
$$ Y_i = \beta_0 + \beta_1Z_{1i} + \beta_2 Z_{2i} + \beta_3 X_i + U_{1i} $$
where, as before, $Z_{1i}$ and $Z_{2i}$ are exogenous and $X_i$ is endogenous. You assume the data generating process
$$ \begin{align} Z_{2i} &= \mathbf{1}_{[i\text{ is odd}]}\\ X_i &= \alpha_0 + \alpha_1Z_{2i} + U_{2i} \end{align} $$
You introduce endogeneity by assuming that $U_{1i}$ and $U_{2i}$ are correlated,
$$ \mathbb{C}(U_{1i}, U_{2i}) = \rho. $$
Also, you assume that the errors in the reduced form equation are heteroskedastic:
$$ \begin{align} \mathbb{V}(U_{2i}\mid Z_{2i} = 0) &= 1 \\ \mathbb{V}(U_{2i}\mid Z_{2i} = 1) &= \tfrac{1}{q} \\ \end{align} $$
Here is the Stata code to simulate this DGP.
```stata
clear*
program simcont, rclass
    syntax [, Q(real 1.0)]
    drop _all
    set obs 1000
    scalar beta0 = 5
    scalar beta1 = 1
    scalar beta2 = 2
    scalar beta3 = 3
    scalar alpha0 = 1
    scalar alpha1 = 4
    scalar rho = 0.1
    g z1 = rnormal()
    g z2 = mod(_n, 2)
    scalar a11 = 1
    scalar a12 = rho*sqrt(1)*sqrt(1)
    scalar a13 = rho*sqrt(1)*sqrt(1/`q')
    scalar a21 = rho*sqrt(1)*sqrt(1)
    scalar a22 = 1
    scalar a23 = 0
    scalar a31 = rho*sqrt(1)*sqrt(1/`q')
    scalar a32 = 0
    scalar a33 = 1/`q'
    mat corrMatrix = (a11, a12, a13 \ a21, a22, a23 \ a31, a32, a33)
    drawnorm u1 u3 u4, cov(corrMatrix)
    g u2 = cond(z2, u4, u3)  // variance 1/q when z2 = 1, variance 1 when z2 = 0
    g x = alpha0 + alpha1*z2 + u2
    g y = beta0 + beta1*z1 + beta2*z2 + beta3*x + u1
    reg y z1 z2 x
    mat mA = e(b)
    return scalar ols = el(mA, 1, colnumb(mA, "x"))
    // return the results of the heteroskedasticity test
    qui reg x z2
    estat hettest, rhs iid
    return scalar hettestPValues = r(p)
end

// simulate to check for heteroskedasticity
simulate olsCoeff = r(ols) hettestPValues = r(hettestPValues), reps(100): simcont, q(5)
su hettestPValues // strong evidence of heteroskedasticity in the reduced form

cap mat drop biasBeta
forvalues q = 2(1)5 {
    simulate olsCoeff = r(ols), reps(1000): simcont, q(`q')
    qui su olsCoeff
    mat biasBeta = (nullmat(biasBeta), r(mean) - 3)
    local colNames "`colNames' q:`q'"
}
mat colnames biasBeta = `colNames'
mat list biasBeta
```

Note that the heteroskedasticity is where you put it in the model, and the simulation results are now able to find it.

```
. su hettestPValues // strong evidence of heteroskedasticity in the reduced form

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
hettestPVa~s |       100    1.53e-26    8.12e-26   3.09e-38   6.22e-25
```

Also, I can confirm that what you claim is actually true: as the heteroskedasticity in the reduced form equation increases, the bias in the OLS estimator also increases.
```
. mat list biasBeta

biasBeta[1,4]
            q:          q:          q:          q:
             2           3           4           5
r1   .11242463   .11808594   .12122701   .12210455
```

A simple explanation of this is that as $q$ increases, the conditional (on $Z_{2i}=1$) and hence the unconditional variance of the reduced form error $U_{2i}$ decreases (check through a variance decomposition), which means that the variance of the endogenous regressor decreases, which means that $(\mathbf{X}'\mathbf{X})^{-1}$ increases in magnitude, and the overall bias increases (this is a very rough description -- I am sure it can be formalized).
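The same pattern can be replicated in a short Python sketch of my own (a translation of the DGP above, drawing the error pair directly so that $\mathrm{corr}(U_{1i},U_{2i})=\rho$ in both $Z_{2i}$ groups). A quick calculation, mine rather than the author's, suggests the plim of the bias is $\rho\,(1+q^{-1/2})/(1+q^{-1})$, which matches the simulated values of roughly 0.11-0.12 over $q=2,\dots,5$.

```python
import numpy as np

rng = np.random.default_rng(1)

def bias(q, n=2000, reps=500, rho=0.1, beta3=3.0):
    """Mean OLS bias of the x-coefficient under the heteroskedastic reduced form."""
    est = np.empty(reps)
    for r in range(reps):
        z1 = rng.standard_normal(n)
        z2 = np.arange(n) % 2                 # the odd/even dummy
        u1 = rng.standard_normal(n)
        w = rng.standard_normal(n)
        sd2 = np.where(z2 == 1, 1 / np.sqrt(q), 1.0)     # V(U2|Z2=1) = 1/q
        u2 = sd2 * (rho * u1 + np.sqrt(1 - rho**2) * w)  # corr(u1, u2) = rho
        x = 1 + 4 * z2 + u2
        y = 5 + 1 * z1 + 2 * z2 + beta3 * x + u1
        X = np.column_stack([np.ones(n), z1, z2, x])
        est[r] = np.linalg.lstsq(X, y, rcond=None)[0][3]
    return est.mean() - beta3

biases = {q: bias(q) for q in (2, 3, 4, 5)}
print(biases)
```

One caveat worth noting: that same expression falls back to $\rho$ as $q\to\infty$, so the increase in bias over $q=2,\dots,5$ is not global.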
Which data mining packages support anomaly detection?
Use ELKI. It not only has tons of anomaly detection algorithms (they call them "outlier detection" though), but it is also significantly faster than the others, in particular when you use indexes.
Which data mining packages support anomaly detection?
For Venturini's (2011) outlier detection method Washer, its inventor published an R implementation here and an R package here. Venturini, A. (2011). Time Series Outlier Detection: A New Non Parametric Methodology (Washer). Statistica 71: 329-344.
Which data mining packages support anomaly detection?
R has a full task view listing the major implementations.
Estimating specific variance for items in factor analysis - how to achieve the theoretical maximum?
Not sure my response is relevant; perhaps what I say is not news for you. It is about starting values for communalities in factor analysis. Actually, you cannot estimate the true communality (and likewise uniqueness) of a variable before you've done FA. This is because communalities are tied up with the number of factors m being extracted. In the principal axes method of factor extraction, communalities are iteratively trained (like dogs are trained) to restore the pairwise coefficients - correlations or covariances - maximally by m factors. To estimate starting values for communalities, several methods can be used, as you probably know:

- The squared multiple correlation coefficient$^1$ between the variable and the remaining variables is considered the best guess for the starting value of the communality of the variable. This value is the lower bound for the "true", resultant, communality.
- Another possible guess for the value is the maximal or the mean absolute correlation/covariance of the variable with the rest.
- Still another guess value used sometimes is the test-retest reliability (correlation/covariance) coefficient. This would be the upper bound for the "true" communality.
- And in specific cases, user-defined initial values are used (e.g. communality values borrowed from literature).

$^1$ A closer look. If $\bf R$ is the analyzed correlation or covariance matrix, and you make a diagonal matrix $\bf D$ whose diagonal elements are the inverses of the diagonal elements of $\bf R^{-1}$, then the matrix $\bf DR^{-1}D-2D+R$ is called the "image covariance matrix" of $\bf R$ (sic! "covariance" irrespective of whether $\bf R$ holds covariances or correlations). Its diagonal entries are the "images" in $\bf R$ (actually, these images are the diagonal of $\bf R-D$). If $\bf R$ is a correlation matrix, the images are the squared multiple correlation coefficients (of dependency of a variable on all the other variables). If $\bf R$ is a covariance matrix, the images are the squared multiple correlation coefficients multiplied by the respective variable variances. These values - the images - are used as starting communalities in both cases.

A side note for the curious: the matrix $\bf DR^{-1}D$ is known as the "anti-image covariance matrix" of $\bf R$. If you convert it to the "anti-image correlation matrix" (in the usual way you convert a covariance into a correlation, $r_{ij}=cov_{ij}/(\sigma_i \sigma_j)$), then the off-diagonal elements are the negatives of the partial correlation coefficients (between two variables, controlling for all the others). Partial correlation coefficients are optionally used within factor analysis to compute the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO). See also.
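For a correlation matrix, the "image" diagonal can be read straight off $\bf R^{-1}$: the squared multiple correlation of variable $j$ on the others equals $1 - 1/(\mathbf R^{-1})_{jj}$, which is exactly the $j$-th diagonal entry of $\bf DR^{-1}D-2D+R$. A small numpy check of my own (a toy illustration, not tied to any particular FA package):

```python
import numpy as np

rng = np.random.default_rng(42)
# correlated toy data: 4 variables, 500 observations
X = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))
R = np.corrcoef(X, rowvar=False)

Rinv = np.linalg.inv(R)
D = np.diag(1 / np.diag(Rinv))          # diagonal of inverses of diag(R^-1)
image_cov = D @ Rinv @ D - 2 * D + R    # "image covariance matrix"
smc = np.diag(image_cov)                # starting communalities (the "images")

# cross-check: explicit R^2 from regressing each standardized variable on the rest
Z = (X - X.mean(0)) / X.std(0)
r2 = []
for j in range(4):
    others = np.delete(Z, j, axis=1)
    beta = np.linalg.lstsq(others, Z[:, j], rcond=None)[0]
    resid = Z[:, j] - others @ beta
    r2.append(1 - resid @ resid / (Z[:, j] @ Z[:, j]))

print(np.round(smc, 6), np.round(r2, 6))
```

The two computations agree to machine precision, which is the identity behind using SMCs as starting communalities.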
Book recommendations for biostatisticians in CRO and pharmacy
I must include the excellent "Statistical Issues in Drug Development" by Stephen Senn. Brilliant book.
Book recommendations for biostatisticians in CRO and pharmacy
You might find Hahn & Doganaksoy (2011), A Career in Statistics: Beyond the Numbers, to be helpful for your purposes.
Test for equal variance
Note that all the tests for equal variances are rule-out tests. They test the null hypothesis that the 2 variances (standard deviations) are equal, so if you reject the null hypothesis then you can be fairly sure that they are not equal, but if you get a non-significant result that does not mean that they are equal: they could be equal, or you may just not have enough power to find the difference. The rules of thumb are often more useful, because if the variances are not equal but still similar, then your other tests are still reasonable. What is most important is an understanding of the science that produces the data and the question of interest. There are cases where the distributions have different enough variances that you would not want to use methods that assume equal variances, yet many of the samples from those distributions would not reject the hypothesis of equal variances.
Test for equal variance
I wonder if your lecturer was referring to a common rule of thumb: analyses like ANOVA are fairly robust to heterogeneity and can often withstand differing variances between groups by up to a ratio of four times. You can get a sense of that by just looking at the variances. Another possibility is that your lecturer was warning about the possibility that there is a constant coefficient of variation (the variance is a constant function of the mean), which would make sense of the suggestion to look at the mean also. This phenomenon can be common in some cases, such as when working with counts. I wrote a little about the various tests for homogeneity of variance here: why-levene-test-of-equality-of-variances-rather-than-f-ratio, if it helps.
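That rule of thumb is easy to probe by simulation. Here is a small Python sketch of my own (the sample sizes and variances are arbitrary choices), checking the Type I error of the pooled two-sample t-test when one group has four times the variance of the other but the group sizes are equal; in that balanced case the pooled variance still estimates the variance of the mean difference correctly, so the test stays close to its nominal level.

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps, crit = 50, 20_000, 1.96    # large-sample 5% critical value

# two groups with equal true means but variances 1 and 4
a = rng.normal(0.0, 1.0, (reps, n))
b = rng.normal(0.0, 2.0, (reps, n))

sp2 = (a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2   # pooled variance (equal n)
t = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(sp2 * 2 / n)
rate = np.mean(np.abs(t) > crit)
print(rate)   # close to the nominal 0.05
```

With unequal group sizes the picture changes markedly, which is why Welch's correction is usually the safer default.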
Multivariate analysis techniques for fMRI data
A very useful text is The Statistical Analysis of Functional MRI Data by Nicole Lazar (free pdf via SpringerLink with institutional access). Chapter 7 covers multivariate approaches to the analysis of fMRI data. You don't mention it in your post, but the analysis of resting-state vs. task data generally requires different approaches. Resting state typically relies on principal components analysis (PCA) or independent components analysis (ICA), both of which can be considered forms of correlation analysis. For analyzing voxel activation in the presence of a task, I recommend chapter 6 of the book I've linked, which covers spatiotemporal models. I have more experience in this area, and a simple approach is to fit the time series with a linear model (i.e. ANOVA) and convolve the design matrix with the so-called "canonical hemodynamic response function (HRF)." Additionally, I found course materials from the University of New Mexico helpful when I was starting out in this field: Analysis Methods in Functional Magnetic Resonance Imaging.
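To make that task-GLM recipe concrete, here is a toy Python sketch of my own (real analyses use packages such as SPM, FSL, or nilearn; the TR, block lengths, and HRF parameters below are assumptions): build a boxcar task regressor, convolve it with a double-gamma canonical HRF, and fit a simulated voxel time series by least squares.

```python
import numpy as np
from math import gamma

TR = 2.0                           # repetition time in seconds (assumed)
t = np.arange(0, 32, TR)           # HRF support: 32 s

def gamma_pdf(t, shape, rate=1.0):
    return rate**shape * t**(shape - 1) * np.exp(-rate * t) / gamma(shape)

# double-gamma canonical HRF (SPM-style shape: peak ~6 s, undershoot ~16 s)
hrf = gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6
hrf /= hrf.sum()

# block design: 20 s task / 20 s rest, 200 volumes
n = 200
boxcar = (np.arange(n) // 10) % 2
design = np.convolve(boxcar, hrf)[:n]   # HRF-convolved task regressor

# simulated voxel: baseline 100, true activation amplitude 2.5, Gaussian noise
rng = np.random.default_rng(3)
y = 100 + 2.5 * design + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), design])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta)   # beta[1] estimates the activation amplitude
```

The fitted coefficient on the convolved regressor recovers the activation amplitude; in a real analysis you would add drift and motion regressors to the design matrix.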
Should I use missing value using imputation or listwise or pairwise deletion methods?
It depends on:

- the amount of missing data (what percentage of the data is missing), and
- the type of missing data (MAR, MCAR, NMAR).

According to this nice article (Tsikriktsis: A review of techniques for treating missing data in OM survey research, 2005), if more than 10% of the data is missing, the best solution is:

- maximum likelihood imputation if data are NMAR (not missing at random),
- maximum likelihood and hot-deck if data are MAR (missing at random),
- pairwise deletion, hot-deck, or regression if data are MCAR (missing completely at random).
Should I use missing value using imputation or listwise or pairwise deletion methods?
In short: if your data are missing completely at random (MCAR), i.e., the true value of a missing entry has the same distribution as the observed values and missingness cannot be predicted from any other variables, your results will be unbiased but inefficient under listwise or pairwise deletion. Multiple imputation by chained equations is regarded by many researchers as the best imputation method.
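A bare-bones illustration of the chained-equations idea in Python (my own toy version; in practice use an established implementation such as the R package mice or scikit-learn's IterativeImputer): initialize missing cells with column means, then cycle through the columns, regressing each on the others and refreshing its missing entries.

```python
import numpy as np

rng = np.random.default_rng(11)

# correlated complete data, then knock out ~20% of cells completely at random (MCAR)
n, p = 400, 3
L = np.array([[1.0, 0.0, 0.0], [0.8, 0.6, 0.0], [0.5, 0.5, 0.7]])
full = rng.standard_normal((n, p)) @ L.T
mask = rng.random((n, p)) < 0.2
data = np.where(mask, np.nan, full)

imp = np.where(mask, np.nanmean(data, axis=0), data)   # mean-initialize
for sweep in range(10):                                # chained regression sweeps
    for j in range(p):
        others = np.delete(imp, j, axis=1)
        X = np.column_stack([np.ones(n), others])
        obs = ~mask[:, j]
        beta = np.linalg.lstsq(X[obs], imp[obs, j], rcond=None)[0]
        imp[mask[:, j], j] = X[mask[:, j]] @ beta      # refresh missing cells

# compare against plain mean imputation on the truly missing cells
mean_filled = np.where(mask, np.nanmean(data, axis=0), data)
err_chain = ((imp - full)[mask] ** 2).mean()
err_mean = ((mean_filled - full)[mask] ** 2).mean()
print(err_chain, err_mean)
```

Note this toy version is deterministic regression imputation; proper multiple imputation adds a random draw to each refresh and repeats the whole procedure M times to propagate the imputation uncertainty.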
Problems estimating anisotropy parameters for a spatial model
In short, identifying anisotropy is hopeless with these sparse data. The two parameters in question, psiA and psiR, describe anisotropy (the angle and ratio, respectively, of a "geometric anisotropy": consult GSLIB or Journel & Huijbregts for details, because the geoR documentation in Diggle & Ribeiro Jr is indeed inadequate concerning anisotropy). With relatively few data points it is quite possible--indeed, with soils data (which can be notoriously variable) it is quite likely--that in some directions almost no spatial correlation is detected while in other directions there appears to be some correlation. This can result in near-infinite ratios. Also, if there is a trend in just one direction and it is not removed, this trend will create a strong anisotropy. Your problem is that five points are way too few for any kind of parameter estimation and $30$ are still too few to identify anisotropy reliably. Rules of thumb in the literature suggest you need at a minimum between $30$ and $100$ points just to get started with estimating the parameters and computing the predictions (that is, kriging). (All rules of thumb have exceptions, but it sounds like these data are not nice enough to qualify.) If you do not assume an isotropic model, you need to explore directional variograms in at least four cardinal directions, in which case each such variogram would be based on approximately $5$ to $10$ points, which again is too few. To identify anisotropy, figure on needing about $100$ points. The cure is to impose isotropic variograms (or determine anisotropy from considerations independent of the data) and hope for the best. Expect the prediction errors to be large.
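One way to see the sparsity problem is simply to count how many point pairs feed each directional variogram. A quick Python sketch of my own, with an arbitrary random layout of 30 sites and four 45-degree direction sectors:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
xy = rng.uniform(0, 100, (n, 2))     # 30 sampling sites in a 100 x 100 field

# all n(n-1)/2 = 435 pairs, bucketed into 4 direction sectors
# (E-W, NE-SW, N-S, NW-SE, each 45 degrees wide)
i, j = np.triu_indices(n, k=1)
d = xy[j] - xy[i]
angle = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 180
sector = ((angle + 22.5) // 45).astype(int) % 4   # wrap [157.5, 180) back to E-W
counts = np.bincount(sector, minlength=4)

print(counts, counts.sum())
```

Each directional variogram then has only on the order of a hundred pairs to spread over all of its lag bins, and those pairs are far from independent (each of the 30 sites appears in 29 of them), which is why anisotropy estimates from such data are so unstable.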
Mahalanobis distance and percentage of the distribution represented
I found this to be a very interesting question because it is very natural to ask, but I have never seen the answer or thought about it before. Of course the answer should depend on the dimension of the normal. In researching this on the net I found that the squared Mahalanobis distance for a d-dimensional multivariate normal is chi-square distributed with d degrees of freedom. This assumes the mean and covariance matrix are known. So from the chi-square distribution it would be easy to find, in units of squared Mahalanobis distance, the 90th, 95th, and 99th percentiles, and thus the ellipsoid that has that coverage. So what I just explained elaborates on Bill Huber's correct but terse response. Although this is just taken from a chi-square table, I thought it would be interesting to look at the table below from the Mahalanobis distance perspective.

TABLE OF MAHALANOBIS DISTANCE COVERING 95% OF A MULTIVARIATE NORMAL DISTRIBUTION IN D DIMENSIONS

```
DIMENSION      MD    CHI-SQUARE (MD^2)
        1   1.960                3.841
        2   2.448                5.991
        3   2.796                7.815
        4   3.080                9.488
        5   3.327               11.070
       10   4.279               18.307
       15   5.000               24.996
       20   5.604               31.410
       25   6.136               37.652
       30   6.691               44.773
```
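These entries are just chi-square quantiles, so they are easy to reproduce, either exactly with `scipy.stats.chi2.ppf(0.95, d)` or, as in this numpy-only sketch of mine, by Monte Carlo (the squared norm of a d-dimensional standard normal is chi-square with d degrees of freedom):

```python
import numpy as np

rng = np.random.default_rng(9)

def md95(d, n=500_000):
    """Monte Carlo 95th percentile of squared Mahalanobis distance in d dimensions."""
    sq = (rng.standard_normal((n, d)) ** 2).sum(axis=1)   # chi-square(d) draws
    q = np.quantile(sq, 0.95)                             # ~ chi2.ppf(0.95, d)
    return np.sqrt(q), q

for d in (1, 2, 5, 10):
    md, q = md95(d)
    print(d, round(md, 3), round(q, 3))
```

This reproduces the tabulated rows to within Monte Carlo error of a couple of hundredths.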
Mahalanobis distance and percentage of the distribution represented
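Those radii come straight from the chi-square quantile function. As a quick check, here is a short Python sketch (assuming scipy is available; the Mahalanobis radius for a given coverage is the square root of the corresponding chi-square quantile):

```python
from scipy.stats import chi2

# 95% coverage radius in Mahalanobis units for a d-dimensional normal:
# the squared distance is chi-square with d degrees of freedom, so take
# the square root of the 0.95 quantile.
for d in [1, 2, 3, 4, 5, 10, 15, 20, 25, 30]:
    md2 = chi2.ppf(0.95, df=d)       # squared Mahalanobis distance
    print(f"{d:2d}  MD = {md2 ** 0.5:.3f}  MD^2 = {md2:.3f}")
```

Changing 0.95 to 0.90 or 0.99 gives the other coverage levels mentioned above.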
Mahalanobis distance and percentage of the distribution represented
In Wikipedia, there is a table with one-sigma integrals through dimension 10. In the referenced source article there is a full table through $7\sigma$. (Source: Table 1 of "Confidence Analysis of Standard Deviational Ellipse and Its Extension into Higher Dimensional Euclidean Space", Bin Wang, Wenzhong Shi, Zelang Miao.)
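Such a table of k-sigma integrals can also be recomputed directly: the mass inside the Mahalanobis-radius-k ellipsoid of a d-dimensional normal is $P(\chi^2_d \le k^2)$. A minimal Python sketch (assuming scipy is available):

```python
from scipy.stats import chi2

# One-sigma coverage of a d-dimensional normal: P(MD <= 1) = P(chi2_d <= 1).
# Note how the coverage shrinks rapidly with dimension.
for d in range(1, 11):
    print(d, round(chi2.cdf(1.0 ** 2, df=d), 4))
```

Replacing `1.0` with k up to 7 reproduces the full k-sigma table referenced above.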
Hidden Markov Model segmentation of different proportions of binary data
My response is in two parts. First, by changing the input (initial) transition probabilities, you can get something similar to what you'd like. Here's some R code demonstrating this for your example:

library(HMM)
States <- c("0","1","2","3","4")
Symbols <- c("0","1")
startProbs <- rep(0.2,5)
emissionProbs <- matrix(c(0.999,0.75,0.5,0.25,0.001,0.001,0.25,0.5,0.75,0.999),5,2)
transProbs <- matrix(0.025,5,5)
diag(transProbs) <- 0.9
hmm <- initHMM(States, Symbols, startProbs, transProbs, emissionProbs)

> print(hmm)
$States
[1] "0" "1" "2" "3" "4"

$Symbols
[1] "0" "1"

$startProbs
  0   1   2   3   4
0.2 0.2 0.2 0.2 0.2

$transProbs
    to
from     0     1     2     3     4
   0 0.900 0.025 0.025 0.025 0.025
   1 0.025 0.900 0.025 0.025 0.025
   2 0.025 0.025 0.900 0.025 0.025
   3 0.025 0.025 0.025 0.900 0.025
   4 0.025 0.025 0.025 0.025 0.900

$emissionProbs
      symbols
states     0     1
     0 0.999 0.001
     1 0.750 0.250
     2 0.500 0.500
     3 0.250 0.750
     4 0.001 0.999

With this initial transition matrix, we get the following probabilities for observations 8, 20, 30, and 40, which are in the middle (roughly) of sequences of 0, 1, 0, and 0,1,0,1... respectively:

obs <- as.character(c(0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,
                      0,0,0,0,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,0,0,0,0,1,0,
                      0,0,1,0,0,0,1,0,0,1,1,0,0,1,0,0,0,0,1,0))
post <- posterior(hmm, obs)

> post[,c(8,20,30,40)]
      index
states           8           20          30         40
     0 0.934764162  0.000001395 0.725475508 0.00004174
     1 0.059970011  0.000724501 0.244379742 0.31189082
     2 0.004632750  0.006383026 0.028836815 0.56445433
     3 0.000631774  0.082112681 0.001305885 0.11840354
     4 0.000001303  0.910778397 0.000002049 0.00520957

As you can see, the maximum-probability states are 0, 4, 0, and 2 respectively, as you wished. It may also help you out if you don't pick such extreme probabilities for states 0 and 4, perhaps choosing 0.95 / 0.05 instead of 0.999 / 0.001. This will make it easier to have higher transition probabilities out of a given state without winding up in states 0 and 4 all the time.

If you are considering alternatives to HMMs, you might consider a continuous state space model, which can be formulated as a generalized additive model. Using the mgcv package in R, this could be set up as follows:

library(mgcv)
obs <- c(0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,
         0,0,0,0,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,0,0,0,0,1,0,
         0,0,1,0,0,0,1,0,0,1,1,0,0,1,0,0,0,0,1,0)
time <- seq(1,length(obs))
foo <- gam(obs~s(time),family="binomial")

> predict(foo,type="response")[c(8,20,30,40)]
                   8                   20                   30
0.000000000000000222 0.999999999999999778 0.000277113887323986
                  40
0.540166858432701846

As you can see, the probabilities line up pretty well with what you'd like. Obviously some tuning of the parameters in the smoothing term would likely be desirable.
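For readers outside R, the posterior decoding above can be reproduced with a standard scaled forward-backward pass. This is a minimal numpy sketch of my own (not the HMM package's code), using the same five-state model and observation sequence:

```python
import numpy as np

# Same model as above: 5 states, P(symbol "1" | state) below,
# sticky transitions (0.9 self, 0.025 elsewhere), uniform start.
p1 = np.array([0.001, 0.25, 0.5, 0.75, 0.999])
A = np.full((5, 5), 0.025)
np.fill_diagonal(A, 0.9)
start = np.full(5, 0.2)

obs = np.array([0] * 13 + [1] * 12 + [0] * 10 + [1, 0] * 9 + [0] * 5 + [1, 0]
               + [0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0])

B = np.where(obs[:, None] == 1, p1, 1.0 - p1)  # per-step emission likelihoods
T, S = B.shape

# Scaled forward pass
alpha = np.zeros((T, S))
alpha[0] = start * B[0]
alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[t]
    alpha[t] /= alpha[t].sum()

# Scaled backward pass
beta = np.ones((T, S))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()

# Posterior state probabilities P(state_t | all observations)
post = alpha * beta
post /= post.sum(axis=1, keepdims=True)

# Observations 8, 20, 30, 40 (1-indexed) should decode to states 0, 4, 0, 2
print(post[[7, 19, 29, 39]].argmax(axis=1))
```

The normalization at each step only rescales; the final row-wise normalization recovers the exact posterior, matching the R `posterior()` output.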
What are speed differences between ML implementations in different languages?
It depends heavily on the algorithm. There are several things for which writing code in C won't give you any benefit: matrix operations (dot products, element-wise multiplications, applications of functions like sin, matrix inversions, QR decompositions, ...) because BLAS or LAPACK is called. This makes it possible to implement lots of algorithms easily. You will have a tough time matching C's performance, though, when you need to work with structures like trees or huge graphs, which is the case for e.g. decision trees, KNN, or sophisticated graphical models with lots of structure. Some random thoughts:
- machine learning algorithms are notoriously hard to debug without a reference implementation; C is much harder to debug than Python.
- you will get to 90% of the performance of C in some cases with Python, but if you really need to be fast, you will have to stick with C.
- Python is growing quite a big ecosystem for machine learning with theano and sklearn; it's a good time to join.
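To make the BLAS point concrete, here is a small Python comparison of numpy's matrix product (which dispatches to compiled BLAS) against a pure-Python triple loop. The exact speedup varies by machine, but the gap is typically orders of magnitude:

```python
import time
import numpy as np

n = 100
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

t0 = time.perf_counter()
C_blas = A @ B                      # dispatched to compiled BLAS
t_blas = time.perf_counter() - t0

t0 = time.perf_counter()
C_loop = np.array([[sum(A[i, k] * B[k, j] for k in range(n))
                    for j in range(n)] for i in range(n)])  # interpreted loops
t_loop = time.perf_counter() - t0

print(f"BLAS: {t_blas:.5f}s  pure Python: {t_loop:.5f}s")
```

Both compute the same matrix; only the cost differs. This is exactly why "C won't help" for code that is already dominated by BLAS calls.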
Interpretation of reference category in logistic regression
Setting

Let $X$ be the categorical predictor and suppose it has 3 levels ($X = 1$, $X = 2$, and $X = 3$). Let the third level be the reference category. Define $X_1$ and $X_2$ as follows: $$ X_1 = \left\{ \begin{array}{ll} 1 & \textrm{if } X = 1 \\ 0 & \textrm{otherwise;} \end{array} \right. $$ $$ X_2 = \left\{ \begin{array}{ll} 1 & \textrm{if } X = 2 \\ 0 & \textrm{otherwise.} \end{array} \right. $$ If you know both $X_1$ and $X_2$ then you know $X$. In particular, if $X_1 = 0$ and $X_2 = 0$ then $X = 3$.

Logistic regression model

The model is written $$ \log \left( \frac{\pi_i}{1 - \pi_i} \right) = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} $$ where $\pi_i$ denotes the probability of success of individual $i$ with covariate information $(x_{1i}, x_{2i})$.

If individual $i$ falls in category $1$ then $x_{1i} = 1$, $x_{2i} = 0$ and $\log \left( \frac{\pi_i}{1 - \pi_i} \right) = \beta_0 + \beta_1$.

If individual $i$ falls in category $2$ then $x_{1i} = 0$, $x_{2i} = 1$ and $\log \left( \frac{\pi_i}{1 - \pi_i} \right) = \beta_0 + \beta_2$.

If individual $i$ falls in category $3$ then $x_{1i} = 0$, $x_{2i} = 0$ and $\log \left( \frac{\pi_i}{1 - \pi_i} \right) = \beta_0$.

Odds ratios

Odds ratios are computed with respect to the reference category. For example, for 'category 1 vs category 3' we have $$ \frac{\exp(\beta_0 + \beta_1)}{\exp(\beta_0)} = \exp(\beta_1). $$
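The relationship $\exp(\beta_1) = $ odds ratio of category 1 vs the reference can be checked numerically. Below is a Python/numpy sketch with made-up counts (the data and the hand-rolled Newton-Raphson fit are mine, purely for illustration); with a saturated model, $\exp(\hat\beta_1)$ reproduces the empirical odds ratio exactly:

```python
import numpy as np

# Toy data with a 3-level categorical predictor (level 3 = reference):
# level 1: 30 successes / 70 failures, level 2: 50/50, level 3: 20/80.
counts = {1: (30, 70), 2: (50, 50), 3: (20, 80)}
rows, y = [], []
for level, (succ, fail) in counts.items():
    x1, x2 = int(level == 1), int(level == 2)   # dummy coding
    rows += [(1, x1, x2)] * (succ + fail)
    y += [1] * succ + [0] * fail
X = np.array(rows, dtype=float)
y = np.array(y, dtype=float)

# Fit the logit by Newton-Raphson (IRLS)
b = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    W = p * (1 - p)
    b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

# exp(b1) recovers the empirical odds ratio of level 1 vs the reference
or_hat = np.exp(b[1])
or_emp = (30 / 70) / (20 / 80)
print(or_hat, or_emp)
```

Likewise `np.exp(b[0])` recovers the reference category's empirical odds, 20/80.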
Interpretation of reference category in logistic regression
This is standard for a single variable: the intercept is the log odds for the reference category, and the dummy-variable betas are the differences in log odds compared to the reference category. So an "insignificant" dummy variable means the log odds aren't significantly different from the reference category. This is the same as ordinary ANOVA, just on the log-odds scale instead of the raw scale.
Using Bayesian model diagrams to present both model description and results (posteriors)?
Thanks for your question. I'm glad that the style of diagram helps people "have a real moment of clarity." I concur from personal experience: For me to really understand a model, I have to make a diagram of it like these. The diagrams are intended to communicate the structure of the prior and likelihood. For that purpose, iconic distributions are better than particular choices of hyperprior constants. For example, the iconic gamma, with its sharply descending curve on the left, communicates instantly that the distribution is limited on the left but has infinite extent to the right. If instead it showed a gamma(0.01,0.01) or whatever, it would be too easy to visually confuse with an exponential distribution. Similarly, the iconic beta distribution instantly communicates that the distribution is limited on both ends. If instead it showed an "uninformed" Haldane prior, approximated by beta(0.0001,0.0001), it would be a confusing squarish U-shaped distribution with spikes at the two ends, that might even be visually confused with a Bernoulli distribution. Thus, the iconic distributions do a good job for their intended purpose. It would not be appropriate to display the posterior this way because the marginals on the posterior are not necessarily shaped like any particular basic distribution. For example, a gamma prior on a parameter need not yield a gamma-shaped marginal posterior. Moreover, although the priors on the parameters are independent in the JAGS model, the posterior distribution usually has correlations among parameters.
What are the potential functions of the cliques in Markov random field?
All potential functions can be written in a log-linear form as described in the Wikipedia article. This however may not be that useful, as it requires you to specify a weight for all possible configurations of your clique. Your choice of potential function depends on the properties of the variables you are modelling. For example, if you are implementing a Kalman filter (which is an autoregressor for continuous variables assuming Gaussian noise), your potential functions are Gaussian. For binary variables $x_1$ and $x_2$ that should approximate an XOR relationship, you could specify the following potential function: $$1/Z \cdot \exp(a + b \cdot x_1 + b \cdot x_2 + c \cdot x_1 \cdot x_2)$$ where $b$ is positive and $c$ is sufficiently negative (e.g. $c \approx -2b$) to pull the $x_1 = x_2 = 1$ configuration back down. For a very good introduction to probabilistic models I'd recommend Mike Jordan's technical report / book Graphical Models, Exponential Families and Variational Inference or consider taking a look at Chris Bishop's book.
On-line detection of over-fitting in neural networks
Some rather disorganized thoughts on this issue (I hope there is something of use in there somewhere):

Rather than having training and test data, you ought to have three partitions: (i) the training set, which is used to optimize the weights of the network; (ii) a validation set, which is used to decide when to stop training (and to make other choices about the model, such as the number of hidden layer neurons to use); and (iii) the test set, which is used to estimate the performance of the final network. You need three partitions rather than two in order to get an unbiased performance estimate: aspects of the model have been tuned to maximize performance on both the training set and the validation set (via the choice of when to stop training, etc.), which means that the performance on both of those sets will give an optimistically biased estimate of true generalization performance (probably rather strongly biased).

The basic idea of early stopping is based on the assumption that initially the weights of the network will be changed in ways that learn the underlying structure of the data, but after some time genuine improvements in generalization will no longer be available. When the network gets to that point it can often still reduce the error on the training set by memorizing the noise in the data, which generally results in generalization performance becoming worse. However, if we monitor the performance on a separate set of data, we should see the error on that set start to rise once we move from the first phase of learning (which is beneficial) to the second (which isn't). The simplest thing to do is simply to monitor the validation set performance and save a copy of the network every time we see a validation error that is lower than the best we have seen so far, and then simply use that network to make predictions.
The problem is that the validation set performance is often rather noisy, so it is difficult to know whether we are likely to see a better network if we continue training, or whether the improvements in the validation set are meaningful. I generally used to just train to convergence and keep the set of weights that minimized the validation set error along the way. There is a good book called "Neural Networks: Tricks of the Trade", edited by Genevieve Orr and Klaus-Robert Müller, which is a collection of advice from many leading neural network experts of the 1990s. At least one of its chapters gives some sensible advice on early stopping. These days I prefer regularization instead; Chris Bishop's excellent book "Neural Networks for Pattern Recognition" uses this approach and explains its relationship to early stopping. It is far easier in practice in my experience, although more modern approaches, such as kernel methods or Gaussian processes, tend to be better still.
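The "checkpoint at every new validation minimum" idea can be sketched in a few lines. This is a generic Python/numpy illustration of my own (a linear model trained by gradient descent stands in for the network; the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression task; first 100 rows train, next 100 validation
X = rng.standard_normal((300, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.5 * rng.standard_normal(300)
Xtr, ytr = X[:100], y[:100]
Xva, yva = X[100:200], y[100:200]

mse = lambda w, X_, y_: float(np.mean((X_ @ w - y_) ** 2))

w = np.zeros(5)
best_w, best_val = w.copy(), mse(w, Xva, yva)
for epoch in range(500):
    grad = 2 * Xtr.T @ (Xtr @ w - ytr) / len(ytr)
    w -= 0.01 * grad
    val = mse(w, Xva, yva)
    if val < best_val:               # save a copy at every new validation minimum
        best_val, best_w = val, w.copy()

# best_w is the model you would actually deploy
print(best_val, mse(w, Xva, yva))
```

By construction the checkpointed model's validation error is never worse than the final iterate's, which is exactly the "train to convergence, keep the best weights along the way" strategy described above.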
On-line detection of over-fitting in neural networks
There is no standard method for doing it because it is wrong -- you simply overfit your model to the test data (it is a hidden form of overfitting via parameter selection).
Maximum entropy sampler
You can look for a discrete distribution, with the desired first four moments, and with the maximum entropy possible. You can then interpolate the cumulative distribution function to sample from it. In R, it can be done as follows.

kurtosis <- 3
n <- 100
x <- seq(-5,5,length=n)
dx <- mean(diff(x))

# Opposite of the entropy, to minimize
f <- function(p) sum( p * log(p) )

# The first moments
g <- function(p) c( sum(p)*dx, sum(x*p)*dx, sum(x^2*p)*dx, sum(x^3*p)*dx, sum(x^4*p)*dx )

# Maximize the entropy subject to those constraints
library(Rsolnp)
r <- solnp(
  rep(1/n,n),
  f,            # Function to minimize
  eqfun = g,    # Equality constraints
  eqB = c(1, mean=0, var=1, skewness=0, kurtosis),
  LB = rep(0,n), UB = rep(1,n)
)

# Beware: it is not very precise at the boundaries of the interval
plot(x, r$pars, type="l", log="y", las=1)
lines(x, dnorm(x), lty=3)

# Sample from the corresponding distribution
q <- approxfun( c(0,cumsum(r$pars)*dx), c(x[1]-dx,x) )
r <- function(n) q(runif(n))
qqnorm(r(1e4))
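The final sampling step (inverting a tabulated CDF with interpolation) translates directly to Python. Here is a sketch using a discretized standard normal in place of the solver output, purely to illustrate the mechanics; the grid, offsets, and tolerances are my own choices:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Tabulated density on a grid (stand-in for the max-entropy solver output)
x = np.linspace(-5, 5, 200)
dx = x[1] - x[0]
p = norm.pdf(x)
p /= p.sum()                          # grid probabilities summing to 1

# Quantile function by linear interpolation of the cumulative distribution,
# mirroring the approxfun(c(0, cumsum(p)*dx), ...) construction above
cdf = np.concatenate(([0.0], np.cumsum(p)))
grid = np.concatenate(([x[0] - dx], x))
sample = np.interp(rng.uniform(size=100_000), cdf, grid)

print(sample.mean(), sample.var())    # close to 0 and 1
```

Swapping in the probabilities returned by a maximum-entropy optimizer gives a sampler with the prescribed moments, up to discretization error.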
Maximum entropy sampler
If you only have the kurtosis issue to address, you can use the Student $t$-distribution with $\nu$ degrees of freedom, which has excess kurtosis of $6/(\nu-4)$ for $\nu>4$. You would also need to normalize the variance to 1 (it is equal to $\nu/(\nu-2)$ for the original Student distribution). If you are going to be OK with discrete distributions, you can have a distribution with support on $\{-x, -1/2, 1/2, x\}$ with corresponding probabilities $p/2, (1-p)/2, (1-p)/2, p/2$. All its odd moments are 0, and its variance is $p x^2 + (1-p)/4 = 1/4 + p(x^2-1/4)$, so for unit variance the relation between $x$ and $p$ is $$p(x^2-1/4)=3/4, \quad p=\frac3{4x^2-1}, \quad x=\sqrt{\frac3{4p}+\frac14}.$$ Finally, it has the fourth moment $$\frac{1-p}{16} + p x^4 = \frac{1-p}{16} + \frac{(p+3)^2}{16p}=\frac{7p+9}{16p},$$ which is also its kurtosis by virtue of the unit variance, for $0 < p \le 1$.
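The four-point construction is easy to verify numerically. A short Python check for an arbitrary $p$ (here $p = 0.5$, my own choice):

```python
import numpy as np

# Four-point distribution with unit variance and tunable kurtosis
p = 0.5
x = np.sqrt(3 / (4 * p) + 0.25)

support = np.array([-x, -0.5, 0.5, x])
probs = np.array([p / 2, (1 - p) / 2, (1 - p) / 2, p / 2])

var = np.sum(probs * support ** 2)    # should be exactly 1
m4 = np.sum(probs * support ** 4)     # should equal (7p + 9) / (16p)

print(var, m4, (7 * p + 9) / (16 * p))
```

Letting $p \to 0$ drives the kurtosis arbitrarily high, while $p = 1$ gives the minimal value 1 (a symmetric two-point distribution).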