42,901
Model broken stick model in R where one line has a constant gradient?
I see your post is pretty old, but I'm working on the same issue, and I found a slightly different solution than yours, so I figured I'd post it for others out there.

b1 <- function(x, bp) ifelse(x < bp, x, bp)

# Wrapper for mixed-effects model with variable break point
foo <- function(bp) {
  mod <- lmer(y ~ b1(x, bp) + (1 | Site), data = dat)
  REMLcrit(mod)
}

mod <- lmer(y ~ b1(x, bp) + (1 | Site), data = dat)

Everything else is as it appears in the original post.
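For readers outside R, the break-point search itself is language-agnostic. Here is a minimal Python sketch (with made-up data, and a plain least-squares fit standing in for the mixed model, so the Site random effect is ignored) that grid-searches the break point minimizing the residual sum of squares of the hinge regression:

```python
import random

def b1(x, bp):
    # hinge: grows with x up to the break point, constant afterwards
    return x if x < bp else bp

def sse_for_bp(xs, ys, bp):
    # simple linear regression of y on b1(x, bp); return residual sum of squares
    zs = [b1(x, bp) for x in xs]
    n = len(xs)
    zbar = sum(zs) / n
    ybar = sum(ys) / n
    szz = sum((z - zbar) ** 2 for z in zs)
    szy = sum((z - zbar) * (y - ybar) for z, y in zip(zs, ys))
    slope = szy / szz
    intercept = ybar - slope * zbar
    return sum((y - (intercept + slope * z)) ** 2 for z, y in zip(zs, ys))

random.seed(1)
true_bp = 6.0
xs = [random.uniform(0, 10) for _ in range(400)]
ys = [2.0 + 0.5 * b1(x, true_bp) + random.gauss(0, 0.1) for x in xs]

candidates = [i / 10 for i in range(10, 91)]  # grid of candidate break points
best_bp = min(candidates, key=lambda bp: sse_for_bp(xs, ys, bp))
```

The profiled criterion (here SSE, in the answer above REMLcrit) is minimized over the candidate break points, recovering a value near the true one.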
42,902
Convergence of distribution
One interesting idea is the following: of all distributions on $[0,2\pi]$, the uniform distribution maximizes entropy. So you could try to prove that the averaging operator cannot decrease entropy; it then becomes natural to guess that this iteration of the averaging operator has a fixed point, which should be the maximum-entropy distribution. The point of the information that these are not lattice distributions is to ensure that we cannot get caught by a fixed point with lower entropy. Google for "entropy central limit theorem"; there is even a book with that in the title! These ideas are related to what physicists call renormalization theory.
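The entropy argument can be illustrated numerically. Below is a toy Python sketch of my own (the circle discretized into m bins, and "averaging" taken as the distribution of the sum of two iid copies mod m, i.e. circular self-convolution): the entropy never decreases and approaches the uniform maximum log m, provided the starting support is not a sublattice.

```python
import math

def circular_selfconv(p):
    # distribution of X1 + X2 (mod m) for iid X1, X2 ~ p
    m = len(p)
    return [sum(p[i] * p[(k - i) % m] for i in range(m)) for k in range(m)]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

m = 12
p = [0.0] * m
p[0], p[1] = 0.7, 0.3   # concentrated start; support {0, 1} generates all of Z_m

ents = [entropy(p)]
for _ in range(8):
    p = circular_selfconv(p)
    ents.append(entropy(p))
# ents is nondecreasing and the last value is essentially log(m)
```

Since H(X1 + X2) >= H(X1 + X2 | X2) = H(X1), each convolution step cannot lose entropy, which is exactly the monotonicity the answer suggests proving.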
42,903
Convergence of distribution
Looks like you would need to figure out the characteristic function of this sum. Problem 26.29 hints at this c.f. converging to that of the uniform distribution, by virtue of the coefficients at non-zero powers of $t$ going to zero. You would need to verify all the regularity conditions, of course.
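The mechanism is easy to check numerically: for a sum of $k$ iid angles mod $2\pi$, the $n$-th Fourier coefficient of the sum is the $k$-th power of the coefficient of a single term, so every non-zero frequency decays geometrically, leaving only the uniform component. A small Python sketch with a made-up two-point angular distribution (illustration only):

```python
import cmath

# a hypothetical two-point distribution of angles (for illustration)
angles = [0.0, 1.0]        # support points, in radians
probs = [0.7, 0.3]

def phi(n):
    # n-th Fourier coefficient E[exp(i n X)] of a single angle
    return sum(p * cmath.exp(1j * n * a) for p, a in zip(probs, angles))

# for the sum of k iid angles mod 2*pi, the coefficient is phi(n) ** k
mags = [abs(phi(1)) ** k for k in (1, 10, 100)]
```

Because |phi(n)| < 1 at every non-zero frequency (the distribution is not concentrated on a lattice), the powers vanish and only the n = 0 term, the uniform distribution, survives.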
42,904
How to determine if one fit is significantly better than a slightly different fit?
As Peter Flom suggests, given your models you have a likelihood function, and these information criteria can compare models based on their likelihood functions, with penalties for the parameters used, leading to a "best" fit when the information criterion is optimized. AIC and BIC are of the form -2 log-likelihood + penalty and differ in the choice of penalty; since the penalty is added to -2 log-likelihood, best by these criteria means minimum, not maximum. This might help in the sense that it gets around picking an overfitted model. But it is possible that you are still left with two models that are close. Should you really pick the one with the better criterion value? The question of how to decide between them remains. It may be that it really requires a much larger data set to distinguish between the two.
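As a concrete illustration of the "-2 log-likelihood + penalty" form (with invented log-likelihood values), here is a small Python sketch showing how the two criteria can even disagree for close models, because BIC penalizes extra parameters more heavily once log n > 2:

```python
import math

def aic(loglik, k):
    # Akaike information criterion: -2 log L + 2k (smaller is better)
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    # Bayesian information criterion: -2 log L + k log n (smaller is better)
    return -2 * loglik + k * math.log(n)

# hypothetical fits: model B buys a little log-likelihood with one extra parameter
n = 50
ll_a, k_a = -120.0, 3
ll_b, k_b = -118.5, 4

better_by_aic = "A" if aic(ll_a, k_a) < aic(ll_b, k_b) else "B"
better_by_bic = "A" if bic(ll_a, k_a, n) < bic(ll_b, k_b, n) else "B"
```

With these numbers AIC prefers the bigger model while BIC prefers the smaller one, which is exactly the "two close models" situation the answer warns about.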
42,905
CART (rpart) balanced vs. unbalanced dataset
If you have well-separated classes in the feature space, it will not make much of a difference to the predictions on the test data whether you have a balanced or an unbalanced training data set, as long as you have enough data to identify the classes reasonably well. If the class distributions of the features overlap considerably, it's a different story. The right thing to do depends on your loss function and on the class distribution in the future samples that you want to predict. If the class distribution in future samples is approximately 0.26 / 0.18 / 0.56, as in the training data, and you use the 0-1 loss function to count the number of misclassifications, you will in general get fewer misclassifications if you keep the training data unbalanced. As a general comment, I would always avoid actually throwing away data unless the training data set is huge. If you expect future samples to have a class distribution that differs from that of the training data, I would try to incorporate that in the model instead. In a classification tree that could be done by weighting; if you use (naive) Bayes, you can simply change the prior class probabilities.
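For the naive-Bayes remark, a minimal Python sketch (with invented likelihood numbers) of how changing the prior class probabilities, rather than rebalancing the data, can change the prediction:

```python
def posterior(likelihoods, priors):
    # Bayes rule: p(class | x) is proportional to p(x | class) * p(class)
    unnorm = {c: likelihoods[c] * priors[c] for c in likelihoods}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# hypothetical class-conditional likelihoods p(x | class) for one test point
lik = {"a": 0.30, "b": 0.20, "c": 0.10}

train_priors = {"a": 0.26, "b": 0.18, "c": 0.56}   # as in the training data
future_priors = {"a": 0.10, "b": 0.10, "c": 0.80}  # assumed future class mix

post_train = posterior(lik, train_priors)
post_future = posterior(lik, future_priors)
pred_train = max(post_train, key=post_train.get)
pred_future = max(post_future, key=post_future.get)
```

The same test point flips from class "a" to class "c" once the expected future class distribution is plugged in, with no data discarded.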
42,906
CART (rpart) balanced vs. unbalanced dataset
I have offered a related answer under the post 'Training a decision tree against unbalanced data'.
42,907
R packages (or SAS code) to produce two simultaneous Kaplan-Meier curves?
In R, a survfit.object, returned by survfit(), stores a fitted survival curve. In particular, this object contains the time points at which the curve has a step and the ordinates at those points. You can therefore construct the survival function, $t\mapsto \hat{S}(t)$, by constant interpolation. Here is the way I would do this:

km <- summary(survfit(Surv(time, event) ~ 1, data = data))
S <- approxfun(km$time, km$surv, method = "constant", f = 0, yleft = 1, rule = 2)

Now S can be used like any user-defined function in R: you can evaluate S(t) at any time t, make plots using plot(), superimpose two K-M curves on the same graph using lines(), and so on. Hope this helps!
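The constant-interpolation idea is language-agnostic. Here is a minimal Python sketch (with invented K-M output values) of the same right-continuous step function that the approxfun call above builds, including the yleft = 1 and rule = 2 behavior:

```python
from bisect import bisect_right

def make_step_survival(times, surv):
    # S(t) = 1 before the first event time, then the estimate at the most
    # recent event time; flat beyond the last one (like yleft=1, rule=2)
    def S(t):
        i = bisect_right(times, t)
        return 1.0 if i == 0 else surv[i - 1]
    return S

# hypothetical event times and K-M survival estimates at those times
times = [2.0, 5.0, 7.0]
surv = [0.9, 0.6, 0.3]
S = make_step_survival(times, surv)
```

For example, S(1.0) is 1.0 (before any event), S(2.0) drops to 0.9, and beyond the last event time S stays at 0.3.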
42,908
R packages (or SAS code) to produce two simultaneous Kaplan-Meier curves?
What you are asking for is a simultaneous plot of the survival function for one process and the cumulative incidence function (= 1 - S(t)) for the competing process. The 'cmprsk' R package should be able to do the plots, but since the usual mode is to display both processes as cumulative incidences, you will need to do some work to transform the data so that one is S(t) and the other is H(t).
42,909
R packages (or SAS code) to produce two simultaneous Kaplan-Meier curves?
Wouldn't it be good enough if you could plot the two curves using par(new=TRUE)?

plot(survfit(KMfit1 ~ 1), main = "Kaplan-Meier estimate with 95% confidence bounds",
     xlab = "time", ylab = "survival function", col = "red", xlim = c(0, 70))
par(new = TRUE)
plot(survfit(KMfit2 ~ 1), col = "green", xlim = c(0, 70))
42,910
How to calculate the variance of vectors for clustering?
Note that not all clustering algorithms assume spherical clusters. None of the measures you describe seems too sensible for non-convex clusters, say, banana-shaped clusters, a common concept in density-based clustering. In that case, the mean is not even inside the cluster. Variances mostly measure the spatial extent of the cluster, not its connectivity and similar properties.
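The "mean outside the cluster" point is easy to verify numerically. A small Python sketch with points on a half-circle arc (a stand-in for a banana-shaped cluster):

```python
import math

# points on a half-circle of radius 1 (a crude "banana" cluster)
pts = [(math.cos(i * math.pi / 20), math.sin(i * math.pi / 20)) for i in range(21)]

cx = sum(x for x, _ in pts) / len(pts)
cy = sum(y for _, y in pts) / len(pts)

# every cluster point lies at radius 1, but the centroid does not:
centroid_radius = math.hypot(cx, cy)
# distance from the centroid to its nearest cluster member
gap = min(math.hypot(x - cx, y - cy) for x, y in pts)
```

The centroid sits well inside the arc, a noticeable distance from every point of the cluster, so centroid-based variance measures say little about the cluster's shape or connectivity.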
42,911
How to calculate the variance of vectors for clustering?
I think the question can be answered. I don't like any of these measures. Why didn't you include what I think is the most suitable and obvious one: the mean squared distance of the vectors from the centroid as the variance? Number 3 would be my choice if you average the distances. Number 1 is bad for the reason you already gave. I don't like number 2 because you are comparing distances between individual vectors, whereas a variance is measured in terms of distance from a center or average point.
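The suggested measure also reconciles option 2 with a proper variance: over all ordered pairs of cluster members (self-pairs included), the average squared pairwise distance is exactly twice the mean squared distance to the centroid. A Python sketch with an arbitrary small cluster:

```python
def mean_sq_dist_to_centroid(vecs):
    # cluster "variance": average squared Euclidean distance to the centroid
    n, d = len(vecs), len(vecs[0])
    centroid = [sum(v[j] for v in vecs) / n for j in range(d)]
    return sum(sum((v[j] - centroid[j]) ** 2 for j in range(d)) for v in vecs) / n

def mean_pairwise_sq_dist(vecs):
    # average over all ordered pairs, self-pairs included
    n = len(vecs)
    return sum(sum((a[j] - b[j]) ** 2 for j in range(len(a)))
               for a in vecs for b in vecs) / (n * n)

vecs = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0), (4.0, 2.0)]
msd = mean_sq_dist_to_centroid(vecs)
mpd = mean_pairwise_sq_dist(vecs)
```

The identity mpd = 2 * msd (the familiar E||X - Y||^2 = 2 E||X - mu||^2 for iid X, Y) means the averaged pairwise measure carries the same information as the centroid-based variance, just on a doubled scale.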
42,912
State-of-the-art in smoothing splines
A bit more modern than what you quote is de Boor, C. (1978), A Practical Guide to Splines, Springer-Verlag. An efficient algorithm for smoothing splines is given by Hutchinson, M.F. and de Hoog, F.R. (1985), "Smoothing Noisy Data with Spline Functions", Numerische Mathematik, 47, pp. 99-106 (see also Hutchinson, M.F. (1986), "Cubic Spline Data Smoother", Transactions on Mathematical Software, vol. 12, pp. 150-153; you will find the FORTRAN source of the algorithm at http://calgo.acm.org). Note also that the Kalman filter can be a good tool for fitting some types of splines; see for instance an answer I gave some time ago on Kalman filter vs. smoothing splines. You will find much relevant information if you search here on CrossValidated using "splines" as a tag.
42,913
Bootstrap vs other simulated data methods
To bootstrap in a mixed-effects linear model, you would sample with replacement in a way that maintains the model structure. Your data are divided into groups, and you don't want to mix the data from one group into the data from another. For any particular group, say you have m observations; then you would sample m times with replacement from those m observations. You repeat this process for all the other groups (the value of m may change from group to group). Once you have done this, you have a bootstrap sample. You fit the model to this bootstrap sample and then repeat the bootstrapping followed by model fitting many times. This gives you a collection of estimated model parameters (a histogram for each, if you will).

Any time you have a bootstrap histogram of estimates, you can construct bootstrap confidence intervals from this collection of estimates. The simplest is Efron's percentile method, which takes the 2.5th and 97.5th percentiles of the ordered bootstrap estimates as the endpoints of a 95% confidence interval. For more detail you can read Efron and Tibshirani's An Introduction to the Bootstrap (1993), Chapman and Hall; my book Bootstrap Methods, 2nd ed. (2007), Wiley; or the article by Efron and Tibshirani in Statistical Science (1986).

Now, in the absence of data, you may want to get an understanding of how the model works. Then you can simulate the data and look at the results in a way similar to what I described for the bootstrap. The difference is that instead of sampling from the empirical distribution of the data, you have to specify a distribution or distributions whenever you do the sampling.
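A minimal Python sketch of the scheme just described (resampling with replacement within each group, then Efron's percentile interval), with invented grouped data and the grand mean standing in for the fitted model parameters:

```python
import random

random.seed(7)

# hypothetical grouped data: site -> observations (an invented example)
groups = {
    "site1": [4.9, 5.3, 5.1, 4.7],
    "site2": [6.2, 5.8, 6.0],
    "site3": [5.5, 5.4, 5.9, 5.6, 5.2],
}

def grand_mean(gr):
    vals = [x for obs in gr.values() for x in obs]
    return sum(vals) / len(vals)

def resample_within_groups(gr):
    # sample m times with replacement within each group, keeping the structure
    return {g: [random.choice(obs) for _ in obs] for g, obs in gr.items()}

def percentile_ci(estimates, alpha=0.05):
    # Efron's percentile method on the ordered bootstrap estimates
    s = sorted(estimates)
    return s[int(len(s) * alpha / 2)], s[int(len(s) * (1 - alpha / 2)) - 1]

gm = grand_mean(groups)
boots = [grand_mean(resample_within_groups(groups)) for _ in range(2000)]
lo, hi = percentile_ci(boots)
```

In practice the estimate would be each fitted model parameter rather than the grand mean, but the resample-refit-collect loop and the percentile interval are exactly the ones described above.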
42,914
Whether to assess normality in a factorial repeated measures ANOVA by looking at distributions within cells?
There is a reason that we talk about the normality 'assumption' rather than the normality 'condition'. Whether you are comfortable with the assumption of normality needs to come from knowledge about the science that generated the data, not the data itself. The tests for normality, when used for justifying the normality assumption, will either give a meaningless answer to a meaningful question (small sample size) or a meaningful answer to a meaningless question (large sample size). Plots of residuals from an appropriate model (including the repeated measures) can be used along with what you learn by doing your homework about where the data comes from to help you decide if you are comfortable with the normality assumption. But for deciding if the tests and intervals based on the normal are reasonable, dump the formal tests of normality.
42,915
Effective way to visualize net growth/profit/income?
Your mock-up looks pretty good, though I prefer not to mix scales on the same graph. You asked for gains, losses and net growth, but with your black line it looks like you're showing the cumulative balance instead of net growth = gains - losses. You can infer net growth by comparing bar heights or by translating the cumulative slope into the local scale, but it might be useful to have a direct representation. Here's a derivative idea that adds a separate element for the net gain or loss. The gain and loss are light-colored while the net gain or loss is darker, and the cumulative balance gets a separate frame.
42,916
Bootstrapping a sample with unequal selection probabilities
Did you find a satisfactory answer to this question? I recently found this reference: http://www.wseas.us/e-library/conferences/2009/hangzhou/ACACOS/ACACOS21.pdf but I am pretty sure the issue must have been investigated before. While it is easy to justify the use of observation weights (in practice, by weighting observations you are hoping to use a better estimate of the unknown distribution function F), I would like to find the relevant background.
42,917
Bootstrapping a sample with unequal selection probabilities
You can verify that the "weights" parameter in the boot package operates as importance weights with a simple simulation.

example <- data.frame(meas = c(1, 1, 5, 8, 10), wts = c(10, 10, 3, 2, 1))

Unweighted mean:

mean(example$meas)  # output = 5

Weighted mean:

sum(example$meas * example$wts) / sum(example$wts)  # output = 2.346154

Now doing this with bootstrapping:

my.avg <- function(data, indices) {
  d <- data[indices, ]
  return(mean(d$meas))
}

Unweighted bootstrapped mean:

mean(boot(example, my.avg, 1000)$t)  # output = 4.8908

Weighted bootstrapped mean:

mean(boot(example, my.avg, 1000, weights = example$wts)$t)  # output = 2.3712
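The same check can be sketched outside R. In Python (an analogous toy simulation of my own, not the boot package itself), drawing each observation with probability proportional to its weight makes the bootstrap mean converge to the weighted mean rather than the plain mean:

```python
import random

random.seed(3)
meas = [1, 1, 5, 8, 10]
wts = [10, 10, 3, 2, 1]

weighted_mean = sum(m * w for m, w in zip(meas, wts)) / sum(wts)  # 61/26

# bootstrap in which each observation is resampled with probability
# proportional to its weight (the role of boot's `weights` argument)
reps = [
    sum(random.choices(meas, weights=wts, k=len(meas))) / len(meas)
    for _ in range(5000)
]
boot_mean = sum(reps) / len(reps)
```

With 5000 replicates, boot_mean lands close to the weighted mean of about 2.35, mirroring the R output above.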
42,918
k-subset with maximal variance
When the numbers are sorted there's a simple $O(k)$ algorithm, because when $k\gt1,$ some variance-maximizing subset will consist of the $k_0\ge 1$ smallest and $k-k_0$ largest elements, whence a search over $k_0=1,\ldots,k-1$ does the trick. (Even when the $n$ numbers are not sorted, finding the $k^\text{th}$ largest or smallest element takes $O(n)$ effort, so the algorithm is at worst $O(k n)$ or $O(n\log n)$, whichever is smaller.)

By understanding how changing a single value changes the variance, we can see immediately why this is. Consider any $k$ numbers $x_1, \ldots, x_k$. Contemplate changing $x_k$ to $x_k+\delta$ for some number $\delta$. Because the original mean is $$\bar x = \frac{1}{k}(x_1+\cdots+x_k),$$ adding $\delta$ changes it to $$\bar x^\prime = \bar x + \frac{1}{k}\delta.\tag{1}$$ Because the original variance, multiplied by $k$, is $$ks^2 = (x_1^2 + \cdots + x_k^2) - k\bar x^2,$$ we may directly compute the effect of adding $\delta$ to the sum of squares (it affects only the last square) and exploit $(1)$ to determine how the variance changes: $$\eqalign{ ks^{\prime 2} - ks^2 &= [(x_k+\delta)^2 - x_k^2] - k[\bar x^{\prime 2} - \bar x^2]\\ &= 2\delta(x_k - \bar x) + \frac{k-1}{k}\delta^2 \ge 2\delta(x_k - \bar x). }$$ This will be nonnegative whenever the signs of $\delta$ and $x_k-\bar x$ are the same. In other words, if we can manage to move $x_k$ further from the mean, we will increase the variance.

Now suppose we have identified a variance-maximizing sequence $\mathcal A$ of $k$ values chosen from a sequence $X$ of given values. It should be clear--and is easy to demonstrate formally--that if $\mathcal A$ does not consist of the very smallest and the very largest elements of $X$, then we can select some $x\in\mathcal A$ that comes from somewhere in the middle of $X$ and replace it by another element $y\in X\setminus \mathcal A$ that is at least as far from the mean of $\mathcal A$ as $x$ itself is. This replacement would be tantamount to adding the value $\delta = y-x$ to $x$, and (by the foregoing calculations) would produce another variance-maximizing subsequence. That's it: after a finite number of such moves, we would no longer be able to find such an $x$. At this point, $\mathcal A$ would be the union of two extreme "tails" of $X$, as claimed.

Similar reasoning shows that $\mathcal A$ will not consist of just one tail of $X$ (unless $\mathcal A$ were all of $X$ itself or the elements of $X$ were all equal), for by replacing the most "inner" element of $\mathcal A$ by the extreme element in the same direction, we would not decrease the variance.
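The search described in the first paragraph is short to put into code. Here is a sketch in Python (the function names and the toy data are mine, purely for illustration); it scans each split into the $k_0$ smallest and $k-k_0$ largest elements:

```python
from itertools import combinations

def pvariance(sub):
    """Population variance of a sequence of numbers."""
    m = sum(sub) / len(sub)
    return sum((x - m) ** 2 for x in sub) / len(sub)

def max_variance_subset(xs, k):
    """Return a size-k subset of xs with maximal variance (k >= 2).

    Uses the result above: some maximizing subset consists of the k0
    smallest and the k - k0 largest elements, for some 1 <= k0 <= k - 1,
    so it suffices to scan those k - 1 candidate subsets.
    """
    xs = sorted(xs)
    candidates = (xs[:k0] + xs[len(xs) - (k - k0):] for k0 in range(1, k))
    return max(candidates, key=pvariance)
```

On small inputs, a brute-force maximum over all $\binom{n}{k}$ subsets (via itertools.combinations) agrees, which is a handy sanity check of the tail structure.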
42,919
Difficulty in understanding Hidden Markov Model for syntax parsing using Viterbi algorithm
Yeah, OK. I've just done some work on it. I managed to get it done even though I haven't got all the math behind it.

EDIT: here are some useful resources. I've done some gesture recognition, so my resources are biased toward that specific application, but you can find a sequence-classification framework behind them:

- some good slides
- some other good slides, with a good example and MATLAB code in the last ones
- a good link with MATLAB/Octave code for gesture recognition
- another good link, clear and easy to understand, with all source code in C#. In this one there is also a basic sequence classifier as an example, showing some training values and the log probability for some new sequences.

To get my work done, I grabbed the initial values (the alphabet and the number of hidden states == the sizes of the matrices) from this and then played with them. I've also rewritten the small part of Kevin Murphy's MATLAB code in C++ using the Armadillo linear-algebra (matrix) library. If you are interested in it, just let me know. Hope it helps.
42,920
Viable distance metric for text articles
I don't know much about working with documents, but an interesting approach to documents was taken by Hinton & Salakhutdinov and can be found in this paper (and also in this Google Tech Talk). They used autoencoders to compress documents into low-dimensional, real-valued vectors. The documents appeared to be fairly well clustered in this transformed space, so I could imagine that even the Euclidean metric could give some decent results. Better results can probably be achieved by binarizing the document representations (as described in the talk) and using the Hamming distance.
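For what it's worth, the final step of that recipe (binary codes compared with the Hamming distance) is trivial to sketch. Assuming the real-valued embeddings come from elsewhere (the autoencoder), the thresholding and the distance might look like this in Python; the function names are just illustrative:

```python
def binarize(embedding, threshold=0.0):
    """Threshold a real-valued embedding into a 0/1 code."""
    return tuple(1 if v > threshold else 0 for v in embedding)

def hamming(code_a, code_b):
    """Number of positions at which two equal-length binary codes differ."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b))
```

Packed into machine words, the Hamming distance between such codes costs a couple of instructions per pair, which is what makes this attractive for large collections.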
42,921
Viable distance metric for text articles
Have a look at this paper: Text similarity: an alternative way to search MEDLINE. They compare the simple cosine similarity with a modified version and also some more complex approaches based on text alignment. The conclusion was that cosine similarity with a small modification performed best, although only slightly better than the standard cosine similarity. Note that this was in a medical context, but that shouldn't matter. There is also the Okapi BM25 similarity measure which is used quite often and may be worth looking at too.
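For reference, the baseline those papers modify, plain cosine similarity on term-frequency vectors, can be sketched in a few lines of Python (bag of words on whitespace, with no stemming, stop-word removal, or IDF weighting; the function name is mine):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two documents as raw term-frequency vectors."""
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    # dot product over terms of doc_a (terms absent from b contribute 0)
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

A real system would add TF-IDF weighting at least, but this is the quantity the comparisons in the paper start from.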
42,922
Sampling distribution of random effects estimator
I don't know of anything offhand, aside from Doug Bates' books and his postings on various online forums. That should be sufficient to justify why you don't report them. But if you want to try and quantify uncertainties, I might try a Bayesian approach, i.e., simulating from posterior distributions of variance parameters using MCMC. Maybe look at: http://cran.r-project.org/web/views/Bayesian.html
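As a toy illustration of the MCMC idea (deliberately not the mixed-model case, just a single variance parameter for i.i.d. normal data with a flat prior on log sigma), a random-walk Metropolis sampler in Python might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 2.0, size=200)     # simulated data with true sd = 2

def log_post(log_sigma):
    """Log posterior of log(sigma) for y ~ N(0, sigma^2), flat prior on log(sigma)."""
    sigma = np.exp(log_sigma)
    return -y.size * np.log(sigma) - np.sum(y ** 2) / (2.0 * sigma ** 2)

# random-walk Metropolis on the log scale, so sigma stays positive
draws, cur = [], 0.0
for _ in range(5000):
    prop = cur + rng.normal(scale=0.2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop
    draws.append(np.exp(cur))
sigma_draws = np.array(draws[1000:])   # discard burn-in
```

The draws then summarize the whole posterior: np.mean(sigma_draws) and np.quantile(sigma_draws, [0.025, 0.975]) give a point estimate and an interval for sigma, which is exactly the kind of uncertainty statement the REML fit withholds for variance components.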
42,923
Reparameterizing the binomial link for psychometric data
Your problem is not really the link function, but rather the parametrization of the linear predictor. Instead of having $\alpha + \beta x$, you would like to have $\beta (x - \delta)$. Here $\delta$ would be the "shift" parameter that you are interested in. While the two are mathematically equivalent ($\alpha = -\beta \delta$), they are not statistically equivalent. In fact, the second parametrization is not linear in its parameters, so it cannot be fitted with a linear model (generalized or not). This also suggests the solution: you have to use a generalized non-linear mixed model. In R, the nlmer function of the lme4 package can be used. It is a bit more work to set up than the linear model, but should be doable.
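To make the reparametrization concrete, here is a minimal sketch, in Python rather than R and for a single subject with no random effects (the nlmer setup remains the real answer for the mixed case), that fits $\beta(x-\delta)$ by maximum likelihood so the shift $\delta$ is estimated directly; all data here are simulated:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit   # the logistic function

# simulated psychometric data: 9 stimulus levels, 50 Bernoulli trials per level,
# generated with true slope beta = 2 and true shift delta = 0.5
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 9)
n = np.full(x.size, 50)
k = rng.binomial(n, expit(2.0 * (x - 0.5)))

def neg_log_lik(params):
    """Binomial negative log-likelihood under p = logistic(beta * (x - delta))."""
    beta, delta = params
    p = np.clip(expit(beta * (x - delta)), 1e-9, 1.0 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

fit = minimize(neg_log_lik, x0=[1.0, 0.0])
beta_hat, delta_hat = fit.x
```

Because $\alpha = -\beta\delta$, fitting the usual linear predictor and computing $-\hat\alpha/\hat\beta$ gives the same point estimate of the shift; the gain of the direct parametrization is that $\delta$'s standard error comes straight out of the fit.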
42,924
Reparameterizing the binomial link for psychometric data
For simple designs one solution might be to centre based on the PSS of one condition. You could do an initial model of one condition, get the PSS, and recentre all of the data on that; now your intercept will reflect changes in PSS. You'll still be stuck with a magnitude issue when there are interactions... but some issue somewhere is unavoidable.
42,925
Reinforcement learning of a policy for multiple actors in large state spaces
I think there are two problems here:

1. The huge state space.
2. The fact that many agents are involved.

I have no experience with (2), but I guess if all the agents can share their knowledge (e.g. their observations) then this is no different than treating all the different agents as a single agent and learning something like a "swarm policy". If this is not the case, you might need to search for "distributed reinforcement learning" or "multi-agent reinforcement learning".

For (1), you might need to find a more compact representation of the action/state space. Some ideas follow.

You say that there are 1000 locations. Does it make sense to try to find a low-dimensional embedding for them? E.g., are you able to find a suitable distance measure between them? If so, you can use multidimensional scaling to embed them in a continuous, $k$-dimensional space with $k \ll 1000$.

Another approach would be to use policy gradients. The idea is that you use a parametrized policy, $$ \pi: \Theta \times S \mapsto A $$ where each $\theta \in \Theta$ is a point in parameter space which defines the policy. This policy can then be optimized with gradient-based methods. An example would be a neural network which takes the current state as input and directly puts out "move object i to location j". You will not need to enumerate all possible actions explicitly. Nevertheless, I doubt this approach will work without serious effort. Even when using PGs, you will need to reduce your action/state space.
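The embedding idea in the first suggestion can be illustrated with classical (Torgerson) multidimensional scaling, which needs nothing beyond an eigendecomposition. A Python sketch, assuming you can supply a pairwise distance matrix D between the locations (the function name and toy data are mine):

```python
import numpy as np

def classical_mds(D, k):
    """Embed points with pairwise Euclidean distance matrix D into R^k (Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:k]       # k largest eigenpairs
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))
```

When the distances really come from a low-dimensional Euclidean configuration, the embedding reproduces them exactly (up to rotation), and the resulting coordinates can serve as a compact continuous representation of the locations.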
42,926
How to test for and deal with regression toward the mean?
Update: if you have a true regression to the mean effect, because both it and treatment effects co-occur over time and have the same directionality for people needing treatment, the regression to the mean is confounded with treatment, and so you will not be able to estimate the "true" treatment effect.

This is an interesting set of data, and I think you can do some analyses with it; however, you will not be able to treat the method used to generate the data as an experiment. I think you have what is outlined on Wikipedia as a natural experiment and, while useful, these types of studies have some issues not found in controlled experiments. In particular, natural experiments suffer from a lack of control over independent variables, so cause-and-effect relationships may be impossible to identify, although it is still possible to draw conclusions about correlations. In your case, I would be worried about confounding variables. This is a list of possible factors that could influence the results:

- Possibly your largest confound is that you don't know what else is going on in users' lives away from the website. On the basis of what they write on the website, one user may realise how bad their situation is, they may draw on resources around them (family, friends) for support, and therefore the help is not limited to that received on the website. Another user, perhaps due to their life issues, may be alienated from family and friends, and the website is all the support they have. We may expect that the time-to-positive-outcome will be different for these two users, but we can't distinguish between them.
- I'm assuming that the website users accessed the website when they wanted to (which is great for them), but this means that the results you have for their problems may not be reflective of the number and severity of their life issues, because I assume they didn't access the site regularly (unlike face-to-face counselling appointments, which tend to be scheduled regularly).
- The level of detail in their writing will be reflective of their written style, and is not likely to be equivalent to what they would express in a face-to-face counselling session. There are also no non-verbal cues, which a face-to-face counsellor would also use to help assess the state of their client. Were the changes over time more pronounced in users who wrote less and had fewer tags applied to their content?
- If there were a number of lower-score and high-score tags in the same post (e.g. someone is having problems with study and they're in a happy relationship), how was the proxy affected by this? For example, was a simple average score taken across all tag scores for each post? This could be affecting your scores if there is a particular very negative issue that the person is facing, but much of what else they mention is positive. In a face-to-face setting, the counsellor can focus on the negative and find out, e.g., why the person is so depressed even though much of their life seems to be going well, but in the website situation you only have what they write. So it is possible that the way users have written their posts means that taking an overall proxy may not work too well.
- If the website is for users with life problems, I'm not sure why you wish to include users who scored as being very (happy? successful?) in their first post. These people do not seem to be the target audience for the website, and I'm not sure why you would want to include them in the same group as people who had issues. For example, the happy(?) people do not seem to need treatment, so there is no reason I can think of why the website intervention would be suitable for them.
- I'm not sure if users were assigned to the website as a treatment after, for example, seeing a counsellor. If that was the case, I would wonder why people who were upset enough to see a counsellor would then write a very positive post on a website designed to help them improve their mental state. Assuming there was this pre-counselling stage, maybe all they needed was that one counselling appointment. Regardless, I think this is quite a different group to the ones whose initial posts showed life issues, and for the moment I would omit them as they seem to be a "sampling error". Normally when assessing treatment effects, we only select people who appear to need treatment (e.g. we don't include happy contented people in trials of antidepressants).
- There may be some social desirability bias in the user posts.
- Have you undertaken any inter-rater reliability testing with the tags? If not, could some of the issues with scoring be related to bias with some tags? In particular, there could be some quality issues when a counsellor has just started and is learning how to tag posts, just as there are quality issues when any of us learn something new. Also, did some counsellors tend to place more tags, and did some tend to place few? Your analysis requires tag consistency across all the posts.

These are just suggestions based on your post, and I could well have misunderstood some of your study, or made some incorrect assumptions. I think that the factors you mention at the end of your post - user's language choices, details of website interaction, timing of counsellor response - are all very important. Best wishes with your study.
42,927
How to test for and deal with regression toward the mean?
I'm not in any way an authority on statistics, but might I suggest using other studies to get an estimate of the degree of regression to the mean you have in yours? In an ideal world, you would estimate the degree of regression to the mean using a control group, but since you don't have a control group, maybe you need to jimmy-rig one from the literature. Somewhere in the psychology literature, someone must have said something about the degree of regression to the mean that can be expected in the life happiness of people not too dissimilar to yours (maybe college students who visit counseling services). If a student whose happiness is at the 10th percentile can be expected to go back to regress to the 20th percentile within 6 months, just via regression to the mean, maybe you could make a similar assumption about people in your own data. I emphasize that this method would (and should) reduce the credibility of your analysis, since the hypothetical college students that you would use for a comparison might differ in very important ways from the people who use your online forum, but it might be the best way of dealing with a bad situation. (The inspiration for this suggestion comes from my reading of the Rubin causal model, which gives a flexible way to think about observational research: identify a counterfactual by way of clever assumptions, caveating them as you go.)
42,928
Spearman or Kendall correlation? [duplicate]
Rather than either of those I would use polychoric correlations, which were designed for exactly this situation. They use maximum likelihood to fit a model with an underlying normally distributed continuous variable beneath each ordinal variable, and then calculate the correlation coefficient of those continuous variables. There are implementations available in R and Stata.
42,929
ANCOVA with repeated measures in R
There is a list of tutorials on this subject here: http://www.r-statistics.com/2010/04/repeated-measures-anova-with-r-tutorials/ Good luck.
42,930
ANCOVA with repeated measures in R
Sample data?
set.seed(5)
d <- expand.grid(Site=LETTERS[1:4], Date=1:20, Year=factor(1:2))
d$Temp <- round(rnorm(nrow(d), mean=60, sd=15))
42,931
ANCOVA with repeated measures in R
Maybe you can try lm(formula = Temp ~ Site*(Date + Year)). This way you will have the two interactions with Site, and there will be no interaction between Date and Year.
42,932
Why is semipartial correlation cited so seldom?
The notion of semipartial correlation usually arises when one compares the model with a predictor included and the model with that predictor removed (e.g. in the context of stepwise regression). And because the squared semipartial correlation is just a standardized form of the R-square decrease, texts may find it unnecessary to mention, preferring to speak about the R-square change directly instead. This is because we never compare full and reduced models abstractly; we compare concrete models, where standardization is unnecessary.
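To make the equivalence concrete, here is a short Python (numpy) check on simulated data that the squared semipartial correlation of a predictor equals the R-square drop from removing it; the variable names and data are illustrative only.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - resid @ resid / tss

rng = np.random.default_rng(1)
n = 500
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.5 * x1 + 0.8 * x2 + rng.normal(size=n)

# R-square change from adding x2 to a model that already contains x1 ...
delta_r2 = r_squared(np.column_stack([x1, x2]), y) - r_squared(x1[:, None], y)

# ... equals the squared semipartial correlation: the correlation of y with
# the part of x2 orthogonal to x1, squared.
x2_resid = x2 - np.polyval(np.polyfit(x1, x2, 1), x1)
sr = np.corrcoef(y, x2_resid)[0, 1]
```

The two quantities agree to numerical precision, which is why texts can speak about R-square change and semipartial correlation interchangeably.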
42,933
Learning to create samples from an unknown distribution
Basically, it sounds like you want to bootstrap your data: http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29 A good (and relatively cheap) reference is: "Bootstrap Methods and Their Applications" by A. C. Davison and D. V. Hinkley (1997, CUP). which has an associated R package, "boot". BUT... there's a lot that can go wrong in bootstrapping and it's very easy to get misleading results if you don't know what you're doing (which, to be blunt, sounds likely). It would help a lot if you explained exactly what the problem is that you're trying to solve.
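For readers who want to see the mechanics before reaching for the "boot" package, here is a minimal Python sketch of the percentile bootstrap (the simplest variant, and exactly the kind of thing that can mislead if the statistic or data don't suit it); the data are made up.

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=0):
    """Basic percentile bootstrap confidence interval for a statistic
    of a 1-D sample: resample with replacement, recompute, take quantiles."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    return tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))

# Made-up skewed data, where a naive normal-theory interval would be off.
sample = np.random.default_rng(42).exponential(scale=2.0, size=200)
lo, hi = bootstrap_ci(sample, np.median)
```

This is only the percentile method; BCa and studentized intervals (covered in Davison and Hinkley) are usually preferable in practice.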
42,934
Learning to create samples from an unknown distribution
I have recently faced a similar problem in my research. I did not generate a new function to approximate X. The solution I applied is the following (I used MATLAB to program it): obtain the histogram for the distribution of your samples (with as many bins as you can, within reasonable limits) and the cumulative distribution function (CDF). On the vertical axis of your CDF the values range between 0 and 1. Randomly generate numbers between 0 and 1, track each one down to the horizontal axis, and take the value of the bin it falls in: that is a newly generated sample. The whole point of this method is that, starting from uniformly distributed random numbers, you obtain a non-uniform distribution that agrees with your original distribution X.
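The MATLAB procedure described above is inverse-transform sampling on an empirical CDF. A Python (numpy) sketch of the same idea, with illustrative names and data:

```python
import numpy as np

def sample_from_histogram(data, n_samples, bins=50, seed=0):
    """Inverse-transform sampling: build the empirical CDF from a
    histogram, draw uniforms on [0, 1], and map them back through the
    (piecewise-linear) inverse CDF to the data scale."""
    rng = np.random.default_rng(seed)
    counts, edges = np.histogram(data, bins=bins)
    cdf = np.concatenate(([0.0], np.cumsum(counts) / counts.sum()))
    u = rng.uniform(size=n_samples)
    return np.interp(u, cdf, edges)

original = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=10_000)
new = sample_from_histogram(original, 10_000)
# `new` should reproduce the location and spread of `original`.
```

Interpolating within bins (rather than returning bin midpoints) smooths the staircase CDF slightly; with enough bins the difference is negligible.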
42,935
Selecting regression type for Dickey-Fuller test
Including a trend and drift term when they are not necessary reduces the power of the test---that is, its ability to reject the null hypothesis of non-stationarity (i.e., the null of a unit root in the time series). Conversely, the test is biased when these parameters are needed but missing. In economics, we typically don't worry about the trend term, which would imply a trend that was quadratic in time in our variable of interest. Drift implies a linear trend and is commonly incorporated. You may plot a time series of your variable and look at the pattern to see if a trend is noticeable. A basic linear regression of the variable on a linear time trend may give you an idea of whether there is a linear trend as well (of course, you shouldn't pay attention to official hypothesis tests here because serial correlation/non-stationarities could be biasing your results). Using a spline may also indicate whether there is a linear or quadratic trend in the variable. These visual cues are often good indicators of how you should conduct your Dickey-Fuller test.
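As a minimal illustration of the suggested eyeball check (not a formal test), one can regress the series on a time index and inspect the slope. A Python sketch on a simulated random walk with drift:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)
# Simulated random walk with drift: the first differences have mean 0.1.
y = np.cumsum(0.1 + rng.normal(size=n))

# Informal check: regress the level on a time index and eyeball the slope.
# The usual t-statistics are not trustworthy here (serial correlation),
# so treat the slope only as a visual/diagnostic cue, as the answer says.
slope, intercept = np.polyfit(t, y, 1)
```

A clearly nonzero slope here would point toward including the drift term in the Dickey-Fuller specification.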
42,936
Selecting regression type for Dickey-Fuller test
Charlie's suggestion to use other information to help determine what deterministic components are included is good. I would add that theoretical considerations might suggest appropriate deterministic regressors. Others have also suggested procedures for testing for a unit root that incorporate testing for the presence of deterministic regressors too. Enders, "Applied Econometric Time Series", 2nd ed., p. 213, has one such approach; I suspect there are others. Enders starts with a general formulation and tests for a unit root; if there is a unit root, he tests for the significance of the time trend; if the time trend is not significant, he tests for a unit root in a formulation without the time trend, and so on. In any such procedure some caution is needed: 1. Critical values used depend on whether the test can assume a normal distribution or not. 2. Final results are effectively the result of a sequence of pretests, and each result is conditioned on the previous tests being correct, so true significance levels are difficult (impossible?) to work out. 3. Serial correlation should be addressed at each testing stage; otherwise the test results generated at each stage may be misleading and ultimately give a misleading final result.
42,937
Selecting regression type for Dickey-Fuller test
There is a formal procedure to test for Unit Roots, when the true data-generating process is completely unknown. Enders mentions this in Appendix 4.2, where there is also a flowchart explaining the necessary steps. Alternatively, you could look at the underlying publication by Dolado, Jenkinson, and Sosvilla-Rivero (1990). To summarize their approach, you would start at equation 3. If $\gamma=0$ is rejected, conclude there is no unit root. Otherwise, continue by estimating equation 2 and 3, until you find the true specification.
42,938
L1 regression versus L2 regression
L1 regularisation results in a penalised loss function with discontinuities in the derivatives, whereas L2 regularisation does not introduce such discontinuities. This means that when you perform gradient descent optimisation of the penalised loss, there need to be checks to see if a step goes over one of these discontinuities to make sure it is handled properly (hopefully the solution will lie on one of these discontinuities, as this is what gives rise to the sparsity). With L2 regularisation there are no such (additional) discontinuities, so there is no need to check for them, and it is generally faster. In the case of [kernel] ridge regression, you only need to solve a system of linear equations, which is why I normally use those methods rather than L1 regularisation these days.
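To illustrate the computational point: with the L2 penalty the penalised least-squares problem stays smooth and the minimiser comes from one linear solve, whereas the kink in the L1 penalty at zero is usually handled with a soft-thresholding step (as in coordinate descent), which is also what produces exact zeros. A hedged Python sketch with made-up data (the scaling of the penalty is illustrative):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge regression: the L2 penalty keeps the loss smooth, so the
    minimiser solves one linear system (X'X + lam*I) w = X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via coordinate descent: the soft-threshold step handles the
    kink of the L1 penalty at zero, producing exact zeros (sparsity)."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]        # partial residual
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.0, 0.5]
y = X @ true_w + 0.1 * rng.normal(size=200)

w_ridge = ridge(X, y, lam=1.0)     # all coefficients shrunk, none zero
w_lasso = lasso_cd(X, y, lam=50.0) # irrelevant coefficients exactly zero
```

Note the ridge fit is a single linear-algebra call, while the lasso fit loops: that is the speed difference the answer refers to.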
42,939
Estimating error from repeated measurements
If you assume all the noises are Gaussian (and especially that +/- 0.1 really isn't... but anyway), then I think the 0.16 is already an estimate of the combination of the two noises. So I would report 4.32 +- 0.16/$\sqrt{n}$, where n is the number of measurements. A derivation: So we're trying to measure the width $\mu$ of a block of wood. Suppose every measurement has some Gaussian error (perhaps because different people are measuring differently). We can write these as $X_{1},\ldots,X_{n}\sim\mathcal{N}(\mu,\sigma^{2})$. But the ruler itself has some imprecision, which for simplicity we'll model as Gaussian as well. So our final observations are $Y_{1},\ldots,Y_{n}$, distributed as $Y_{i}\sim\mathcal{N}(X_{i},\rho^{2})$. Suppose $\rho^{2}$ is known (e.g. the precision of the ruler). Then our goal is to estimate $\mu$, and $\sigma^{2}$ is a nuisance parameter. This is easy because the $X$'s and $Y$'s are jointly normal. Write $(X,Y)$ for a generic pair of these variables. Their joint distribution is $$ \begin{pmatrix}X\\ Y \end{pmatrix}\sim\mathcal{N}\left(\begin{pmatrix}\mu\\ \mu \end{pmatrix},\begin{pmatrix}\sigma^{2} & \sigma^{2}\\ \sigma^{2} & \rho^{2}+\sigma^{2} \end{pmatrix}\right) $$ So $Y\sim \mathcal{N}(\mu,\rho^{2}+\sigma^{2})$. Let $\bar{Y}=(Y_{1}+\cdots+Y_{n})/n$ be our estimator for $\mu$. Then $\bar{Y}\sim\mathcal{N}(\mu,[\rho^{2}+\sigma^{2}]/n)$. Since we don't know $\sigma^{2}$, I'd recommend using your favorite sample estimate of the variance of $Y$, call it $\hat{\sigma}_{Y}^{2}$, and plugging that in for $\rho^{2}+\sigma^{2}$. So you end up with $\bar{Y}\pm\hat{\sigma}_{Y}/\sqrt{n}$.
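A quick simulation (in Python, with hypothetical noise sizes) confirms the derivation: across many repeated experiments, the spread of the sample mean matches $\sqrt{(\rho^{2}+\sigma^{2})/n}$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, rho, n, reps = 4.32, 0.12, 0.10, 10, 20_000

# Each row is one experiment: n measurements with both noise sources.
x = rng.normal(mu, sigma, size=(reps, n))     # measurement-to-measurement noise
y = x + rng.normal(0.0, rho, size=(reps, n))  # ruler imprecision on top
ybar = y.mean(axis=1)

# The derivation says ybar ~ N(mu, (rho^2 + sigma^2)/n).
predicted_se = np.sqrt((rho ** 2 + sigma ** 2) / n)
observed_se = ybar.std()
```

The observed standard deviation of the simulated sample means lands on the predicted value, supporting the report of $\bar{Y}\pm\hat{\sigma}_{Y}/\sqrt{n}$.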
42,940
How to interpret Zivot & Andrews unit root test?
Zivot-Andrews has a null hypothesis of a unit root process with drift that excludes exogenous structural change: $H_0: y_t = \mu + y_{t-1} + \varepsilon_t$. Then, depending on the model variant, the alternative hypothesis is a trend-stationary process that allows for a one-time break in the level, the trend, or both. If you reject the unit root null, the interpretation depends on which alternative you are testing against. Here it looks as though the alternative hypothesis is a trend-stationary process with a break in the intercept. The test appears to be reporting a rejection of the unit root null in favour of a one-time break in the intercept at position 21. I can barely read the chart; I suspect it is a chart of the ADF test statistic values for each possible breakpoint. The ZA approach estimates the breakpoint to be where the ADF unit root t-test statistic is minimised (i.e. the most negative). Might be worth reading the ZA paper.
42,941
How to interpret regression coefficients in logistic regression?
I completely changed my answer as a result of a long conversation with Daniel. I'll try to provide some background information so interested readers can understand my answer. As I understand the question, Daniel is trying to assess the effect of no.GREEN on the probability of subjects choosing red in an experiment. no.GREEN is the number of balls (1, 2, 3 or 4). And the experiment was conducted under several conditions, namely: under no.RED equal to 5, 7 and 9; and under conditions in which the message is blue and the message is red. So, we have a total of 4 * 3 * 2 = 24 conditions (4 conditions from no.GREEN, 3 conditions from no.RED and 2 conditions from the message being blue or red). A single regression with all interaction terms is quite complex to interpret. However, his main task is fairly simple, namely: to show that no.GREEN has an effect on the probability of choosing red. So, my suggestion is to run a separate regression for the message == blue and message == red conditions, and also a separate regression for each no.RED condition. Moreover, I'll simplify things by assuming that no.GREEN is continuous (it seems possible to treat it as continuous, or at least as an interval variable). In R, for the message == blue case, just do this:
fit.1 = glm(DecisionasReceiver ~ no.GREEN, family=binomial, data=subset(lue, messagereceived=="blue" & no.RED==5))
fit.2 = glm(DecisionasReceiver ~ no.GREEN, family=binomial, data=subset(lue, messagereceived=="blue" & no.RED==7))
fit.3 = glm(DecisionasReceiver ~ no.GREEN, family=binomial, data=subset(lue, messagereceived=="blue" & no.RED==9))
Now, in order to properly assess the effect of no.GREEN, you have to take into consideration the uncertainty of the estimates. Looking at the standard errors, you will see that no.GREEN is significant. However, looking only at standard errors does not allow you to properly understand the range of uncertainty.
Say, for instance, that you are interested to know how much less likely subjects with no.GREEN == 2 are to choose red (under the condition no.RED == 5), compared with subjects with no.GREEN == 1. To answer this type of question, it's better, I think, to look at predicted probabilities, taking into consideration the uncertainty of the estimates. To do this, I'll use the "sim" function of the arm package.
require(arm)
n.sims = 1000
sim.1 = sim(fit.1, n.sims)
with(subset(lue, messagereceived=="blue" & no.RED==5),
     plot(no.GREEN, jitter(DecisionasReceiver, .1),
          ylab="Probability of Choosing Red", xlab="Number of Green",
          main="Effect of Green under no.Red equals 5"))
for (s in 1:100)
  curve(invlogit(coef(sim.1)[s,1] + coef(sim.1)[s,2]*x), col="gray", xlim=c(1,4), add=T)
The result is a graphic with 100 logistic curves. Each curve represents a possible effect of no.GREEN on the probability of choosing red. From the graphic, we see the most likely range of predicted probabilities for each value of no.GREEN. I hope it helps.
42,942
Fractional integration and cointegration with R
Well now we have partialCI: An R package for the analysis of partially cointegrated time series by Matthew Clegg, Christopher Krauss and Jonas Rende
42,943
Fractional integration and cointegration with R
The CRAN Task Views page for Time Series Analysis lists the fracdiff package: fractionally differenced ARIMA, aka ARFIMA(p,d,q), models. The package is described as follows: Maximum likelihood estimation of the parameters of a fractionally differenced ARIMA(p,d,q) model (Haslett and Raftery, Appl. Statistics, 1989). The documentation says that the package can perform the following methods: the Geweke and Porter-Hudak (GPH) (1983) estimator, the Reisen (1994) estimator, and Haslett and Raftery (1989). Regarding Sowell's method from 1992, a number of academic articles (example) report that they obtained the FORTRAN code from Fallaw Sowell. You could probably request the code and port it to R (if he and the licence approved of that). I couldn't find an existing package with the implementation.
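To make the GPH idea concrete, here is a minimal Python sketch of the log-periodogram regression (my own illustrative translation, not code from any of the packages above): regress the log periodogram at the first m Fourier frequencies on log(4 sin^2(w/2)); the negated slope estimates the fractional-differencing parameter d.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke & Porter-Hudak (1983) log-periodogram estimate of d
    (illustrative version; bandwidth choice m = sqrt(n) is a common default)."""
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))
    # Fourier frequencies and periodogram ordinates j = 1..m
    j = np.arange(1, m + 1)
    w = 2 * np.pi * j / n
    fx = np.fft.fft(x - np.mean(x))
    I = (np.abs(fx[1:m + 1]) ** 2) / (2 * np.pi * n)
    # Regress log I(w_j) on log(4 sin^2(w_j / 2)); the slope is -d
    reg = np.log(4 * np.sin(w / 2) ** 2)
    slope = np.polyfit(reg, np.log(I), 1)[0]
    return -slope
```

For white noise the estimate should hover near zero; for a genuinely long-memory series it should approach the true d.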
42,944
Gibbs sampling for a simple linear model -- need help with the likelihood function
This is a statistics question, not a programming question, and would be better asked on CrossValidated. At least the LaTeX code gets parsed there automatically :). Also, this is more complicated than what is readily available on that webpage. I'll give some guidance, but since you want to learn how to do things, this won't be the complete answer. (If you don't want to do that, we can locate the cooked answers on the web, too.) Each sampling of betas relies on the complete data set. If you are doing this with individual y_i's and x_i's, you are not doing this right. Before you start working with the code, you need to sit down with a piece of paper (letter size or A4, depending on your geography) and derive the posterior distributions of the betas:

- This is given: y|beta is normal with mean x'beta and precision tau.
- This is given: the prior for beta is normal with mean mu and precision gamma.
- Obtain this: the marginal distribution of y, by integrating the betas out (which is easy to do, since the joint distribution of y and beta is multivariate normal, and you can do this by kernel matching: the part that depends on beta is going to be exp[a quadratic form in beta], which you recognize as the relevant part of a normal density to integrate over; whatever is left after integration should be a normal density in y and the prior parameters).
- Obtain this: the posterior distribution of beta given y, by Bayes' theorem (the likelihood times the prior divided by the marginal; again this should be a moderately complicated combination of exp[jointly quadratic in y and beta]).
- Obtain this: the conditional distribution of beta_1 given beta_2 and y, one of the margins of the multivariate normal distribution obtained at the previous step.

You need to know how to manipulate the multivariate normal distribution and get conditional and marginal distributions out of it. Again, if this is over your head, we can find the ready solutions. Note that you also need a sampler for the variance of the regression errors, unless you treat it as known (which is hardly a practical situation). This will be slightly more complicated, as you would need to incorporate another dimension into your integration procedures.
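For readers who want to see the whole loop once the conditionals are derived, here is a hedged Python sketch of such a Gibbs sampler for the conjugate normal linear model (my own illustration; mu0 and prec0 play the role of the prior mean mu and precision gamma above, and a Gamma(a0, b0) prior handles the error precision tau):

```python
import numpy as np

def gibbs_linreg(y, X, n_iter=2000, mu0=None, prec0=None, a0=1.0, b0=1.0, seed=0):
    """Gibbs sampler for y ~ N(X beta, 1/tau), with a N(mu0, prec0^-1)
    prior on beta and a Gamma(a0, b0) prior on tau. Beta is drawn as a
    full vector each sweep, using the whole data set."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    mu0 = np.zeros(p) if mu0 is None else mu0
    prec0 = np.eye(p) if prec0 is None else prec0
    XtX, Xty = X.T @ X, X.T @ y
    tau = 1.0
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | tau, y : multivariate normal (conjugate update)
        prec_n = tau * XtX + prec0
        cov_n = np.linalg.inv(prec_n)
        mean_n = cov_n @ (tau * Xty + prec0 @ mu0)
        beta = rng.multivariate_normal(mean_n, cov_n)
        # tau | beta, y : Gamma (conjugate update)
        resid = y - X @ beta
        tau = rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))
        draws[t] = beta
    return draws
```

On simulated data the posterior means of the draws (after burn-in) should land close to the true coefficients.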
42,945
Clustering with some cluster centers fixed/known
Sorry, I can't help with mclust (I don't know R). What if you run K-means clustering with some initial centres fixed and some free to move? To fix a centre you simply need to pad it with a large number of points. For example, if there is a centre A with known coordinates, to fix it add many (say, a thousand) extra data points, all with these same coordinates, so that during iterations the centre will be pinned to its position under their "gravity". As for the centres you want to move and eventually find their positions, specify some approximate guess coordinates for them at the start of iterations.
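If your software does not make the padding trick convenient, the same effect can be had by pinning centres directly inside Lloyd's algorithm. A small illustrative Python version (not mclust; the function name and interface are my own):

```python
import numpy as np

def kmeans_fixed(X, centers, fixed, n_iter=50):
    """Lloyd's algorithm where centers[k] with fixed[k] == True stay
    pinned; only the remaining centres are re-estimated each step.
    Direct alternative to the padding trick described above."""
    C = np.array(centers, dtype=float)
    fixed = np.asarray(fixed)
    for _ in range(n_iter):
        # assign each point to its nearest centre
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        lab = d.argmin(1)
        # update only the free centres
        for k in range(len(C)):
            if not fixed[k] and np.any(lab == k):
                C[k] = X[lab == k].mean(0)
    return C, lab
```

Here the free centres still need reasonable starting guesses, exactly as suggested above, since Lloyd's algorithm only converges to a local optimum.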
42,946
Assessing statistical significance of a rare binary event in time series
If by "failure" you mean something that can only occur once for a subject without occurring again, use the Cox proportional hazards model. If your "failure" can occur more than once for a given subject, use a shared frailty model, which is related to a multilevel logistic regression.
42,947
Calculating the transfer entropy in R
the same as above, from the same page: http://users.utu.fi/attenka/trent.R

###############################
## FUNCTION TRANSFER ENTROPY ##
###############################
# 070527 (ver. 081126), Atte Tenkanen
# s, time shift
trent <- function(Y, X, s = 1) {

  #---------------------------------#
  # Transition probability vectors: #
  #---------------------------------#
  L4 = L1 = length(X) - s  # Lengths of vector Xn+1.
  L3 = L2 = length(X)      # Lengths of vector Xn (and Yn).

  #-------------------#
  # 1. p(Xn+s,Xn,Yn): #
  #-------------------#
  TPvector1 = rep(0, L1)  # Init.
  for (i in 1:L1) {
    TPvector1[i] = paste(c(X[i + s], "i", X[i], "i", Y[i]), collapse = "")  # "addresses"
  }
  TPvector1T = table(TPvector1) / length(TPvector1)  # Table of probabilities.

  #-----------#
  # 2. p(Xn): #
  #-----------#
  TPvector2 = X
  TPvector2T = table(X) / sum(table(X))

  #--------------#
  # 3. p(Xn,Yn): #
  #--------------#
  TPvector3 = rep(0, L3)
  for (i in 1:L3) {
    TPvector3[i] = paste(c(X[i], "i", Y[i]), collapse = "")  # addresses
  }
  TPvector3T = table(TPvector3) / length(TPvector2)

  #----------------#
  # 4. p(Xn+s,Xn): #
  #----------------#
  TPvector4 = rep(0, L4)
  for (i in 1:L4) {
    TPvector4[i] = paste(c(X[i + s], "i", X[i]), collapse = "")  # addresses
  }
  TPvector4T = table(TPvector4) / length(TPvector4)

  #--------------------------#
  # Transfer entropy T(Y->X) #
  #--------------------------#
  SUMvector = rep(0, length(TPvector1T))
  for (n in 1:length(TPvector1T)) {
    parts = unlist(strsplit(names(TPvector1T)[n], "i"))
    SUMvector[n] = TPvector1T[n] * log10(
      (TPvector1T[n] * TPvector2T[parts[2]]) /
      (TPvector3T[paste(parts[2], "i", parts[3], sep = "", collapse = "")] *
       TPvector4T[paste(parts[1], "i", parts[2], sep = "", collapse = "")])
    )
  }
  return(sum(SUMvector))
}  # End of the trent-function.
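For readers who prefer Python, here is a compact sketch of the same plug-in estimator (same four probability tables and the same base-10 logarithm as the R function above; the function name is my own):

```python
from collections import Counter
from math import log10

def transfer_entropy(Y, X, s=1):
    """Discrete transfer entropy T(Y -> X), mirroring the R trent()
    function: sum over p(x_{n+s}, x_n, y_n) of the log-ratio term."""
    n = len(X) - s
    p_xyz = Counter((X[i + s], X[i], Y[i]) for i in range(n))   # p(Xn+s, Xn, Yn)
    p_x   = Counter(X)                                          # p(Xn)
    p_xy  = Counter(zip(X, Y))                                  # p(Xn, Yn)
    p_xx  = Counter((X[i + s], X[i]) for i in range(n))         # p(Xn+s, Xn)
    te = 0.0
    for (x1, x0, y0), c in p_xyz.items():
        p1 = c / n
        te += p1 * log10((p1 * (p_x[x0] / len(X))) /
                         ((p_xy[(x0, y0)] / len(X)) * (p_xx[(x1, x0)] / n)))
    return te
```

If X simply copies Y with a one-step lag, the estimate approaches log10(2) for a balanced binary Y; for an unrelated series it should sit near zero (plus a small positive plug-in bias).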
42,948
Calculating the transfer entropy in R
The JIDT toolkit, which is the successor to the Matlab code in my high-level summary linked in the original question, provides transfer entropy estimators for both discrete and continuous data, including various estimators for continuous data (Gaussian, box-kernel, Kraskov). It can be used to calculate transfer entropy in R; this is carried out via the standard rJava package (R-to-Java interface). The JIDT wiki pages describe how to get started using JIDT in R and provide several code examples.
42,949
Calculating the transfer entropy in R
There is also the RTransferEntropy package, which allows the calculation of Shannon and Renyi TE measures and provides significance measures. The package uses C++ internally, so it should be reasonably fast. It's also easy to use: the transfer_entropy(x, y) function covers the x->y and y->x directions as well as significance levels. If you want only the x->y direction, you can use the shorter calc_te(x, y) function. More examples can be found on the package's GitHub page. (Disclaimer: I am one of the authors of the package.)
42,950
Calculating the transfer entropy in R
See the .pdf found by Ramnath in comments section: http://users.utu.fi/attenka/TEpresentation081128.pdf
42,951
Calculating the transfer entropy in R
Would this help as well: https://cran.r-project.org/web/packages/TransferEntropy/TransferEntropy.pdf? The example makes sense, but I am personally not sure how to measure significance.
42,952
Measuring predictive accuracy for multiple dependent variables
In Machine Learning, many algorithms directly minimise a loss function with some form of capacity control (regularisation). This gives a direct measure of the performance of the classifier on future data, through the use of the loss function that was being minimised. If the specific problem you are dealing with can be framed in an optimisation framework, then a measure may fall out naturally from the loss function. For example, Kristin Bennett in this paper showed that PLS can be formulated as $$ \min_w \left\| \bf{X} - \bf{y}\bf{w}' \right\|_2, \; s.t. \bf{w}'\bf{w} = 1, $$ which they later show bounds the usual least squares loss. More complex predictive models can be phrased in terms of composite loss functions - see for example these slides from Mike Jordan.
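As a sanity check on Bennett's formulation, the constrained problem above has the closed-form minimiser $w = X'y / \|X'y\|$ (the first PLS weight vector): expanding the norm leaves only a linear term in $w$ to maximise on the unit sphere. A small, purely illustrative Python sketch:

```python
import numpy as np

def pls_direction(X, y):
    """Closed-form minimiser of ||X - y w'||_F subject to w'w = 1:
    the first PLS weight vector w = X'y / ||X'y||."""
    w = X.T @ y
    return w / np.linalg.norm(w)
```

Comparing the attained loss against random unit vectors confirms that no other feasible direction does better.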
42,953
Machine learning for activity streams
The most related technique I know of is described in a talk at ACM Data Mining SIG by Ted Dunning.
42,954
Machine learning for activity streams
You can try recurrent neural networks: neural networks for time series/sequences. I have given an explanation here.
42,955
Pointwise mutual information for text using R
There are many functions for estimating the mutual information or the entropy in R, for example the entropy package. Enter install.packages("entropy") at the R prompt. You can then use the property that $pmi(x;y) = h(x) + h(y) - h(x,y)$ to calculate the pointwise mutual information. You need to obtain frequency estimates for the two random variables first.
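As an illustration of that identity with plug-in frequencies (the entropy package itself is R; this is just my own demonstration of the formula in Python):

```python
from math import log2
from collections import Counter

def pmi(pairs, x, y):
    """Pointwise mutual information pmi(x; y) = h(x) + h(y) - h(x, y),
    using plug-in frequency estimates from observed (x, y) pairs.
    Since the surprisal is h(.) = -log2 p(.), the identity reduces to
    log2( p(x, y) / (p(x) p(y)) )."""
    n = len(pairs)
    cx = Counter(p[0] for p in pairs)
    cy = Counter(p[1] for p in pairs)
    cxy = Counter(pairs)
    return log2(cxy[(x, y)] * n / (cx[x] * cy[y]))
```

For independent symbols the value is 0; co-occurrence above chance gives a positive value, below chance a negative one.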
42,956
How to differentiate two subgroups from a histogram?
I assume you are talking about Neonatal Behavioral Assessment Scale values in Hereditary Renal Adysplasia. I often see in medical research that physicians want cut-offs and simple threshold-based interpretations of their research results, based merely on the distribution of the measurements. Practice and applications, however, usually need a high positive predictive value or a high negative predictive value, so the characteristics of the future population tested have to be considered. My point of view is that even if for now you just want to "differentiate two groups", you probably want to apply this somehow in the future, and thus you probably want to find the optimal threshold, optimising costs, risks and benefits (survival, quality of life etc.) in a practical setting. So I suggest that you think these over in your application.
42,957
How to differentiate two subgroups from a histogram?
If you are willing to assume the populations have the same variance, you could use essentially LDA without the normality assumption (a.k.a. Fisher's method or Fisher's discriminant function). Without this assumption you could try an EM algorithm, which is indirectly what Matt suggested, since this would be a mixture-model approach.
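A minimal sketch of that EM route for a two-component 1D Gaussian mixture with unequal variances (illustrative Python, not tied to any particular package):

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """EM for a two-component 1D Gaussian mixture.
    Returns (mixing weights, means, standard deviations)."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])        # crude initialisation
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pi * dens
        r /= r.sum(1, keepdims=True)
        # M-step: weighted updates of weights, means and sds
        nk = r.sum(0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(0) / nk)
    return pi, mu, sd
```

If the histogram really does contain two separated subgroups, the fitted means and responsibilities give a natural way to split them; with heavy overlap the fit becomes unstable, which is itself informative.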
42,958
Hidden states in hidden conditional random fields
I've posted my question in another site, where I also didn't receive the answer I was looking for. I answered my own question there and I decided to answer my own question here as well: In the case of a linear chain HCRF, the hidden state sequences are calculated in exactly the same way as in hidden Markov models. The HCRF formulation using maximal cliques generalizes much of the structure of a general hidden Markov classifier. Hidden Markov classifiers are generally constructed by considering priors over each possible model and estimating the class label by computing its posterior probabilities. If we represent each model by a clique potential function, and restrict each potential function to a single class label, we can reproduce this exact structure in a HCRF. The only difference will be that parameters in a HCRF will not be constrained to probabilities, so we can also see that all possible solutions given by Markov classifiers are just a subset of the possible solutions given by HCRFs. By the way, the summation I was referring to in the original question is intractable to compute in its given form. Since it represents the result of the potential function over all possible paths, in the case of a linear chain, instead of trying to compute this summation directly, we can proceed by computing the probability of each state/transition occurring in the model and multiplying this probability by the results of the potential function along those states/transitions in a single pass using the sum-product algorithm. The model also does not need to be computed using EM. Since its gradient is readily available, one can just use any off-the-shelf function optimizer to do the job. Conjugate gradient or stochastic gradient updates seems to operate better since they can deal better with violations of convexity. Please someone correct me if I got anything wrong. 
The best resource I have found so far to help understand CRFs and HCRFs (which are just CRFs with latent variables) has been this tutorial by C. Sutton. I hope it could be of some help for others also having the same questions.
Hidden states in hidden conditional random fields
I've posted my question in another site, where I also didn't receive the answer I was looking for. I answered my own question there and I decided to answer my own question here as well: In the case of
Hidden states in hidden conditional random fields I've posted my question in another site, where I also didn't receive the answer I was looking for. I answered my own question there and I decided to answer my own question here as well: In the case of a linear chain HCRF, the hidden state sequences are calculated in exactly the same way as in hidden Markov models. The HCRF formulation using maximal cliques generalizes much of the structure of a general hidden Markov classifier. Hidden Markov classifiers are generally constructed by considering priors over each possible model and estimating the class label by computing its posterior probabilities. If we represent each model by a clique potential function, and restrict each potential function to a single class label, we can reproduce this exact structure in an HCRF. The only difference will be that parameters in an HCRF will not be constrained to probabilities, so we can also see that all possible solutions given by Markov classifiers are just a subset of the possible solutions given by HCRFs. By the way, the summation I was referring to in the original question is intractable to compute in its given form. Since it represents the result of the potential function over all possible paths, in the case of a linear chain, instead of trying to compute this summation directly, we can proceed by computing the probability of each state/transition occurring in the model and multiplying this probability by the results of the potential function along those states/transitions in a single pass using the sum-product algorithm. The model also does not need to be computed using EM. Since its gradient is readily available, one can just use any off-the-shelf function optimizer to do the job. Conjugate gradient or stochastic gradient updates seem to operate better since they can deal better with violations of convexity. Please someone correct me if I got anything wrong.
The best resource I have found so far to help understand CRFs and HCRFs (which are just CRFs with latent variables) has been this tutorial by C. Sutton. I hope it could be of some help for others also having the same questions.
Hidden states in hidden conditional random fields I've posted my question in another site, where I also didn't receive the answer I was looking for. I answered my own question there and I decided to answer my own question here as well: In the case of
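The single left-to-right pass over all state paths that the answer describes can be sketched for a toy linear chain. All numbers here are made up for illustration; the potentials are deliberately unnormalized, as in a CRF, and the brute-force sum over every path checks that the recursion gives the same total:

```python
import math

# Toy linear chain with 2 hidden states. These are unnormalized clique
# potentials (not probabilities), as allowed in a CRF/HCRF.
trans = [[2.0, 1.0],   # potential for moving from state i to state j
         [1.0, 3.0]]
emit = [[1.5, 0.5],    # potential for state s producing observation o
        [0.5, 1.5]]

def forward(obs):
    """Sum the path potentials over all state sequences in one pass
    (the sum-product / forward recursion)."""
    alpha = [emit[s][obs[0]] for s in range(2)]          # initial messages
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(2)) * emit[j][o]
                 for j in range(2)]
    return sum(alpha)   # total over all paths (a partition-function term)

def brute_force(obs):
    """Explicitly enumerate all 2**T state paths, to verify forward()."""
    total = 0.0
    for mask in range(2 ** len(obs)):
        states = [(mask >> t) & 1 for t in range(len(obs))]
        w = emit[states[0]][obs[0]]
        for t in range(1, len(obs)):
            w *= trans[states[t - 1]][states[t]] * emit[states[t]][obs[t]]
        total += w
    return total

Z = forward([0, 1, 0])
```

For a length-3 chain the recursion sums 8 paths in 2 passes of 2 messages; for length T it replaces an O(2^T) enumeration with O(T) work, which is exactly why the summation becomes tractable.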
42,959
Compression theory, practice, for time series with values in a space of distributions (say of a real random variable)
You could use any probabilistic time series model in combination with arithmetic coding. You'd have to quantize the data, though. Idea: the more likely an "event" is to occur, the fewer bits are reserved for that event. E.g. if $p(x_t = 1| x_{1:t-1}) = 0.5$, with $x_{1:t-1}$ being the history of events seen so far, then coding that event will cost you 1 bit, while less probable events have to use more bits.
Compression theory, practice, for time series with values in a space of distributions (say of a real
You could use any probabilistic time series model in combination with arithmetic coding. You'd have to quantize the data, though. Idea: the more likely an "event" is to occur, the more bits for that e
Compression theory, practice, for time series with values in a space of distributions (say of a real random variable) You could use any probabilistic time series model in combination with arithmetic coding. You'd have to quantize the data, though. Idea: the more likely an "event" is to occur, the fewer bits are reserved for that event. E.g. if $p(x_t = 1| x_{1:t-1}) = 0.5$, with $x_{1:t-1}$ being the history of events seen so far, then coding that event will cost you 1 bit, while less probable events have to use more bits.
Compression theory, practice, for time series with values in a space of distributions (say of a real You could use any probabilistic time series model in combination with arithmetic coding. You'd have to quantize the data, though. Idea: the more likely an "event" is to occur, the more bits for that e
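The bit-cost rule in the answer (an event with conditional probability $p$ costs $-\log_2 p$ bits under an ideal arithmetic coder) can be computed directly. The probabilities below are hypothetical model outputs, chosen only to make the arithmetic clean:

```python
import math

def ideal_code_length(cond_probs):
    """Total bits an ideal arithmetic coder spends, given the model's
    conditional probability of each symbol that actually occurred."""
    return sum(-math.log2(p) for p in cond_probs)

# Hypothetical model outputs p(x_t | x_{1:t-1}) along one observed sequence:
probs = [0.5, 0.25, 0.5, 0.125]
bits = ideal_code_length(probs)   # 1 + 2 + 1 + 3 = 7 bits
```

A sharper model assigns higher conditional probability to what actually happens, which directly lowers this sum; that is the sense in which better time-series modelling means better compression.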
42,960
Compression theory, practice, for time series with values in a space of distributions (say of a real random variable)
Your distribution is parametric, and you should just store the parameters that are sufficient statistics, if you can identify them. That includes the distribution family. For a time series, you can take advantage of autocorrelation and store the parameters of the predictive distribution conditional on its previous values. The entropy of the prior (predictive) distribution of the parameters determines the upper bound for compression strength, but you may not need to compress them further. If you do, use arithmetic compression. Decreasing entropy, say by discretizing quantiles, will give greater compression.
Compression theory, practice, for time series with values in a space of distributions (say of a real
Your distribution is parametric, and you should just store the parameters that are sufficient statistics, if you can identify them. That includes the distribution family. For a time series, you can ta
Compression theory, practice, for time series with values in a space of distributions (say of a real random variable) Your distribution is parametric, and you should just store the parameters that are sufficient statistics, if you can identify them. That includes the distribution family. For a time series, you can take advantage of autocorrelation and store the parameters of the predictive distribution conditional on its previous values. The entropy of the prior (predictive) distribution of the parameters determines the upper bound for compression strength, but you may not need to compress them further. If you do, use arithmetic compression. Decreasing entropy, say by discretizing quantiles, will give greater compression.
Compression theory, practice, for time series with values in a space of distributions (say of a real Your distribution is parametric, and you should just store the parameters that are sufficient statistics, if you can identify them. That includes the distribution family. For a time series, you can ta
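As a toy illustration of the sufficient-statistics point (not the answer's exact procedure): for a normal model, $(n, \bar{x}, s^2)$ carries everything the fitted distribution needs, so storing those three numbers instead of the raw sample is a drastic reduction, lossy for individual values but lossless for the fitted Gaussian:

```python
import random
import statistics
import struct

random.seed(0)
sample = [random.gauss(10.0, 2.0) for _ in range(1000)]

# Raw storage: one 8-byte double per observation.
raw_bytes = len(struct.pack("%dd" % len(sample), *sample))

# Sufficient statistics for a normal model: sample size, mean, variance.
n = len(sample)
mean = statistics.mean(sample)
var = statistics.variance(sample)
stat_bytes = len(struct.pack("qdd", n, mean, var))   # 24 bytes total
```

Here 1000 doubles (8000 bytes) collapse to 24 bytes of parameters; for a time series one would store such parameters for the conditional (predictive) distribution at each step rather than for one pooled sample.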
42,961
How can we compare multiple proportions from multiple independent populations to evaluate implementation of a treatment?
If I'm reading you right (and changing Tal's 4 to a 5), then at http://en.wikipedia.org/wiki/Statistical_hypothesis_testing if you scroll halfway down you'll find the formula for "Two-proportion z-test, pooled for d0 = 0." I would think you'd want to do such a test for each of the five years, then choose a meta-analytic method of pooling the results. (You can also use an online calculator for each test. http://www.dimensionresearch.com/resources/calculators/ztest.html and http://www.surveystar.com/our_services/ztest.htm are not perfect but each looks serviceable.) In light of further comments...From the research question you've posed, it sounds as if regional differences per se are not important. Therefore you could simplify a great deal by collapsing across thorough-treatment regions and not-thorough-treatment regions, yielding two sets of regions for which to test the difference in proportions. You could do this for each of the years on which you have a substantial amount of data. Then you could pool the different years' test results using a standard meta-analytic method, and you would have a single answer to your question of whether the two levels of implementation show significantly different results.
How can we compare multiple proportions from multiple independent populations to evaluate implementa
If I'm reading you right (and changing Tal's 4 to a 5), then at http://en.wikipedia.org/wiki/Statistical_hypothesis_testing if you scroll halfway down you'll find the formula for "Two-proportion z-te
How can we compare multiple proportions from multiple independent populations to evaluate implementation of a treatment? If I'm reading you right (and changing Tal's 4 to a 5), then at http://en.wikipedia.org/wiki/Statistical_hypothesis_testing if you scroll halfway down you'll find the formula for "Two-proportion z-test, pooled for d0 = 0." I would think you'd want to do such a test for each of the five years, then choose a meta-analytic method of pooling the results. (You can also use an online calculator for each test. http://www.dimensionresearch.com/resources/calculators/ztest.html and http://www.surveystar.com/our_services/ztest.htm are not perfect but each looks serviceable.) In light of further comments...From the research question you've posed, it sounds as if regional differences per se are not important. Therefore you could simplify a great deal by collapsing across thorough-treatment regions and not-thorough-treatment regions, yielding two sets of regions for which to test the difference in proportions. You could do this for each of the years on which you have a substantial amount of data. Then you could pool the different years' test results using a standard meta-analytic method, and you would have a single answer to your question of whether the two levels of implementation show significantly different results.
How can we compare multiple proportions from multiple independent populations to evaluate implementa If I'm reading you right (and changing Tal's 4 to a 5), then at http://en.wikipedia.org/wiki/Statistical_hypothesis_testing if you scroll halfway down you'll find the formula for "Two-proportion z-te
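The pooled two-proportion z-test the answer points to is short enough to write out directly. The counts below are hypothetical stand-ins for one year's thorough-treatment vs. not-thorough-treatment regions:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test of H0: p1 == p2 (i.e. d0 = 0).
    Returns the z statistic and the two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1.0 - p_pool) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    p_value = math.erfc(abs(z) / math.sqrt(2.0))
    return z, p_value

# Hypothetical counts for one year: 40/200 successes vs 65/200.
z, p = two_proportion_z(40, 200, 65, 200)
```

Running this once per year and then pooling the resulting p-values (or z statistics) with a standard meta-analytic method follows the plan in the answer.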
42,962
How can we compare multiple proportions from multiple independent populations to evaluate implementation of a treatment?
You could also check the Marascuilo procedure to compare multiple proportions in one test. Here is a detailed walkthrough: http://www.itl.nist.gov/div898/handbook/prc/section4/prc474.htm And a related question: Has anyone used the Marascuilo procedure for comparing multiple proportions?
How can we compare multiple proportions from multiple independent populations to evaluate implementa
You could also check the Marascuillo procedure to compare multiple proportions in one test. Here is a detailed walkthrough: http://www.itl.nist.gov/div898/handbook/prc/section4/prc474.htm And a rel
How can we compare multiple proportions from multiple independent populations to evaluate implementation of a treatment? You could also check the Marascuilo procedure to compare multiple proportions in one test. Here is a detailed walkthrough: http://www.itl.nist.gov/div898/handbook/prc/section4/prc474.htm And a related question: Has anyone used the Marascuilo procedure for comparing multiple proportions?
How can we compare multiple proportions from multiple independent populations to evaluate implementa You could also check the Marascuillo procedure to compare multiple proportions in one test. Here is a detailed walkthrough: http://www.itl.nist.gov/div898/handbook/prc/section4/prc474.htm And a rel
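The Marascuilo procedure in the linked NIST walkthrough compares every pair of $k$ proportions against a simultaneous critical range built from the chi-square critical value with $k-1$ degrees of freedom. A minimal sketch, with made-up counts for three regions (5.991 is the chi-square 0.95 quantile for 2 degrees of freedom):

```python
import math
from itertools import combinations

def marascuilo(successes, sizes, chi2_crit):
    """Marascuilo procedure: simultaneous pairwise comparison of k
    proportions. chi2_crit is the chi-square critical value with
    k - 1 degrees of freedom (e.g. 5.991 for k = 3 at alpha = 0.05)."""
    p = [x / n for x, n in zip(successes, sizes)]
    results = []
    for i, j in combinations(range(len(p)), 2):
        diff = abs(p[i] - p[j])
        crit = math.sqrt(chi2_crit) * math.sqrt(
            p[i] * (1 - p[i]) / sizes[i] + p[j] * (1 - p[j]) / sizes[j])
        results.append((i, j, diff, crit, diff > crit))
    return results

# Made-up counts for three regions, alpha = 0.05 (so df = 2):
res = marascuilo([50, 90, 60], [200, 200, 200], chi2_crit=5.991)
significant_pairs = [(i, j) for i, j, diff, crit, sig in res if sig]
```

A pair is flagged when its observed difference exceeds its critical range, with the familywise error rate controlled across all pairs at once.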
42,963
How to do primary component analysis on multi-mode data with non-orthogonal primary components?
There are factor analysis techniques that allow oblique rotation, not just the orthogonal rotation that PCA uses. Take a look at direct oblimin rotation or promax rotation. Not sure what statistical application you are using. In R, the psych and HDMD packages have commands that allow oblique rotations.
How to do primary component analysis on multi-mode data with non-orthogonal primary components?
There are factor analysis techniques that allow oblique rotation, not just the orthogonal rotation that PCA uses. Take a look at direct oblimin rotation or promax rotation. Not sure what statistical a
How to do primary component analysis on multi-mode data with non-orthogonal primary components? There are factor analysis techniques that allow oblique rotation, not just the orthogonal rotation that PCA uses. Take a look at direct oblimin rotation or promax rotation. Not sure what statistical application you are using. In R, the psych and HDMD packages have commands that allow oblique rotations.
How to do primary component analysis on multi-mode data with non-orthogonal primary components? There are factor analysis techniques that allow oblique rotation, not just the orthogonal rotation that PCA uses. Take a look at direct oblimin rotation or promax rotation. Not sure what statistical a
42,964
How to do primary component analysis on multi-mode data with non-orthogonal primary components?
Independent component analysis is suitable for separating non-orthogonal basis. Check out this paper. I guess figure 1 is what you want. Choi S. (2009) Independent Component Analysis. In: Li S.Z., Jain A. (eds) Encyclopedia of Biometrics. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-73003-5_305
How to do primary component analysis on multi-mode data with non-orthogonal primary components?
Independent component analysis is suitable for separating non-orthogonal basis. Check out this paper. I guess figure 1 is what you want. Choi S. (2009) Independent Component Analysis. In: Li S.Z., Ja
How to do primary component analysis on multi-mode data with non-orthogonal primary components? Independent component analysis is suitable for separating non-orthogonal basis. Check out this paper. I guess figure 1 is what you want. Choi S. (2009) Independent Component Analysis. In: Li S.Z., Jain A. (eds) Encyclopedia of Biometrics. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-73003-5_305
How to do primary component analysis on multi-mode data with non-orthogonal primary components? Independent component analysis is suitable for separating non-orthogonal basis. Check out this paper. I guess figure 1 is what you want. Choi S. (2009) Independent Component Analysis. In: Li S.Z., Ja
42,965
Getting started with time series in R
It seems like you need the package xts. Create your time series using install.packages('xts') library(xts) X = xts(coredata(DF[,2]), order.by=DF[,1]) Then you will be able to manipulate your data easily. to.weekly(X) to.monthly(X) Please note that you will then manipulate xts objects and not ts. But no worries, you can go back to ts whenever needed.
Getting started with time series in R
It seems like you need the package xts. Create your time serie using install.packages('xts') library(xts) X = xts(coredata(DF[,2]), order.by=DF[,1]) Then you will be able to manipulate your data eas
Getting started with time series in R It seems like you need the package xts. Create your time series using install.packages('xts') library(xts) X = xts(coredata(DF[,2]), order.by=DF[,1]) Then you will be able to manipulate your data easily. to.weekly(X) to.monthly(X) Please note that you will then manipulate xts objects and not ts. But no worries, you can go back to ts whenever needed.
Getting started with time series in R It seems like you need the package xts. Create your time serie using install.packages('xts') library(xts) X = xts(coredata(DF[,2]), order.by=DF[,1]) Then you will be able to manipulate your data eas
42,966
Theoretical results for cross-validation estimation of classification accuracy?
I don't know much about these kinds of proofs, but I think John Langford's thesis might be a good reference. Here's a relevant page: http://hunch.net/~jl/projects/prediction_bounds/prediction_bounds.html and the probably relevant section of the thesis: http://hunch.net/~jl/projects/prediction_bounds/thesis/mathml/thesisse15.xml#x22-300004.1
Theoretical results for cross-validation estimation of classification accuracy?
I don't know much about these kinds of proofs, but I think John Langford's thesis might be a good reference. Here's a relevant page: http://hunch.net/~jl/projects/prediction_bounds/prediction_bounds.h
Theoretical results for cross-validation estimation of classification accuracy? I don't know much about these kinds of proofs, but I think John Langford's thesis might be a good reference. Here's a relevant page: http://hunch.net/~jl/projects/prediction_bounds/prediction_bounds.html and the probably relevant section of the thesis: http://hunch.net/~jl/projects/prediction_bounds/thesis/mathml/thesisse15.xml#x22-300004.1
Theoretical results for cross-validation estimation of classification accuracy? I don't know much about these kinds of proofs, but I think John Langford's thesis might be a good reference. Here's a relevant page: http://hunch.net/~jl/projects/prediction_bounds/prediction_bounds.h
42,967
Informative censoring caused by cesarean section
When I initially wrote the comments below I had assumed that the Heckman estimator could be used for dichotomous outcomes, but the second paper I cite says there is no direct analog. Hopefully someone can point to different and more applicable resources. I still leave my initial comment up as I still feel those papers are helpful. I'm not sure how acceptable it would be viewed to use OLS (as opposed to logistic regression) simply so you can incorporate the Heckman correction estimate. The work of James Heckman would be applicable to your problem, especially if you have an instrument with which you can estimate the probability of being chosen for a C-section independent of trauma risk. Sample Selection Bias as a Specification Error by: James J. Heckman Econometrica, Vol. 47, No. 1. (1979), pp. 153-161. PDF version Also as an intro into the logic of the Heckman selection estimator intended for a largely non-technical audience, I enjoy this paper Is the Magic Still There? The Use of the Heckman Two-Step Correction for Selection Bias in Criminology by: Shawn Bushway, Brian Johnson, Lee Slocum Journal of Quantitative Criminology, Vol. 23, No. 2. (1 June 2007), pp. 151-178. PDF version
Informative censoring caused by cesarean section
When I initially wrote the comments below I had assumed that the Heckman estimator could be used for dichotomous outcomes, but the second paper I cite says there is no direct analog. Hopefully someone
Informative censoring caused by cesarean section When I initially wrote the comments below I had assumed that the Heckman estimator could be used for dichotomous outcomes, but the second paper I cite says there is no direct analog. Hopefully someone can point to different and more applicable resources. I still leave my initial comment up as I still feel those papers are helpful. I'm not sure how acceptable it would be viewed to use OLS (as opposed to logistic regression) simply so you can incorporate the Heckman correction estimate. The work of James Heckman would be applicable to your problem, especially if you have an instrument with which you can estimate the probability of being chosen for a C-section independent of trauma risk. Sample Selection Bias as a Specification Error by: James J. Heckman Econometrica, Vol. 47, No. 1. (1979), pp. 153-161. PDF version Also as an intro into the logic of the Heckman selection estimator intended for a largely non-technical audience, I enjoy this paper Is the Magic Still There? The Use of the Heckman Two-Step Correction for Selection Bias in Criminology by: Shawn Bushway, Brian Johnson, Lee Slocum Journal of Quantitative Criminology, Vol. 23, No. 2. (1 June 2007), pp. 151-178. PDF version
Informative censoring caused by cesarean section When I initially wrote the comments below I had assumed that the Heckman estimator could be used for dichotomous outcomes, but the second paper I cite says there is no direct analog. Hopefully someone
42,968
Informative censoring caused by cesarean section
As another option here, how about using a multinomial logistic regression (with the outcomes being trauma, [non-elective] caesarean, and no trauma?) I'm not entirely sure if this approach will fully address the issue of bias, but one would get some measures of association about the associations between e.g. fetus size and having a non-elective caesarean section, and could see these side by side with the associations between the same exposure and trauma, and at the least make some qualitative comparisons about magnitudes of effect. [This question was bumped by an answer edit, with an inactive original questioner, so I'm really just floating another idea here (plus I'm interested in this kind of question)]
Informative censoring caused by cesarean section
As another option here, how about using a multinomial logistic regression (with the outcomes being trauma, [non-elective] caesarean, and no trauma?) I'm not entirely sure if this approach will fully
Informative censoring caused by cesarean section As another option here, how about using a multinomial logistic regression (with the outcomes being trauma, [non-elective] caesarean, and no trauma?) I'm not entirely sure if this approach will fully address the issue of bias, but one would get some measures of association about the associations between e.g. fetus size and having a non-elective caesarean section, and could see these side by side with the associations between the same exposure and trauma, and at the least make some qualitative comparisons about magnitudes of effect. [This question was bumped by an answer edit, with an inactive original questioner, so I'm really just floating another idea here (plus I'm interested in this kind of question)]
Informative censoring caused by cesarean section As another option here, how about using a multinomial logistic regression (with the outcomes being trauma, [non-elective] caesarean, and no trauma?) I'm not entirely sure if this approach will fully
42,969
Factor Significance for Factor Model
The short answer is: there is something you can do, but I am not sure how meaningful it will be. The long answer: I will give the long answer for a simple model where we have only one unknown latent factor. The idea carries over to the more general case, albeit with more complications. It follows from your factor model that: $E(y) = B E(x)$ and $Var(y) = B B^T Var(x) + \sigma^2 I$ (Note: keep in mind that in this simplified model $x$ is a scalar; we are dealing with only one factor). Since the data is normally distributed, the above two equations determine your likelihood function. However, do note that you have identification issues here, as you have to estimate $B$ and $x$ simultaneously. The traditional way to solve the identification issue is to assume that: $B(1) = 1$ (i.e., set the factor loading of the first element to 1; otherwise we can scale $B$ by $\alpha$ and scale $E(x)$ by $\frac{1}{\alpha}$ and obtain an identical $E(y)$). It follows that: $E(x) = E(y(1))$. In other words, the above identification constraint on $B$ has effectively constrained the mean of our factor to be the sample mean of our first dependent variable (i.e., $y(1)$). For similar reasons, we assume that: $Var(x) = 1$ (otherwise you could just scale $B$ by $\sqrt{\alpha}$ and scale $Var(x)$ by $\frac{1}{\alpha}$ and your likelihood function will not change). Since we impose an identification constraint on the distribution of $x$ (which is in some sense arbitrary), I am not sure how meaningful it is to perform statistical testing for a factor. You could compute factor scores and perform standard statistical testing using the mean of $E(x) = E(y(1))$ and $Var(x) = 1$. So, the answer to your question depends on: is the conclusion from the above statistical test invariant to your choice of identification constraints? I do not know the answer to the above question.
Factor Significance for Factor Model
The short answer is: There is something you can do but I am not sure how meaningful it will be. The Long answer: I will give the long answer for a simple model where we have only one unknown latent f
Factor Significance for Factor Model The short answer is: there is something you can do, but I am not sure how meaningful it will be. The long answer: I will give the long answer for a simple model where we have only one unknown latent factor. The idea carries over to the more general case, albeit with more complications. It follows from your factor model that: $E(y) = B E(x)$ and $Var(y) = B B^T Var(x) + \sigma^2 I$ (Note: keep in mind that in this simplified model $x$ is a scalar; we are dealing with only one factor). Since the data is normally distributed, the above two equations determine your likelihood function. However, do note that you have identification issues here, as you have to estimate $B$ and $x$ simultaneously. The traditional way to solve the identification issue is to assume that: $B(1) = 1$ (i.e., set the factor loading of the first element to 1; otherwise we can scale $B$ by $\alpha$ and scale $E(x)$ by $\frac{1}{\alpha}$ and obtain an identical $E(y)$). It follows that: $E(x) = E(y(1))$. In other words, the above identification constraint on $B$ has effectively constrained the mean of our factor to be the sample mean of our first dependent variable (i.e., $y(1)$). For similar reasons, we assume that: $Var(x) = 1$ (otherwise you could just scale $B$ by $\sqrt{\alpha}$ and scale $Var(x)$ by $\frac{1}{\alpha}$ and your likelihood function will not change). Since we impose an identification constraint on the distribution of $x$ (which is in some sense arbitrary), I am not sure how meaningful it is to perform statistical testing for a factor. You could compute factor scores and perform standard statistical testing using the mean of $E(x) = E(y(1))$ and $Var(x) = 1$. So, the answer to your question depends on: is the conclusion from the above statistical test invariant to your choice of identification constraints? I do not know the answer to the above question.
Factor Significance for Factor Model The short answer is: There is something you can do but I am not sure how meaningful it will be. The Long answer: I will give the long answer for a simple model where we have only one unknown latent f
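The scaling argument behind the identification constraints can be checked numerically. For a one-factor model $y = Bx + e$, rescaling $B$ by $\alpha$ while shrinking $E(x)$ by $1/\alpha$ and $Var(x)$ by $1/\alpha^2$ leaves the implied mean and covariance (and hence the Gaussian likelihood) unchanged. All numbers below are arbitrary:

```python
import math

def implied_moments(B, mu_x, var_x, sigma2):
    """Mean vector and covariance matrix implied by a one-factor model
    y = B*x + e, with B a loading vector and x a scalar factor:
    E(y) = B*E(x),  Var(y) = B B^T Var(x) + sigma^2 I."""
    mean = [b * mu_x for b in B]
    cov = [[B[i] * B[j] * var_x + (sigma2 if i == j else 0.0)
            for j in range(len(B))] for i in range(len(B))]
    return mean, cov

B, mu_x, var_x, sigma2 = [1.0, 0.8, 1.3], 2.0, 1.0, 0.5
alpha = 3.7   # any nonzero rescaling: the likelihood cannot tell these apart
m1, c1 = implied_moments(B, mu_x, var_x, sigma2)
m2, c2 = implied_moments([alpha * b for b in B], mu_x / alpha,
                         var_x / alpha ** 2, sigma2)
```

Since `m1 == m2` and `c1 == c2` up to rounding, the data alone cannot pin down the scale of $B$ and $x$, which is exactly why constraints such as $B(1)=1$ and $Var(x)=1$ are imposed before any testing.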
42,970
Factor Significance for Factor Model
Just speaking on a practical level, in my discipline (psychology) I have never seen this done for pure factor analysis. That being said, the significance (fit really) of a statistical model is normally tested by the use of Structural Equation Modelling, where you attempt to reproduce the observed matrix of data from the structure you have proposed through the use of factor analysis. The SEM, lavaan or OpenMx Packages for R will all do this. Technically, the Chi square test will tell you if a factor model fits perfectly, but this statistic is almost always significant with any appreciable (200+) sample size. The psych package for R also gives you the Bayesian Information Criterion as a measure of fit after you specify a factor model, but I am unsure as to how useful this is.
Factor Significance for Factor Model
Just speaking on a practical level, in my discipline (psychology) I have never seen this done for pure factor analysis. That being said, the significance (fit really) of a statistical model is normall
Factor Significance for Factor Model Just speaking on a practical level, in my discipline (psychology) I have never seen this done for pure factor analysis. That being said, the significance (fit really) of a statistical model is normally tested by the use of Structural Equation Modelling, where you attempt to reproduce the observed matrix of data from the structure you have proposed through the use of factor analysis. The SEM, lavaan or OpenMx Packages for R will all do this. Technically, the Chi square test will tell you if a factor model fits perfectly, but this statistic is almost always significant with any appreciable (200+) sample size. The psych package for R also gives you the Bayesian Information Criterion as a measure of fit after you specify a factor model, but I am unsure as to how useful this is.
Factor Significance for Factor Model Just speaking on a practical level, in my discipline (psychology) I have never seen this done for pure factor analysis. That being said, the significance (fit really) of a statistical model is normall
42,971
Factor Significance for Factor Model
If the problem at issue consists of testing for the optimal number of factors, Jushan Bai and Serena Ng in several articles provide a test based on AIC/BIC that minimizes, for different options, the variance of the error. They supply to my knowledge the most updated approach to resolve this issue. See also Alexei Onatski who uses a different method based on eigenvalues of the factor covariance matrix.
Factor Significance for Factor Model
If the problem at issue consists of testing for the optimal number of factors, Jushan Bai and Serena Ng in several articles provide a test based on AIC/BIC that minimizes, for different options, the v
Factor Significance for Factor Model If the problem at issue consists of testing for the optimal number of factors, Jushan Bai and Serena Ng in several articles provide a test based on AIC/BIC that minimizes, for different options, the variance of the error. They supply to my knowledge the most updated approach to resolve this issue. See also Alexei Onatski who uses a different method based on eigenvalues of the factor covariance matrix.
Factor Significance for Factor Model If the problem at issue consists of testing for the optimal number of factors, Jushan Bai and Serena Ng in several articles provide a test based on AIC/BIC that minimizes, for different options, the v
42,972
Factor Significance for Factor Model
I am not sure if I got your question right, but if you already have a number of exact factors, I guess you can use a chi-squared test to see if the factor loading of your concern is significant, as we do in multiple regression. So here I assume you know in advance the exact values of the factors and the criterion variable; it's much like multiple regression. If you have multiple criterion variables, then you might want to test if the factor loadings for a specific factor are significantly different from $(0, 0, 0, \ldots, 0)$. We can approach this problem from a multiple-comparison or multivariate viewpoint.
Factor Significance for Factor Model
I am not sure if I got your question right, but if you already have a number of exact factors, I guess you can use chi-squared test to see if the factor loading of your concern is significant as we
Factor Significance for Factor Model I am not sure if I got your question right, but if you already have a number of exact factors, I guess you can use a chi-squared test to see if the factor loading of your concern is significant, as we do in multiple regression. So here I assume you know in advance the exact values of the factors and the criterion variable; it's much like multiple regression. If you have multiple criterion variables, then you might want to test if the factor loadings for a specific factor are significantly different from $(0, 0, 0, \ldots, 0)$. We can approach this problem from a multiple-comparison or multivariate viewpoint.
Factor Significance for Factor Model I am not sure if I got your question right, but if you already have a number of exact factors, I guess you can use chi-squared test to see if the factor loading of your concern is significant as we
42,973
How to do factorial analysis for a non-normal and heteroscedastic data?
Package vegan implements some permutation testing procedures using a distance based approach. For factor analysis, you should take a look at section 5 of the documentation. There's also more information in the paper: On distance-based permutation tests for between-group comparisons (Reiss et al, 2010) You might also be interested in skimming this table: Choosing the Correct Statistical Test
42,974
How to do factorial analysis for a non-normal and heteroscedastic data?
The Skillings-Mack test is a general Friedman-type test that can be used in almost any block design with an arbitrary missing-data structure. It's part of the asbio package for R, and there's a user-written package skilmack for Stata. Skillings, J. H., and G. A. Mack. 1981. On the use of a Friedman-type statistic in balanced and unbalanced block designs. Technometrics 23: 171-177. Aho, K. asbio: A collection of statistical tools for biologists. Version 0.3-24. 2010-9-18. Comprehensive R Archive Network (CRAN) 2010-09-19. Chatfield, M. and Mander, A. The Skillings–Mack test (Friedman test when there are missing data). Stata Journal 9(2):299-305.
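For the complete-data special case (no missing cells), the Skillings-Mack statistic reduces to the plain Friedman statistic, which is easy to sketch (plain Python, toy data, no tie correction):

```python
def friedman_statistic(blocks):
    """Friedman chi-squared statistic for a complete block design.
    `blocks` is a list of rows, one per block, with one value per
    treatment; assumes no ties within a block (no tie correction)."""
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for row in blocks:
        # rank treatments within the block (1 = smallest value)
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return (12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1))
            - 3.0 * n * (k + 1))

# Treatment 3 best in every block, treatment 1 worst -> maximal statistic:
q = friedman_statistic([[1, 2, 3], [2, 5, 9], [1.5, 2.5, 4.0]])
```

With unbalanced or missing data this formula no longer applies, which is exactly the gap the Skillings-Mack generalisation fills.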
42,975
How to do factorial analysis for a non-normal and heteroscedastic data?
As you suggest you "designed" an experiment, it would be better if you could give a description of your design and data set. Even if the data are heteroscedastic and non-normal, some variable transformations might help, and you may be able to take advantage of the design. The t-test is fairly robust to the normality assumption.
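If you do end up comparing two groups after a transformation, a version of the t-test that does not assume equal variances (Welch's) is a safer default under heteroscedasticity; a minimal sketch of the statistic and its approximate degrees of freedom (plain Python, made-up numbers):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and Satterthwaite degrees of freedom
    (two-sample comparison without assuming equal variances)."""
    va = statistics.variance(a) / len(a)
    vb = statistics.variance(b) / len(b)
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t([1.0, 1.2, 0.8, 1.1], [2.0, 2.6, 1.9, 2.4])
```

The data here are invented; the point is only that the denominator and df adapt to the two groups' separate variances.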
42,976
Interpreting output of igraph's fastgreedy.community clustering method
The function used for this purpose is community.to.membership(graph, merges, steps, membership=TRUE, csize=TRUE). It extracts the membership based on the fastgreedy.community results. You have to provide the number of steps, i.e. how many merges should be performed. The optimal number of steps (merges) is the one that produces the maximal modularity.
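Concretely, once you have the modularity value after each merge, choosing the optimal number of steps is just an argmax (a schematic plain-Python sketch, not the igraph API; the modularity values are made up):

```python
def best_steps(modularity_per_step):
    """Number of merges at which modularity is maximal."""
    return max(range(len(modularity_per_step)),
               key=lambda i: modularity_per_step[i])

# Hypothetical modularity trace over successive merges; peak at step 3:
steps = best_steps([0.00, 0.12, 0.31, 0.42, 0.38])
```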
42,977
What is a statistical journal with quick turnaround?
Maybe Statistics Surveys (but I think they are seeking reviews more than short notes), Statistica Sinica, or the Electronic Journal of Statistics. They are not as highly cited as SPL, but I hope this may help.
42,978
How Large a Difference Can Be Expected Between Standard GARCH and Asymmetric GARCH Volatility Forecasts?
Generally, by not allowing for asymmetry, you expect the effect of shocks to last longer: i.e. the half-life increases (the half-life is the number of units of time, after a 1 S.D. shock to $\epsilon_{t-1}$, for $\hat{\sigma}_t|I_{t-1}$ to come back to its unconditional value). Here is a code snippet that downloads stock data, fits (e)GARCH and computes half-lives, in R:

install.packages("rgarch", repos="http://R-Forge.R-project.org")
install.packages("fGarch")
install.packages("fImport")
library(rgarch)
library(fImport)
library(fGarch)
d1 <- yahooSeries(symbols="ibm", nDaysBack=1000, frequency=c("daily"))[,4]
dprice1 <- diff(log(as.numeric(d1[length(d1):1])))
spec1 <- ugarchspec(variance.model=list(model="eGARCH", garchOrder=c(1,1)),
                    mean.model=list(armaOrder=c(0,0), include.mean=T))
spec2 <- ugarchspec(variance.model=list(model="fGARCH", submodel="GARCH", garchOrder=c(1,1)),
                    mean.model=list(armaOrder=c(0,0), include.mean=T))
fit1 <- ugarchfit(data=dprice1, spec=spec1)
fit2 <- ugarchfit(data=dprice1, spec=spec2)
halflife(fit1)
halflife(fit2)

The reason for this is that, generally speaking, negative spells tend to be more persistent. If you don't control for this, you will generally bias the $\beta$ (i.e. persistence parameters) downwards.
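For reference, the half-life reported for a GARCH(1,1) fit is a simple function of the persistence $\alpha+\beta$; a sketch of the computation (plain Python rather than R, with made-up parameter values):

```python
import math

def garch_half_life(alpha, beta):
    """Half-life of a volatility shock in a GARCH(1,1) model:
    periods for the shock's effect on conditional variance to halve."""
    persistence = alpha + beta
    if not 0 < persistence < 1:
        raise ValueError("requires 0 < alpha + beta < 1 (stationarity)")
    return math.log(0.5) / math.log(persistence)

# Hypothetical estimates: persistence 0.97 gives roughly a 23-period half-life
h = garch_half_life(0.07, 0.90)
```

Lower persistence gives a shorter half-life, which is the bias direction the answer describes.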
42,979
How Large a Difference Can Be Expected Between Standard GARCH and Asymmetric GARCH Volatility Forecasts?
There is a significant difference, and there are a couple of published papers to that effect: Comparative Performance of Volatility Models for Oil Price, International Journal of Energy Economics and Policy, Vol. 2, No. 3, 2012, pp. 167-183, ISSN: 2146-4553, www.econjournals.com, and many more.
42,980
Robust nonparametric estimation of hazard/survival functions based on low count data
This is probably a stupid answer (I am new here), but if you want to estimate the hazard function from observations of an initial population that slowly died away (i.e. had events and then were censored), isn't that what the Nelson-Aalen estimator was built to do? We could have another conversation about the reliability of the available classical confidence intervals -- my understanding is that there basically do not exist functioning exact confidence intervals that guarantee their coverage even over small sample sizes, since such an interval would need to work over all distributions of censoring time. (Maybe the problem is simpler when individuals are always censored exactly after their first event.) And mapping out the coverage of an approximate interval precisely would take work. But if you just need a point estimate, the Nelson-Aalen estimator seems to do the trick. (It's a lot like the Kaplan-Meier estimate for the survival function...) If you want to calculate an a posteriori distribution on a whole family of possible hazard functions, and your prior is that they are drawn from the Gaussian processes with certain statistics, can you explain further what the difficulty is? If there isn't agreement on the covariance matrix, then that needs to be part of the prior -- that the covariance matrix is drawn from some distribution. You're not going to get around having to state a prior if the goal is a posterior.
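For completeness, the Nelson-Aalen point estimate itself is a short computation: the cumulative hazard is the running sum of (events at $t$) / (number at risk just before $t$). A sketch (plain Python, toy data):

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard estimate.
    times: observed time per subject; events: 1 = event, 0 = censored.
    Returns (time, cumulative hazard) pairs at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    H, out, i = 0.0, [], 0
    while i < len(order):
        t = times[order[i]]
        d = n_t = 0
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]   # events at this time
            n_t += 1                # subjects leaving the risk set
            i += 1
        if d:
            H += d / at_risk
            out.append((t, H))
        at_risk -= n_t
    return out

# 5 toy subjects: events at t=1, 2, 3; censoring at t=2 and t=4
est = nelson_aalen([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

This gives only the point estimate; the confidence-interval caveats in the answer still apply.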
42,981
Should we use measured vs. modelled or modelled vs. measured?
Whether a value is observed or modelled is irrelevant. What matters is whether or not a value has an error or a random distribution that you want to study.

Two common cases

It is common to consider a conditional distribution of some variable based on one or more other variables. Then we have the value for which we want to determine the (conditional) distribution on the y-axis and the value on which we condition on the x-axis. Physics experiments are a typical example. In those cases an experiment is often performed by changing/controlling some variable, and this variable is considered an 'independent' variable which has little measurement error. Then the y-axis represents an observed variable that is the 'dependent' variable. The relevant question in such experiments is to determine $Y|X$, the distribution of $Y$ given $X$. And that is exactly what ordinary least squares regression does.

In observational studies, as often seen in the fields of economics, nutrition and health, or sociology, there are no 'independent' variables. The experimenter has no control over variables and is just observing patterns. In that case there is no natural variable to be placed on the x-axis. Still, one might be interested in a particular direction of patterns, for instance because of some application or goal for which one performs the study. An example could be a doctor who wants to predict some risk based on a set of variables. The risk is modelled as a function of some variable, so risk is on the y-axis and the variable on the x-axis (but that doesn't mean that there is a causal relationship in that direction; it is just that the doctor wants to know the statistical model, and not the causal model).

Comments on your case

For your case it is not directly obvious what should go on the x-axis.
I personally prefer to have the modelled value on the x-axis, but that is because I am often dealing with physics experiments where the modelled variable is a well controlled variable and the observed variable contains the error. Plotting the modelled variable on the x-axis can also be a way to decrease the dimensionality. If you have multiple variables that you control in an experiment, then it can be difficult to plot the output/observed value as a function of all those variables (because it is multidimensional). But instead of all those variables you can replace them by the modelled variable. This might be your case, as you could view your data as the 3d plot below. This does not always need to be the situation. The modelled variable can depend on controlled variables, but there might be a degree of error between the actual values and the values that were set (e.g. you might control some concentration of a solution, but due to experimental variations the concentration has error).

Relating to regression

You might want to know just the statistical relationship: how well does my model describe observations? What is the error that we make with a particular model? For such questions you want to characterise the distribution of the observations as a function of the modelled values. You might want to use regression as a goodness-of-fit test. For this case you want to use a correct representation of the error distribution. You might not just have statistical variation in the observations but also in the modelled values. This is a more nuanced situation than either regressing observed vs modelled or regressing modelled vs observed. In this case you want to express both errors together and use something like Deming regression. Another note on your plot: possibly you might want to change the scales of the axes such that the line observation=modelled has a 45 degree angle.
The use of 45 degree angles makes it easier to compare differences (see also this question and answer about slopegraphs).

Related questions

Inverse Regression vs Reverse Regression
Effect of switching response and explanatory variable in simple linear regression
If in this problem I regress $x$ on $y$ instead of $y$ on $x$, do I need to use an error-in-variables model?
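Since Deming regression comes up above: for the simplest case, with an assumed error-variance ratio $\delta$, the slope has a closed form; a sketch (plain Python, toy data; assumes the sample covariance of x and y is non-zero):

```python
import math

def deming(x, y, delta=1.0):
    """Deming regression slope and intercept, for error-variance ratio
    delta = var(err_y) / var(err_x) (delta = 1 gives the orthogonal fit)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, my - slope * mx

# Points exactly on y = 2x + 1 are recovered exactly:
slope, intercept = deming([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Unlike ordinary least squares, the fit treats errors in both axes symmetrically (up to the assumed ratio $\delta$).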
42,982
How to think about counterfactuals?
How are counterfactuals useful and how should I understand them? Counterfactuals are useful in situations in which you observed a set of parameters and you want to reason about other scenarios that are in contradiction with the actual one. They are used for studying individual cases, as opposed to do-operators, which are used for studying average effects by keeping all the variables in the network fixed (you don't change their values) and by just setting $X$ to $x$. I'll leave some references at the end of this answer that explain this in a better way. I'll try to be more specific on your case by answering your other question, in which you were referring to a regression model. How do they differ from calculating $y$ for a given value of $x$, or from the $do()$ operator? So, let us suppose you have a regression model that predicts values of $y$ given an observed input $x$. Let me first point out that such a model, like other machine learning models, captures only correlations and not causality, because you train it with a dataset of observed features and related target values, so you may have unobserved confounders making you predict spurious correlations. But let us suppose that your model is somehow able to learn interventional probabilities $p(y|do(X=x))$ instead of observational probabilities $p(y|x)$$^1$. Let us also suppose to be in the confounded scenario of the following image. Let us suppose that we observe $X=x_1$. By performing an intervention using your model you are able to get the average value of $y$ after imposing $do(X=x_2)$ (that is, $p(y|do(X=x_2))$), which may differ from the counterfactual value of $y$ had $X$ been $x_2$ (that is, $p(y|X=x_1,do(X=x_2))$), because in the second case you exploit the extra information you get from your specific observation to obtain information on the value of the unobserved variable $U$ as well.
In particular, counterfactuals require performing 3 steps: Abduction: update the probability of unobserved factors $P(u)$ exploiting the current observation, obtaining $P(u|e)$. Action: perform the intervention in the model (that is, $do(X=x_2)$). Prediction: predict the value of $Y$ in the modified model. Please note that this step exploits the updated probabilities from the previous 2 points; this is not the same as just performing the intervention. In my opinion this was a great question; I had to dig into different resources to try to answer it, so I'll leave my references here, maybe they can complement my answer. I found the first answer very useful (especially the example) for getting the differences between do-notation and counterfactuals. I'd suggest you try to run my example on the data tables provided in the first answer. Judea Pearl tweeted about the difference between counterfactuals and do-operations. $^1$ For the sake of completeness, there should be in the literature some models able to capture interventional probabilities if provided with interventional data.
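The three steps are easiest to see on a toy deterministic SCM (a made-up example with mechanism $Y = 2X + U$, which is not from the question):

```python
def counterfactual_y(x_obs, y_obs, x_cf):
    """Abduction-action-prediction on the toy SCM  Y = 2*X + U.
    Abduction:  recover the unobserved U from the observed (x, y).
    Action:     set X to the counterfactual value x_cf.
    Prediction: evaluate Y in the modified model with the SAME U."""
    u = y_obs - 2 * x_obs   # abduction
    return 2 * x_cf + u     # action + prediction

# Observed X=1, Y=5, so U=3; had X been 4, Y would have been 11.
# A bare intervention do(X=4) would instead give 8 + E[U], ignoring
# what this specific observation reveals about U.
y_cf = counterfactual_y(1.0, 5.0, 4.0)
```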
42,983
Why sample size is not a part of sufficient statistic?
It is typical practice that the sample size is considered (implicitly) to be a known constant unless we specify the contrary in the analysis. This practice saves time by alleviating the need to specify that the sample size is known, which is true in the vast majority of statistical applications. You can of course proceed on the basis that $n$ is also an unknown parameter in the model. In this latter case your log-likelihood function would be: $$\ell_{\mathbf{x}_n}(n,\theta) = \log {n \choose T(\mathbf{x}_n)} + T(\mathbf{x}_n) \log(\theta) + (n-T(\mathbf{x}_n)) \log(1-\theta),$$ and the minimal sufficient statistic is indeed $(n,T(\mathbf{x}_n))$ (so the statistic $T(\mathbf{x}_n)$ is not sufficient in this case). (Note: I do not agree with the comment by Xi'an asserting that $n$ must be the outcome of a random variable to be included as part of the sufficient statistic; the concept of sufficiency is a classical concept, and in that domain the notion of an "unknown constant" is perfectly valid. There is no need to create a Bayesian model that specifies a distribution for $n$ in order for it to be part of the sufficient statistic.)
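A quick numerical illustration that, with $n$ treated as known, the Bernoulli log-likelihood depends on the data only through $T(\mathbf{x}_n)=\sum_i x_i$ (plain Python; two different samples with the same sum give identical log-likelihoods):

```python
import math

def bernoulli_loglik(sample, theta):
    """Log-likelihood of an i.i.d. Bernoulli(theta) sample of 0/1 values.
    Depends on the data only through n = len(sample) and t = sum(sample)."""
    n, t = len(sample), sum(sample)
    return t * math.log(theta) + (n - t) * math.log(1 - theta)

# Same n and same sufficient statistic T = 2, different orderings:
l1 = bernoulli_loglik([1, 1, 0, 0, 0], 0.3)
l2 = bernoulli_loglik([0, 0, 1, 0, 1], 0.3)
```

If $n$ were unknown, two samples with the same sum but different lengths would of course give different values, matching the point that $(n, T)$ is then the sufficient statistic.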
42,984
VAR() or dynlm() or lm()
Without seeing your code, it is hard to spell out the difference in results. But it sure is possible to get the same results in either package, as - as you correctly point out - all three commands ultimately just run OLS regressions. It is with different degrees of ease, though, reflecting the purpose of the packages. lm is, of course, for all sorts of regressions, while the other two explicitly have time series regressions in mind, and vars even multivariate ones. Here is an example.

library(dynlm)
library(vars)

x <- ts(rnorm(100)) # ts is relevant for dynlm, see discussion in comments below!
y <- ts(rnorm(100))

# at a glance
all.equal(c(coef(dynlm(x ~ L(x, 1:3) + L(y, 1:3))),
            coef(dynlm(y ~ L(x, 1:3) + L(y, 1:3)))),
          c(coef(VAR(cbind(x,y), p = 3, type = "const"))$x[c(7,1,3,5,2,4,6),1],
            coef(VAR(cbind(x,y), p = 3, type = "const"))$y[c(7,1,3,5,2,4,6),1]),
          c(coef(lm(x[4:100]~x[3:99]+x[2:98]+x[1:97]+y[3:99]+y[2:98]+y[1:97])),
            coef(lm(y[4:100]~x[3:99]+x[2:98]+x[1:97]+y[3:99]+y[2:98]+y[1:97]))),
          check.attributes=F)

dynlm(x ~ L(x, 1:3) + L(y, 1:3))
dynlm(y ~ L(x, 1:3) + L(y, 1:3))
VAR(cbind(x,y), p = 3, type = "const")
lm(x[4:100]~x[3:99]+x[2:98]+x[1:97]+y[3:99]+y[2:98]+y[1:97])
lm(y[4:100]~x[3:99]+x[2:98]+x[1:97]+y[3:99]+y[2:98]+y[1:97])

Output:

> dynlm(x ~ L(x, 1:3) + L(y, 1:3))

Time series regression with "ts" data:
Start = 4, End = 100

Call:
dynlm(formula = x ~ L(x, 1:3) + L(y, 1:3))

Coefficients:
(Intercept)   L(x, 1:3)1   L(x, 1:3)2   L(x, 1:3)3   L(y, 1:3)1   L(y, 1:3)2   L(y, 1:3)3
   -0.14797     -0.13608      0.04310     -0.14119      0.03736     -0.20556     -0.07980

> dynlm(y ~ L(x, 1:3) + L(y, 1:3))

Time series regression with "ts" data:
Start = 4, End = 100

Call:
dynlm(formula = y ~ L(x, 1:3) + L(y, 1:3))

Coefficients:
(Intercept)   L(x, 1:3)1   L(x, 1:3)2   L(x, 1:3)3   L(y, 1:3)1   L(y, 1:3)2   L(y, 1:3)3
   0.001093     0.008268     0.101429    -0.122984     0.039118     0.060185    -0.194614

> VAR(cbind(x,y), p = 3, type = "const")

VAR Estimation Results:
=======================

Estimated coefficients for equation x:
======================================
Call:
x = x.l1 + y.l1 + x.l2 + y.l2 + x.l3 + y.l3 + const

       x.l1        y.l1        x.l2        y.l2        x.l3        y.l3       const
-0.13608446  0.03735653  0.04310129 -0.20555950 -0.14119156 -0.07980048 -0.14797419

Estimated coefficients for equation y:
======================================
Call:
y = x.l1 + y.l1 + x.l2 + y.l2 + x.l3 + y.l3 + const

       x.l1        y.l1        x.l2        y.l2        x.l3        y.l3       const
0.008267836 0.039117666 0.101428691 0.060184617 -0.122984226 -0.194613595 0.001093310

> lm(x[4:100]~x[3:99]+x[2:98]+x[1:97]+y[3:99]+y[2:98]+y[1:97])

Call:
lm(formula = x[4:100] ~ x[3:99] + x[2:98] + x[1:97] + y[3:99] + y[2:98] + y[1:97])

Coefficients:
(Intercept)      x[3:99]      x[2:98]      x[1:97]      y[3:99]      y[2:98]      y[1:97]
   -0.14797     -0.13608      0.04310     -0.14119      0.03736     -0.20556     -0.07980

> lm(y[4:100]~x[3:99]+x[2:98]+x[1:97]+y[3:99]+y[2:98]+y[1:97])

Call:
lm(formula = y[4:100] ~ x[3:99] + x[2:98] + x[1:97] + y[3:99] + y[2:98] + y[1:97])

Coefficients:
(Intercept)      x[3:99]      x[2:98]      x[1:97]      y[3:99]      y[2:98]      y[1:97]
   0.001093     0.008268     0.101429    -0.122984     0.039118     0.060185    -0.194614
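Since the point is that all three interfaces ultimately run the same OLS regressions, here is a numpy sketch (my own illustration, not tied to the R packages' internals) showing that estimating a VAR(3) with one multivariate least-squares solve gives exactly the per-equation OLS coefficients:

```python
import numpy as np

rng = np.random.default_rng(42)
T, p = 100, 3
Y = np.column_stack([rng.normal(size=T), rng.normal(size=T)])  # series x and y

# Regressor matrix: intercept plus p lags of both series, so that row t of Yt
# is explained by rows t-1, ..., t-p of Y (mirroring lm(x[4:100] ~ x[3:99] + ...))
lagged = [Y[p - k:T - k] for k in range(1, p + 1)]
X = np.column_stack([np.ones(T - p)] + lagged)
Yt = Y[p:]

# One multivariate least-squares solve for both equations at once ...
B_joint, *_ = np.linalg.lstsq(X, Yt, rcond=None)
# ... equals two separate single-equation OLS fits
b_x, *_ = np.linalg.lstsq(X, Yt[:, 0], rcond=None)
b_y, *_ = np.linalg.lstsq(X, Yt[:, 1], rcond=None)
```

This equivalence is why dynlm, VAR, and lm agree up to the ordering and naming of the coefficients.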
42,985
How and why do epidemiology and econometrics models handle multi-collinearity differently?
The "model building" process is a misnomer. A well-conducted analysis pre-specifies the variables, and their encoding, to be included in the final model based on the scientific expertise of the discipline and on the statistical power of the sample. We can't tell from statistical output alone whether a variable is a "confounder" or a "collider"; it boils down to the agreement among experts and commitment to the initial analysis specifications. Model building doesn't mean we stack variables like bricks until we have a bridge from data to publication.

"Collinearity" and "parsimony" are abstract concepts that don't factor into the analysis except as a diagnostic. Of course we use extensive diagnostics to look at plots and understand the contributions of the various variables. When "collinearity" specifically refers to extreme collinearity, meaning the results don't converge or they are unstable, some advanced methods or other remediation is needed; I think all accepted methods are viable whether you are an economist or an epidemiologist. Similarly, even if an adjustment variable has a perfectly non-significant association with the outcome, you can't remove it on the basis of its non-significant result, since you would be deviating from your initial proposed analysis.

Economics and epidemiology both have a wide breadth of literature, meaning there are bad examples, specifically examples where the technical discussion wanders too far into the weeds to represent a meaningful presentation. This is further complicated since good articles typically have an extremely compact summary statement on the model choice. All this makes it hard to have a true head-to-head comparison between the disciplines.
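One standard collinearity diagnostic alluded to above is the variance inflation factor, $\mathrm{VIF}_j = 1/(1 - R^2_j)$, where $R^2_j$ comes from regressing predictor $j$ on the remaining predictors. A numpy sketch (my own illustration, with made-up data containing one nearly collinear pair):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the design matrix X."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        # Regress column j on an intercept plus all other columns
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - resid.var() / X[:, j].var()
        out[j] = 1 / (1 - r2)
    return out

rng = np.random.default_rng(0)
z = rng.normal(size=500)
X = np.column_stack([z + 0.1 * rng.normal(size=500),  # nearly collinear pair
                     z + 0.1 * rng.normal(size=500),
                     rng.normal(size=500)])           # independent predictor
v = vif(X)  # large for the first two columns, near 1 for the third
```

As the answer says, a large VIF is a diagnostic to investigate, not by itself a license to drop a pre-specified variable.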
42,986
When are ROC curves to compare imaging tests valid? (Focus on the example below)
The particular paper in question, P.H. Horne et al., A Novel Radiographic Indicator of Developmental Cervical Stenosis, J Bone Joint Surg Am. (2016) 98:1206-14, seems to be an unfortunate example of what one might call "premature dichotomization." There is an established cutoff of <12 mm in sagittal spinal canal diameter to classify someone as having "cervical stenosis," based on reconstruction from 3D imaging (like computed tomography scans). The authors examined four measurements from more readily available 2D imaging (which is also less expensive and involves much lower radiation doses) in patients who also had canal diameters determined from 3D imaging. The authors examined whether those measurements in 2D could be used to predict cervical stenosis.

This study would have been a great opportunity to model sagittal canal diameter as a function of all these 2D measurements, and see how well true canal diameter could be modeled. Unfortunately, the authors only examined individual correlations of each of those 4 measurements with canal diameter to start, and then looked at correlations of canal diameter with a set of pairwise ratios of 2D measurements. That approach thus threw away the more detailed information that a multiple-regression approach involving all 4 measurements together might have provided.

Then, to evaluate these less-than-ideal pairwise ratios, the authors seem to have ignored the actual measurements of canal diameter, and only tried to predict the 3D-based classifications into stenosis/normal. The receiver operating characteristic (ROC) curves shown in the paper and in this question show how changing the cutoff for each of those ratios affects the sensitivity and specificity of identifying stenosis. A model in which all measurements were used to estimate canal diameter (along with an error estimate), with the call of <12 mm diameter made only afterwards, would probably have been much more useful.

Although this isn't a great paper from a statistical perspective, the questions raised about it are of general interest and deserve discussion. D. Hand, in Measuring classifier performance: a coherent alternative to the area under the ROC curve, Mach Learn (2009) 77: 103–123 (referenced in this related question), provides an important key. Hand considers two classes labeled $k=0$ and $k=1$, prevalences $\pi_k$, and density functions $f_k(s)$ describing the distribution within each class of a score $s$ that is monotonically increasing with the probability of membership in class $1$. The cost of misclassification into class $k$ is $c_k$, with $c$ the cost ratio for misclassification into class $0$, $c = c_0/(c_0+c_1)$. When the cost ratio is expressed this way and you have the correct model for the probability of class membership, the cost-optimal probability cutoff for class assignment is $c$.

Thus a generic measure of model quality might not provide much guidance in applying the model. What's critical is having a well calibrated model of class membership probability, particularly for probabilities near the ultimate decision point if the relative misclassification costs are known. Put another way, any choice of a probability or score cutoff is making an implicit choice about those relative costs.

Hand shows (page 111) that the area under the ROC curve, the AUC, is equivalent to taking an average of the losses corresponding to different cost ratios $c$, where the average is calculated according to the distribution: $$w(c) = \pi_0 f_0 (P_1^{-1}(c)) \left| \frac{dP_1^{-1}(c)}{dc} \right| + \pi_1 f_1 (P_1^{-1}(c)) \left| \frac{dP_1^{-1}(c)}{dc} \right|.$$ Here, $P_1^{-1}(c)$ represents the cost-optimal score/probability threshold for classification. This illustrates two problems with using the AUC to compare different classifiers.

First, as Hand continues:

The implication of this is that the weight distribution over cost ratios $c$, implicitly used in calculating the AUC, depends on the empirical score distributions $f_k$. That is, the weight distribution used to combine different cost ratios $c$, will vary from classifier to classifier. But this is absurd. The beliefs about likely values of $c$ must be obtained from considerations separate from the data: they are part of the problem definition. One cannot change one's mind about how important one regards a misclassification according to which tool one uses to make that classification. Nevertheless, this is effectively what the AUC does—it evaluates different classifiers using different metrics.

Second, the weighted average further depends on the class prevalences, $\pi_0$ and $\pi_1$. That can lead to further confusion, described for example by T.M. Hamill and J. Juras, Measuring forecast skill: is it real skill or is it the varying climatology?, Q. J. R. Meteorol. Soc. (2006), 132: 2905–2923.

Applying these principles to the 3 specific questions with respect to the Horne et al. paper:

1. Is it methodologically correct to compare these different ratios of measurements of the spinal canal (LM/CD, SL/LM, etc.) for accuracy using ROC's? Under what criteria is it OK in general?

For now, put aside the broader problems with experimental design raised at the beginning. If one takes "compare ... accuracy using ROC's" to mean comparing the AUC values, then that can be dangerous in general. In addition to ignoring relative costs of different misclassifications and the problems of different distributions of within-class scores among the classification schemes that Hand discusses, there is a potentially big problem here arising from the prevalence $\pi$ of stenosis. The population in the Horne et al. paper consisted of individuals who already had 2D and 3D imaging for some clinical indication. One probably would not want to apply the same criteria to a broader population in which the prevalence of stenosis might be much lower and relative misclassification costs might differ.

Furthermore, even if one chooses to ignore these problems, the AUC is not very sensitive for distinguishing among models. Again, calibration is key. With the sample sizes typical of such clinical studies, comparisons of model performance are better based on resampling, for example repeating the modeling on multiple bootstrap samples from the data and evaluating on the full data set.

2. Is it correct to derive a cutoff point of 0.735 from the ROC curves?

That choice seems to be made for the point on the ROC that has the farthest perpendicular distance from the diagonal line representing no skill, called (among other things) the maximum Peirce skill score. In A Note On the Maximum Peirce Skill Score, Weather and Forecasting (2007) 22: 1148-1154, A. Manzato says: "it is the ROC point that maximizes the skill of the classifier." Nevertheless, that choice of cutoff does not take the relative misclassification costs into account, as Manzato goes on to demonstrate. Whether that choice is "correct" depends on the intended use of the scoring system and the relative misclassification costs, which Horne et al. don't seem to discuss.

3. And, much less important but curious, wouldn't SL/VB be just as good an (inverse) classifier as LM/CD, indicating a widely open spinal canal?

In general, if a particular scoring system does that good a job of choosing the incorrect class, just choose the other class.

Note, however, that much of the above has to do with problems in comparing different scoring systems. For any one scoring system, the ROC curve still provides a convenient overview of the underlying sensitivity/specificity tradeoff, particularly if the curve is correspondingly labeled with scores. And for any one scoring system, the AUC provides the fraction of pairs of different-class cases for which the difference in relative scores agrees with class membership.
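Two of the points above are easy to check numerically: (a) the AUC equals the fraction of positive/negative pairs that are ranked correctly, and (b) the maximum Peirce skill score (Youden's J = sensitivity + specificity - 1) picks the cutoff farthest from the no-skill diagonal. A numpy sketch with made-up, simulated scores (nothing here comes from the Horne et al. data):

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.normal(1.0, 1.0, 200)  # simulated scores for the "stenosis" class
neg = rng.normal(0.0, 1.0, 300)  # simulated scores for the "normal" class

# (a) AUC as the probability that a random positive outscores a random negative
pairwise_auc = (pos[:, None] > neg[None, :]).mean()  # ties impossible for continuous scores

# The same AUC via the rank (Mann-Whitney U) formula on the pooled sample
scores = np.concatenate([pos, neg])
ranks = scores.argsort().argsort() + 1  # 1-based ranks, no ties
rank_auc = (ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2) / (len(pos) * len(neg))

# (b) the cutoff maximizing Youden's J over all observed thresholds
thresholds = np.sort(scores)
tpr = np.array([(pos >= c).mean() for c in thresholds])
fpr = np.array([(neg >= c).mean() for c in thresholds])
best_cut = thresholds[np.argmax(tpr - fpr)]
```

As discussed above, this J-maximizing cutoff implicitly treats the two misclassification costs as interchangeable; with known costs and prevalence, the cost-optimal cutoff is generally elsewhere.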
42,987
Are there any "convex neural networks"?
Any neural net with at least one hidden layer containing more than one neuron leads to an optimization problem that is not convex. This is true because if you have any (local) optimum for that architecture, you can get another one by switching the weights of two of those neurons. Of course this is not guaranteed to work if the two neurons have different inputs or different activation functions, but still, obtaining a convex optimization problem seems very unlikely anyway. Beware that if you use identity activations (linear neurons) you still don't get a convex problem, because the same argument as above applies: you still get a problem with many redundant solutions. Linear problems can be convex only if there is no over-parametrization, which means using GLMs, not MLPs.
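The permutation argument can be demonstrated numerically: take two weight settings of a tiny one-hidden-layer network that differ only by swapping the hidden units (hence represent the same function and have equal loss), and check that the loss at their midpoint is higher, which a convex objective would forbid. A minimal numpy sketch (my own construction):

```python
import numpy as np

x = np.linspace(-3, 3, 50)
y = 2 * np.tanh(x)  # a target the two-unit network can represent exactly

def loss(w, v):
    """MSE of the network f(x) = v1*tanh(w1*x) + v2*tanh(w2*x)."""
    pred = v[0] * np.tanh(w[0] * x) + v[1] * np.tanh(w[1] * x)
    return np.mean((pred - y) ** 2)

# Two optima related by swapping the hidden units: both compute f(x) = 2*tanh(x)
w_a, v_a = np.array([1.0, -1.0]), np.array([1.0, -1.0])
w_b, v_b = w_a[::-1], v_a[::-1]

l_a, l_b = loss(w_a, v_a), loss(w_b, v_b)      # both (numerically) zero
l_mid = loss((w_a + w_b) / 2, (v_a + v_b) / 2) # midpoint is the all-zero network
# Convexity would require l_mid <= (l_a + l_b) / 2 = 0, but l_mid is strictly positive.
```

The midpoint of the two permuted optima collapses to the zero function, so the loss surface has a "ridge" between two equally good solutions, ruling out convexity.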
42,988
Estimation precision of lower- vs. higher-order moments
Here is what I believe might be a counterexample if the intuition were a general claim, or at least a result that seems to indicate that the answer to 2. might be "not really". The measure of the precision of an estimator of a certain moment that I use here is the variance.

It is well known that the variance of the sample variance, when sampling from a normal population, is $\frac{2\sigma^4}{n-1}$, and that that of the mean is $\sigma^2/n$. So, the former is larger if $$\frac{2\sigma^4}{n-1}>\frac{\sigma^2}{n}$$ or $$\sigma^2>\frac{n-1}{2n},$$ which evidently need not be the case.

n <- 10
sigma.sq <- 4/10 # 9/20 or 4.5/10 would be cutoff here

sim.mean.s2 <- function(n){
  x <- rnorm(n, sd=sqrt(sigma.sq))
  xbar <- mean(x)
  s2 <- var(x)
  return(list(xbar, s2))
}

sims <- matrix(unlist(replicate(1e6, sim.mean.s2(n))), nrow=2)

var(sims[1,]) # may also try moments::moment(sims[1,], 2, central=T) to simulate population variance, but does not matter at many replications
sigma.sq/n

var(sims[2,])
2*sigma.sq^2/(n-1)
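The same check translated to Python (my own translation of the R simulation above, with the same $n$ and $\sigma^2$; at $\sigma^2 = 0.4 < 9/20$ the mean should be the less precisely estimated of the two moments):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_sq, reps = 10, 0.4, 200_000

# reps independent samples of size n from N(0, sigma_sq)
x = rng.normal(0.0, np.sqrt(sigma_sq), size=(reps, n))

# Monte Carlo variances of the sample mean and the (unbiased) sample variance
var_of_mean = x.mean(axis=1).var()
var_of_s2 = x.var(axis=1, ddof=1).var()

# Theory: var_of_mean ~ sigma_sq / n = 0.04,
#         var_of_s2  ~ 2 * sigma_sq**2 / (n - 1) ~ 0.0356
```

So the simulated variance of the second-moment estimator comes out below that of the first-moment estimator, matching the algebra.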
42,989
In reality, there is almost always measurement error in the independent variable(s), so why is this ignored in almost every linear regression model?
Errors in X are ignored for (1) expediency and (2) because if you correct for such errors predictions will be off for future data that have the same degree of errors as occurred in the training data. Correction for errors in X makes regression coefficients properly farther from zero but then they apply only to future corrected X. I wish I had a reference for this. A simulation may be in order.
In reality, there is almost always measurement error in the independent variable(s), so why is this
Errors in X are ignored for (1) expediency and (2) because if you correct for such errors predictions will be off for future data that have the same degree of errors as occurred in the training data.
In reality, there is almost always measurement error in the independent variable(s), so why is this ignored in almost every linear regression model? Errors in X are ignored for (1) expediency and (2) because if you correct for such errors predictions will be off for future data that have the same degree of errors as occurred in the training data. Correction for errors in X makes regression coefficients properly farther from zero but then they apply only to future corrected X. I wish I had a reference for this. A simulation may be in order.
In reality, there is almost always measurement error in the independent variable(s), so why is this Errors in X are ignored for (1) expediency and (2) because if you correct for such errors predictions will be off for future data that have the same degree of errors as occurred in the training data.
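As the answer notes, a simulation may be in order. Here is one possible sketch (my own illustration, not from the answer; the true slope of 2, the measurement-error SD of 0.5, and the classical reliability-ratio correction are all assumptions made for the demo). The corrected slope recovers the true coefficient, yet the naive, attenuated slope predicts better on future data carrying the same degree of measurement error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 2.0
sigma_x, sigma_u = 1.0, 0.5        # spread of true X, sd of measurement error

x_true = rng.normal(0.0, sigma_x, n)
x_obs = x_true + rng.normal(0.0, sigma_u, n)   # what we actually measure
y = beta * x_true + rng.normal(0.0, 1.0, n)

# naive OLS slope on the observed X: attenuated towards zero
b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# classical errors-in-variables correction: divide by the reliability ratio
reliability = sigma_x**2 / (sigma_x**2 + sigma_u**2)
b_corrected = b_naive / reliability            # close to beta

# fresh data with the SAME degree of measurement error in X
x2_true = rng.normal(0.0, sigma_x, n)
x2_obs = x2_true + rng.normal(0.0, sigma_u, n)
y2 = beta * x2_true + rng.normal(0.0, 1.0, n)

mse_naive = np.mean((y2 - b_naive * x2_obs) ** 2)
mse_corrected = np.mean((y2 - b_corrected * x2_obs) ** 2)
```

The last comparison is the point of the answer: `mse_naive` comes out smaller, because the attenuated slope is exactly the best linear predictor given error-contaminated X, while the corrected slope only applies to corrected X.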
42,990
Cross Validation in StackingClassifier Scikit-Learn
This includes 2 questions, I will address each of them. We could use cross-validation on the entire system, but that would handicap us a bit too much. The purpose of cross-validation is to find the optimal parameters, those that allow the model to fit the data well without over-fitting. It suffices that our final estimator does this; there is no need for individually figuring out the settings of all the base estimators. The base estimators can include a bunch of different parameter settings, for example; as well as a selection of different types of classifiers. If any of them are prone to overfitting this should be offset by others not having that problem. As long as the final estimator does not put all of its eggs in the wrong basket, we should be fine (and this is why we need cross-validation here, to make sure this does not happen). We will train the final estimator on the full training set -- this happens after we find the optimal parameters or set of base estimators using cross-validation. As the name says, cross-validation is meant for validating the method. Not for creating the final model.
Cross Validation in StackingClassifier Scikit-Learn
This includes 2 questions, I will address each of them. We could use cross-validation on the entire system, but that would handicap us a bit too much. The purpose of cross-validation is to find the
Cross Validation in StackingClassifier Scikit-Learn This includes 2 questions, I will address each of them. We could use cross-validation on the entire system, but that would handicap us a bit too much. The purpose of cross-validation is to find the optimal parameters, those that allow the model to fit the data well without over-fitting. It suffices that our final estimator does this; there is no need for individually figuring out the settings of all the base estimators. The base estimators can include a bunch of different parameter settings, for example; as well as a selection of different types of classifiers. If any of them are prone to overfitting this should be offset by others not having that problem. As long as the final estimator does not put all of its eggs in the wrong basket, we should be fine (and this is why we need cross-validation here, to make sure this does not happen). We will train the final estimator on the full training set -- this happens after we find the optimal parameters or set of base estimators using cross-validation. As the name says, cross-validation is meant for validating the method. Not for creating the final model.
Cross Validation in StackingClassifier Scikit-Learn This includes 2 questions, I will address each of them. We could use cross-validation on the entire system, but that would handicap us a bit too much. The purpose of cross-validation is to find the
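As a concrete sketch of the two roles cross-validation plays here (my own illustration; the dataset and the particular estimators are arbitrary choices), scikit-learn's StackingClassifier takes an internal cv that only builds the final estimator's inputs, while an outer cross_val_score validates the whole stack:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),   # overfit-prone base estimator
        ("forest", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # simple combiner, fit on out-of-fold predictions
    cv=5,                                  # internal CV: builds the combiner's inputs only
)

# outer cross-validation of the entire system, i.e. validating the method itself
scores = cross_val_score(stack, X, y, cv=5)
print(scores.mean())
```

If any base estimator overfits, the logistic-regression combiner can down-weight it, which is the "not putting all eggs in the wrong basket" behaviour the answer describes.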
42,991
Cross Validation in StackingClassifier Scikit-Learn
My question, why use 5-fold cross-validation only in the final estimator? why isn't final estimator fitted on the full X' (output from base estimators)? Short answer: You probably misunderstood what StackingClassifier does (and so did I at first), because the description provided in scikit-learn is prone to misinterpretations (not our fault). If you check the source code here, you will see that the scikit-learn implementation does stacking correctly. Long answer. Robby the Belgian's answer does not address the following prospect, which I guess was your main concern. Consider training the final estimator. Suppose that one of subestimators overfits pathologically, e.g. it memorises all data seen at training. If subestimators are passed to the final estimator after being trained on the whole dataset, then the final estimator has no means of telling apart an overfitting subestimator from a genuinely good one, because there is no held-out data left to estimate subestimators' generalisation error. The final estimator will thus rely on the overfit subestimator when making final predictions, thinking that it is the best one, even if truly decent subestimators are available. As a short digression, let me quote a slightly different justification for holding out some data from subestimators when stacking models. Hastie, Tibshirani & Friedman write in "Elements of Statistical Learning" (page 290): ... If [subestimator] $\hat{f}_m(x), \,m=1,\dots,M$ represent the prediction from the best subset of inputs of size $m$ among $M$ total inputs, then linear regression [final estimator] would put all of the weight on the largest model, that is, $\hat{w}_M=1,\, \hat{w}_m=0,\, m<M$. The problem is that we have not put each of the models [subestimators] on the same footing by taking into account their complexity (the number of inputs $m$ in this example). 
To put this differently, if the candidate models come from nested model spaces $\mathcal{M}_1 \subset \mathcal{M}_{2} \subset \dots \subset \mathcal{M}_M$ and the training set is reused by the final estimator, then it will always choose the model from $\mathcal{M}_M$ (i.e. the most flexible model), simply because the optimum from the superset is always better than what a subset has to offer. [Digression end] Now suppose that the subestimators passed to the final estimator are all decent and do not overfit, despite being trained on the whole dataset. Suppose they are "on equal footing" in the sense that they have similar training and generalisation errors: one subestimator is better at some data, another is better at some other data, and so on. If the final estimator sees input in addition to subestimators' predictions at input (passthrough=True option), then there is a possibility that the final estimator overfits by memorising which of the subestimators happened to be correct at each input, instead of learning to combine sub-predictions in a generalisable way. In fact, the final estimator can potentially identify datapoints just by subestimators' predictions at them (even in the case passthrough=False). Overfitting of the final estimator can in principle be controlled by tuning its hyperparameters, but this is not what cross_val_predict does inside StackingClassifier. So, overfitting in stacking can be caused by either of the following: some subestimators overfit and are preferred by the final estimator; the final estimator overfits per se. Pitfall 2 can be avoided by using a simple model as the final estimator or by tuning it in an outer loop (like GridSearchCV, but I don't think this is a good idea). We have to do this manually. Pitfall 1 is avoided in StackingClassifier automatically by the fact that at train-time, subestimators are passed to the final estimator after being trained on a part of the dataset. 
In other words, the final estimator is trained on the whole dataset but its inputs are out-of-fold predictions of subestimators. This is precisely what is meant by: ... final_estimator_ is trained using cross-validated predictions of the base estimators using cross_val_predict. In my opinion, "cross-validated out-of-fold predictions" would be a better phrasing. After the training of the final estimator is done, we can re-train subestimators on the whole dataset to further improve their performance. This is what is meant by Note that estimators_ are fitted on the full X ... Putting these two statements in one sentence causes confusion.
Cross Validation in StackingClassifier Scikit-Learn
My question, why use 5-fold cross-validation only in the final estimator? why isn't final estimator fitted on the full X' (output from base estimators)? Short answer: You probably misunderstood what
Cross Validation in StackingClassifier Scikit-Learn My question, why use 5-fold cross-validation only in the final estimator? why isn't final estimator fitted on the full X' (output from base estimators)? Short answer: You probably misunderstood what StackingClassifier does (and so did I at first), because the description provided in scikit-learn is prone to misinterpretations (not our fault). If you check the source code here, you will see that the scikit-learn implementation does stacking correctly. Long answer. Robby the Belgian's answer does not address the following prospect, which I guess was your main concern. Consider training the final estimator. Suppose that one of subestimators overfits pathologically, e.g. it memorises all data seen at training. If subestimators are passed to the final estimator after being trained on the whole dataset, then the final estimator has no means of telling apart an overfitting subestimator from a genuinely good one, because there is no held-out data left to estimate subestimators' generalisation error. The final estimator will thus rely on the overfit subestimator when making final predictions, thinking that it is the best one, even if truly decent subestimators are available. As a short digression, let me quote a slightly different justification for holding out some data from subestimators when stacking models. Hastie, Tibshirani & Friedman write in "Elements of Statistical Learning" (page 290): ... If [subestimator] $\hat{f}_m(x), \,m=1,\dots,M$ represent the prediction from the best subset of inputs of size $m$ among $M$ total inputs, then linear regression [final estimator] would put all of the weight on the largest model, that is, $\hat{w}_M=1,\, \hat{w}_m=0,\, m<M$. The problem is that we have not put each of the models [subestimators] on the same footing by taking into account their complexity (the number of inputs $m$ in this example). 
To put this differently, if the candidate models come from nested model spaces $\mathcal{M}_1 \subset \mathcal{M}_{2} \subset \dots \subset \mathcal{M}_M$ and the training set is reused by the final estimator, then it will always choose the model from $\mathcal{M}_M$ (i.e. the most flexible model), simply because the optimum from the superset is always better than what a subset has to offer. [Digression end] Now suppose that the subestimators passed to the final estimator are all decent and do not overfit, despite being trained on the whole dataset. Suppose they are "on equal footing" in the sense that they have similar training and generalisation errors: one subestimator is better at some data, another is better at some other data, and so on. If the final estimator sees input in addition to subestimators' predictions at input (passthrough=True option), then there is a possibility that the final estimator overfits by memorising which of the subestimators happened to be correct at each input, instead of learning to combine sub-predictions in a generalisable way. In fact, the final estimator can potentially identify datapoints just by subestimators' predictions at them (even in the case passthrough=False). Overfitting of the final estimator can in principle be controlled by tuning its hyperparameters, but this is not what cross_val_predict does inside StackingClassifier. So, overfitting in stacking can be caused by either of the following: some subestimators overfit and are preferred by the final estimator; the final estimator overfits per se. Pitfall 2 can be avoided by using a simple model as the final estimator or by tuning it in an outer loop (like GridSearchCV, but I don't think this is a good idea). We have to do this manually. Pitfall 1 is avoided in StackingClassifier automatically by the fact that at train-time, subestimators are passed to the final estimator after being trained on a part of the dataset. 
In other words, the final estimator is trained on the whole dataset but its inputs are out-of-fold predictions of subestimators. This is precisely what is meant by: ... final_estimator_ is trained using cross-validated predictions of the base estimators using cross_val_predict. In my opinion, "cross-validated out-of-fold predictions" would be a better phrasing. After the training of the final estimator is done, we can re-train subestimators on the whole dataset to further improve their performance. This is what is meant by Note that estimators_ are fitted on the full X ... Putting these two statements in one sentence causes confusion.
Cross Validation in StackingClassifier Scikit-Learn My question, why use 5-fold cross-validation only in the final estimator? why isn't final estimator fitted on the full X' (output from base estimators)? Short answer: You probably misunderstood what
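A small sketch (mine, not part of the answer) of why the out-of-fold predictions from cross_val_predict matter: a fully grown decision tree memorises the training data, so its in-sample predictions would look perfect to the final estimator, while its out-of-fold predictions reveal its true generalisation error:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)

# a fully grown tree memorises the training data
memoriser = DecisionTreeClassifier(random_state=0).fit(X, y)
train_acc = (memoriser.predict(X) == y).mean()   # in-sample: looks perfect

# out-of-fold predictions: each point is predicted by a model that never saw it
oof = cross_val_predict(DecisionTreeClassifier(random_state=0), X, y, cv=5)
oof_acc = (oof == y).mean()                      # honest estimate of generalisation

print(train_acc, oof_acc)
```

A final estimator fed `train_acc`-style predictions would wrongly trust the memoriser; fed the out-of-fold predictions, it sees the subestimator as it really is.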
42,992
threshold choice for binary classifier: on training, validation or test set?
Go with 3: wrt 1 you are correct - this makes the test set part of the training of the actual classifier 2 is a waste of cases that doesn't gain you anything over 3
threshold choice for binary classifier: on training, validation or test set?
Go with 3: wrt 1 you are correct - this makes the test set part of the training of the actual classifier 2 is a waste of cases that doesn't gain you anything over 3
threshold choice for binary classifier: on training, validation or test set? Go with 3: wrt 1 you are correct - this makes the test set part of the training of the actual classifier 2 is a waste of cases that doesn't gain you anything over 3
threshold choice for binary classifier: on training, validation or test set? Go with 3: wrt 1 you are correct - this makes the test set part of the training of the actual classifier 2 is a waste of cases that doesn't gain you anything over 3
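Assuming option 3 refers to the training set, a minimal sketch of that workflow (my own illustration; maximising Youden's J = tpr − fpr is just one of several reasonable threshold criteria):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)

# pick the threshold on the TRAINING data, here by maximising Youden's J
fpr, tpr, thr = roc_curve(y_tr, clf.predict_proba(X_tr)[:, 1])
threshold = thr[np.argmax(tpr - fpr)]

# the untouched test set then evaluates classifier + threshold as one unit
test_acc = ((clf.predict_proba(X_te)[:, 1] >= threshold) == y_te).mean()
```

The key point is that the test set plays no role in choosing `threshold`, so it still gives an unbiased estimate of the complete decision rule.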
42,993
Instrumental variables: In which cases would the average treatment effect on the treated (ATT) and local average treatment effect (LATE) be similar?
No, this is not correct. Let's walk through the basics to see why, and to see under what other assumptions ATT = LATE. Let us call treatment assignment $Z$, and actual treatment taken $D$. Compliers have $D(Z = 1) = 1$ and $D(Z = 0) = 0$: if assigned treatment, they take it; if assigned control, they do not. These are counterfactual variables. Without further assumptions, we cannot tell whether a given person is a complier, because we do not observe what she would have done under a different assignment. The LATE equals the ATT in the case of an experiment with "one-sided non-compliance". That is, everyone not eligible ($Z = 0$) cannot take the treatment $D$, but those assigned ($Z = 1$) may or may not. Think of a medical trial with a new drug, where you cannot possibly take it if you are in the control group, but you may refuse it when you are told to. Formally, this means that for everyone the counterfactual variable $D(Z = 0)$ is 0. Then, for those who took treatment ($D = 1$), by design, $Z = 1$ (there is no other way to receive treatment). So for these this means that $D(Z = 1) = 1$. Since everyone has $D(Z = 0) = 0$, this means that the treated are the compliers, and so ATT = LATE. The other remaining group are the "never-takers". Regarding your specific question, if we are talking about a (general) design where $Z$ is randomized, then the proportions of always-takers, compliers, etc. are the same for $Z = 0$ and $Z = 1$. This is because these types are like background variables, and randomization makes $Z$ independent from such variables. This also means that if $Z$ is not randomized, then you could have the situation that you describe, where $P(AT|Z = 0) < P(C|Z = 1)$ (AT are always-takers, C compliers). However, this would mean that $Z$ is not a valid instrument. Perhaps you could solve this problem by conditioning on further confounders $X$. Lastly, the situation you describe does not imply that ATT equals LATE. 
This is because the $D = 1$ group (the treated) is made up of always-takers, compliers, and possibly defiers. $P(AT|Z = 0) < P(C|Z = 1)$ is not sufficient to make sure that this group consists only of compliers. It would be sufficient to assume that everyone is a complier (this is also testable, because given randomization of $Z$, it implies $P(D = 1|Z = 1) = 1$ and $P(D = 0|Z = 0) = 1$). Then ATE = ATT = LATE = ATC. This is because then the experiment is actually perfect: $Z$ is the same variable as $D$. All the confounding of $D$ and $Y$ is killed by the experimental manipulation. Accordingly, units don't choose $D$ depending on potential outcomes of $Y$, so ATE = ATT = ATC. Furthermore, $P(C) = 1$, so LATE = ATE (because the population and the compliers are the same units).
Instrumental variables: In which cases would the average treatment effect on the treated (ATT) and l
No, this is not correct. Let's walk through the basics to see why, and to see under what other assumptions ATT = LATE. Let us call treatment assignment $Z$, and actual treatment taken $D$. Compliers h
Instrumental variables: In which cases would the average treatment effect on the treated (ATT) and local average treatment effect (LATE) be similar? No, this is not correct. Let's walk through the basics to see why, and to see under what other assumptions ATT = LATE. Let us call treatment assignment $Z$, and actual treatment taken $D$. Compliers have $D(Z = 1) = 1$ and $D(Z = 0) = 0$: if assigned treatment, they take it; if assigned control, they do not. These are counterfactual variables. Without further assumptions, we cannot tell whether a given person is a complier, because we do not observe what she would have done under a different assignment. The LATE equals the ATT in the case of an experiment with "one-sided non-compliance". That is, everyone not eligible ($Z = 0$) cannot take the treatment $D$, but those assigned ($Z = 1$) may or may not. Think of a medical trial with a new drug, where you cannot possibly take it if you are in the control group, but you may refuse it when you are told to. Formally, this means that for everyone the counterfactual variable $D(Z = 0)$ is 0. Then, for those who took treatment ($D = 1$), by design, $Z = 1$ (there is no other way to receive treatment). So for these this means that $D(Z = 1) = 1$. Since everyone has $D(Z = 0) = 0$, this means that the treated are the compliers, and so ATT = LATE. The other remaining group are the "never-takers". Regarding your specific question, if we are talking about a (general) design where $Z$ is randomized, then the proportions of always-takers, compliers, etc. are the same for $Z = 0$ and $Z = 1$. This is because these types are like background variables, and randomization makes $Z$ independent from such variables. This also means that if $Z$ is not randomized, then you could have the situation that you describe, where $P(AT|Z = 0) < P(C|Z = 1)$ (AT are always-takers, C compliers). However, this would mean that $Z$ is not a valid instrument. 
Perhaps you could solve this problem by conditioning on further confounders $X$. Lastly, the situation you describe does not imply that ATT equals LATE. This is because the $D = 1$ group (the treated) is made up of always-takers, compliers, and possibly defiers. $P(AT|Z = 0) < P(C|Z = 1)$ is not sufficient to make sure that this group consists only of compliers. It would be sufficient to assume that everyone is a complier (this is also testable, because given randomization of $Z$, it implies $P(D = 1|Z = 1) = 1$ and $P(D = 0|Z = 0) = 1$). Then ATE = ATT = LATE = ATC. This is because then the experiment is actually perfect: $Z$ is the same variable as $D$. All the confounding of $D$ and $Y$ is killed by the experimental manipulation. Accordingly, units don't choose $D$ depending on potential outcomes of $Y$, so ATE = ATT = ATC. Furthermore, $P(C) = 1$, so LATE = ATE (because the population and the compliers are the same units).
Instrumental variables: In which cases would the average treatment effect on the treated (ATT) and l No, this is not correct. Let's walk through the basics to see why, and to see under what other assumptions ATT = LATE. Let us call treatment assignment $Z$, and actual treatment taken $D$. Compliers h
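A quick simulation (my own sketch; the 60% complier share and the effect sizes of 2 for compliers and 1 for never-takers are arbitrary) illustrates the one-sided non-compliance case, where the Wald/IV estimate of the LATE coincides with the ATT because the treated are exactly the compliers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# population types: 60% compliers, 40% never-takers (one-sided noncompliance)
complier = rng.random(n) < 0.6
Z = rng.integers(0, 2, n)          # randomised assignment
D = Z * complier                   # D(Z=0) = 0 for everyone; compliers take iff assigned

# heterogeneous effects: 2.0 for compliers, 1.0 for never-takers (never realised)
effect = np.where(complier, 2.0, 1.0)
Y = rng.normal(0, 1, n) + 0.5 * complier + D * effect   # types also differ at baseline

# Wald / IV estimator of the LATE
late = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (D[Z == 1].mean() - D[Z == 0].mean())

# ATT read off from the simulation's ground truth: the treated are all compliers
att = effect[D == 1].mean()
print(late, att)   # both close to 2.0
```

If instead some never-takers could obtain treatment outside the assignment (two-sided noncompliance), the treated group would mix types and the two quantities would diverge.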
42,994
Specifying hierarchical GAM for ecological count data---annual bird migration counts
The way to extend these models to higher order terms is to use tensor product smooths. You can get exactly the same smooth as a bs = 'fs' term by using t2(x, f, bs = c('cr','re'), full = TRUE) so you could write your model as:

count ~ s(doy, m = 2) + t2(doy, year, bs = c('cr', 're'), full = TRUE) + offset(log(minutes))

This allows us to extend these effects to higher-order effects because so long as you have the data, memory, and compute power to do it, tensor products can take as many marginal smooths as you want. So say we wanted to extend the model to have the DoY functional effect vary by Year and by Species, we could use:

count ~ s(doy, m = 2) + t2(doy, year, species, bs = c('cr', 're', 're'), full = TRUE) + offset(log(minutes))

where we're just bolting on another level of random effects, and are assuming that year and species are coded as factors. You can also use the te() or, importantly in the case of your question here, the ti() smooths, but the parameterisation won't be exactly the same as the 'fs' smooth. Why ti()? Well, once you start having multiple smooth effects of DoY occurring in the model, you can run into problems because as clever as mgcv is, it can't always remove all the redundant terms from the bases/model matrix that arise from having a covariate pop up in many smooth terms. It can also help to use ti() smooths because you can partition up the problem and use the summary() output for the model to determine at which levels of the hierarchy you see effects. 
## first order effects
s(doy) + s(species, bs = 're') + s(year, bs = 're') + s(site, bs = 're') +
## second & third order random effects
s(year, site, bs = 're') + s(year, species, bs = 're') + s(site, species, bs = 're') + s(year, site, species, bs = 're') +
## second order functional effects
s(doy, year, bs = 'fs') + s(doy, site, bs = 'fs') + s(doy, species, bs = 'fs') +
## higher order functional effects
t2(doy, species, year, bs = c('cr', 're', 're'), full = TRUE) +
t2(doy, species, site, bs = c('cr', 're', 're'), full = TRUE) +
t2(doy, site, year, bs = c('cr', 're', 're'), full = TRUE) +
t2(doy, species, site, year, bs = c('cr', 're', 're', 're'), full = TRUE)

If year was treated as a continuous variable then you're likely to want to change this to something like:

s(doy) + s(year) + s(species, bs = 're') + s(site, bs = 're') +
## second & third order random effects
s(site, species, bs = 're') + ti(year, site, species, bs = c('cr', 're', 're')) +
## second order functional effects
s(doy, site, bs = 'fs') + s(doy, species, bs = 'fs') + s(year, site, bs = 'fs') + s(year, species, bs = 'fs') + ti(doy, year) +
## higher order functional effects
ti(doy, year, species, bs = c('cr', 'cr', 're')) +
ti(doy, species, site, bs = c('cr', 're', 're')) +
ti(doy, year, site, bs = c('cr', 'cr', 're')) +
ti(doy, year, species, site, bs = c('cr', 'cr', 're', 're'))

which is where ti() can come in useful as we've got a tonne of terms that now include smooth functions of doy and/or year. In both these models we see some duplication; the 'fs' basis in s(doy, year, bs = 'fs') contains random intercepts for year but you'll notice that I also include the first order ranef for year via s(year, bs = 're'). Again, I think this is fine and it's how we did it in Pedersen et al (2019). 
The random intercepts in the factor-smooths are fully penalised so the first order year ranef should account for the overall between year variation, and the random intercept aspects of the factor-smooth or higher-order functional effects should not be very large at all. I've assumed that you want site as a random effect and not a smooth effect; the latter may be useful when you have more sites along a gradient where you expect smooth effects along the spatial gradient/transect. If you had more sites over a range of lat and long, you could also model the spatial effect via a spatial smooth s(long, lat, bs = 'ds') for example.

References
Pedersen, E.J., Miller, D.L., Simpson, G.L., Ross, N., 2019. Hierarchical generalized additive models in ecology: an introduction with mgcv. PeerJ 7, e6876. https://doi.org/10.7717/peerj.6876
Specifying hierarchical GAM for ecological count data---annual bird migration counts
The way to extend these models to higher order terms is to use tensor product smooths. You can get exactly the same smooth as a bs = 'fs' term by using t2(x, f, bs = c('cr','re'), full = TRUE) so you
Specifying hierarchical GAM for ecological count data---annual bird migration counts
The way to extend these models to higher order terms is to use tensor product smooths. You can get exactly the same smooth as a bs = 'fs' term by using t2(x, f, bs = c('cr','re'), full = TRUE) so you could write your model as:

count ~ s(doy, m = 2) + t2(doy, year, bs = c('cr', 're'), full = TRUE) + offset(log(minutes))

This allows us to extend these effects to higher-order effects because so long as you have the data, memory, and compute power to do it, tensor products can take as many marginal smooths as you want. So say we wanted to extend the model to have the DoY functional effect vary by Year and by Species, we could use:

count ~ s(doy, m = 2) + t2(doy, year, species, bs = c('cr', 're', 're'), full = TRUE) + offset(log(minutes))

where we're just bolting on another level of random effects, and are assuming that year and species are coded as factors. You can also use the te() or, importantly in the case of your question here, the ti() smooths, but the parameterisation won't be exactly the same as the 'fs' smooth. Why ti()? Well, once you start having multiple smooth effects of DoY occurring in the model, you can run into problems because as clever as mgcv is, it can't always remove all the redundant terms from the bases/model matrix that arise from having a covariate pop up in many smooth terms. It can also help to use ti() smooths because you can partition up the problem and use the summary() output for the model to determine at which levels of the hierarchy you see effects. 
## first order effects
s(doy) + s(species, bs = 're') + s(year, bs = 're') + s(site, bs = 're') +
## second & third order random effects
s(year, site, bs = 're') + s(year, species, bs = 're') + s(site, species, bs = 're') + s(year, site, species, bs = 're') +
## second order functional effects
s(doy, year, bs = 'fs') + s(doy, site, bs = 'fs') + s(doy, species, bs = 'fs') +
## higher order functional effects
t2(doy, species, year, bs = c('cr', 're', 're'), full = TRUE) +
t2(doy, species, site, bs = c('cr', 're', 're'), full = TRUE) +
t2(doy, site, year, bs = c('cr', 're', 're'), full = TRUE) +
t2(doy, species, site, year, bs = c('cr', 're', 're', 're'), full = TRUE)

If year was treated as a continuous variable then you're likely to want to change this to something like:

s(doy) + s(year) + s(species, bs = 're') + s(site, bs = 're') +
## second & third order random effects
s(site, species, bs = 're') + ti(year, site, species, bs = c('cr', 're', 're')) +
## second order functional effects
s(doy, site, bs = 'fs') + s(doy, species, bs = 'fs') + s(year, site, bs = 'fs') + s(year, species, bs = 'fs') + ti(doy, year) +
## higher order functional effects
ti(doy, year, species, bs = c('cr', 'cr', 're')) +
ti(doy, species, site, bs = c('cr', 're', 're')) +
ti(doy, year, site, bs = c('cr', 'cr', 're')) +
ti(doy, year, species, site, bs = c('cr', 'cr', 're', 're'))

which is where ti() can come in useful as we've got a tonne of terms that now include smooth functions of doy and/or year. In both these models we see some duplication; the 'fs' basis in s(doy, year, bs = 'fs') contains random intercepts for year but you'll notice that I also include the first order ranef for year via s(year, bs = 're'). Again, I think this is fine and it's how we did it in Pedersen et al (2019). 
The random intercepts in the factor-smooths are fully penalised so the first order year ranef should account for the overall between year variation, and the random intercept aspects of the factor-smooth or higher-order functional effects should not be very large at all. I've assumed that you want site as a random effect and not a smooth effect; the latter may be useful when you have more sites along a gradient where you expect smooth effects along the spatial gradient/transect. If you had more sites over a range of lat and long, you could also model the spatial effect via a spatial smooth s(long, lat, bs = 'ds') for example.

References
Pedersen, E.J., Miller, D.L., Simpson, G.L., Ross, N., 2019. Hierarchical generalized additive models in ecology: an introduction with mgcv. PeerJ 7, e6876. https://doi.org/10.7717/peerj.6876
Specifying hierarchical GAM for ecological count data---annual bird migration counts The way to extend these models to higher order terms is to use tensor product smooths. You can get exactly the same smooth as a bs = 'fs' term by using t2(x, f, bs = c('cr','re'), full = TRUE) so you
42,995
Estimate the number of common members in two populations
My two cents: Use maximum likelihood estimation on K: Likelihood $P(data|N1, N2, K)\propto$ ${K\choose k} {N1-K \choose n-k} {N2-K \choose n-k}$ where nCr is the N-choose-k combinations. Then find: K_optimal = argmax(P w.r.t K). I couldn't find an analytical solution so I wrote a few lines of code to calculate it, with an example like this:

import numpy as np
from scipy.special import comb
import seaborn as sns

N1 = 100
N2 = 180
n = 50
k = 25

def llhood(K):
    return comb(K, k)*comb(N1-K, n-k)*comb(N2-K, n-k)

def argmaxK():
    # walk K upwards from k and stop at the first decrease in likelihood
    p = 0
    i = k
    arr = []
    while i <= min([N1-n+k, N2-n+k]):
        ll = llhood(i)
        arr.append([i, ll])
        if ll < p:
            return i-1, arr
        p = ll
        i += 1
    return i-1, arr

k_opt, probs = argmaxK()
probs = np.array(probs)
sns.scatterplot(x=probs[:,0], y=probs[:,1])

For my case, K is best estimated to be 45.
Estimate the number of common members in two populations
My two cents: Use maximum likelihood estimation on K: Likelihood $P(data|N1, N2, K)\propto$ ${K\choose k} {N1-K \choose n-k} {N2-K \choose n-k}$ where nCr is the N-choose-k combinations. Then find: K_
Estimate the number of common members in two populations
My two cents: Use maximum likelihood estimation on K: Likelihood $P(data|N1, N2, K)\propto$ ${K\choose k} {N1-K \choose n-k} {N2-K \choose n-k}$ where nCr is the N-choose-k combinations. Then find: K_optimal = argmax(P w.r.t K). I couldn't find an analytical solution so I wrote a few lines of code to calculate it, with an example like this:

import numpy as np
from scipy.special import comb
import seaborn as sns

N1 = 100
N2 = 180
n = 50
k = 25

def llhood(K):
    return comb(K, k)*comb(N1-K, n-k)*comb(N2-K, n-k)

def argmaxK():
    # walk K upwards from k and stop at the first decrease in likelihood
    p = 0
    i = k
    arr = []
    while i <= min([N1-n+k, N2-n+k]):
        ll = llhood(i)
        arr.append([i, ll])
        if ll < p:
            return i-1, arr
        p = ll
        i += 1
    return i-1, arr

k_opt, probs = argmaxK()
probs = np.array(probs)
sns.scatterplot(x=probs[:,0], y=probs[:,1])

For my case, K is best estimated to be 45.
Estimate the number of common members in two populations My two cents: Use maximum likelihood estimation on K: Likelihood $P(data|N1, N2, K)\propto$ ${K\choose k} {N1-K \choose n-k} {N2-k \choose n-k}$ where nCr is the N-choose-k combinations. Then find: K_
42,996
Estimate the number of common members in two populations
In a 2019 paper titled Bayes-optimal estimation of overlap between populations of fixed size, Daniel Larremore presents a solution when $N_1$ and $N_2$ are fixed and known. I'm not gonna repeat the whole paper and just present the main result. Without loss of generality, assume that $N_1 \leq N_2$. Further, denote by $n_1$ and $n_2$ the numbers of samples drawn from populations $N_1$ and $N_2$, and by $n_{12}$ the number of shared members in the samples. We also assume a uniform prior over $K$ (the true number of shared members), which is $p(K) = (N_1 + 1)^{-1}$. The posterior distribution is given by:
$$ P(K\,|\,n_1, n_2, n_{12}, N_1, N_2)=\dfrac{\sum_{K_1 = 0}^{N_1}P(n_{12}\,|\,n_2, K_1, N_2)P(K_1\,|\,n_1, K, N_1)}{\sum_{K'=0}^{N_1}\sum_{K_1=0}^{N_1}P(n_{12}\,|\,n_2, K_1, N_2)P(K_1\,|\,n_1, K', N_1)} $$
The posterior mean $\hat{K}$ is then given by:
$$ \hat{K}=\sum_{K=0}^{N_1}K\cdot P(K\,|\,n_1, n_2, n_{12}, N_1, N_2) $$
Here, $P(x\,|\,t, u, v)$ denotes the hypergeometric probability of drawing exactly $x$ special objects out of $t$ draws, from a population of size $v$, in which there are $u$ special objects total. These formulas look quite complicated but they only require calls to the hypergeometric distribution which are available in many programs (R, Stata, SAS, Excel, etc.). Larremore provides Python code for the calculations on his GitHub page. Conservative equal-tailed $(1-\alpha)$-credible intervals $[K_{\mathrm{min}}, K_{\mathrm{max}}]$ can be found by finding the smallest index $K_{\mathrm{min}}$ and the largest index $K_{\mathrm{max}}$ for which
$$ \sum_{K = K_{\mathrm{max}}}^{N_1}P(K\,|\,n_1, n_2, n_{12})\geq \alpha/2\\ \sum_{K = 0}^{K_{\mathrm{min}}}P(K\,|\,n_1, n_2, n_{12})\geq \alpha/2 $$
Example To illustrate the formulas, let's calculate a concrete example. Assume that $N_1 = 75, N_2 = 100$ and $K = 35$. I randomly drew a sample of size $n_1 = n_2 = 40$ and got $n_{12} = 7$. 
The posterior distribution together with the posterior mean (orange point) and the $90$% credible interval (shaded blue region) looks like this: The true value of $K$ is indicated by a dashed vertical line. The posterior mean is $35.62$ and the $90$%-credible interval is $[20, 51]$. To test the performance of the estimator, I repeated the above procedure $50$ times and recorded the posterior mean and $90$%-credible intervals. Here is the plot: $4$ out of the $50$ credible intervals do not include the true $K$ of $35$ (they're plotted in red) and hence, $92$% do include it. Also, the mean of the $50$ posterior means is $35.19$.
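Since the formulas reduce entirely to hypergeometric pmf calls, the posterior is easy to evaluate numerically. Below is a short vectorized sketch of my own (not Larremore's published code) using scipy, applied to the example's numbers:

```python
import numpy as np
from scipy.stats import hypergeom

def posterior_K(n1, n2, n12, N1, N2):
    """Posterior over K under the uniform prior, assuming N1 <= N2.

    scipy's hypergeom.pmf(x, M, n, N) is the probability of drawing x special
    objects in N draws from a population of size M containing n special objects,
    i.e. P(x | t=N, u=n, v=M) in the notation used above.
    """
    Ks = np.arange(N1 + 1)    # candidate values of the true overlap K
    K1s = np.arange(N1 + 1)   # latent count of shared members in the sample from pop. 1
    inner = hypergeom.pmf(n12, N2, K1s, n2)                   # P(n12 | n2, K1, N2)
    outer = hypergeom.pmf(K1s[None, :], N1, Ks[:, None], n1)  # P(K1 | n1, K, N1)
    unnorm = outer @ inner    # marginalize over K1 for every K at once
    return Ks, unnorm / unnorm.sum()

Ks, post = posterior_K(n1=40, n2=40, n12=7, N1=75, N2=100)
K_hat = float((Ks * post).sum())  # posterior mean of K
```

With the example's data the posterior mean should come out close to the value reported above; credible intervals follow by cumulating `post` from both ends.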
42,997
Difference between Multivariate Regression vs Iterative Regression on Residuals [duplicate]
1) The two approaches actually work in the same way, although in the first (multiple covariates) case the system of linear equations being solved is larger.
2) The outputs would be different, but how different depends on the correlation. The model with more covariates will invariably explain more variation, although this is not to say that it is explaining anything meaningful. If the two variables are highly correlated, then the resulting multiple linear regression model will be fine for prediction, but it won't be good for inference, as the two will be trying to explain the same variation. If one of your covariates doesn't explain much variation and the other does, then the results will be more or less equivalent to two separate tests.
3) The assumptions are the same between the two (i.e. normality of errors, equality of variance and independence of observations).
4) Which is superior depends entirely on the question you wish to ask, and it is hard to say more than that without writing a rather lengthy essay...
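Point 2) is easy to check with a small simulation (all numbers here are made up for illustration): with correlated covariates, the joint fit recovers both slopes, while regressing on one covariate and then regressing the residuals on the other does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # x2 correlated with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# Approach 1: one multiple regression with both covariates
X = np.column_stack([np.ones(n), x1, x2])
beta_joint, *_ = np.linalg.lstsq(X, y, rcond=None)   # slopes land near (2, 3)

# Approach 2: regress y on x1 alone, then the residuals on x2
X1 = np.column_stack([np.ones(n), x1])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)          # slope absorbs part of x2's effect
resid = y - X1 @ b1
X2 = np.column_stack([np.ones(n), x2])
b2, *_ = np.linalg.lstsq(X2, resid, rcond=None)      # slope falls well below 3
```

Here `beta_joint` lands near the true coefficients, while the two-step slope `b2[1]` falls well below 3: the first regression has already absorbed the variation that x1 and x2 share.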
42,998
Time Series Regressor Selection
"I do not want to use cross-correlation (as suggested in this answer) because I want to take into account the covariance between the regressors." The role of pre-whitening is to INITIALLY identify the nature of the input series transfer function structures/lags. This easily gets redefined via tests of necessity and tests of sufficiency via cross-correlation tests of the current model residuals and the pre-whitened X's. As an over-arching comment you have totally ignored the impact of latent deterministic structure such as level shifts, seasonal pulses , pulses AND time trends. Your target should be https://autobox.com/pdfs/SARMAX.pdf
42,999
Time Series Regressor Selection
Dynamic Factor Models, introduced among others by Stock and Watson (2002), seem to do what I am looking for: This article studies forecasting a macroeconomic time series variable using a large number of predictors. The predictors are summarized using a small number of indexes constructed by principal component analysis. An approximate dynamic factor model serves as the statistical framework for the estimation of the indexes and construction of the forecasts. The method is used to construct 6-, 12-, and 24-month-ahead forecasts for eight monthly U.S. macroeconomic time series using 215 predictors in simulated real time from 1970 through 1998. During this sample period these new forecasts outperformed univariate autoregressions, small vector autoregressions, and leading indicator models. There are implementations in Python, R, or Stata, for example.
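The core mechanics — extract a few principal components (the "diffusion indexes") from a large standardized panel and regress the target on them — can be sketched in a few lines. This is a toy simulation with invented dimensions, not a reproduction of Stock and Watson's empirical exercise:

```python
import numpy as np

rng = np.random.default_rng(42)
T, N, r = 240, 100, 2                       # periods, predictors, true factors
F = rng.normal(size=(T, r))                 # latent factors
Lam = rng.normal(size=(N, r))               # factor loadings
X = F @ Lam.T + rng.normal(size=(T, N))     # observed panel of predictors
y = F @ np.array([1.0, -0.5]) + 0.3 * rng.normal(size=T)  # target driven by factors

# Step 1: estimate the factors as principal components of the standardized panel
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
F_hat = np.sqrt(T) * U[:, :r]               # factor estimates (up to rotation/sign)

# Step 2: regress the target on the estimated factors (the diffusion indexes)
Z = np.column_stack([np.ones(T), F_hat])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
r2 = 1.0 - (y - Z @ beta).var() / y.var()   # high: the factors span the signal
```

Even though the estimated factors are only identified up to a rotation, they span the same space as the true ones, so the regression fit is nearly as good as if $F$ were observed. Forecasting versions replace the contemporaneous regression with one of $y_{t+h}$ on the factors at time $t$.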
43,000
Intraclass correlation coefficient (ICC) for two raters using a mixed effects model for a design with repeated measurements
Is the model appropriate for the design of the study? I think not. The issue is that you have only 2 raters. You are asking the software to estimate the variance of a normally distributed variable using only 2 observations, so any estimate of a variance for this variable, and any statistic that uses it, should be highly suspect. Is the formula for the ICC appropriate, or should $\sigma_{\mathrm{ID:Day}}^{2}$ be omitted from the numerator and hence be treated as error-variance? Yes, I think your formula is appropriate. How would the model and the formula change if I would consider the 2 raters as fixed in the sense that they are the only two raters I would ever consider (i.e. they weren't selected from an infinite population of possible raters)? In light of my answer to 1. above, I think you should take this approach anyway. Whether they can be considered samples from a large population is only one of the considerations in choosing whether to model a factor as fixed or random. The formula then becomes: $$ \mathrm{ICC}_{\mathrm{inter-rater}} = \frac{\sigma_{\mathrm{ID}}^{2} + \sigma_{\mathrm{ID:Day}}^{2}}{\sigma_{\mathrm{ID}}^{2} + \sigma_{\mathrm{ID:Day}}^{2} + \sigma_{\mathrm{Residual}}^{2}} $$
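To make the difference between the two numerators concrete, here is a tiny numeric illustration with made-up variance components (the kind of values one would read off a mixed-model summary):

```python
# Hypothetical variance components -- illustrative values only
var_id = 4.0       # sigma^2_ID: between-subject variance
var_id_day = 1.5   # sigma^2_ID:Day: subject-by-day (session) variance
var_resid = 2.5    # sigma^2_Residual

total = var_id + var_id_day + var_resid

# Fixed-raters formula from above: ID:Day variance counts as true-score variance
icc_inter_rater = (var_id + var_id_day) / total   # 5.5 / 8.0 = 0.6875

# Alternative from question 2: ID:Day variance treated as error
icc_id_only = var_id / total                      # 4.0 / 8.0 = 0.5
```

Whether $\sigma_{\mathrm{ID:Day}}^{2}$ goes in the numerator can change the ICC substantially, so the choice should follow from whether day-to-day fluctuations count as part of what the raters are supposed to agree on.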