Dataset columns: idx (int64, 1–56k); question (string, 15–155 chars); answer (string, 2–29.2k chars); question_cut (string, 15–100 chars); answer_cut (string, 2–200 chars); conversation (string, 47–29.3k chars); conversation_cut (string, 47–301 chars).
42,901
Model broken stick model in R where one line has a constant gradient?
I see your post is pretty old, but I'm working on the same issue -- and I found a slightly different solution than yours. Figured I'd post it for others out there.
b1 <- function(x, bp) ifelse(x < bp, x, bp)
# Wrapper for mixed effects model with variable break point
foo <- function(bp) { mod <- lmer(y ~ b1(x, bp) + ...
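The b1 helper above clamps x at the breakpoint, which is what produces the constant-gradient segment. A minimal sketch of the same basis function outside R, assuming numpy (the names here are illustrative, not from the original answer):

```python
import numpy as np

def b1(x, bp):
    # Below the breakpoint the basis grows linearly with x;
    # at or above it, the basis is held constant at bp.
    return np.where(x < bp, x, bp)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(b1(x, 2.0).tolist())  # [0.0, 1.0, 2.0, 2.0, 2.0]
```

Including b1(x, bp) alongside a plain slope in the linear predictor gives the two-segment "broken stick" fit, with bp found by profiling as in the wrapper above.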
42,902
Convergence of distribution
One interesting idea is the following: Of all distributions on $[0,2\pi]$, the uniform distribution maximizes entropy. So you could try to prove that the averaging operator cannot decrease entropy; then it becomes natural to guess that there exists a fixed point for this iteration of the averaging operator, which should...
42,903
Convergence of distribution
Looks like you would need to figure out the characteristic function of this sum. Problem 26.29 hints at this c.f. converging to that of the uniform distribution, by virtue of the coefficients at non-zero powers of $t$ going to zero. You would need to verify all the regularity conditions, of course.
42,904
How to determine if one fit is significantly better than a slightly different fit?
As Peter Flom suggests, given your models you have a likelihood function, and these information criteria can compare models based on their likelihood functions, with penalties for the parameters used; this can lead to a "best" fit when the information criterion is maximized. AIC and BIC are of the form -2 log likelihood + penal...
42,905
CART (rpart) balanced vs. unbalanced dataset
If you have well separated classes in the feature space, it will not make much of a difference to the predictions on the test data whether you have a balanced or an unbalanced training data set, as long as you have enough data to identify the classes reasonably well. If the class distributions of features overlap considerab...
42,906
CART (rpart) balanced vs. unbalanced dataset
I have offered a related answer under the post 'cart Training a decision tree against unbalanced data'
42,907
R packages (or SAS code) to produce two simultaneous Kaplan-Meier curves?
In R, a survfit.object---returned by survfit()---stores a fitted survival curve. In particular, this object contains the time points at which the curve has a step and the ordinates at those points. You can therefore construct the survival function, $t\mapsto \hat{S}(t)$, by constant interpolation. Here is the way I wou...
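The constant-interpolation idea described above can be sketched in Python, with hypothetical step times and ordinates standing in for the values a real survfit object would store:

```python
import bisect

def step_survival(times, surv):
    # Build a right-continuous step function S(t) from event times and the
    # survival ordinates at those times; S(t) = 1 before the first event.
    # times must be sorted ascending; surv[i] is S just after times[i].
    def S(t):
        i = bisect.bisect_right(times, t)
        return 1.0 if i == 0 else surv[i - 1]
    return S

S = step_survival([2.0, 5.0, 9.0], [0.8, 0.5, 0.2])
print(S(1.0), S(2.0), S(6.5))  # 1.0 0.8 0.5
```

With both curves represented this way, they can be evaluated and plotted on a common time grid.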
42,908
R packages (or SAS code) to produce two simultaneous Kaplan-Meier curves?
What you are asking for is a simultaneous plot of the survival function for one process and the cumulative incidence function (= 1- S(t)) for the competing process. The 'cmprsk' R package should be able to do the plots, but since the usual mode is to display both process as the cumulative incidence, you will need to do...
42,909
R packages (or SAS code) to produce two simultaneous Kaplan-Meier curves?
Wouldn't it be good enough if you could plot two curves using par(new=T)?
plot(survfit(KMfit1 ~ 1), main="Kaplan-Meier estimate with 95% confidence bounds",
     xlab="time", ylab="survival function", col="red", xlim=c(0,70))
par(new=T)
plot(survfit(KMfit2 ~ 1), col="green", xlim=c(0,70))
42,910
How to calculate the variance of vectors for clustering?
Note that not all clustering algorithms assume spherical clusters. All the measures you describe do not seem too sensible for non-convex clusters, say, banana-shaped clusters; a common concept in density based clustering. In this example, the mean is not even inside the cluster. Variances mostly measure the spatial ext...
42,911
How to calculate the variance of vectors for clustering?
I think the question can be answered. I don't like any of these measures. Why didn't you include what I think is the most suitable and obvious: the mean square distance of the vectors from the centroid, as the variance? Number 3 would be mine if you average them. Number 1 is bad for the reason you already gave. I don...
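The measure this answer favors — the mean squared distance of the vectors from the centroid — can be sketched as follows, assuming numpy; the function name and data are illustrative:

```python
import numpy as np

def cluster_variance(points):
    # Mean squared Euclidean distance of each vector from the cluster centroid.
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return float(np.mean(np.sum((pts - centroid) ** 2, axis=1)))

pts = [[0, 0], [2, 0], [0, 2], [2, 2]]
print(cluster_variance(pts))  # 2.0
```

This is the per-point average of squared distances, so it does not grow with cluster size the way a raw sum of squares would.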
42,912
State-of-the-art in smoothing splines
A bit more modern than what you quote is de Boor, C. (1978) A Practical Guide to Splines, Springer Verlag. An efficient algorithm for smoothing splines is given by Hutchinson, M.F. and de Hoog, F.R. (1985) Smoothing Noisy Data with Spline Functions, Numerische Mathematik, 47, p. 99-106 (see also Hutchinson, M.F. (1986...
42,913
Bootstrap vs other simulated data methods
To bootstrap in a mixed effects linear model you would do sampling with replacement in a way that maintains the model structure. So your data is divided into groups, and you don't want to mix the data from one group into the data from another. For any particular group, say you have m observations; then you would sample ...
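The group-preserving resampling described above can be sketched in Python (illustrative names and data; a real mixed-model bootstrap would refit the model on each resample):

```python
import random

def bootstrap_by_group(groups):
    # Resample with replacement *within* each group, preserving each group's
    # size, so the grouping structure of the mixed model is maintained and
    # observations never migrate between groups.
    return {g: [random.choice(obs) for _ in obs] for g, obs in groups.items()}

random.seed(0)
data = {"A": [1.2, 0.7, 1.9], "B": [3.1, 2.8]}
resampled = bootstrap_by_group(data)
print({g: len(v) for g, v in resampled.items()})  # {'A': 3, 'B': 2}
```

Repeating this many times and refitting yields a bootstrap distribution for the quantity of interest.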
42,914
Whether to assess normality in a factorial repeated measures ANOVA by looking at distributions within cells?
There is a reason that we talk about the normality 'assumption' rather than the normality 'condition'. Whether you are comfortable with the assumption of normality needs to come from knowledge about the science that generated the data, not the data itself. The tests for normality, when used for justifying the normalit...
42,915
Effective way to visualize net growth/profit/income?
Your mock-up looks pretty good, though I prefer not to mix scales on the same graph. You asked for gains, losses and net growth, but with your black line it looks like you're showing cumulative balance instead of net growth = gains - losses. You can infer net growth by comparing bar heights or translating the cumulati...
42,916
Bootstrapping a sample with unequal selection probabilities
Did you find a satisfactory answer for this question? I recently found this reference: http://www.wseas.us/e-library/conferences/2009/hangzhou/ACACOS/ACACOS21.pdf but I am pretty sure that the issue must have been investigated before. While it is easy to justify the use of observation weights (in practice, by weighti...
42,917
Bootstrapping a sample with unequal selection probabilities
You can verify that the "weights" parameter in the boot package is operating as importance weights with a simple simulation.
example <- data.frame(
  meas=c(1,1,5,8,10),
  wts=c(10,10,3,2,1)
)
Unweighted mean:
mean(example$meas)  # output = 5
Weighted mean:
sum(example$meas * example$wts) / sum(example$wts)  # output...
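The same sanity check can be mirrored outside R, reusing the answer's numbers:

```python
meas = [1, 1, 5, 8, 10]
wts = [10, 10, 3, 2, 1]

# Unweighted mean, matching mean(example$meas) in the R snippet:
unweighted = sum(meas) / len(meas)
print(unweighted)  # 5.0

# Weighted mean, matching sum(example$meas * example$wts) / sum(example$wts):
weighted = sum(m * w for m, w in zip(meas, wts)) / sum(wts)
print(round(weighted, 4))  # 2.3462
```

The heavy weights on the two 1s pull the weighted mean well below the unweighted one, which is the behavior importance weights should produce.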
42,918
k-subset with maximal variance
When the numbers are sorted there's a simple $O(k)$ algorithm, because when $k\gt1,$ some variance-maximizing subset will consist of the $k_0\ge 1$ smallest and $k-k_0$ largest elements, whence a search over $k_0=1,\ldots,k-1$ does the trick. (Even when the $n$ numbers are not sorted, finding the $k^\text{th}$ largest ...
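The search described above can be sketched in Python. This version is written for clarity rather than the stated $O(k)$ cost, which would require running prefix sums; the names and data are illustrative:

```python
def variance(s):
    # Population variance of a list of numbers.
    m = sum(s) / len(s)
    return sum((v - m) ** 2 for v in s) / len(s)

def max_variance_subset(xs, k):
    # After sorting, some variance-maximizing k-subset consists of the
    # k0 smallest plus the k - k0 largest elements, so it suffices to
    # search k0 = 1, ..., k-1 and keep the best candidate.
    xs = sorted(xs)
    best = None
    for k0 in range(1, k):
        cand = xs[:k0] + xs[len(xs) - (k - k0):]
        if best is None or variance(cand) > variance(best):
            best = cand
    return best

print(max_variance_subset([3, 10, 1, 7, 2, 9], 3))
```

A brute-force check over all $\binom{n}{k}$ subsets confirms the candidate set attains the maximal variance on small inputs.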
42,919
Difficulty in understanding Hidden Markov Model for syntax parsing using Viterbi algorithm
Yeah, OK. I've just done some work on it, and I've managed to get it done even if I haven't grasped all the math behind it. EDIT: here are some useful resources: I've done some gesture recognition, so my resources are biased toward that specific application, but you can find a sequence classification framework behind it. Some good sl...
42,920
Viable distance metric for text articles
I don't know much about working with documents, but an interesting approach to documents was taken by Hinton & Salakhutdinov and can be found in this paper (and also in this Google Tech Talk). They used autoencoders to compress documents into low-dimensional, real-valued vectors. The documents appeared to be fairly wel...
42,921
Viable distance metric for text articles
Have a look at this paper: Text similarity: an alternative way to search MEDLINE. They compare the simple cosine similarity with a modified version and also some more complex approaches based on text alignment. The conclusion was that cosine similarity with a small modification performed best, although only slightly be...
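The baseline those comparisons start from — plain cosine similarity over term-count vectors — can be sketched as follows (the function name and documents are illustrative):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    # Cosine of the angle between raw term-count vectors: dot product of the
    # counts divided by the product of their Euclidean norms.
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

print(round(cosine_similarity("gene expression in cells",
                              "gene expression data"), 3))  # 0.577
```

Real systems typically apply tf-idf weighting or the paper's modification on top of this, but the skeleton is the same.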
42,922
Sampling distribution of random effects estimator
I don't know of anything offhand, aside from Doug Bates' books and his postings on various online forums. That should be sufficient to justify why you don't report them. But if you want to try and quantify uncertainties, I might try a Bayesian approach, i.e., simulating from posterior distributions of variance paramete...
42,923
Reparameterizing the binomial link for psychometric data
Your problem is not really the link function, but rather the parametrization of the linear predictor. Instead of having $\alpha + \beta x$, you would like to have $\beta (x - \delta)$. Here $\delta$ would be the "shift" parameter that you are interested in. While the two are mathematically equivalent ($\alpha = -\beta ...
42,924
Reparameterizing the binomial link for psychometric data
For simple designs one solution might be to centre based on the psu of one condition. You could do an initial model of one condition, get the pss, recentre all of the data on that, and now your intercept will reflect changes in pss. You'll still be stuck with a magnitude issue when there are interactions... but some i...
42,925
Reinforcement learning of a policy for multiple actors in large state spaces
I think there are two problems here: The huge state space, The fact that many agents are involved. I have no experience with (2), but I guess if all the agents can share their knowledge (e.g. their observations) then this is no different from treating all the different agents as a single agent, and learning something like a "swar...
42,926
How to test for and deal with regression toward the mean?
Update: if you have a true regression to the mean effect, because both it and treatment effects co-occur over time and have the same directionality for people needing treatment, the regression to the mean is confounded with treatment, and so you will not be able to estimate the "true" treatment effect. This is an inter...
42,927
How to test for and deal with regression toward the mean?
I'm not in any way an authority on statistics, but might I suggest using other studies to get an estimate of the degree of regression to the mean you have in yours? In an ideal world, you would estimate the degree of regression to the mean using a control group, but since you don't have a control group, maybe you need...
42,928
Spearman or Kendall correlation? [duplicate]
Rather than either of those I would use polychoric correlations, which were designed for just this instance. They use maximum likelihood to fit a model with an underlying normally distributed continuous variable under each ordinal variable, then calculate the correlation coefficient of those continuous variables. There are i...
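A sketch of the idea in R, assuming the polycor and MASS packages are available; the data are simulated so the true latent correlation (0.6) is known:

```r
# Polychoric correlation of two 4-level ordinal ratings cut from a
# latent bivariate normal with true correlation 0.6.
# Assumes the 'polycor' package is installed; MASS ships with R.
library(polycor)
library(MASS)
set.seed(8)
latent <- mvrnorm(2000, mu = c(0, 0),
                  Sigma = matrix(c(1, 0.6, 0.6, 1), 2))
r1 <- cut(latent[, 1], breaks = c(-Inf, -1, 0, 1, Inf), ordered_result = TRUE)
r2 <- cut(latent[, 2], breaks = c(-Inf, -1, 0, 1, Inf), ordered_result = TRUE)
polychor(r1, r2)   # ML estimate of the latent correlation, near 0.6
```

Rank correlations computed directly on the coarsened categories tend to come out attenuated relative to the latent correlation, which is the motivation for the polychoric approach.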
42,929
ANCOVA with repeated measures in R
There is a list of tutorials on this subject here: http://www.r-statistics.com/2010/04/repeated-measures-anova-with-r-tutorials/ Good luck.
42,930
ANCOVA with repeated measures in R
Sample data?

set.seed(5)
d <- expand.grid(Site = LETTERS[1:4], Date = 1:20, Year = factor(1:2))
d$Temp <- round(rnorm(nrow(d), mean = 60, sd = 15))
42,931
ANCOVA with repeated measures in R
Maybe you can try

lm(formula = Temp ~ Site * (Date + Year))

In this way you will have two interactions with Site, and there will be no interaction between Date and Year.
42,932
Why is semipartial correlation cited so seldom?
The notion of semipartial correlation usually arises in the context when one compares the model with a predictor and the model with that predictor removed (e.g. in the context of stepwise regression). And, because squared semipartial correlation is just a standardized form of R-square decrease, texts may find it unnece...
42,933
Learning to create samples from an unknown distribution
Basically, it sounds like you want to bootstrap your data: http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29 A good (and relatively cheap) reference is: "Bootstrap Methods and Their Applications" by A. C. Davison and D. V. Hinkley (1997, CUP). which has an associated R package, "boot". BUT... there's a lot t...
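The core resampling step needs only base R; a toy sketch with simulated stand-in data:

```r
# Percentile-bootstrap confidence interval for a mean, base R only.
set.seed(1)
x <- rexp(50, rate = 2)                 # stand-in for the observed sample
boot_means <- replicate(2000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))   # 95% percentile interval
```

The boot package wraps this loop and adds the refinements (bias-corrected/BCa intervals, studentized bootstrap) discussed in Davison and Hinkley.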
42,934
Learning to create samples from an unknown distribution
I have recently faced a similar problem in my research. I did not generate a new function to approximate X. The solution I applied is the following (I used MATLAB to program it): Obtain the histogram for the distribution of your samples (with as many bins as you can, within reasonable limits) and the cumulative density...
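An R analogue of this recipe (the original used MATLAB): the sample quantile function already serves as an empirical inverse CDF, so the histogram/cumulative-density bookkeeping collapses to one line. The data here are simulated stand-ins:

```r
# Empirical inverse-transform sampling: the sample quantile function
# estimates the inverse CDF, so feeding it uniform draws produces
# new values with approximately the same distribution as the sample.
set.seed(2)
obs <- rgamma(1000, shape = 3)                  # stand-in for the unknown X
new_draws <- quantile(obs, runif(5000), names = FALSE)
c(mean(obs), mean(new_draws))                   # should be close
```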
42,935
Selecting regression type for Dickey-Fuller test
Including a trend and drift term when they are not necessary reduces the power of the test---that is, its ability to reject the null hypothesis of non-stationarity (i.e., the null of a unit root in the time series). Conversely, the test is biased when these parameters are needed but missing. In economics, we typically ...
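For concreteness, the three regression variants can be run side by side with the urca package (assumed installed) on a simulated random walk:

```r
# The three Dickey-Fuller regressions side by side;
# assumes the 'urca' package is installed.
library(urca)
set.seed(3)
y <- cumsum(rnorm(200))                       # a driftless random walk
summary(ur.df(y, type = "none",  lags = 1))   # no constant, no trend
summary(ur.df(y, type = "drift", lags = 1))   # constant only
summary(ur.df(y, type = "trend", lags = 1))   # constant and trend
```

Comparing the test statistics against each variant's own critical values shows how the deterministic terms change the conclusion.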
42,936
Selecting regression type for Dickey-Fuller test
Charlie's suggestion to use other information to help determine what deterministic components are included is good. I would add that theoretical considerations might suggest appropriate deterministic regressors. Others have also suggested procedures for testing for a unit root that incorporate testing for the presence...
42,937
Selecting regression type for Dickey-Fuller test
There is a formal procedure to test for Unit Roots, when the true data-generating process is completely unknown. Enders mentions this in Appendix 4.2, where there is also a flowchart explaining the necessary steps. Alternatively, you could look at the underlying publication by Dolado, Jenkinson, and Sosvilla-Rivero (19...
42,938
L1 regression versus L2 regression
L1 regularisation results in a penalised loss function with discontinuities in the derivatives, whereas L2 regularisation does not introduce discontinuities. This means that when you perform gradient descent optimisation of the penalised loss there need to be checks to see if a step goes over one of these discontinui...
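A small base-R illustration of why the kink matters: lasso-style coordinate updates use the soft-threshold (the proximal operator of the L1 penalty), which can land exactly on zero, while a smooth L2 gradient step only shrinks coefficients proportionally.

```r
# |b| has a kink at 0; its proximal operator is the soft-threshold,
# which coordinate-descent lasso solvers apply instead of a raw
# gradient step, setting small coefficients exactly to zero.
soft_threshold <- function(b, lambda) sign(b) * pmax(abs(b) - lambda, 0)
soft_threshold(c(-2, -0.3, 0.3, 2), lambda = 0.5)  # -1.5  0  0  1.5
# The L2 penalty b^2 is smooth everywhere: a plain gradient step
# shrinks b proportionally and never lands exactly on zero.
```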
42,939
Estimating error from repeated measurements
If you assume all the noises are Gaussian (and especially that +/- 0.1 really isn't... but anyway), then I think the 0.16 is already an estimate of the combination of the two noises. So I would report 4.32 +- 0.16/$\sqrt{n}$, where n is the number of measurements. A derivation: So we're trying to measure the width $\...
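In R the arithmetic is just the following (n = 10 repeats is a made-up value):

```r
# Combining repeated measurements: the spread of a single
# measurement stays 0.16, but the standard error of the reported
# mean shrinks by sqrt(n).
sd_single <- 0.16        # spread of one measurement (all noise sources)
n <- 10                  # number of repeated measurements (hypothetical)
sem <- sd_single / sqrt(n)
round(sem, 3)            # 0.051
```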
42,940
How to interpret Zivot & Andrews unit root test?
Zivot-Andrews has a null hypothesis of a unit root process with drift that excludes exogenous structural change: H0: y_t = μ + y_{t-1} + ε_t. Then, depending on the model variant, the alternative hypothesis is a trend-stationary process that allows for a one-time break in the level, the trend, or both. If you reject the unit root ...
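A sketch with the urca package (assumed installed); its `model` argument selects which break enters the alternative:

```r
# Zivot-Andrews test via urca; model = "intercept", "trend", or
# "both" chooses whether the break affects the level, the trend,
# or both under the alternative.
library(urca)
set.seed(4)
y  <- cumsum(rnorm(250))             # unit-root series, no break
za <- ur.za(y, model = "both", lag = 1)
summary(za)   # reject H0 (unit root) only if the t-stat is below
              # the reported critical value
```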
42,941
How to interpret regression coefficients in logistic regression?
I completely changed my answer as a result of a long conversation with Daniel. I'll try to provide some background information so interested readers can understand my answer. As I understand the question, Daniel is trying to assess the effect of no.Green on the probability of subjects choosing red in an experiment. no.G...
42,942
Fractional integration and cointegration with R
Well now we have partialCI: An R package for the analysis of partially cointegrated time series by Matthew Clegg, Christopher Krauss and Jonas Rende
42,943
Fractional integration and cointegration with R
The CRAN Task Views page for Time Series Analysis lists the fracdiff package: Fractionally differenced ARIMA aka ARFIMA(p,d,q) models. The package is described as follows: Maximum likelihood estimation of the parameters of a fractionally differenced ARIMA(p,d,q) model (Haslett and Raftery, Appl.Statistics, 1989). The ...
42,944
Gibbs sampling for a simple linear model -- need help with the likelihood function
This is a statistics question, not a programming question, and would be better asked on CrossValidated. At least, the LaTeX code is getting parsed there automatically :). Also, this is more complicated than what is readily available on that webpage. I'll give some guidance, but as long as you want to learn how to do th...
42,945
Clustering with some cluster centers fixed/known
Sorry I can't help with mclust (I don't know R). What if you run k-means clustering with some initial centres fixed and some free to move? To fix a centre you simply need to pad it with a large number of points. For example, if there is a centre A with known coordinates, to fix it add many (say, a thousand) extra data point...
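A base-R sketch of the padding trick (the known centre and the data are made up):

```r
# Pinning one k-means centre by padding it with many copies, as
# described above; centre A = (0, 0) is taken as known.
set.seed(6)
pts <- rbind(matrix(rnorm(200), ncol = 2),             # cluster near (0,0)
             matrix(rnorm(200, mean = 4), ncol = 2))   # cluster near (4,4)
A   <- c(0, 0)
pad <- matrix(rep(A, each = 1000), ncol = 2)           # 1000 anchor copies
km  <- kmeans(rbind(pts, pad), centers = rbind(A, c(4, 4)))
km$centers    # one centre is dragged back to (almost) exactly A
```

Note this is an approximation: the padded centre can still drift slightly, by an amount that shrinks as the number of anchor copies grows.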
42,946
Assessing statistical significance of a rare binary event in time series
If by "failure" you mean something that can only occur once for a subject without occurring again, use the Cox proportional hazards model. If your "failure" can occur more than once for a given subject, use a shared frailty model, which is related to a multilevel logistic regression.
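A sketch with the survival package (using its built-in lung data), plus a hypothetical frailty variant for recurring failures; the column names in the frailty example are made up:

```r
# Cox proportional hazards for a one-time failure; the 'survival'
# package ships with standard R distributions.
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)$coefficients
# For failures that can recur within a subject, add a shared
# frailty term on a hypothetical data frame d with columns
# start, stop, status, x, id:
#   coxph(Surv(start, stop, status) ~ x + frailty(id), data = d)
```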
42,947
Calculating the transfer entropy in R
the same as above from the same page http://users.utu.fi/attenka/trent.R

###############################
###############################
## FUNCTION TRANSFER ENTROPY ##
###############################
###############################
# 070527 (ver. 081126), Atte Tenkanen
# s, time shift
trent <- function(Y, X, s = 1){ #...
42,948
Calculating the transfer entropy in R
The JIDT toolkit, which is the successor to the Matlab code in my high-level summary linked in the original question, provides transfer entropy estimators for both discrete and continuous data, including various estimators for continuous data (Gaussian, box-kernel, Kraskov). It can be used to calculate transfer entropy ...
42,949
Calculating the transfer entropy in R
There is also the RTransferEntropy package, which allows the calculation of Shannon and Renyi TE measures and provides significance measures. The package uses C++ internally, so it should be reasonably fast. It's also easy to use via the transfer_entropy(x, y) function for the x->y and y->x directions, as well as signi...
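A minimal usage sketch, assuming the package is installed; the data construction below is made up so that x should drive y but not vice versa:

```r
# Shannon transfer entropy in both directions with shuffle-based
# significance; assumes the 'RTransferEntropy' package is installed.
library(RTransferEntropy)
set.seed(7)
x <- rnorm(600)
y <- 0.6 * c(0, head(x, -1)) + rnorm(600, sd = 0.5)  # y lags x by 1
te <- transfer_entropy(x, y)
te   # expect notable TE for x -> y, little for y -> x
```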
42,950
Calculating the transfer entropy in R
See the .pdf found by Ramnath in comments section: http://users.utu.fi/attenka/TEpresentation081128.pdf
42,951
Calculating the transfer entropy in R
Would this help as well? https://cran.r-project.org/web/packages/TransferEntropy/TransferEntropy.pdf The example makes sense, but I am personally not sure how to measure significance.
42,952
Measuring predictive accuracy for multiple dependent variables
In Machine Learning, many algorithms directly minimise a loss function with some form of capacity control (regularisation). This gives a direct measure of the performance of the classifier on future data, through the use of the loss function that was being minimised. If the specific problem you are dealing with can be ...
42,953
Machine learning for activity streams
The most related technique I know of is described in a...
42,954
Machine learning for activity streams
You can try recurrent neural networks: neural networks...
42,955
Pointwise mutual information for text using R
There are many functions for estimating the mutual information or the entropy in R, for example the entropy package. Enter install.packages("entropy") at the R prompt. You can then use the property that $pmi(x;y) = h(x) + h(y) - h(xy)$ to calculate the pointwise mutual information. You need to obtain frequency estima...
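Once the frequency estimates are in hand, the computation itself is a one-liner; all counts below are hypothetical:

```r
# Pointwise mutual information from corpus counts:
# pmi(x; y) = log2 p(x, y) - log2 p(x) - log2 p(y),
# i.e. h(x) + h(y) - h(x, y) with h = -log2 p.
N    <- 1e6    # hypothetical total number of word-pair observations
n_xy <- 300    # co-occurrences of the pair
n_x  <- 2000   # occurrences of word x
n_y  <- 1500   # occurrences of word y
pmi  <- log2((n_xy / N) / ((n_x / N) * (n_y / N)))
round(pmi, 2)  # 6.64: the pair co-occurs far more than chance predicts
```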
42,956
How to differentiate two subgroups from a histogram?
I assume you are talking about Neonatal Behavioral Assessment Scale values in Hereditary Renal Adysplasia. I often see in medical research that physicians want to have cut-offs and simple threshold-based interpretations of their research results, based merely on the distribution of the measurements. Practice and appli...
42,957
How to differentiate two subgroups from a histogram?
If you are willing to assume the populations have the same variance, you could use essentially LDA without the normality assumption (a.k.a. Fisher's method or Fisher's discriminant function). Without this assumption you could try an EM algorithm, which is indirectly what Matt suggested, since this would be a mixture model...
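The EM route mentioned above can be sketched for the simplest case: a two-component Gaussian mixture in one dimension. This is a toy illustration on synthetic data in plain Python; in practice you would use a packaged implementation (e.g. mclust or mixtools in R).

```python
import math
import random

random.seed(0)
# Synthetic sample: two well-separated subgroups, as in a bimodal histogram.
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(5, 1) for _ in range(300)]

def em_two_gaussians(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM."""
    mu1, mu2 = min(xs), max(xs)   # crude initialisation at the sample extremes
    s1 = s2 = 1.0
    w = 0.5                       # mixing weight of component 1
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = w * math.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / s1
            p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / s2
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means, sds and mixing weight
        n1 = sum(r)
        n2 = len(xs) - n1
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1)
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2)
        w = n1 / len(xs)
    return mu1, mu2, w

mu1, mu2, w = em_two_gaussians(data)
print(round(mu1, 2), round(mu2, 2), round(w, 2))
```

The fitted posterior responsibilities then give each observation a soft assignment to one of the two subgroups, which is exactly the "cut-off" the histogram alone cannot provide.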
42,958
Hidden states in hidden conditional random fields
I've posted my question in another site, where I also didn't receive the answer I was looking for. I answered my own question there and I decided to answer my own question here as well: In the case of a linear chain HCRF, the hidden state sequences are calculated in exactly the same way as in hidden Markov models. The ...
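As the answer notes, for a linear chain the best hidden state sequence is recovered exactly as in HMMs, i.e. with the Viterbi dynamic program. A small self-contained Python sketch; the two-state model and all its probabilities below are invented for illustration:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden state sequence for an observation sequence."""
    # prob[t][s] = probability of the best path ending in state s at time t
    prob = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for o in obs[1:]:
        prev = prob[-1]
        prob.append({})
        back.append({})
        for s in states:
            best = max(states, key=lambda ps: prev[ps] * trans_p[ps][s])
            prob[-1][s] = prev[best] * trans_p[best][s] * emit_p[s][o]
            back[-1][s] = best
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: prob[-1][s])
    path = [last]
    for b in reversed(back[1:]):
        path.append(b[path[-1]])
    return list(reversed(path))

states = ["Healthy", "Fever"]
start = {"Healthy": 0.6, "Fever": 0.4}
trans = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
         "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
emit = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
        "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
path = viterbi(["normal", "cold", "dizzy"], states, start, trans, emit)
print(path)  # ['Healthy', 'Healthy', 'Fever']
```

In an HCRF the same recursion runs over (unnormalized) potentials instead of probabilities, but the max-product structure is identical.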
42,959
Compression theory, practice, for time series with values in a space of distributions (say of a real random variable)
You could use any probabilistic time series model in combination with arithmetic coding. You'd have to quantize the data, though. Idea: the more likely an "event" is to occur, the larger the share of code space reserved for it, and hence the fewer bits it costs. E.g. if $p(x_t = 1 \mid x_{1:t-1}) = 0.5$, with $x_{1:t-1}$ being the history of events seen so far, then c...
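Under arithmetic coding, an event the model assigns probability $p$ costs about $-\log_2 p$ bits, so the sharper the predictive model, the shorter the encoding. A tiny Python illustration (the probabilities are invented):

```python
import math

def ideal_code_length(probs):
    """Total bits to encode a sequence of events whose model-assigned probs are given."""
    return sum(-math.log2(p) for p in probs)

# The same three observed events under a weak and a sharper predictive model:
weak = ideal_code_length([0.5, 0.5, 0.5])
sharp = ideal_code_length([0.9, 0.9, 0.9])
print(weak, round(sharp, 3))  # 3.0 0.456
```

This is the sense in which a good time series model "is" a good compressor: expected code length per symbol equals the cross-entropy of the model against the data.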
42,960
Compression theory, practice, for time series with values in a space of distributions (say of a real random variable)
Your distribution is parametric, and you should just store the parameters that are sufficient statistics, if you can identify them. That includes the distribution family. For a time series, you can take advantage of autocorrelation and store the parameters of the predictive distribution conditional on its previous valu...
42,961
How can we compare multiple proportions from multiple independent populations to evaluate implementation of a treatment?
If I'm reading you right (and changing Tal's 4 to a 5), then at http://en.wikipedia.org/wiki/Statistical_hypothesis_testing if you scroll halfway down you'll find the formula for "Two-proportion z-test, pooled for d0 = 0." I would think you'd want to do such a test for each of the five years, then choose a meta-analy...
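The pooled two-proportion z statistic referred to above is simple to compute directly. A Python sketch with hypothetical counts (45/100 successes in one year versus 30/100 in another):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2 (d0 = 0)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

print(round(two_proportion_z(45, 100, 30, 100), 3))  # 2.191
```

One such statistic per year, followed by a meta-analytic combination, is what the answer suggests.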
42,962
How can we compare multiple proportions from multiple independent populations to evaluate implementation of a treatment?
You could also check the Marascuilo procedure to compare multiple proportions in one test. Here is a detailed walkthrough: http://www.itl.nist.gov/div898/handbook/prc/section4/prc474.htm And a related question: Has anyone used the Marascuilo procedure for comparing multiple proportions?
42,963
How to do primary component analysis on multi-mode data with non-orthogonal primary components?
There are factor analysis techniques that allow oblique rotation, not just the orthogonal rotation that PCA uses. Take a look at direct oblimin rotation or promax rotation. Not sure what statistical application you are using. In R, the psych and HDMD packages have commands that allow oblique rotations.
42,964
How to do primary component analysis on multi-mode data with non-orthogonal primary components?
Independent component analysis is suitable for separating a non-orthogonal basis. Check out this paper. I guess figure 1 is what you want. Choi S. (2009) Independent Component Analysis. In: Li S.Z., Jain A. (eds) Encyclopedia of Biometrics. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-73003-5_305
42,965
Getting started with time series in R
It seems like you need the package xts. Create your time series using

install.packages('xts')
library(xts)
X = xts(coredata(DF[,2]), order.by=DF[,1])

Then you will be able to manipulate your data easily:

to.weekly(X)
to.monthly(X)

Please note that you will then manipulate xts objects and not ts. But no worries, you...
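In the same spirit as xts's to.weekly(), here is a rough stdlib-Python sketch of aggregating a daily series to weekly means, grouping by ISO (year, week). The dates and values are made up for illustration; in R the xts calls do this directly.

```python
from collections import defaultdict
from datetime import date, timedelta

# Toy daily series of (date, value) pairs, starting on a Monday:
series = [(date(2023, 1, 2) + timedelta(days=i), float(i)) for i in range(14)]

def to_weekly(daily):
    """Mean value per ISO (year, week) -- a crude analogue of xts::to.weekly."""
    buckets = defaultdict(list)
    for d, v in daily:
        iso = d.isocalendar()
        buckets[(iso[0], iso[1])].append(v)
    return {week: sum(vs) / len(vs) for week, vs in sorted(buckets.items())}

print(to_weekly(series))  # {(2023, 1): 3.0, (2023, 2): 10.0}
```

The key design point (in xts as here) is that the aggregation is driven by the time index, not by row positions, so irregular or missing days are handled naturally.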
42,966
Theoretical results for cross-validation estimation of classification accuracy?
I don't know much about these kinds of proofs, but I think John Langford's thesis might be a good reference. Here's a relevant page: http://hunch.net/~jl/projects/prediction_bounds/prediction_bounds.html and the probably relevant section of the thesis: http://hunch.net/~jl/projects/prediction_bounds/thesis/mathml/thesi...
42,967
Informative censoring caused by cesarean section
When I initially wrote the comments below I had assumed that the Heckman estimator could be used for dichotomous outcomes, but the second paper I cite says there is no direct analog. Hopefully someone can point to different and more applicable resources. I still leave my initial comment up as I still feel those papers ...
42,968
Informative censoring caused by cesarean section
As another option here, how about using a multinomial logistic regression (with the outcomes being trauma, [non-elective] caesarean, and no trauma)? I'm not entirely sure if this approach will fully address the issue of bias, but one would get some measures of association about the associations between e.g. fetus size...
42,969
Factor Significance for Factor Model
The short answer is: There is something you can do but I am not sure how meaningful it will be. The Long answer: I will give the long answer for a simple model where we have only one unknown latent factor. The idea carries over to the more general case albeit with more complications. It follows from your factor model ...
42,970
Factor Significance for Factor Model
Just speaking on a practical level, in my discipline (psychology) I have never seen this done for pure factor analysis. That being said, the significance (fit really) of a statistical model is normally tested by the use of Structural Equation Modelling, where you attempt to reproduce the observed matrix of data from th...
42,971
Factor Significance for Factor Model
If the problem at issue consists of testing for the optimal number of factors, Jushan Bai and Serena Ng in several articles provide a test based on AIC/BIC that minimizes, for different options, the variance of the error. To my knowledge, they supply the most up-to-date approach to this issue. See also Alexei Onats...
42,972
Factor Significance for Factor Model
I am not sure if I got your question right, but if you already have a number of exact factors, I guess you can use a chi-squared test to see whether the factor loading of your concern is significant, as we do in multiple regression. So here I assume you know in advance the exact value of the factors and the criterion variable, ...
42,973
How to do factorial analysis for a non-normal and heteroscedastic data?
Package vegan implements some permutation testing procedures using a distance based approach. For factor analysis, you should take a look at section 5 of the documentation. There's also more information in the paper: On distance-based permutation tests for between-group comparisons (Reiss et al, 2010) You might als...
42,974
How to do factorial analysis for a non-normal and heteroscedastic data?
The Skillings-Mack test is a general Friedman-type test that can be used in almost any block design with an arbitrary missing-data structure. It's part of the asbio package for R, and there's a user-written package skilmack for Stata. Skillings, J. H., and G. A. Mack. 1981. On the use of a Friedman-type statistic in b...
42,975
How to do factorial analysis for a non-normal and heteroscedastic data?
As you suggest you "designed" an experiment, it would be better if you can give a description of your design and data set. Even if the data is heteroscedastic and non-normal, some variable transformations might help, and you may be able to take advantage of the design. The t-test is fairly robust to the normali...
42,976
Interpreting output of igraph's fastgreedy.community clustering method
The function used for this purpose is: community.to.membership(graph, merges, steps, membership=TRUE, csize=TRUE) This can be used to extract membership based on the fastgreedy.community function results. You have to provide the number of steps - how many merges should be performed. The optimal number of steps (merg...
42,977
What is a statistical journal with quick turnaround?
Maybe Statistics Surveys (but I think they are seeking reviews more than short notes), Statistica Sinica, or the Electronic Journal of Statistics. They are not as widely cited as SPL, but I hope this may help.
42,978
How Large a Difference Can Be Expected Between Standard GARCH and Asymmetric GARCH Volatility Forecasts?
Generally, by not allowing for asymmetry, you expect the effect of shocks to last longer: i.e. the half-life increases (the half-life is the number of units of time, after a 1 S.D. shock to $\epsilon_{t-1}$, for $\hat{\sigma}_t \mid I_{t-1}$ to come back to its unconditional value). Here is a code snippet that downloads ...
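For a standard GARCH(1,1), the half-life defined above has a closed form: a shock to the conditional variance decays geometrically at rate $\alpha + \beta$, so the half-life is $\ln(0.5)/\ln(\alpha+\beta)$. A quick Python check (the parameter values are hypothetical):

```python
import math

def garch_half_life(alpha, beta):
    """Periods for a variance shock to decay halfway; persistence = alpha + beta."""
    return math.log(0.5) / math.log(alpha + beta)

# Hypothetical GARCH(1,1) estimates:
print(round(garch_half_life(0.08, 0.90), 1))  # 34.3
```

This makes the answer's point concrete: small changes in estimated persistence near 1 translate into large changes in the implied half-life, which is one way asymmetric and symmetric fits can differ materially.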
42,979
How Large a Difference Can Be Expected Between Standard GARCH and Asymmetric GARCH Volatility Forecasts?
There is a significant difference, and there are a couple of published papers to that effect: "Comparative Performance of Volatility Models for Oil Price", International Journal of Energy Economics and Policy, Vol. 2, No. 3, 2012, pp. 167-183, ISSN: 2146-4553, www.econjournals.com, and many more.
42,980
Robust nonparametric estimation of hazard/survival functions based on low count data
This is probably a stupid answer (I am new here), but if you want to estimate the hazard function from observations of an initial population that slowly died away (i.e. had events and then were censored), isn't that what the Nelson-Aalen estimator was built to do? We could have another conversation about the reliabilit...
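The Nelson-Aalen estimator mentioned in the answer is $\hat{H}(t) = \sum_{t_i \le t} d_i / n_i$, where $d_i$ is the number of events and $n_i$ the number at risk at event time $t_i$. A small Python sketch (the times and censoring flags below are made up):

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard: H(t) = sum of d_i / n_i over event times <= t."""
    data = sorted(zip(times, events))  # events[i] = 1 for an event, 0 if censored
    at_risk = len(data)
    H, curve = 0.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = leaving = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]            # events at time t
            leaving += 1               # everyone observed at t leaves the risk set
            i += 1
        if d:
            H += d / at_risk
            curve.append((t, H))
        at_risk -= leaving
    return curve

# Toy data: times 1, 2, 2, 3; the second subject at t = 2 is censored.
print(nelson_aalen([1, 2, 2, 3], [1, 1, 0, 1]))
```

With low counts the increments $d_i/n_i$ are noisy, which is exactly where the reliability concerns raised in the answer come in; smoothed or penalized variants are the usual remedy.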
42,981
Should we use measured vs. modelled or modelled vs. measured?
Whether a value is observed or modelled is irrelevant. What matters is whether or not a value has an error or a random distribution that you want to study. Two common cases It is common to consider a conditional distribution of some variable based one or more other variables. Then we have the value for which we want to...
42,982
How to think about counterfactuals?
How are counterfactuals useful and how should I understand them? Counterfactuals are useful in situations for which you observed a set of parameters and you want to reason about other scenarios that are in contradiction with the actual one. They are used for studying individual cases, as opposed to do-operators that a...
42,983
Why sample size is not a part of sufficient statistic?
It is typical practice that the sample size is considered (implicitly) to be a known constant unless we specify the contrary in the analysis. This practice saves time by alleviating the need to specify that the sample size is known, which is true in the vast majority of statistical applications. You can of course pro...
42,984
VAR() or dynlm() or lm()
Without seeing your code, it is hard to spell out the difference in results. But it sure is possible to get the same results in either package, as - as you correctly point out - all three commands ultimately just run OLS regressions. It is with different degrees of ease, though, reflecting the purpose of the packages. ...
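The equivalence described in the answer is easy to check directly: a least-squares fit on hand-built lags reproduces what a dedicated VAR routine would estimate. A minimal sketch with simulated data (the bivariate VAR(1) setup and all variable names are illustrative, not taken from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): x_t = A @ x_{t-1} + noise
A = np.array([[0.5, 0.1], [0.2, 0.4]])
x = np.zeros((200, 2))
for t in range(1, 200):
    x[t] = A @ x[t - 1] + rng.normal(scale=0.1, size=2)

# "VAR()"-style estimation is just OLS of x_t on x_{t-1} (plus intercept),
# which is exactly what lm()/dynlm() would do with manually built lags.
Y = x[1:]                                    # left-hand side: x_t
X = np.column_stack([np.ones(199), x[:-1]])  # intercept + one lag
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = coef[1:].T                           # estimated coefficient matrix

print(np.round(A_hat, 2))
```

With enough observations the recovered matrix sits close to the true `A`, regardless of which interface you would have used to run the regression.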
42,985
How and why do epidemiology and econometrics models handle multi-collinearity differently?
The "model building" process is a misnomer. A well conducted analysis pre-specifies the variables, and their encoding, to be included in the final model based on the scientific expertise of the discipline and based on statistical power of the sample. We can't tell from statistical output alone whether a variable is a "...
42,986
When are ROC curves to compare imaging tests valid? (Focus on the example below)
The particular paper in question, P.H. Horne et al, A Novel Radiographic Indicator of Developmental Cervical Stenosis, J Bone Joint Surg Am. (2016) 98:1206-14, seems to be an unfortunate example of what one might call "premature dichotomization." There is an established cutoff of <12 mm in sagittal spinal canal diamete...
42,987
Are there any "convex neural networks"?
Any neural net with at least one hidden layer containing more than one neuron leads to an optimization problem that is not convex. This is true because if you have any (local) optimum for that architecture, you can get another one by switching the weights of those two neurons. Of course this is not guaranteed to work ...
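The neuron-swapping argument can be made concrete: permuting two hidden units (and their outgoing weights consistently) gives a different point in weight space that computes exactly the same function, so the loss surface has at least two symmetric optima and cannot be strictly convex. A minimal sketch with a hypothetical two-unit tanh network:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, W1, b1, W2):
    """Tiny one-hidden-layer net: 1 input, 2 tanh units, 1 linear output."""
    return np.tanh(x[:, None] * W1 + b1) @ W2

x = rng.normal(size=50)
W1 = np.array([0.7, -1.2]); b1 = np.array([0.3, 0.5]); W2 = np.array([1.0, -2.0])

# Swap the two hidden neurons: permute every weight vector consistently.
perm = [1, 0]
out_original = mlp(x, W1, b1, W2)
out_swapped  = mlp(x, W1[perm], b1[perm], W2[perm])

# Identical function at two distinct points in weight space: if one is a
# (local) optimum of any loss, so is the other, ruling out strict convexity.
assert np.allclose(out_original, out_swapped)
```

The same permutation symmetry scales to any number of hidden units, which is why the number of equivalent optima grows factorially with layer width.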
42,988
Estimation precision of lower- vs. higher-order moments
Here is what I believe might be a counterexample if the intuition were a general claim, or at least a result that seems to indicate that the answer to 2. might be "not really". The measure of the precision of an estimator of a certain moment that I use here is the variance. It is well known that the variance of the sam...
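Claims like this are straightforward to probe by simulation: estimate the Monte Carlo variance of the k-th raw sample moment and compare across k. A small illustrative sketch (standard normal data; the function name and settings are mine, not the answer's counterexample, and for the normal the intuition happens to hold, so finding the counterexample requires a different distribution):

```python
import numpy as np

rng = np.random.default_rng(2)

def moment_estimator_var(k, n=100, reps=5000):
    """Monte Carlo variance of the k-th raw sample moment for N(0,1) samples."""
    draws = rng.standard_normal((reps, n))
    return (draws ** k).mean(axis=1).var()

v1 = moment_estimator_var(1)  # variance of the sample mean, ~ 1/n = 0.01
v2 = moment_estimator_var(2)  # variance of the 2nd raw moment, ~ 2/n = 0.02
print(v1, v2)
```

Swapping in a heavier-tailed sampler is the natural way to explore whether higher moments can ever be the better-estimated ones.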
42,989
In reality, there is almost always measurement error in the independent variable(s), so why is this ignored in almost every linear regression model?
Errors in X are ignored for (1) expediency and (2) because if you correct for such errors predictions will be off for future data that have the same degree of errors as occurred in the training data. Correction for errors in X makes regression coefficients properly farther from zero but then they apply only to future c...
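The classic attenuation result behind this answer is easy to demonstrate: with noise in X, the OLS slope shrinks toward zero by the reliability ratio var(x)/(var(x)+var(error)). A simulated sketch (all numbers chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, sigma_u = 10_000, 2.0, 1.0

x_true = rng.normal(size=n)
x_obs = x_true + rng.normal(scale=sigma_u, size=n)  # measurement error in X
y = beta * x_true + rng.normal(scale=0.5, size=n)

# Naive OLS on the error-laden X: slope shrinks by the reliability ratio
# var(x_true) / (var(x_true) + var(error)) = 0.5 here, so ~ beta * 0.5 = 1.0.
slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
print(round(slope, 2))
```

The attenuated slope is exactly the one that predicts best for future observations measured with the same error, which is the answer's point about why the "correction" can hurt prediction.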
42,990
Cross Validation in StackingClassifier Scikit-Learn
This includes two questions; I will address each of them. We could use cross-validation on the entire system, but that would handicap us a bit too much. The purpose of cross-validation is to find the optimal parameters, those that allow the model to fit the data well without over-fitting. It suffices that our final est...
42,991
Cross Validation in StackingClassifier Scikit-Learn
My question, why use 5-fold cross-validation only in the final estimator? why isn't final estimator fitted on the full X' (output from base estimators)? Short answer: You probably misunderstood what StackingClassifier does (and so did I at first), because the description provided in scikit-learn is prone to misinterpr...
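To make the out-of-fold mechanics explicit, here is a hand-rolled version of what these two answers describe, built on scikit-learn's `cross_val_predict`; the dataset and base models are arbitrary stand-ins:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)

base = [DecisionTreeClassifier(max_depth=3, random_state=0),
        LogisticRegression(max_iter=1000)]

# Meta-features: OUT-OF-FOLD probabilities, so the final estimator never
# sees predictions made on data a base model was trained on.
meta_X = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in base
])

final = LogisticRegression().fit(meta_X, y)

# The base estimators themselves are then refit on the FULL data for use
# at prediction time, which mirrors what StackingClassifier does.
for m in base:
    m.fit(X, y)
```

The cross-validation here is not model selection: it only exists to keep the final estimator's training inputs honest, exactly the misinterpretation the second answer warns about.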
42,992
threshold choice for binary classifier: on training, validation or test set?
Go with 3. Regarding 1, you are correct - this makes the test set part of the training of the actual classifier. And 2 is a waste of cases that doesn't gain you anything over 3.
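A sketch of option 3 in code: sweep candidate thresholds against validation-set scores only, keeping the test set untouched for the final evaluation. The scores below are simulated stand-ins for an already-trained classifier's outputs:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical validation-set scores and labels from a trained model.
scores = np.concatenate([rng.normal(0.3, 0.15, 200), rng.normal(0.7, 0.15, 200)])
labels = np.concatenate([np.zeros(200), np.ones(200)]).astype(int)

# Sweep candidate thresholds on the VALIDATION set only (option 3):
# the test set is never consulted while tuning.
candidates = np.linspace(0, 1, 101)
accuracies = [np.mean((scores >= t) == labels) for t in candidates]
best_threshold = candidates[int(np.argmax(accuracies))]
print(best_threshold)  # near 0.5 for these symmetric score distributions
```

Any metric (F1, cost-weighted error, etc.) can replace accuracy in the sweep; the point is only where the sweep is allowed to look.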
42,993
Instrumental variables: In which cases would the average treatment effect on the treated (ATT) and local average treatment effect (LATE) be similar?
No, this is not correct. Let's walk through the basics to see why, and to see under what other assumptions ATT = LATE. Let us call treatment assignment $Z$, and actual treatment taken $D$. Compliers have $D(Z = 1) = 1$ and $D(Z = 0) = 0$: if assigned treatment, they take it; if assigned control, they do not take the treat...
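The strata logic is easy to simulate: with always-takers present, the IV (Wald) estimator recovers the complier effect, while the ATT blends compliers and always-takers. A toy simulation (all stratum shares and effect sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Principal strata: compliers, always-takers, never-takers (no defiers).
stratum = rng.choice(["c", "a", "n"], size=n, p=[0.5, 0.25, 0.25])
Z = rng.integers(0, 2, size=n)  # randomized assignment
D = np.where(stratum == "a", 1, np.where(stratum == "n", 0, Z))

# Treatment effect: 2 for compliers, 5 for always-takers.
effect = np.where(stratum == "c", 2.0, np.where(stratum == "a", 5.0, 0.0))
Y = effect * D + rng.normal(size=n)

# Wald/IV estimate -> LATE (complier effect, 2); the true ATT mixes
# compliers with always-takers (here (2 + 5) / 2 = 3.5).
late = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (D[Z == 1].mean() - D[Z == 0].mean())
att = effect[D == 1].mean()
print(round(late, 2), round(att, 2))
```

Setting the always-taker share to zero (so the treated are compliers only) makes `late` and `att` coincide, which is the special case the answer builds toward.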
42,994
Specifying hierarchical GAM for ecological count data---annual bird migration counts
The way to extend these models to higher order terms is to use tensor product smooths. You can get exactly the same smooth as a bs = 'fs' term by using t2(x, f, bs = c('cr','re'), full = TRUE) so you could write your model as: count ~ s(doy, m = 2) + t2(doy, year, bs = c('cr', 're'), full = TRUE) + offset(log(minute...
42,995
Estimate the number of common members in two populations
My two cents: Use maximum likelihood estimation on K: Likelihood $P(data|N1, N2, K)\propto$ ${K\choose k} {N1-K \choose n-k} {N2-k \choose n-k}$, where ${n \choose k}$ denotes the binomial coefficient. Then find: K_optimal = argmax(P w.r.t. K). I couldn't find an analytical solution so I wrote a few lines of code to calculate it, with ...
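The grid-search MLE this answer describes takes only a few lines. This sketch implements the stated likelihood directly; note that the last binomial factor does not involve K, so it drops out of the argmax:

```python
import math

def overlap_mle(N1, N2, n, k):
    """Argmax over K of the answer's likelihood
    L(K) ∝ C(K, k) * C(N1 - K, n - k) * C(N2 - k, n - k);
    the last factor is constant in K and can be ignored."""
    def lik(K):
        if K < k or N1 - K < n - k:
            return -1  # impossible configurations get zero likelihood
        return math.comb(K, k) * math.comb(N1 - K, n - k)
    return max(range(N1 + 1), key=lik)

# E.g. seeing k = 3 shared members in samples of n = 10 from populations of
# size 100 suggests an overlap of roughly k * N1 / n = 30.
print(overlap_mle(100, 100, 10, 3))  # -> 30
```

For large populations, switching `math.comb` to log-binomials (`math.lgamma`) keeps the search numerically tame without changing the argmax.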
42,996
Estimate the number of common members in two populations
In a 2019 paper titled Bayes-optimal estimation of overlap between populations of fixed size, Daniel Larremore presents a solution when $N_1$ and $N_2$ are fixed and known. I'm not gonna repeat the whole paper and just present the main result. Without loss of generality, assume that $N_1 \leq N_2$. Further, denote $n_1...
42,997
Difference between Multivariate Regression vs Iterative Regression on Residuals [duplicate]
1) The two approaches actually work in the same way, albeit in the first (multiple covariates) case the system of linear equations that are being solved is greater. 2) The outputs would be different but how different would depend on the correlation. The model with more covariates will invariably explain more variation...
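The dependence on correlation is easiest to see numerically: with correlated predictors, the first-stage slope absorbs the variation shared with the second predictor, so the sequential coefficients differ from the joint OLS fit. A simulated sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)  # correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Joint fit recovers (1, 2); the sequential fit does not, because the
# first stage credits x1 with the variation it shares with x2.
b_joint = ols(np.column_stack([x1, x2]), y)
b1_seq = ols(x1[:, None], y)[0]                # ~ 1 + 2 * 0.8 = 2.6
b2_seq = ols(x2[:, None], y - b1_seq * x1)[0]
print(np.round(b_joint, 1), round(b1_seq, 1))
```

With uncorrelated predictors (set the `0.8` to `0`) the two procedures agree, which is exactly the orthogonality condition the answer alludes to.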
42,998
Time Series Regressor Selection
"I do not want to use cross-correlation (as suggested in this answer) because I want to take into account the covariance between the regressors." The role of pre-whitening is to INITIALLY identify the nature of the input series transfer function structures/lags. This easily gets redefined via tests of necessity and te...
42,999
Time Series Regressor Selection
Dynamic Factor Models, introduced among others by Stock and Watson (2002), seem to do what I am looking for: This article studies forecasting a macroeconomic time series variable using a large number of predictors. The predictors are summarized using a small number of indexes constructed by principal component analysis....
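A compact sketch of the diffusion-index idea from Stock and Watson (2002): simulate many predictors driven by a couple of latent factors, compress them with principal components, and forecast using only those indexes. All sizes and coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
T, p = 300, 50

# Many noisy predictors driven by 2 latent factors.
factors = rng.normal(size=(T, 2))
loadings = rng.normal(size=(2, p))
X = factors @ loadings + rng.normal(scale=0.5, size=(T, p))
y = factors @ np.array([1.0, -2.0]) + rng.normal(scale=0.3, size=T)

# Diffusion-index step: summarize X with its first 2 principal components,
# then regress y on those indexes instead of all 50 predictors.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T
Z = np.column_stack([np.ones(T), pcs])
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
r2 = 1 - (y - Z @ beta).var() / y.var()
print(round(r2, 2))
```

Because the components estimate the factor space up to rotation, two indexes capture almost all of the forecastable variation despite discarding 48 of the 50 regressors.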
43,000
Intraclass correlation coefficient (ICC) for two raters using a mixed effects model for a design with repeated measurements
Is the model appropriate for the design of the study? I think not. The issue is that you have only 2 raters. You are asking the software to estimate the variance of a normally distributed variable using only 2 observations, so any estimate of a variance for this variable, and any statistic that uses it, should be hig...