32,701
For a continuous random variable, why does $P(a < Z < b) = P(a \leq Z < b) = P(a < Z \leq b) = P(a \leq Z \leq b)$
Perhaps a more intuitive explanation is that for a continuous variable the contribution of the edges (e.g., $a$ or $b$) to the cumulative probability in the surrounding intervals (or semi-intervals) is negligibly small.
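The intuition can be made exact with the CDF $F$ of $Z$: when $F$ is continuous, a single point carries zero probability, so including or excluding an endpoint changes nothing. A short derivation:

```latex
P(Z = a) \;=\; \lim_{\epsilon \to 0^{+}} \bigl( F(a) - F(a - \epsilon) \bigr) \;=\; 0
\quad \text{(continuity of } F\text{)},
\qquad\text{hence}\qquad
P(a \le Z \le b) \;=\; P(Z = a) + P(a < Z \le b) \;=\; P(a < Z \le b) \;=\; F(b) - F(a),
```

and the same argument applied at $b$ collapses the remaining variants.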
32,702
How to compare variability within and between groups?
You can do this with an ANOVA analysis:

my.locs$cluster = factor(rep(c(1, 2), each=5))
anova(lm(attribute ~ cluster, my.locs))
# Analysis of Variance Table
#
# Response: attribute
#           Df Sum Sq Mean Sq F value Pr(>F)
# cluster    1   62.5   62.50  0.1109 0.7477
# Residuals  8 4510.0  563.75

This finds the variance within and between groups, and uses an F-test to determine a p-value. For the simulated data, the p-value is 0.7477, indicating there is not a significant difference between the clusters of nests (not surprising, since the data was randomly generated without distinguishing between clusters).
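For readers not using R, the within/between decomposition behind that table is easy to compute by hand. A hedged Python sketch (the two toy groups are invented for illustration, not the nest data above):

```python
from math import fsum

# two hypothetical groups of measurements (made-up numbers)
groups = [[1, 2, 3, 4, 5], [3, 4, 5, 6, 7]]

n = sum(len(g) for g in groups)
grand_mean = fsum(x for g in groups for x in g) / n
group_means = [fsum(g) / len(g) for g in groups]

# between-group sum of squares: how far each group mean sits from the grand mean
ss_between = fsum(len(g) * (m - grand_mean) ** 2
                  for g, m in zip(groups, group_means))
# within-group sum of squares: spread of observations around their own group mean
ss_within = fsum((x - m) ** 2
                 for g, m in zip(groups, group_means) for x in g)

df_between = len(groups) - 1
df_within = n - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)
print(F)  # ratio of between- to within-group variance
```

With a real dataset you would then compare F against the F(df_between, df_within) distribution to get the p-value, which is exactly what anova(lm(...)) reports.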
32,703
Lasso-ing the order of a lag?
You can do cross-validation repeatedly from k = 0 up to whatever the maximum is, and plot the performance against k. Since the model is tested on data it hasn't seen before, there is no guarantee that more complex models will perform better; indeed, you should see performance degrade if the model becomes too complex due to overfitting. Personally I think this is safer and easier to justify than an arbitrary penalty factor, but your mileage may vary. I also don't really follow how the ordered lasso answers the question. It seems too restrictive: it completely forces the ordering of the coefficients, whereas for some data the original question may end up with a solution where $\phi_{lj}$ is not strictly decreasing in $l$.
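As a concrete illustration of scanning over k, here is a hedged Python sketch (NumPy assumed available). A simple holdout stands in for full cross-validation, and the AR(2) series, coefficients, and split point are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# a hypothetical AR(2) series (invented for illustration)
T = 400
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

max_k = 8
train_end = 300
mse = []
for k in range(max_k + 1):
    # design matrix of lags 1..k plus an intercept column; all k are
    # scored on the same targets by starting every sample at t = max_k
    rows = range(max_k, T)
    X = np.array([[1.0] + [y[t - j] for j in range(1, k + 1)] for t in rows])
    target = y[max_k:]
    X_tr, X_te = X[:train_end - max_k], X[train_end - max_k:]
    y_tr, y_te = target[:train_end - max_k], target[train_end - max_k:]
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    mse.append(float(np.mean((X_te @ beta - y_te) ** 2)))

best_k = int(np.argmin(mse))  # in practice, plot mse against k and look for the flattening
```

The k = 0 model predicts with the intercept alone, so the curve starts at the marginal variance of the series and should drop once the true lags are included.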
32,704
Lasso-ing the order of a lag?
The ordered LASSO seems to be what you're looking for: It computes the regularized regression coefficients $\beta_1, \dots, \beta_j$ as in the standard LASSO, but subject to the additional constraint that $|\beta_1| \geq |\beta_2| \geq \dots \geq |\beta_j|$. This accomplishes the second goal of zeroing out coefficients for higher-order lags, but is more restrictive than the sole restriction of preferring a lower lag model. And as others point out, this is a heavy restriction that can be very difficult to justify. Having dispensed with the caveats, the paper presents the method's results on both real and simulated time series data, and details algorithms to find the coefficients. The conclusion mentions an R package, but the paper is rather recent and a search on CRAN for "ordered LASSO" comes up empty, so I suspect the package is still in development. The paper also offers a generalized approach in which two regularization parameters "encourage near-monotonicity." (See p. 6.) In other words, one should be able to tune the parameters to allow for a relaxed ordering. Sadly, neither examples nor comparisons of the relaxed method are provided. But the authors write that implementing this change is a simple matter of replacing one algorithm with another, so one hopes it will be part of the coming R package.
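The constraint $|\beta_1| \geq \dots \geq |\beta_j|$ amounts to projecting the absolute coefficients onto the non-increasing cone, which a pool-adjacent-violators pass solves exactly. Here is a hedged Python sketch of just that projection step (not the paper's full ordered-LASSO solver, and the example coefficients are invented):

```python
def project_nonincreasing(values):
    # L2 projection onto {x_1 >= x_2 >= ... >= x_n} via pool-adjacent-violators:
    # whenever a later running mean exceeds an earlier one, merge the two blocks
    blocks = []  # list of (total, count) for merged blocks
    for x in values:
        total, count = x, 1
        while blocks and total / count > blocks[-1][0] / blocks[-1][1]:
            prev_total, prev_count = blocks.pop()
            total += prev_total
            count += prev_count
        blocks.append((total, count))
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out

def enforce_order(beta):
    # keep each coefficient's sign, force the magnitudes to be non-increasing
    signs = [1 if b >= 0 else -1 for b in beta]
    mags = project_nonincreasing([abs(b) for b in beta])
    return [s * m for s, m in zip(signs, mags)]

print(enforce_order([0.5, -0.9, 0.2]))  # the first two magnitudes get pooled to 0.7
```

Inside a solver this projection would be interleaved with gradient steps on the least-squares loss, which is roughly how the paper's algorithms proceed.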
32,705
Lasso-ing the order of a lag?
The nested LASSO penalty (pdf) could be employed, but there are no R packages for it.
32,706
Lasso-ing the order of a lag?
I know you wrote it as a premise, but I would not use the ordered LASSO without being absolutely sure that it is what is needed, because its assumptions are not directly appropriate for time-series prediction. As a counter-example, consider the case where you have a delay of, say, ten time steps between measurement and target. Obviously, the ordered LASSO constraints cannot handle such effects without attributing nonsense to the first nine parameters. Instead, I would stick to the normal LASSO and include all previous observations -- particularly because you wrote that your model space is small, and coordinate-descent optimization routines for the LASSO (as described here) work efficiently even for large datasets. Then compute the path for the regularization strength parameter $\lambda$ and look at which parameters get included as you go from large $\lambda$ to $\lambda=0$; those included earlier are especially important. Finally, choose an appropriate criterion and optimize the parameter $\lambda$ using cross-validation, standard one-dimensional minimization, or whatever. The criterion can, for example, be something like "prediction error + number of included variables" (akin to an AIC-type criterion).
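To make the path idea concrete, here is a hedged Python sketch: a bare-bones cyclic coordinate-descent LASSO run over a decreasing grid of $\lambda$ on invented lagged data (NumPy assumed; for real work you would use glmnet or similar rather than this toy solver):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]                    # residual excluding feature j
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z  # soft-threshold update
    return b

rng = np.random.default_rng(1)
T, p = 500, 6
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.normal()        # hypothetical AR(1) series

# lagged design: columns are y_{t-1}, ..., y_{t-p}
X = np.array([[y[t - j] for j in range(1, p + 1)] for t in range(p, T)])
target = y[p:]

for lam in [1.0, 0.5, 0.1, 0.01]:               # large -> small
    b = lasso_cd(X, target, lam)
    active = [j + 1 for j in range(p) if abs(b[j]) > 1e-8]
    print(f"lambda={lam}: active lags {active}")  # lag 1 typically enters first here
```

Reading off the order in which lags become active as $\lambda$ shrinks is exactly the "those included earlier are important" heuristic described above.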
32,707
Self organizing maps vs. kernel k-means
This has the potential to be an interesting question. Clustering algorithms perform 'well' or 'not well' depending on the topology of your data and what you are looking for in that data. What do you want the clusters to represent? I attach a diagram which sadly does not include kernel k-means or SOM, but I think it is of great value for understanding the major differences between the techniques. You probably need to ask yourself this and answer it before you dig into weighing the "pros" and "cons". This is the source of the image.
32,708
Looking for the 'Elbow' in data
Depending on your definition of the "elbow", there are many statistical tests at your disposal, with an entire R package dedicated to the topic. I personally tend to avoid them, since you never know in advance what they will consider an "elbow" and whether your and their opinions will coincide (though this might be considered an extreme position). It also depends on whether you want to know if there is an "elbow" in a specific location, or whether there is one anywhere in general. In the former case, you can of course fit a local regression, compare the coefficients, and declare an elbow according to your own rule about the difference in slopes. The real problem occurs in the latter case. If you have only a couple of points, you can just try them all. Otherwise I would fit something non-parametric such as LOESS, calculate the gradient of the fitted line at regular intervals (with sufficient density), as shown here: https://stackoverflow.com/questions/12183137/calculate-min-max-slope-of-loess-fitted-curve-with-r and again use whatever rule you find convenient to declare something an "elbow". I view the "elbow" as the case when a large enough change of gradient occurs over a short enough interval. Of course the thresholds for the above rules are a matter of individual taste, which is why there is no single test. In general, I suspect this would be quite useless if the data are wiggly (as there would be many changes in the gradient).
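One way to operationalize the "large change of gradient over a short interval" rule, as a hedged Python sketch (NumPy assumed; a crude moving-average smoother stands in for LOESS, and the piecewise-linear toy curve with a known elbow is invented):

```python
import numpy as np

# toy curve with a known elbow at x = 50 (slope -2 before it, -1 after)
x = np.arange(101, dtype=float)
y = np.where(x < 50, 100.0 - 2.0 * x, 50.0 - x)

# crude moving-average smoother standing in for LOESS
window = 5
kernel = np.ones(window) / window
y_smooth = np.convolve(y, kernel, mode="same")

# gradient of the smoothed curve at every point, then the change in that gradient
slope = np.gradient(y_smooth, x)
slope_change = np.abs(np.gradient(slope, x))

# interior point with the sharpest change in slope is the elbow candidate
interior = slice(window, len(x) - window)   # the edges are smoothing artifacts
elbow = int(np.argmax(slope_change[interior])) + window
print(elbow)   # lands within a point or two of the true kink at 50
```

The threshold question raised above reappears here as the choice of smoothing window and of how large a slope change counts: both are matters of taste.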
32,709
Looking for the 'Elbow' in data
For the example provided, a simple method would be to apply a smoothing algorithm and then take the second derivative, as shown in my answer to this other question.
32,710
Why is the Dirichlet Process unsuitable for applications in Bayesian nonparametrics?
With probability one, the realizations of a Dirichlet Process are discrete probability measures. A rigorous proof can be found in Blackwell, D. (1973). "Discreteness of Ferguson Selections", The Annals of Statistics, 1(2): 356–358. The stick-breaking representation of the Dirichlet Process makes this property transparent. Draw independent $B_i\sim\mathrm{Beta}(1,c)$, for $i\geq 1$. Define $P_1=B_1$ and $P_i=B_i \prod_{j=1}^{i-1}(1-B_j)$, for $i>1$. Draw independent $Y_i\sim F$, for $i\geq 1$. Sethuraman proved that the discrete distribution function $$ G(t,\omega)=\sum_{i=1}^\infty P_i(\omega) I_{[Y_i(\omega),\infty)}(t) $$ is a realization of a Dirichlet Process with concentration parameter $c$ and centered at the distribution function $F$. The expectation of this Dirichlet Process is simply $F$, and this may be the distribution function of a continuous random variable. But if random variables $X_1,\dots,X_n$ form a random sample from this Dirichlet Process, the posterior expectation is a probability measure that puts positive mass on each sample point. Regarding the original question, you can see that the plain Dirichlet Process may be unsuitable for modeling some problems in Bayesian nonparametrics, like Bayesian density estimation, but suitable extensions of the Dirichlet Process are available to handle these cases.
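The stick-breaking construction is easy to simulate. A hedged Python sketch using only the standard library (the truncation at 500 atoms and the choice of N(0, 1) as the base measure $F$ are mine, for illustration):

```python
import random

random.seed(42)

def stick_breaking(c, n_atoms=500):
    # draws (weights P_i, atoms Y_i) of a truncated DP realization
    # with concentration c and base measure F = N(0, 1)
    weights, atoms = [], []
    remaining = 1.0                        # length of the stick still unbroken
    for _ in range(n_atoms):
        b = random.betavariate(1, c)       # B_i ~ Beta(1, c)
        weights.append(remaining * b)      # P_i = B_i * prod_{j<i} (1 - B_j)
        atoms.append(random.gauss(0, 1))   # Y_i ~ F
        remaining *= 1 - b
    return weights, atoms

w, y = stick_breaking(c=5.0)
# the realization G = sum_i P_i * delta_{Y_i} puts all its mass on the atoms,
# which is exactly the discreteness property discussed above
print(sum(w))   # close to 1; the tiny shortfall is the truncation error
```

Plotting the (atom, weight) pairs for a few draws makes the discreteness, and the effect of $c$ on how evenly the stick is broken, immediately visible.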
32,711
Restricting minimum subgroup size in a bootstrap resampling study - why is this approach wrong?
This is an interesting question (+1). It's strange that it has gotten no attention. I'm no bootstrapping expert, but I think the answer is to go back to the principles of bootstrapping. What you are supposed to do on each bootstrap replication is 1) draw a bootstrap sample in a way which imitates (and preserves the independence structure of) the way you drew the original sample, then 2) do to that bootstrap sample whatever your estimation technique calls for, then 3) record the outcome of the estimation. To answer the question, I think you need to think carefully about how the original sample was collected (so that your bootstrapping properly imitates it). Also, you need to think about what your estimation technique really was/is. Suppose you were collecting your original data. Suppose you came to the end of the data collection and noticed that you only had two females. What would you have done? If the answer to this question is "I would have thrown away my entire dataset and done the whole data collection process again," then your bootstrapping procedure is exactly right. I doubt that this is what you would have done, however. And this is the answer to your question; this is why what you are doing is wrong. Maybe you would have continued collecting more data until you had eight females. If so, then imitate that in the bootstrap sampling step (1). Maybe you would have decided that two females is too few and dropped all females from the analysis. If so, then imitate that in the bootstrap estimation step (2). Another way to say this is that you should think about what question you want the bootstrapping procedure to answer.
If you want to answer the question "How often would the confidence intervals cover the true parameter value if I did the experiment over and over, mindlessly running the exact same regression each time without any attention to what the sample looked like," then just mindlessly bootstrap like your colleague is telling you to. If you want to answer the question "How often would the confidence intervals cover the true parameter value if I did the experiment over and over, analyzing the data the way Max Gordon would analyze it," then do what I suggested. If you want to get this work published, do the conventional thing: the thing your colleague is suggesting. Well, unless you can find a paper in Biometrika which agrees with what I say above. I don't know the relevant literature, unfortunately, so I can't help you with that.
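To make step (1) concrete, here is a hedged Python sketch of the "keep collecting until you have enough females" variant: each bootstrap sample draws with replacement and then keeps drawing until the minimum subgroup size is met. The data, the 10%/90% sex split, and the threshold of 8 are all invented for the illustration:

```python
import random

random.seed(1)

# hypothetical subjects: (sex, outcome)
data = [("F", random.gauss(1.0, 1.0)) for _ in range(10)] + \
       [("M", random.gauss(0.0, 1.0)) for _ in range(90)]

def bootstrap_sample(data, min_females=8):
    # imitate a design that keeps recruiting until enough females are in hand:
    # draw n subjects with replacement, then keep drawing until the quota is met
    sample = [random.choice(data) for _ in range(len(data))]
    while sum(1 for sex, _ in sample if sex == "F") < min_females:
        sample.append(random.choice(data))
    return sample

estimates = []
for _ in range(200):
    s = bootstrap_sample(data)
    females = [out for sex, out in s if sex == "F"]
    males = [out for sex, out in s if sex == "M"]
    estimates.append(sum(females) / len(females) - sum(males) / len(males))
# 'estimates' now reflects the design actually used, not a mindless resample
```

The "drop all females if too few" variant from step (2) would instead leave the resampling alone and change what the estimation does when the quota is missed.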
32,712
Generating discrete uniform from coin flips
Like I said above in my comments, the paper http://arxiv.org/pdf/1304.1916v1.pdf details exactly how to generate from the discrete uniform distribution using coin flips, and gives a very detailed proof and a results section on why the method works. As a proof of concept I coded up their pseudo code in R to show how fast, simple and efficient their method is.

# Function for sampling from a discrete uniform distribution on {0, ..., n-1}
rdunif = function(n){
  v = 1
  c = 0
  repeat{
    v = 2*v
    c = 2*c + rbinom(1, 1, .5)  # this is the coin-flip part
    if(v >= n){
      if(c < n){
        return(c)
      } else {
        v = v - n
        c = c - n
      }
    }
  }
}

# Running the function k times for n = 11
n = 11
k = 10000
random.samples = rep(NA, k)
for(i in 1:k){
  random.samples[i] = rdunif(n)
}
counts = table(random.samples)
barplot(counts, main="Random Samples from a Discrete Uniform(0,10)")
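For non-R readers, the same algorithm ports directly to Python, with random.getrandbits(1) playing the role of the fair coin (this is my own translation of the pseudo code, not code from the paper):

```python
import random

def rdunif(n):
    # build a uniform value on {0, ..., n-1} bit by bit from fair coin flips,
    # recycling the leftover randomness whenever the candidate overshoots n
    v, c = 1, 0
    while True:
        v = 2 * v
        c = 2 * c + random.getrandbits(1)   # the coin flip
        if v >= n:
            if c < n:
                return c
            v -= n
            c -= n

random.seed(0)
draws = [rdunif(11) for _ in range(10000)]
# each of 0..10 should appear roughly 10000/11, i.e. about 909, times
```

A histogram of draws should look just as flat as the R barplot above.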
32,713
When is the distribution of product of two normal distributed variables near normal distribution?
(this answer uses parts of @whuber's comment) Let $X,Y$ be two independent normals. We can write the product as $$ XY = \frac14 \left( (X+Y)^2 - (X-Y)^2 \right), $$ which will have the distribution of the (scaled) difference of two noncentral chi-square random variables (central if both have zero means). Note that if the variances are equal, the two terms will be independent. Since the chi-square distribution is a special case of the gamma, Generic sum of Gamma random variables is relevant. I will give a very special case of this, taken from the encyclopedic reference https://www.amazon.com/Probability-Distributions-Involving-Gaussian-Variables/dp/0387346570 When $X$ and $Y$ are independent and zero-mean, with possibly different variances, the density function of the product $Z=XY$ is given by $$ f(z)= \frac1{\pi \sigma_1 \sigma_2} K_0\left(\frac{|z|}{\sigma_1 \sigma_2}\right) $$ where $K_0$ is the modified Bessel function of the second kind. This can be written in R as

dprodnorm <- function(x, sigma1=1, sigma2=1) {
    (1/(pi*sigma1*sigma2)) * besselK(abs(x)/(sigma1*sigma2), 0)
}

### Numerical check:
integrate(function(x) dprodnorm(x), lower=-Inf, upper=Inf)
# 0.9999999 with absolute error < 3e-06

Let us plot this, together with some simulations:

set.seed(7*11*13)
Z <- rnorm(10000) * rnorm(10000)
hist(Z, prob=TRUE, nclass="scott", ylim=c(0, 1.5),
     main="histogram and density of product of independent normals")
plot(function(x) dprodnorm(x), from=-5, to=5, n=1001,
     col="red", add=TRUE, lwd=3)
### Changing to nclass="fd" gives a closer fit

The plot shows quite clearly that the distribution is not close to normal. The stated reference also gives more involved cases (non-zero means ...), but there the expressions for the density functions become so complicated that only the characteristic functions are given; these are still reasonably simple and can be inverted to get densities.
When is the distribution of product of two normal distributed variables near normal distribution?
(this answer uses parts of @whuber's comment) Let $X,Y$ be two independent normals. We can write the product as $$ XY = \frac14 \left( (X+Y)^2 - (X-Y)^2 \right) $$ will have the distribution of th
When is the distribution of product of two normal distributed variables near normal distribution? (this answer uses parts of @whuber's comment) Let $X,Y$ be two independent normals. We can write the product as $$ XY = \frac14 \left( (X+Y)^2 - (X-Y)^2 \right) $$ will have the distribution of the difference (scaled) of two noncentral chisquare random variables (central if both have zero means). Note that if the variances are equal, the two terms will be independent. Since chisquare distribution is a case of gamma, Generic sum of Gamma random variables is relevant. I will give a very special case of this, taken from the encyclopedic reference https://www.amazon.com/Probability-Distributions-Involving-Gaussian-Variables/dp/0387346570 When $X$ and $Y$ are independent, zero-mean with possibly different variances the density function of the product $Z=XY$ is given by $$ f(z)= \frac1{\pi \sigma_1 \sigma_2} K_0(\frac{|z|}{\sigma_1 \sigma_2}) $$ where $K_0$ is the modified Bessel function of the second kind. This can be written in R as dprodnorm <- function(x, sigma1=1, sigma2=1) { (1/(pi*sigma1*sigma2)) * besselK(abs(x)/(sigma1*sigma2), 0) } ### Numerical check: integrate( function(x) dprodnorm(x), lower=-Inf, upper=Inf) 0.9999999 with absolute error < 3e-06 Let us plot this, together with some simulations: set.seed(7*11*13) Z <- rnorm(10000) * rnorm(10000) hist(Z, prob=TRUE, nclass="scott", ylim=c(0, 1.5), main="histogram and density of product of independent normals") plot( function(x) dprodnorm(x), from=-5, to=5, n=1001, col="red", add=TRUE, lwd=3) ### Change to nclass="fd" gives a closer fit The plot shows quite clearly that the distribution is not close to normal. The stated reference do also give more involved cases (non-zero means ...) but then expressions for density functions becomes so complicated that they only gives characteristic function, which still are reasonably simple, and can be inverted to get densities.
32,714
How to apply multiple testing correction for gene list overlap using R
I don't know anything about gene expression studies but I do have some interest in multiple inference, so I will risk an answer on this part of the question anyway. Personally, I would not approach the problem in that way. I would adjust the error level in the original studies, compute the new overlap and leave the test at the end alone. If the number of differentially expressed genes (and any other result you are using) is already based on adjusted tests, I would argue that you don't need to do anything. If you cannot go back to the original data and really do want to adjust the p-value, you can indeed multiply it by the number of tests, but I don't see why it should have anything to do with the size of list2. It would make more sense to adjust for the total number of tests performed in both studies (i.e. two times the population). This is going to be brutal, though. To adjust p-values in R, you can use p.adjust(p), where p is a vector of p-values.

p.adjust(p, method="bonferroni") # Bonferroni method, simple multiplication
p.adjust(p, method="holm")       # Holm-Bonferroni method, more powerful than Bonferroni
p.adjust(p, method="BH")         # Benjamini-Hochberg

As stated in the help file, there is no reason to prefer plain Bonferroni over Holm-Bonferroni: both provide strong control of the family-wise error rate in any case, but Holm's method is more powerful. Benjamini-Hochberg controls the false discovery rate, which is a less stringent criterion. Edited after the comment below: The more I think about the problem, the more I think that a correction for multiple comparisons is unnecessary and inappropriate in this situation. This is where the notion of a “family” of hypotheses kicks in. Your last test isn't quite comparable to all the earlier tests, there is no risk of “capitalizing on chance” or cherry-picking significant results, there is only one test of interest and it's legitimate to use the ordinary error level for this one.
Even if you correct aggressively for the many tests performed before, you would still not be directly addressing the main concern, which is the fact that some of the genes in both lists might have been spuriously detected as differentially expressed. The earlier test results still “stand” and if you want to interpret these results while controlling the family-wise error rate, you still need to correct all of them too. But if the null hypothesis really is true for all genes, any significant result would be a false positive and you would not expect the same gene to be flagged again in the next sample. Overlap between both lists would therefore happen only by chance and this is exactly what the test based on the hypergeometric distribution is testing. So even if the lists of genes are complete junk, the result of that last test is safe. Intuitively, it seems that anything in-between (a mix of true and false hypotheses) should be fine too. Maybe someone with more experience in this field might weigh in but I think an adjustment would only become necessary if you want to compare the total number of genes detected or find out which ones are differentially expressed, i.e. if you want to interpret the thousands of individual tests performed in each study.
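The overlap test based on the hypergeometric distribution mentioned above can be computed directly from the hypergeometric upper tail. A minimal Python sketch using only the standard library (the population size, list sizes and overlap below are made-up numbers, not from the question):

```python
from math import comb

def overlap_pvalue(N, n1, n2, k):
    """P(overlap >= k) when lists of sizes n1 and n2 are drawn
    independently from a population of N genes (hypergeometric upper tail)."""
    def pmf(i):
        return comb(n1, i) * comb(N - n1, n2 - i) / comb(N, n2)
    return sum(pmf(i) for i in range(k, min(n1, n2) + 1))

# hypothetical numbers: 20,000 genes, lists of 300 and 400, observed overlap 15
# (the expected overlap by chance is 300 * 400 / 20000 = 6)
p = overlap_pvalue(20_000, 300, 400, 15)
```

This is the one p-value of interest, so it needs no further multiplicity adjustment.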
32,715
How to apply multiple testing correction for gene list overlap using R
You do not need to correct the p-value for your one single overlap test. However, let's say you were interested in determining whether the drug affects genes in the same pathway. How would you determine which pathway had the most overlap? Let's say you have 500 pathway gene sets. You run the hypergeometric set-overlap test 500 times and rank the results by p-value. Since you ran this test 500 times (or even more, depending on how much data you have), there is a chance you could get a good score just by chance (a false positive). You then need to correct for that and perform a p-value adjustment, using either Bonferroni (the most conservative) or Benjamini-Hochberg.
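A pure-Python sketch of the two adjustments just mentioned, intended to mirror R's p.adjust (the four p-values are made up for illustration):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, as in R's p.adjust(..., "BH")."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for offset, i in enumerate(reversed(order)):
        rank = m - offset                  # 1-based rank of pvals[i]
        running_min = min(running_min, pvals[i] * m / rank)
        adj[i] = running_min
    return adj

pvals = [0.01, 0.04, 0.03, 0.002]
adj_bonf = [min(1.0, p * len(pvals)) for p in pvals]  # Bonferroni
adj_bh = bh_adjust(pvals)
```

With many pathway tests, Bonferroni quickly becomes very conservative, which is why Benjamini-Hochberg is the common choice in this setting.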
32,716
Expectation maximization on Bayesian networks with latent variables
I'm the creator of the bayes-scala toolbox you are referring to. Last year I implemented EM in discrete Bayesian networks for learning from incomplete data (including latent variables), which looks like the use case you are asking about. A tutorial for a sprinkler Bayesian network is here, and "Learning Dynamic Bayesian Networks with latent variables" is given here.
32,717
Expectation maximization on Bayesian networks with latent variables
Regarding "...Additionally an odd error occurs where if I set evidence in a LoopyBP more than once I start getting NA ...": I think it's because you set the evidence in the same Bayesian network while iterating over all samples in the training set, and then you end up with some factors having 0 probabilities for all factor values. The way setEvidence() works for discrete factors is that it sets the evidence probability to 0 for all factor values incompatible with the evidence. It will throw an error if setEvidence() zeroes out all factor values. Why would you want to set the evidence in a single Bayesian network over all samples? Regarding flat probabilities for hidden nodes, remember that EM is not guaranteed to converge to a global maximum; it's quite important how you set the priors in the network. Please send me your code, including the training set, and I will check it further. On factor graphs: bayes-scala also supports inference on factor graphs, but only for continuous and hybrid Bayesian networks using the Expectation Propagation algorithm; the two cases I tested are Kalman robot localisation and the TrueSkill rating model. And the last thing, on sepsets including more than one variable: it is a current limitation of bayes-scala that only a single sepset variable is allowed.
32,718
Is $R^2$ value valid for insignificant OLS regression model?
Yes, you're trying to calculate the Extra Sum of Squares. In short you are partitioning the regression sum of squares. Assume we have two $X$ variables, $X_1$ and $X_2$. The $SSTO$ (total sum of squares, made up of the SSR and SSE) is the same regardless of how many $X$ variables we have. Denote the $SSR$ and $SSE$ to indicate which $X$ variables are in the model: e.g. $SSR(X_1,X_2) = 385$ and $SSE(X_1,X_2) = 110$ Now let's assume we did the regression just on $X_1$ e.g. $SSR(X_1) = 352$ and $SSE(X_1) = 143$. The (marginal) increase in the regression sum of squares in $X_2$ given that $X_1$ is already in the model is: \begin{eqnarray} SSR(X_2|X_1)& = &SSR(X_1,X_2) - SSR(X_1)\\ & = & 385 - 352\\ & = & 33 \end{eqnarray} or equivalently, the extra reduction in the error sum of squares associated with $X_2$ given that $X_1$ is already in the model is: \begin{eqnarray} SSR(X_2|X_1) & = & SSE(X_1) - SSE(X_2,X_1)\\ &=& 143 - 110\\ &=& 33 \end{eqnarray} In the same way we can find: \begin{eqnarray} SSR(X_1|X_2) &=& SSE(X_2) - SSE(X_1,X_2)\\ &=& SSR(X_1,X_2) - SSR(X_2) \end{eqnarray} Of course, this also works for more $X$ variables as well.
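The decomposition can be checked numerically. A Python sketch with simulated data (the coefficients and sample size below are illustrative, not the numbers from the worked example above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 + 0.5 * x2 + rng.normal(size=n)

def sse(y, *xs):
    """Error sum of squares of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ssto = ((y - y.mean()) ** 2).sum()
sse_1 = sse(y, x1)          # SSE(X1)
sse_12 = sse(y, x1, x2)     # SSE(X1, X2)

# the two routes to the extra sum of squares SSR(X2 | X1) agree:
ssr_2_given_1 = sse_1 - sse_12                         # via the SSE reduction
ssr_2_given_1_alt = (ssto - sse_12) - (ssto - sse_1)   # via SSR(X1,X2) - SSR(X1)
```

Since SSTO is fixed, the marginal increase in SSR is exactly the marginal reduction in SSE, as in the algebra above.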
32,719
Is $R^2$ value valid for insignificant OLS regression model?
I think you have to use multiple regressions as indicated by Eric Peterson. The other option is that you use partial correlation coefficients. That would make sense if the absolute values of $y$ and the magnitude of the squares are not important. In a certain sense, $R^2$ is very valid for an insignificant OLS model. The significance thresholds are, after all, lines drawn in water. For example, if you have to choose between models to use for making predictions, it makes sense to use the model with the highest $R^2$. Or, if the models involve a different number of covariates, better to use the adjusted $R^2$, denoted $\bar{R}^2$. $R^2$ is actually the square of the correlation coefficient between the predicted values and the reality, i.e. $R^2=(cor(y_i,\hat{y}_i))^2$, and thus you can use tables for the correlation coefficient to test the overall significance of your model. The other option is that you use the F test.
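The identity $R^2=(cor(y_i,\hat{y}_i))^2$ is easy to verify numerically for an OLS fit with an intercept. A small Python sketch (all data below is simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# OLS fit with intercept
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta

# R^2 from the sums of squares...
r2 = 1.0 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
# ...equals the squared correlation between observed and fitted values
r2_from_cor = np.corrcoef(y, yhat)[0, 1] ** 2
```

Note that the identity relies on the model containing an intercept.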
32,720
Explanation of minimum observations for multiple regression
You miss the point with overfitting. It is not about the number of observations, but about in-sample and out-of-sample errors. A properly built model will have an in-sample error approximately equal to its out-of-sample error. An overfit model will have an out-of-sample error larger than its in-sample error. The purpose of modelling is not finding the best fit for your data, but finding the function which is able to predict the relationship when you get new data not used for building the model. In that case the purpose is not to minimize the error, but to minimize the difference between in- and out-of-sample errors. Given the name of this page, I can't resist linking you to Cross-Validation.
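The gap between in-sample and out-of-sample error is easy to see by fitting polynomials of increasing degree to a small sample; a Python sketch (all data here is simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + rng.normal(scale=0.3, size=n)

x_tr, y_tr = make_data(20)      # small training sample
x_te, y_te = make_data(1000)    # fresh data the model never saw

def errors(degree):
    """In-sample and out-of-sample MSE of a degree-`degree` polynomial fit."""
    coef = np.polyfit(x_tr, y_tr, degree)
    def mse(x, y):
        return float(np.mean((np.polyval(coef, x) - y) ** 2))
    return mse(x_tr, y_tr), mse(x_te, y_te)

in3, out3 = errors(3)      # modest model: in- and out-of-sample errors similar
in15, out15 = errors(15)   # overfit model: tiny in-sample error, large gap
```

The high-degree fit "wins" on the training data yet generalizes worse, which is the point above.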
32,721
Explanation of minimum observations for multiple regression
And in response to your second question: Yes, forward selection can solve the problem, as can backward selection or stepwise; but these aren't the best ways of doing so. They reduce the variance in the predictions at the cost of introducing bias (by ignoring predictors). And there's not much principle behind them to give you confidence they're getting the balance right: a varying number of hypothesis tests at arbitrary significance levels. See Peter Flom's paper & the 'model-selection' tag here. (Though, to be fair, I've rarely found models fitted by such methods to be as bad as you might think - they do do roughly the right kind of thing.)
32,722
Explanation of minimum observations for multiple regression
In relation to your second question, 'can overfitting be solved by using stepwise selection', I suggest that stepwise selection ignores relationships between variables too easily by focussing on the individual relationship of each predictor to the dependent variable. In addition, stepwise selection (I believe) means that the order in which the predictor variables are entered is influential.

I always examine the relationships between the predictor variables in a lot of detail prior to performing regression analysis; I also examine the effect that removal or addition of a variable has upon the remaining variables, and I check assumptions such as multicollinearity. Often, checking these assumptions enables decisions to be taken about the inclusion or removal of variables. What I mean is that in order to determine which predictors to include, I spend a lot of time thinking about my research and examining the variables.

I rarely just follow convention on the minimum number of observations (in fact I am much more strict than your example and usually ask for a minimum number of observations within subgroups) because my research is social in nature and often requires a lot of control variables. My concern for your example is that unless your cohort are very similar (low variance amongst all variables), you won't be able to accurately observe relationships for uncommon subgroups because there aren't enough data to support these throughout the model, although I should add that I do not know what your independent variables are. For example, if I have a small overall sample and I am interested in looking at a gender effect, but I only have a small number of women observed, then I would question whether gender should be included.
32,723
how does rpart handle missing values in predictors?
This is where the surrogate variables come in: for each split, observations where the split variable is missing are split based on the best surrogate variable; if that is missing too, by the next best, and so on. This is detailed in: Therneau, Terry M. & Atkinson, Elizabeth J. (March 28, 2014). An Introduction to Recursive Partitioning Using the RPART Routines, Mayo Foundation, section 5. The document is accessible through the rpart help (pdf).
32,724
Bounds for the population variance?
The general asymptotic result for the asymptotic distribution of the sample variance is (see this post) $$\sqrt n(\hat v - v) \xrightarrow{d} N\left(0,\mu_4 - v^2\right)$$ where here, I have used the notation $v\equiv \sigma^2$ to avoid later confusion with squares, and where $\mu_4 = \mathrm{E}\left((X_i -\mu)^4\right)$. Therefore by the continuous mapping theorem $$\frac {n(\hat v - v)^2}{\mu_4 - v^2} \xrightarrow{d} \chi^2_1 $$ Then, accepting the approximation, $$P\left(\frac {n(\hat v - v)^2}{\mu_4 - v^2}\leq \chi^2_{1,1-a}\right)=1-a$$ The term in the parenthesis will give us a quadratic equation in $v$ that will include the unknown term $\mu_4$. Accepting a further approximation, we can estimate this from the sample. Then we will obtain $$P\left(Av^2 + Bv +\Gamma\leq 0 \right)=1-a$$ The roots of the polynomial are $$v^*_{1,2}= \frac {-B \pm \sqrt {B^2 -4A\Gamma}}{2A}$$ and our $1-a$ confidence interval for the population variance will be $$\max\Big\{0,\min\{v^*_{1,2}\}\Big\}\leq \sigma^2 \leq \max\{v^*_{1,2}\}$$ since the probability that the quadratic polynomial is smaller than zero, equals (in our case, where $A>0$) the probability that the population variance lies in between the roots of the polynomial. Monte Carlo Study For clarity, denote $\chi^2_{1,1-a}\equiv z$. A little algebra gives us that $$A = n+z, \;\;\ B = -2n\hat v,\;\; \Gamma = n\hat v^2 -z \hat \mu_4$$ which leads to $$v^*_{1,2}= \frac {n\hat v \pm \sqrt {nz(\hat \mu_4-\hat v^2)+z^2\hat \mu_4}}{n+z}$$ For $a=0.05$ we have $\chi^2_{1,1-a}\equiv z = 3.84$ I generated $10,000$ samples each of size $n=100$ from a Gamma distribution with shape parameter $k=3$ and scale parameter $\theta = 2$. The true mean is $\mu = 6$, and the true variance is $v=\sigma^2 =12$. Results: The sample distribution of the sample variance had a long road ahead to become normal, but this is to be expected for the small sample size chosen. Its average value though was $11.88$, pretty close to the true value. 
The upper bound of the estimated CI was smaller than the true variance in $1,456$ samples, while the lower bound was greater than the true variance only $17$ times. So the true value was missed by the CI in $14.73$% of the samples, mostly due to undershooting, giving an actual confidence level of $85$%, about $10$ percentage points below the nominal confidence level of $95$%. On average the lower bound was $7.20$, while on average the upper bound was $15.68$. The average length of the CI was $8.47$; its minimum length was $2.56$ while its maximum length was $34.52$.
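A Python sketch of the same construction, attempting to replicate the study with fewer replications (the chi-square quantile is hard-coded to keep the sketch dependency-free; variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(4)
z = 3.841458820694124   # chi^2_{1, 0.95} quantile

def variance_ci(x):
    """CI for the population variance from the quadratic in v derived above."""
    n = len(x)
    v_hat = x.var()                          # ML sample variance
    mu4_hat = np.mean((x - x.mean()) ** 4)   # sample fourth central moment
    half_width = np.sqrt(n * z * (mu4_hat - v_hat**2) + z**2 * mu4_hat)
    lo = (n * v_hat - half_width) / (n + z)
    hi = (n * v_hat + half_width) / (n + z)
    return max(0.0, lo), hi

# Gamma(shape=3, scale=2): true variance = 3 * 2^2 = 12
true_var, reps, hits = 12.0, 2000, 0
for _ in range(reps):
    lo, hi = variance_ci(rng.gamma(3.0, 2.0, size=100))
    hits += lo <= true_var <= hi
coverage = hits / reps   # noticeably below the nominal 0.95 at n = 100
```

The empirical coverage lands well under the nominal level, consistent with the roughly 85% reported above.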
Bounds for the population variance?
The general asymptotic result for the asymptotic distribution of the sample variance is (see this post) $$\sqrt n(\hat v - v) \xrightarrow{d} N\left(0,\mu_4 - v^2\right)$$ where here, I have used the
Bounds for the population variance?

The general asymptotic result for the asymptotic distribution of the sample variance is (see this post) $$\sqrt n(\hat v - v) \xrightarrow{d} N\left(0,\mu_4 - v^2\right)$$ where here I have used the notation $v\equiv \sigma^2$ to avoid later confusion with squares, and where $\mu_4 = \mathrm{E}\left((X_i -\mu)^4\right)$. Therefore, by the continuous mapping theorem, $$\frac {n(\hat v - v)^2}{\mu_4 - v^2} \xrightarrow{d} \chi^2_1.$$ Then, accepting the approximation, $$P\left(\frac {n(\hat v - v)^2}{\mu_4 - v^2}\leq \chi^2_{1,1-a}\right)=1-a.$$ The term in the parenthesis gives us a quadratic inequality in $v$ that includes the unknown term $\mu_4$. Accepting a further approximation, we can estimate this from the sample. We then obtain $$P\left(Av^2 + Bv +\Gamma\leq 0 \right)=1-a.$$ The roots of the polynomial are $$v^*_{1,2}= \frac {-B \pm \sqrt {B^2 -4A\Gamma}}{2A}$$ and our $1-a$ confidence interval for the population variance is $$\max\Big\{0,\min\{v^*_{1,2}\}\Big\}\leq \sigma^2 \leq \max\{v^*_{1,2}\},$$ since the probability that the quadratic polynomial is smaller than zero equals (in our case, where $A>0$) the probability that the population variance lies between the roots of the polynomial.

Monte Carlo Study

For clarity, denote $\chi^2_{1,1-a}\equiv z$. A little algebra gives us $$A = n+z, \;\;\; B = -2n\hat v,\;\;\; \Gamma = n\hat v^2 -z \hat \mu_4,$$ which leads to $$v^*_{1,2}= \frac {n\hat v \pm \sqrt {nz(\hat \mu_4-\hat v^2)+z^2\hat \mu_4}}{n+z}.$$ For $a=0.05$ we have $\chi^2_{1,1-a}\equiv z = 3.84$.

I generated $10{,}000$ samples, each of size $n=100$, from a Gamma distribution with shape parameter $k=3$ and scale parameter $\theta = 2$. The true mean is $\mu = 6$, and the true variance is $v=\sigma^2 =12$.

Results: The sampling distribution of the sample variance had a long road ahead to become normal, but this is to be expected for the small sample size chosen. Its average value, though, was $11.88$, pretty close to the true value. The upper bound was smaller than the true variance in $1{,}456$ samples, while the lower bound was greater than the true variance only $17$ times. So the true value was missed by the CI in $14.73$% of the samples, mostly due to undershooting, giving a confidence level of $85$%, roughly a $10$-percentage-point worsening from the nominal confidence level of $95$%. On average the lower bound was $7.20$, while on average the upper bound was $15.68$. The average length of the CI was $8.47$; its minimum length was $2.56$, and its maximum length was $34.52$.
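The construction above can be sketched in a few lines of R (the function name `var_ci` and the Gamma example are mine, mirroring the simulation described in the answer):

```r
# Asymptotic CI for the population variance via the quadratic roots above
# (a sketch of the answer's construction; names are illustrative)
var_ci <- function(x, conf = 0.95) {
  n <- length(x)
  v_hat <- mean((x - mean(x))^2)    # variance estimate (divisor n)
  mu4_hat <- mean((x - mean(x))^4)  # sample fourth central moment
  z <- qchisq(conf, df = 1)         # ~3.84 for conf = 0.95
  disc <- sqrt(n * z * (mu4_hat - v_hat^2) + z^2 * mu4_hat)
  roots <- (n * v_hat + c(-1, 1) * disc) / (n + z)
  c(lower = max(0, roots[1]), upper = roots[2])
}

set.seed(1)
x <- rgamma(100, shape = 3, scale = 2)  # true variance = 12
var_ci(x)
```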
32,725
Assumptions of generalized linear models
These plots (even for a good model) do not have the "blobby" pattern characteristic of similar plots in OLS regression, and so are harder to judge. Further, they show nothing akin to quantile plots. The DHARMa R package solves this problem by simulating from the fitted model to transform the residuals of any GL(M)M into a standardized space. Once this is done, all regular methods for visually and formally assessing residual problems (e.g. qq plots, overdispersion, heteroskedasticity, autocorrelation) can be applied. See the package vignette for worked-through examples.

Regarding the comment of @Otto_K: if homogeneous overdispersion is the only problem, it is probably simpler to use an observation-level random effect, which can be implemented with a standard binomial GLMM. However, I think @PeterFlom was concerned also about heteroskedasticity, i.e. a change of the dispersion parameter with some predictor or with the model predictions. This will not be picked up / corrected by standard overdispersion checks / corrections, but you can see it in DHARMa residual plots. For correcting it, modelling the dispersion as a function of something else in JAGS or STAN is probably the only way at the moment.
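The DHARMa workflow described above can be sketched as follows (a minimal example on simulated Poisson data; the model and data names are illustrative, and the DHARMa package must be installed):

```r
# Sketch: simulation-based residual checks with DHARMa
# (illustrative model and data; not from the answer)
library(DHARMa)

set.seed(1)
d <- data.frame(x = runif(200))
d$y <- rpois(200, lambda = exp(0.5 + 1.5 * d$x))
m <- glm(y ~ x, family = poisson, data = d)

res <- simulateResiduals(fittedModel = m)  # standardized residuals in [0, 1]
plot(res)            # qq plot + residual-vs-predicted plot
testDispersion(res)  # formal over/underdispersion test
```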
32,726
Assumptions of generalized linear models
The issue you describe is frequently called overdispersion. In my work I have seen one possible solution to it: use a Bayesian approach and estimate a Beta-Binomial distribution. This has the great advantage over other distributions (induced by other priors) of having a closed-form solution.

References:
- Beta-binomial distribution
- Peter Hoff's Bayes estimators notes (pdf)
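As a sketch of why the Beta prior is convenient here (the numbers below are made up for illustration): with a Beta$(a,b)$ prior on the binomial probability $p$ and $k$ successes in $n$ trials, the posterior is Beta$(a+k,\,b+n-k)$ in closed form, and the predictive distribution for new trials is Beta-Binomial.

```r
# Sketch: closed-form Beta posterior for a binomial proportion
# (illustrative prior and data values; not from the answer)
a0 <- 1; b0 <- 1                 # Beta(1, 1) prior on p
k <- 7; n <- 20                  # observed successes / trials
a1 <- a0 + k; b1 <- b0 + n - k   # posterior is Beta(a1, b1)

post_mean <- a1 / (a1 + b1)

# Posterior predictive for y successes out of m new trials: Beta-Binomial
dbetabinom <- function(y, m, a, b) {
  choose(m, y) * beta(y + a, m - y + b) / beta(a, b)
}
sum(dbetabinom(0:10, 10, a1, b1))  # predictive pmf sums to 1
```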
32,727
Forecasting nonstationary time series
I had never seen a model like the one your Box-Jenkins identification process led you to, an ARIMA(0,1,3), BUT I had never seen a black swan until I went to Australia. Please post your data, as it may suggest the need for intervention detection, leading to the inclusion of:

- level shifts, local time trends, et al.
- time-varying parameters
- time-varying error variance

If your data is confidential, simply scale it.

OK, having received your data (some 80,000 readings), I selected 805 observations starting at point 6287. A significant change point was detected at period 137, suggesting time-varying parameters. The remaining 668 observations suggest an ARIMA(3,0,0) model with a level/step shift, supporting your preliminary conclusions about lag 3. (The Actual/Fit/Forecast graph, the residual plot, and the ACF of the residuals are omitted here.) Since the ACF of the residuals shows strong structure at periods 5 and 10, you might further investigate seasonal structure at lag 5. I hope this helps.
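As a hedged sketch of the level-shift idea in base R (the series, break point, and shift size below are simulated for illustration, not taken from the questioner's data):

```r
# Sketch: ARIMA(3,0,0) plus a level/step-shift regressor via xreg
# (simulated series; break point and shift size are illustrative)
set.seed(1)
n <- 300
break_at <- 150
step <- as.numeric(seq_len(n) > break_at)   # 0 before the break, 1 after
y <- arima.sim(list(ar = c(0.5, -0.2, 0.1)), n) + 5 * step

fit <- arima(y, order = c(3, 0, 0), xreg = step)
tail(coef(fit), 1)   # estimated shift, close to the simulated value 5
```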
32,728
Why does a mixed design using R's aov() need the between subject factors specified more than once?
No, it is not necessary to specify those terms twice. I suspect it was either a copy/paste typo, or that the author wanted to denote separately the terms that use the subject term for the denominator in the F test and the terms that use the subject/time term. As you note, when the code is run, however, the terms are absolutely unnecessary. In this case, also notice that the /time part of the Error call is unnecessary; the subject:time interaction is the lowest level, which is always included in the model. So using Error(subject) and Error(subject/time) give the same result; the only difference is that in the output, that level of results is called "Within" for the first and is called "subject:time" for the second.
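The equivalence can be checked on simulated data (a hedged sketch; the variable names and design below are illustrative):

```r
# Sketch: Error(subject) vs Error(subject/time) yield identical F tests;
# only the label of the lowest stratum differs ("Within" vs "subject:time").
set.seed(1)
d <- expand.grid(subject = factor(1:10), time = factor(1:3))
d$group <- factor(rep(rep(c("a", "b"), each = 5), times = 3))  # 5 subjects per group
d$y <- rnorm(nrow(d)) + as.numeric(d$time)

m1 <- aov(y ~ group * time + Error(subject), data = d)
m2 <- aov(y ~ group * time + Error(subject/time), data = d)
summary(m1)  # lowest stratum labelled "Within"
summary(m2)  # same numbers, labelled "subject:time"
```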
32,729
MCMC and data augmentation
Your answer does not account for the fact that the observations equal to zero and to one are merged together: what you computed is the posterior for the complete Poisson data, $(X_1,\ldots,X_n)$, rather than for the aggregated or merged data, $(X_1^*,\ldots,X^*_n)$.

If we take the convention that cases when the observation $X_i^*=1$ correspond to $X_i=1$ or $X_i=0$, and the observation $X_i^*=k>1$ to $X_i=k$, the density of the observed vector $(X_1^*,\ldots,X^*_n)$ is (after some algebra and factorisation)
$$ \pi(\lambda|x_1^*,\ldots,x^*_n) \propto \lambda^{\sum_{i=1}^n x_i^*\mathbb{I}(x_i^*>1)} \exp\{-\lambda(\lambda_0+n)\} \times \{1+\lambda\}^{n_1} $$
where $n_1$ is the number of times the $x_i^*$'s are equal to one. The last term between brackets above is the probability to get 0 or 1 in a Poisson draw. So this is your true/observed posterior.

From there, you can implement a Gibbs sampler by

1. Generating the "missing observations" given $\lambda$ and the observations, namely simulating $p(x_i|\lambda,x_i^*=1)$, which is given by
$$ \mathbb{P}(x_i=0|\lambda,x_i^*=1)=1-\mathbb{P}(x_i=1|\lambda,x_i^*=1)=\dfrac{1}{1+\lambda}\,. $$
2. Generating $\lambda$ given the "completed data", which amounts to
$$ \lambda|x_1,\ldots,x_n \sim \mathcal{G}(\sum_i x_i + 1,n+\lambda_0) $$
as you already computed.

(If you want more details, Example 9.7, p. 346, in my Monte Carlo Statistical Methods book with George Casella covers exactly this setting.)
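A short R sketch of this two-step Gibbs sampler (the data, prior rate $\lambda_0$, and chain length below are chosen for illustration):

```r
# Sketch: Gibbs sampler for the merged-Poisson model described above
# (simulated data; prior rate lambda0 and chain length are illustrative)
set.seed(1)
lambda0 <- 1
x <- rpois(200, lambda = 2)
xstar <- ifelse(x <= 1, 1, x)   # 0s and 1s are merged into "1"

n <- length(xstar)
ones <- xstar == 1
lambda <- 1
draws <- numeric(2000)
for (t in seq_along(draws)) {
  # impute x_i for merged observations: P(x_i = 1 | ...) = lambda/(1+lambda)
  x_imp <- xstar
  x_imp[ones] <- rbinom(sum(ones), 1, lambda / (1 + lambda))
  # draw lambda given the completed data: Gamma(sum(x) + 1, n + lambda0)
  lambda <- rgamma(1, shape = sum(x_imp) + 1, rate = n + lambda0)
  draws[t] <- lambda
}
mean(draws[-(1:500)])  # posterior mean, near the true value 2
```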
32,730
How do you get lmFuncs functions of the rfe function in caret to do a logistic regression?
lmFuncs fits a linear regression; just type lmFuncs$fit to see. Try rewriting it:

    lmFuncs$fit <- function(x, y, first, last, ...) {
      tmp <- as.data.frame(x)
      tmp$y <- y
      glm(y ~ ., data = tmp, family = binomial)
    }

Note that I don't know how to attach <environment: namespace:caret>, or what its meaning is. You may try this trick on your data and comment on the result.
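If patching lmFuncs feels fragile, note that (at least in recent versions) caret also ships a ready-made helper set for logistic regression, lrFuncs. A hedged usage sketch on simulated data (sizes, folds, and seed are illustrative):

```r
# Sketch: RFE with caret's built-in logistic-regression helpers (lrFuncs)
# (simulated data; all settings below are illustrative)
library(caret)
set.seed(1)
x <- data.frame(matrix(rnorm(200 * 5), ncol = 5))
y <- factor(ifelse(x[[1]] + rnorm(200) > 0, "yes", "no"))

ctrl <- rfeControl(functions = lrFuncs, method = "cv", number = 5)
fit <- rfe(x, y, sizes = 2:4, rfeControl = ctrl)
fit$optVariables  # variables retained at the best subset size
```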
32,731
Why does a finite, irreducible and aperiodic Markov chain with a doubly-stochastic matrix P have a uniform limiting distribution?
Suppose we have an $M+1$-state irreducible and aperiodic Markov chain, with states $m_j$, $j=0,1,\ldots, M$, and a doubly stochastic transition matrix (i.e., $\sum_{i=0}^M P_{i,j}=1$ for all $j$). Then the limiting distribution is $\pi_j=\frac{1}{M+1}$.

Proof

First note that $(\pi_j)$ is the unique solution to $\pi_j=\sum_{i=0}^M \pi_iP_{i,j}$ and $\sum_{i=0}^M\pi_i=1$.

Try $\pi_i=1$. This gives $\pi_j=\sum_{i=0}^M \pi_iP_{i,j}=\sum_{i=0}^M P_{i,j}=1$ (because the matrix is doubly stochastic). Thus $\pi_i=1$ solves the first set of equations, and to make it a solution of the second, normalize by dividing by $M+1$. By uniqueness, $\pi_j=\frac{1}{M+1}$.
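A quick numerical illustration of the result (the transition matrix below is an arbitrary example of mine, not from the question):

```r
# Sketch: a doubly stochastic, irreducible, aperiodic chain converges
# to the uniform distribution (illustrative 3-state example)
P <- matrix(c(0.5, 0.3, 0.2,
              0.2, 0.5, 0.3,
              0.3, 0.2, 0.5), nrow = 3, byrow = TRUE)
stopifnot(all(abs(rowSums(P) - 1) < 1e-12),
          all(abs(colSums(P) - 1) < 1e-12))  # doubly stochastic

Pn <- diag(3)
for (k in 1:200) Pn <- Pn %*% P   # P^200
round(Pn, 6)                      # every row is ~ (1/3, 1/3, 1/3)
```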
32,732
Confidence intervals around a centroid with modified Gower similarity
I'm not immediately clear what centroid you want, but the centroid that comes to mind is the point in multivariate space at the centre of the mass of the points per group. About this you want a 95% confidence ellipse. Both aspects can be computed using the ordiellipse() function in vegan.

Here is a modified example from ?ordiellipse, but using a PCO as a means to embed the dissimilarities in a Euclidean space from which we can derive centroids and confidence ellipses for groups based on the nature-management variable Management.

    require(vegan)
    data(dune)
    dij <- vegdist(decostand(dune, "log"), method = "altGower")
    ord <- capscale(dij ~ 1) ## This does PCO
    data(dune.env)           ## load the environmental data

Now we display the first 2 PCO axes and add a 95% confidence ellipse based on the standard errors of the average of the axis scores. We want standard errors, so set kind = "se", and use the conf argument to give the confidence interval required.

    plot(ord, display = "sites", type = "n")
    stats <- with(dune.env,
                  ordiellipse(ord, Management, kind = "se", conf = 0.95,
                              lwd = 2, draw = "polygon", col = "skyblue",
                              border = "blue"))
    points(ord)
    ordipointlabel(ord, add = TRUE)

Notice that I capture the output from ordiellipse(). This returns a list, one component per group, with details of the centroid and ellipse. You can extract the center component from each of these to get at the centroids:

    > t(sapply(stats, `[[`, "center"))
             MDS1       MDS2
    BF -1.2222687  0.1569338
    HF -0.6222935 -0.1839497
    NM  0.8848758  1.2061265
    SF  0.2448365 -1.1313020

Notice that the centroid is only for the 2-d solution. A more general option is to compute the centroids yourself. The centroid is just the vector of individual averages of the variables, in this case the PCO axes. As you are working with the dissimilarities, they need to be embedded in an ordination space so that you have axes (variables) whose averages you can compute. Here the axis scores are in columns and the sites in rows. The centroid of a group is the vector of column averages for the group. There are several ways of splitting the data, but here I use aggregate() to split the scores on the first 2 PCO axes into groups based on Management and compute their averages:

    scrs <- scores(ord, display = "sites")
    cent <- aggregate(scrs ~ Management, data = dune.env, FUN = mean)
    names(cent)[-1] <- colnames(scrs)

This gives:

    > cent
      Management       MDS1       MDS2
    1         BF -1.2222687  0.1569338
    2         HF -0.6222935 -0.1839497
    3         NM  0.8848758  1.2061265
    4         SF  0.2448365 -1.1313020

which is the same as the values stored in stats, as extracted above. The aggregate() approach generalises to any number of axes, e.g.:

    > scrs2 <- scores(ord, choices = 1:4, display = "sites")
    > cent2 <- aggregate(scrs2 ~ Management, data = dune.env, FUN = mean)
    > names(cent2)[-1] <- colnames(scrs2)
    > cent2
      Management       MDS1       MDS2       MDS3       MDS4
    1         BF -1.2222687  0.1569338 -0.5300011 -0.1063031
    2         HF -0.6222935 -0.1839497  0.3252891  1.1354676
    3         NM  0.8848758  1.2061265 -0.1986570 -0.4012043
    4         SF  0.2448365 -1.1313020  0.1925833 -0.4918671

Obviously, the centroids on the first two PCO axes don't change when we ask for more axes, so you could compute the centroids over all axes once, then use whatever dimension you want. You can add the centroids to the above plot with

    points(cent[, -1], pch = 22, col = "darkred", bg = "darkred", cex = 1.1)

The resulting plot will now look like this (figure omitted).

Finally, vegan contains the adonis() and betadisper() functions, which are designed to look at differences in means and variances of multivariate data in ways very similar to Marti's papers/software. betadisper() is closely linked to the content of the paper you cite and can also return the centroids for you.
32,733
How to get standard errors from R zero-inflated count data regression? [closed]
To my knowledge, the predict method for results from zeroinfl does not include standard errors. If your goal is to construct confidence intervals, one attractive alternative is to use bootstrapping. I say attractive because bootstrapping has the potential to be more robust (at a loss of efficiency if all the assumptions for the SEs are met). Here is some rough code to do what you want. It will not work exactly, but hopefully you can make the necessary corrections.

    ## load boot package
    require(boot)

    ## output coefficients from your original model
    ## these can be used as starting values for your bootstrap model
    ## to help speed up convergence and the bootstrap
    dput(round(coef(zeroinfl.fit, "count"), 3))
    dput(round(coef(zeroinfl.fit, "zero"), 3))

    ## function to pass to the boot function to fit your model
    ## needs to take data, an index (as the second argument!) and your new data
    f <- function(data, i, newdata) {
      require(pscl)
      m <- zeroinfl(count ~ child + camper | persons, data = data[i, ],
                    start = list(count = c(1.598, -1.0428, 0.834),
                                 zero = c(1.297, -0.564)))
      mparams <- as.vector(t(do.call(rbind, coef(summary(m)))[, 1:2]))
      yhat <- predict(m, newdata, type = "response")
      return(c(mparams, yhat))
    }

    ## set the seed and do the bootstrap, make sure to set your number of cpus
    ## note this requires a fairly recent version of R
    set.seed(10)
    res <- boot(dat, f, R = 1200, newdata = Predict,
                parallel = "snow", ncpus = 4)

    ## get the bootstrapped percentile CIs
    ## the 10 here is because in my initial example, there were
    ## 10 parameters before the predicted values
    yhat <- t(sapply(10 + (1:nrow(Predict)), function(i) {
      out <- boot.ci(res, index = i, type = c("perc"))
      with(out, c(Est = t0, pLL = percent[4], pUL = percent[5]))
    }))

    ## merge CIs with predicted values
    Predict <- cbind(Predict, yhat)

I drew this code from two pages I wrote, one bootstrapping parameters from a zero-inflated poisson regression with zeroinfl (Zero-inflated poisson) and one demonstrating how to get bootstrapped confidence intervals for predicted values from a zero-truncated negative binomial model (Zero-truncated negative binomial). Combined, hopefully that provides you sufficient examples to get it working with predicted values from a zero-inflated poisson. You may also get some graphing ideas :)
32,734
Welch's t-test gives worse p-value for more extreme difference
Yes, it's the degrees of freedom. The t-statistics themselves increase as we compare groups B, C, and D to A; the numerators get bigger and the denominators get smaller. So why doesn't your approach "work"? Well, the Satterthwaite approximation for the degrees of freedom, and hence the reference distribution, is (as the name suggests!) just an approximation. It would work fine if you had more samples in each group and data that were not hugely heavy-tailed; 3 observations per group is really very small for most purposes. (Also, while p-values are useful for doing tests, they don't measure evidence and don't estimate parameters with direct interpretations in terms of data.)

If you really want to work out the exact distribution of the test statistic, and a better-calibrated p-value, there are methods cited here that could be used. However, they rely on assuming normality, an assumption you have no appreciable ability to check here.
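To make the degrees-of-freedom point concrete, the Welch-Satterthwaite df can be computed directly (a sketch with $n=3$ per group, as in the question; the variances below are made up):

```r
# Sketch: Welch-Satterthwaite df shrink as the variance ratio grows
# (n = 3 per group as in the question; variance values are illustrative)
welch_df <- function(s1sq, n1, s2sq, n2) {
  (s1sq / n1 + s2sq / n2)^2 /
    ((s1sq / n1)^2 / (n1 - 1) + (s2sq / n2)^2 / (n2 - 1))
}
welch_df(1, 3, 1, 3)      # equal variances: df = 4
welch_df(1, 3, 100, 3)    # very unequal: df approaches n - 1 = 2
```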
32,735
Welch's t-test gives worse p-value for more extreme difference
There is quite a bit to this question, and I'm fairly sure that some of it is beyond my understanding. Thus while I have a likely solution to the 'problem' and some speculation, you might need to check my 'workings'.

You are interested in evidence. Fisher proposed the use of p values as evidence, but the evidence within a dataset against the null hypothesis is more readily (sensibly?) shown with a likelihood function than with the p value. However, a more extreme p value is stronger evidence.

This is my solution: don't use Welch's t-test, but instead transform the data with a square-root transform to equalise the variances and then use a standard Student's t-test. That transform works well on your data and is one of the standard approaches for data that are heteroscedastic. The order of the p values now matches your intuition and will serve for evidence.

If you are using p values as evidence rather than attempting to protect against long-term false positive errors, then the arguments for adjusting the p values for multiple comparisons become fairly weak, in my opinion.

Now, the speculative part. As I understand it, Welch's t-test is a solution to the Fisher-Behrens problem (testing means where the data have unequal variances), but it is a solution that Fisher was unhappy with. Perhaps it is Neyman-Pearsonian in its underlying philosophy. Anyway, the amount of evidence in a p value from a t-test depends on the p value AND on the sample size. (That is not widely recognised, perhaps because the evidence in a p value from a z-test is independent of the sample size.) I suspect that Welch's test screws up the evidential nature of the p value by its adjustment of degrees of freedom.
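The transform-then-pool recipe is easy to sketch (Python with scipy here, standing in for R's t.test(..., var.equal=TRUE); the groups are the illustrative values quoted elsewhere in this thread):

```python
import math
from scipy import stats

a = [95.47, 87.90, 99.00]
c = [38.4, 40.4, 32.8]
d = [1.8, 1.2, 1.1]

# Square-root transform to stabilise the variances, then an ordinary
# pooled-variance Student t-test on the transformed data.
sa = [math.sqrt(v) for v in a]
sc = [math.sqrt(v) for v in c]
sd = [math.sqrt(v) for v in d]

p_ac = stats.ttest_ind(sa, sc, equal_var=True).pvalue
p_ad = stats.ttest_ind(sa, sd, equal_var=True).pvalue

# The more extreme comparison (A vs D) now has the smaller p-value,
# matching intuition.
print(p_ac, p_ad)
```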
Welch's t-test gives worse p-value for more extreme difference
After digging around, I think my final verdict goes something like this:

To simplify the discussion, let's consider only the case when the sample sizes are equal. In that case, the approximation to the degrees of freedom can be written as $$ \frac{\left(\frac{s_1^2}{n} + \frac{s_2^2}{n}\right)^2}{\frac{s_1^4}{n^2(n-1)} + \frac{s_2^4}{n^2(n-1)}} = ... = (n-1)\left(1 + \frac{2 s_1^2 s_2^2}{s_1^4 + s_2^4}\right), $$ where $s_1^2$ and $s_2^2$ are the sample variances and $n$ is the sample size. Hence, the degrees of freedom equals $(n-1)\cdot2$ when the sample variances are equal and approaches $(n-1)$ as the sample variances become more unequal. This means that the degrees of freedom will differ by a factor of almost 2 based only on the sample variances.

Even for reasonably-sized samples (say 10 or 20) the situation illustrated in the main post can easily occur. When many t-tests are performed, sorting the comparisons by p-value could easily result in the best comparisons not making it to the top of the list, or being excluded after adjusting for multiple testing.

My personal opinion is that this is a fundamental flaw in Welch's t-test, since it is designed for comparisons between samples with unequal variances, yet the more unequal the variances become, the more power you lose (in the sense that the ordering of your p-values will be wrong). The only solution I can think of is to either use some permutation-based test instead or transform the data so that the variances in your tests are not too far from each other.
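A quick numerical check of the expression above (a Python sketch; equal group sizes n are assumed, as in the derivation):

```python
def welch_df(s1sq, s2sq, n):
    """Welch-Satterthwaite degrees of freedom for two groups of equal size n."""
    num = (s1sq / n + s2sq / n) ** 2
    den = s1sq ** 2 / (n ** 2 * (n - 1)) + s2sq ** 2 / (n ** 2 * (n - 1))
    return num / den

n = 10
# Equal sample variances: df attains its maximum, 2 * (n - 1) = 18.
print(welch_df(4.0, 4.0, n))
# Extremely unequal variances: df collapses toward n - 1 = 9.
print(welch_df(4.0, 1e-4, n))
```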
Welch's t-test gives worse p-value for more extreme difference
As far as I know, Welch's t-test, which uses the Satterthwaite approximation, has been verified for tests at the 0.05 significance level. That means the critical value c satisfying P(linear combination of chi-squared distributions > c) = 0.05 can be approximated well. So I think the p-value is quite reliable around 0.05, and obviously it is not so when it gets much smaller than 0.05.

p1 <- numeric(50)
p2 <- numeric(50)
for (m in 1:50) {
  a <- c(-m + 95.47, -m + 87.90, -m + 99.00)
  c <- c(38.4, 40.4, 32.8)
  d <- c(1.8, 1.2, 1.1)
  p1[m] <- t.test(a, c, var.equal = FALSE)$p.value
  p2[m] <- t.test(a, d, var.equal = FALSE)$p.value
}
plot(1:50, p1, col = "black")
points(1:50, p2, col = "red")

You can see the p-values get more accurate as they approach 0.05. So we should not trust p-values that are much smaller than 0.05 when using Welch's t-test. If one is used anyway, I think we should write a paper about it. Anyhow, I am currently writing about "Statistics" and this theme is intriguing. I hope to use your data in the book, with your permission. Would you let me use your data? And I would be grateful if you could tell me the source of the data and the context from which they came!
How to simulate repeated measures multivariate outcomes in R?
Use the rmvnorm() function from the mvtnorm package. It takes three arguments: the number of draws, the mean vector, and the variance-covariance matrix. The sigma will have 3*5 = 15 rows and columns, one for each observation of each variable. There are many ways of setting these 15^2 parameters (AR, compound symmetry, unstructured, ...). However you fill in this matrix, be aware of the assumptions, particularly when you set a correlation/covariance to zero, or when you set two variances to be equal. For a starting point a sigma matrix might look something like this:

sigma = matrix(c(
  #--- Y1 ------------   --- Y2 ------------   --- Y3 ------------
   3 , .5,  0,  0,  0,    0,  0,  0,  0,  0,   .5, .2,  0,  0,  0,
  .5 ,  3, .5,  0,  0,    0,  0,  0,  0,  0,   .2, .5, .2,  0,  0,
   0 , .5,  3, .5,  0,    0,  0,  0,  0,  0,    0, .2, .5, .2,  0,
   0 ,  0, .5,  3, .5,    0,  0,  0,  0,  0,    0,  0, .2, .5, .2,
   0 ,  0,  0, .5,  3,    0,  0,  0,  0,  0,    0,  0,  0, .2, .5,
   0 ,  0,  0,  0,  0,    3, .5,  0,  0,  0,    0,  0,  0,  0,  0,
   0 ,  0,  0,  0,  0,   .5,  3, .5,  0,  0,    0,  0,  0,  0,  0,
   0 ,  0,  0,  0,  0,    0, .5,  3, .5,  0,    0,  0,  0,  0,  0,
   0 ,  0,  0,  0,  0,    0,  0, .5,  3, .5,    0,  0,  0,  0,  0,
   0 ,  0,  0,  0,  0,    0,  0,  0, .5,  3,    0,  0,  0,  0,  0,
  .5 , .2,  0,  0,  0,    0,  0,  0,  0,  0,    3, .5,  0,  0,  0,
  .2 , .5, .2,  0,  0,    0,  0,  0,  0,  0,   .5,  3, .5,  0,  0,
   0 , .2, .5, .2,  0,    0,  0,  0,  0,  0,    0, .5,  3, .5,  0,
   0 ,  0, .2, .5, .2,    0,  0,  0,  0,  0,    0,  0, .5,  3, .5,
   0 ,  0,  0, .2, .5,    0,  0,  0,  0,  0,    0,  0,  0, .5,  3
), 15, 15)

So sigma[1,12] is .2, which means the covariance between the first observation of Y1 and the 2nd observation of Y3 is .2. (Note this is a marginal covariance; conditional relationships are encoded in the inverse of sigma.) The diagonal entries do not all have to be the same number: that is a simplifying assumption that I made. Sometimes it makes sense, sometimes it doesn't. In general it means the correlation between a 3rd observation and a 4th is the same as the correlation between a 1st and a 2nd. You also need means. It could be as simple as

meanTreat = c(1:5, 51:55, 101:105)
meanControl = c(1,1,1,1,1, 50,50,50,50,50, 100,100,100,100,100)

Here the first 5 are the means for the 5 observations of Y1, ..., the last 5 are the means for the 5 observations of Y3. Then get 2,000 observations of your data with:

library(mvtnorm)
sampleT = rmvnorm(1000, meanTreat, sigma)
sampleC = rmvnorm(1000, meanControl, sigma)
sample = data.frame(rbind(sampleT, sampleC))   # stack the two groups row-wise
colnames(sample) = c("Y11","Y12","Y13","Y14","Y15",
                     "Y21","Y22","Y23","Y24","Y25",
                     "Y31","Y32","Y33","Y34","Y35")
sample$group = c(rep("Treat", 1000), rep("Control", 1000))

Where Y11 is the 1st observation of Y1, ..., Y15 is the 5th obs of Y1...
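As a cross-check of the matrix above (a Python/NumPy sketch rather than R; it rebuilds the same banded structure programmatically, with the same assumed values 3, .5, and .2), one can verify that sigma is a valid covariance matrix and that simulated draws reproduce it:

```python
import numpy as np

def band(k, diag, off):
    """k x k matrix with `diag` on the diagonal and `off` on the first off-diagonals."""
    m = np.eye(k) * diag
    i = np.arange(k - 1)
    m[i, i + 1] = off
    m[i + 1, i] = off
    return m

# Three repeated-measures blocks (Y1, Y2, Y3), five observations each.
sigma = np.zeros((15, 15))
for b in range(3):
    sigma[5 * b:5 * b + 5, 5 * b:5 * b + 5] = band(5, 3.0, 0.5)
# Cross-covariance between Y1 and Y3 only: .5 at matching times, .2 at adjacent times.
cross = band(5, 0.5, 0.2)
sigma[0:5, 10:15] = cross
sigma[10:15, 0:5] = cross.T

# A valid covariance matrix must be symmetric and positive definite.
assert np.allclose(sigma, sigma.T)
assert np.linalg.eigvalsh(sigma).min() > 0

# Draws should reproduce sigma up to Monte Carlo error.
rng = np.random.default_rng(0)
draws = rng.multivariate_normal(np.zeros(15), sigma, size=200_000)
print(np.abs(np.cov(draws, rowvar=False) - sigma).max())
```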
How to simulate repeated measures multivariate outcomes in R?
To generate multivariate normal data with a specified correlation structure, you need to construct the variance-covariance matrix and calculate its Cholesky decomposition using the chol function. Note that chol returns the upper-triangular factor R with t(R) %*% R equal to the vcov matrix, so you multiply independent standard normal vectors by t(R); the product then has the desired variance-covariance matrix.

v <- matrix(c(2, .3, .3, 2), 2)
cv <- chol(v)   # upper-triangular factor: t(cv) %*% cv == v
o <- replicate(1000, {
  y <- t(cv) %*% matrix(rnorm(100), 2)   # 2 x 50 draws with vcov v
  v1 <- var(y[1, ])
  v2 <- var(y[2, ])
  v3 <- cov(y[1, ], y[2, ])
  c(v1, v2, v3)
})
## Monte Carlo means should estimate the components of v
rowMeans(o)
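The same trick in Python/NumPy, as a sketch (note that np.linalg.cholesky returns the lower-triangular factor L with L @ L.T equal to v, so here no transpose is needed on the left):

```python
import numpy as np

v = np.array([[2.0, 0.3],
              [0.3, 2.0]])
L = np.linalg.cholesky(v)  # lower triangular: L @ L.T == v

rng = np.random.default_rng(1)
z = rng.standard_normal((2, 100_000))  # independent standard normal columns
y = L @ z                              # correlated draws with covariance v

print(np.cov(y))  # close to v
```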
Do low-discrepancy sequences work in discrete spaces?
The short answer is: Yes! It can work, and is as simple as multiplying the vector $t_n \in (0,1)^d$ by an integer $m$ and then taking the integer part of each of its components. The longer answer is that your intuition is correct: in practice it has mixed results depending on the choice of: which sequence you choose (Halton, Sobol, etc.), the basis parameters (e.g., 2, 3, 5, ...), and, to a lesser degree, the value of $m$. However, I have recently written a detailed blog post, "The unreasonable effectiveness of quasirandom sequences", on how to easily create an open-ended low discrepancy sequence in arbitrary dimensions that is much more amenable to discretization than existing low discrepancy sequences, such as the Halton and Kronecker sequences. The section in the post called "Covering" specifically deals with your question of discretizing the low discrepancy sequences. In the following image, each red square marks a cell (a unique integer lattice point) that does not contain a blue point, so less red implies a more even distribution. One can clearly see how evenly the $R$-sequence distributes points compared to other contemporary methods. The solution is an additive recurrence method (modulo 1) which generalizes the 1-dimensional problem whose solution depends on the Golden Ratio. The solution to the $d$-dimensional problem depends on a special constant $\phi_d$, where $\phi_d$ is the smallest positive real value of $x$ such that $$ x^{d+1}\;=x+1$$ For $d=1$,  $ \phi_1 = 1.618033989... $, which is the canonical golden ratio. For $d=2$, $ \phi_2 = 1.3247179572... $, which is often called the plastic constant, and has some beautiful properties. This value was conjectured to most likely be the optimal value for a related two-dimensional problem [Hensley, 2002]. Jacob Rus has posted a beautiful visualization of this 2-dimensional low discrepancy sequence, which can be found here.
With this special constant in hand, the calculation of the $n$-th term is extremely simple and fast: $$ R: \mathbf{t}_n = \pmb{\alpha}_0 + n \pmb{\alpha} \; (\textrm{mod} \; 1),  \quad n=1,2,3,... $$ $$ \textrm{where} \quad \pmb{\alpha} =(\frac{1}{\phi_d}, \frac{1}{\phi_d^2},\frac{1}{\phi_d^3},...\frac{1}{\phi_d^d}), $$ Of course, the reason this is called a recurrence sequence is that the above definition is equivalent to $$ R: \mathbf{t}_{n+1} = \mathbf{t}_{n} + \pmb{\alpha} \; (\textrm{mod} \; 1) $$ In nearly all instances, the choice of $\pmb{\alpha}_0 $ does not change the key characteristics, and so for reasons of obvious simplicity, $\pmb{\alpha}_0 =\pmb{0}$ is the usual choice. However, there are some arguments, relating to symmetry, that suggest that $\pmb{\alpha}_0=\pmb{1/2}$ is a better choice. The Python code is

import numpy as np

# Find phi_d, the smallest positive real root of x^(d+1) = x + 1,
# via the Newton-Raphson method.
def gamma(d):
    x = 1.0
    for i in range(20):
        x = x - (pow(x, d + 1) - x - 1) / ((d + 1) * pow(x, d) - 1)
    return x

d = 5
n = 1000
# m can be any number.
# In the diagram above it is chosen to be exactly the sqrt of n,
# simply to make the visualization more intuitive,
# so that ideally each cell should have exactly one dot.
m = 10

g = gamma(d)
alpha = np.zeros(d)
for j in range(d):
    alpha[j] = pow(1 / g, j + 1) % 1

z = np.zeros((n, d))
for i in range(n):
    z[i] = (0.5 + alpha * (i + 1)) % 1
c = np.floor(m * z).astype(int)
print(c)

Hope that helps!
Do low-discrepancy sequences work in discrete spaces?
If you have a finite number of spaces, you will be better off with an explicit enumeration of possible spaces with a balanced incomplete block design built upon them. In the end, the properties of the low discrepancy sequences are asymptotic, with desirable properties achieved at lengths of the order $N\sim 6^s$, where $s$ is the dimension of your space. If the number of possible combinations is less than that, you can just take all possible combinations and achieve a balanced design that way. Update: there was a book that discussed using QMC for Poisson processes and Bernoulli trials. Maybe you'd find something useful there, although in my opinion it is a very far cry from a good value for the money. For $15, maybe. I found it to be somewhat sloppy in places, pushing the author's (sometimes weird) ideas rather than utilizing what's been understood as the best methods in the literature.
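For a concrete sense of the enumerate-everything option, a minimal sketch (Python; the three factors and their level counts are made up for illustration):

```python
import itertools

# Hypothetical discrete space: three factors with 4, 3, and 2 levels.
levels = [range(4), range(3), range(2)]

# When the number of combinations is small, a full factorial design
# (every combination exactly once) is trivially balanced, with no need
# for low-discrepancy machinery.
design = list(itertools.product(*levels))
print(len(design))  # 4 * 3 * 2 = 24 runs, all distinct
```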
Missing rates and multiple imputation
From the comments, you're confident that you're in a MAR or MCAR situation. Then multiple imputation is at least reasonable. So how much missingness is tractable? Think of it this way: basically, multiple imputation makes all your model parameter estimates less certain as a function of the accuracy with which the missing data can be predicted with your imputation model, which will depend, among other things, on the amount of missingness that needs imputing and the number of imputations you use. How much is 'too much' missingness therefore depends on how much added variance/uncertainty you are willing to put up with. A useful quantity for you might be the relative efficiency ($RE$) of an MI analysis. This depends on the 'fraction of missing information' (not the simple rate of missingness), usually called $\lambda$, and the number of imputations, usually called $m$, as $RE \approx 1/(1+\lambda/m)$. Rather than generate the definitions of missing information etc. here, you might simply read the MI FAQ, which puts things very clearly. From there you'll know whether you want to tackle the original sources: Rubin etc. Practically speaking, you should probably just try an imputation analysis and see how it works out.
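The relative-efficiency formula is easy to tabulate (a Python sketch; the lambda value of 0.5 is just an illustrative choice):

```python
# RE ~= 1 / (1 + lambda / m): efficiency of m imputations relative to
# infinitely many, where lambda is the fraction of missing information.
def relative_efficiency(lam, m):
    return 1.0 / (1.0 + lam / m)

# Even with half the information missing (lambda = 0.5), five imputations
# already give about 91% efficiency, and further imputations add little.
for m in (3, 5, 10, 20):
    print(m, round(relative_efficiency(0.5, m), 3))
```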
Missing rates and multiple imputation
You might find Rubin, Donald B., and Nathaniel Schenker (1986), "Multiple Imputation for Interval Estimation from Simple Random Samples with Ignorable Nonresponse," Journal of the American Statistical Association 81(394): 366–374, helpful.
Expression for the median of a sum of half-Cauchy random variables
One way to deduce the simulated $n\log(n)$ behavior is through truncation. Generally, for any distribution $F$ and $0\le \alpha\lt 1$ let $F_\alpha$ be truncated symmetrically in both tails so that for $F^{-1}(\alpha/2)\le x \le F^{-1}(1-\alpha/2),$ $$F_\alpha(x) = \min{\left(1, \frac{F(x) - \alpha/2}{1-\alpha}\right)}.$$ Let $E_\alpha$ be the expectation of $F_\alpha$ defined by $$E_\alpha = \int_{\mathbb R} x\,\mathrm{d}F_\alpha(x).$$ Because the support of $F_\alpha$ is bounded, it has finite expectation and variance. The Central Limit Theorem tells us that for sufficiently large sample sizes $n,$ the sum of $n$ iid random variables with $F_\alpha$ for their common distribution is approximately Normal with mean $n E_\alpha.$ This approximate Normality implies the median of the sum distribution also is approximately $n E_\alpha.$ Now, the median of a sum of independent variables with distribution $F$ won't be quite the same as this approximate median: it must lie somewhere between the $1/2-\alpha/2$ and $1/2+\alpha/2$ quantiles. In the present case, where $F$ is nonzero in a neighborhood of its median, this means that by taking $\alpha$ sufficiently small (but still nonzero), we can make the median of $F_\alpha$ as close as we like to the median of $F=F_0.$ It remains only to find the mean of $F_\alpha$ in the question, where $F$ is the half-Cauchy distribution determined by $$F(x) = \frac{2}{\pi}\int_0^x \frac{\mathrm{d}t}{1+t^2} = \frac{2}{\pi}\tan^{-1}(x)$$ for $x\ge 0.$ The ensuing calculations are straightforward. 
First, $$E_\alpha = \frac{1}{1-\alpha}\int_{x_0}^{x_1} \frac{x\,\mathrm{d}x}{1+x^2} = \frac{1}{\pi(1-\alpha)} \log\frac{1+x_1^2}{1+x_0^2}$$ where $F(x_0) = \alpha/2$ and $F(x_1) = 1-\alpha/2.$ Thus $$x_0 = \tan\left(\frac{\pi\alpha}{4}\right) \approx \frac{\pi\alpha}{4},\quad x_1 = \tan\left(\frac{\pi}{2}-\frac{\pi\alpha}{4}\right) = \frac{1}{x_0} \approx \frac{4}{\pi\alpha},$$ whence $$E_\alpha \approx \frac{1}{\pi} \log \frac{1 + \left(\frac{4}{\pi\alpha}\right)^2}{1 + \left(\frac{\pi\alpha}{4}\right)^2} \approx \frac{2}{\pi}\log\left(\frac{4}{\pi\alpha}\right).$$ Finally, when taking a sample of size $n$ from $F$ we would like to ensure that most of the time the sample really is a sample of $F_\alpha,$ uncontaminated by any values in its tails. This happens with probability $(1-\alpha)^n.$ To make this chance exceed some threshold $1-\epsilon,$ with a tiny value of $\epsilon,$ we must have $$\alpha \approx \frac{\epsilon}{n}.$$ Using this value in the foregoing gives $$E_\alpha \approx \frac{2}{\pi}\log\left(\frac{4}{\pi\epsilon/n}\right) = \frac{2}{\pi}\left(\log\left(\frac{4}{\pi\epsilon}\right) + \log(n)\right).$$ Consequently, the median of the sum of $n$ iid half-Cauchy variables must be close to $$nE_\alpha \approx \frac{2}{\pi}n\log(n) + \left(\frac{2}{\pi}\log\frac{4}{\pi\epsilon}\right)n.$$ For any desired $\epsilon\gt 0,$ we may choose $n$ so large that the value is dominated by the first term: that is how the $n\log n$ behavior arises. Moreover, now we have the implicit constant $2/\pi$ along with the asymptotic order of the error term, $O(n).$ Let the "relative median" be $nE_\alpha$ divided by $\left(2/\pi\right) n\log(n).$ Here is a plot of such relative medians as observed in 5,000 simulations of samples up to size 5,000. The reference red curve is a multiple of $1/\log(n),$ which this analysis indicates is proportional to the asymptotic relative error. The fit is good. The R code to produce it can be used for additional simulation studies. 
n.sim <- 5e3 # Simulation count
n <- 5e3     # Maximum sample size
X <- apply(apply(matrix(abs(rt(n*n.sim, 1)), n), 2, cumsum), 1, median)
N <- seq_len(n)[-1] # Can't divide by n log(n) when n==0
X <- X[-1]
x.0 <- 2/pi * N * log(N) # Theory
#
# Plot the relative medians.
#
plot(N, X / x.0, xlab="n", log="xy", cex=0.5, pch=19,
     ylab="Value", main="Simulated Relative Medians")
#
# Draw a reference curve.
#
fit <- lm(X / x.0 - 1 ~ 0 + I(1/log(N)))
summary(fit)
b <- coefficients(fit)[1]
curve(1 + b/log(n), add=TRUE, xname="n", lwd=2, col="Red")
32,745
Expression for the median of a sum of half-Cauchy random variables
Here is one way to estimate the median in the question, using order statistics: \begin{align} \text{median}\left(\sum_{i=1}^n|X|_i\right) &=\text{median}\left(\sum_{i=1}^n|X|_{(i)}\right) && (1)\\ &\sim\sum_{i=1}^n\text{median}\left(|X|_{(i)}\right) && (2)\\ &\sim\sum_{i=1}^n F_{|X|}^{-1}\left(\frac{i}{n+1}\right) && (3)\\ &=\sum_{i=1}^n F_{X}^{-1}\left(\frac{1}{2}+\frac{i}{2n+2}\right) &&\\ &=\sum_{i=1}^n \tan\left(\frac{\pi i}{2n+2}\right) && \\ &\sim \sum_{i=1}^n \int_{x=i-1/2}^{i+1/2} \tan\left(\frac{\pi x}{2n+2}\right)dx && (4)\\ &=\int_{x=1/2}^{n+1/2} \tan\left(\frac{\pi x}{2n+2}\right)dx && \\ &=\frac{2n+2}{\pi}\log\left(\cot\left(\frac{\pi}{4n+4}\right)\right) && (5)\\ \end{align} In (1), we wrote the sum as the sum of order statistics. In (2), we replaced the median of the sum by the sum of the medians, which is the most questionable part of this estimate. In (3), we used the standard estimate for the central value of an order statistic. In (4), we estimated a tangent as the average of nearby tangents. In (5), the cotangent of small $x$ is like $1/x$, so we could also write the final result as $O(n \log n)$, as predicted in the comments above.
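As a numerical sanity check (my own sketch, not part of the original answer), one can compare the closed-form integral in (5) with the quantile sum from (3) that it approximates:

```python
import math

def closed_form(n):
    # Step (5): (2n+2)/pi * log(cot(pi/(4n+4)))
    return (2 * n + 2) / math.pi * math.log(1.0 / math.tan(math.pi / (4 * n + 4)))

def quantile_sum(n):
    # Step (3): sum_i tan(pi*i/(2n+2)), the sum of estimated order-statistic medians
    return sum(math.tan(math.pi * i / (2 * n + 2)) for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, round(closed_form(n) / quantile_sum(n), 4))
```

The ratio stays close to 1 and drifts toward it as $n$ grows, consistent with the $O(n\log n)$ behaviour.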
32,746
How to define what a "sample" is?
Sometimes I appeal to glossaries of statistics, and they usually help. Search for and bookmark one you like or find most helpful. For example, here are some definitions retrieved from the "Glossary of Statistical Terms" on the stat.berkeley.edu website. Unit: a member of a population. A unit could also be interpreted as an observation from a population. Sample: a sample is a collection of units from a population. Random sample: a random sample is a sample whose members are chosen at random from a given population in such a way that the chance of obtaining any particular sample can be computed. The number of units in the sample is called the sample size, often denoted n. The number of units in the population often is denoted N. ... The definition of random sample continues, but here I quoted the part relevant to the question.
32,747
How to explain linear mixed models to laypeople?
Test grades (dependent variable) could be related to how much the students study (fixed effect), but might also be dependent on the school they go to (random effect), as well as simple variation between students (residual error).
32,748
How to explain linear mixed models to laypeople?
A sentence or two? Yikes! It's all about random vs fixed effects, I suppose, and so I would focus on shrinking individual estimates toward the population mean (aka BLUP).
32,749
Evaluating and combining methods based on ROC and PR curves
I will state a few things about the ROC / PR spaces that are surely evident for you, but that I prefer to make clear. The ROC space has on the $x$-axis one minus the specificity, $1-Sp$, and on the $y$-axis the sensitivity, $Se$. The PR space has on the $x$-axis the recall, which is another name for the sensitivity ($Re = Se$), and on the $y$-axis the precision, which is another name for the Positive Predictive Value ($Pr = PPV$). If $p$ is the probability of being in the "positive class", we have $$Pr = PPV = {Se\cdot p \over (1-Sp)\cdot(1-p) + Se \cdot p}.$$ The "horizontal slices" in the ROC space correspond to "vertical slices" of the PR space. From the above equality, it is easy to see that when in the ROC space a curve (e.g. the red curve of your first graph) is to the left of a second one (the green curve), in the PR space the corresponding (red) curve is above the (green) curve. This is the case in your second graph, except for Recall values $< 0.1$. The corresponding part of the ROC curves in your first graph is for $Se < 0.1$, which is "glued" to the $y$-axis, so you can't see anything there. Here the advantage of the PR space is that it helps in visualizing this area. So I don't see a contradiction in these results: method 3 is indeed better than the two others, except for Sensitivity / Recall values $< 0.1$, which correspond to very high Specificity values. The moral is that the way you improve your classifier slightly degrades its performance when you demand a very high Specificity. These are quite trivial reflections, but who knows, this may help?
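To see how strongly the prevalence $p$ drives that mapping, here is a small sketch (Python; the Se/Sp/prevalence values are purely illustrative):

```python
def precision_from_roc(se, sp, p):
    # Pr = Se*p / ((1-Sp)*(1-p) + Se*p): maps a ROC point (1-Sp, Se)
    # to the PR point (Se, Pr) for positive-class probability p.
    return se * p / ((1 - sp) * (1 - p) + se * p)

# The same ROC point (Se = 0.9, Sp = 0.95) at three prevalences:
for p in (0.5, 0.1, 0.01):
    print(p, round(precision_from_roc(0.9, 0.95, p), 3))
# -> 0.947, 0.667, 0.154: an excellent-looking ROC point can still
#    mean poor precision when positives are rare.
```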
32,750
Evaluating and combining methods based on ROC and PR curves
Deviance (or -2 log likelihood) is the most statistically sensitive measure. I would use that to compare models.
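As a tiny illustration of comparing models by deviance (the log-likelihood values below are hypothetical, not from the question's data):

```python
# Hypothetical maximised log-likelihoods of two competing models:
loglik_a = -120.4
loglik_b = -112.9

deviance_a = -2 * loglik_a   # deviance = -2 * log-likelihood
deviance_b = -2 * loglik_b

# The model with the smaller deviance fits better; for nested models
# the difference can be referred to a chi-squared distribution.
print(round(deviance_a - deviance_b, 6))  # 15.0
```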
32,751
Evaluating and combining methods based on ROC and PR curves
For imbalanced classes using AUC as a measure of classifier performance, rather than (0,1)-loss can be misleading. See for example Xue and Titterington "Do unbalanced data have a negative effect on LDA?". For two-class classification the (0,1)-loss is usually the loss of real interest, so you may find that working directly with that loss, rather than AUC, is more informative.
32,752
Evaluating and combining methods based on ROC and PR curves
I eventually resorted to using logistic regression (and similar models, such as adaptive splines) to combine the scores. I think the idea is that of stacking and has been used before, e.g., here and here.
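A toy version of that stacking idea (entirely synthetic — the two "scores" and the hand-rolled logistic fit are illustrative stand-ins for the actual method scores): two individually weak scores are combined by fitting a logistic-regression combiner on them.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=300):
    # Plain batch-gradient-descent logistic regression (the stacking combiner).
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(d):
                gw[j] += (p - yi) * xi[j]
            gb += p - yi
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def accuracy(scores, y):
    # Threshold each raw score at 0 and compare with the labels.
    return sum((s > 0) == (yi == 1) for s, yi in zip(scores, y)) / len(y)

random.seed(0)
s1 = [random.gauss(0, 1) for _ in range(2000)]  # score from "method 1"
s2 = [random.gauss(0, 1) for _ in range(2000)]  # score from "method 2"
y = [1 if a + b > 0 else 0 for a, b in zip(s1, s2)]  # truth depends on both

w, b0 = fit_logistic(list(zip(s1, s2)), y)
stacked = [w[0] * a + w[1] * c + b0 for a, c in zip(s1, s2)]

acc1, acc2 = accuracy(s1, y), accuracy(s2, y)
acc_stacked = accuracy(stacked, y)
print(round(acc1, 3), round(acc2, 3), round(acc_stacked, 3))
```

Each score alone classifies at roughly 75% here, while the stacked combination recovers the signal that only the two scores together carry.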
32,753
Inference in linear model with conditional heteroskedasticity
In a slightly more general context with $Y$ an $n$-dimensional vector of $y$-observations (the responses, or dependent variables), $X$ an $n \times p$ matrix of $x$-observations (covariates, or dependent variables) and $\theta = (\beta_1, \beta_2, \sigma)$ the parameters such that $Y \sim N(X\beta_1, \Sigma(\beta_2, \sigma))$ then the minus-log-likelihood is $$l(\beta_1, \beta_2, \sigma) = \frac{1}{2}(Y-X\beta_1)^T \Sigma(\beta_2, \sigma)^{-1} (Y-X\beta_1) + \frac{1}{2}\log |\Sigma(\beta_2, \sigma)|$$ In the OP's question, $\Sigma(\beta_2, \sigma)$ is diagonal with $$\Sigma(\beta_2, \sigma)_{ii} = \sigma^2 g(z_i^T \beta_2)^2$$ so the determinant becomes $\sigma^{2n} \prod_{i=1}^n g(z_i^T \beta_2)^2$ and the resulting minus-log-likelihood becomes $$\frac{1}{2\sigma^2} \sum_{i=1}^n \frac{(y_i-x_i^T\beta_1)^2}{ g(z_i^T \beta_2)^2} + n \log \sigma + \sum_{i=1}^n \log g(z_i^T \beta_2)$$ There are several ways to approach the minimization of this function (assuming the three parameters are variation independent). You can try to minimize the function by a standard optimization algorithm remembering the constraint that $\sigma > 0$. You can compute the profile minus-log-likelihood of $(\beta_1, \beta_2)$ by minimizing over $\sigma$ for fixed $(\beta_1, \beta_2)$, and then plug the resulting function into a standard unconstrained optimization algorithm. You can alternate between optimizing over each of the three parameters separately. Optimizing over $\sigma$ can be done analytically, optimizing over $\beta_1$ is a weighted least squares regression problem, and optimizing over $\beta_2$ is equivalent to fitting a gamma generalized linear model with $g^2$ the inverse link. The last suggestion appeals to me because it builds on solutions that I already know well. In addition, the first iteration is something I would consider doing anyway. 
That is, first compute an initial estimate of $\beta_1$ by ordinary least squares ignoring the potential heteroskedasticity, and then fit a gamma glm to the squared residuals to get an initial estimate of $\beta_2$ $-$ just to check if the more complicated model seems worthwhile. Iterations incorporating the heteroskedasticity into the least squares solution as weights might then improve upon the estimate. Regarding the second part of the question, I would probably consider computing a confidence interval for the linear combination $w_1^T\beta_1 + w_2^T\beta_2$ either by using standard MLE asymptotics (checking with simulations that the asymptotics work) or by bootstrapping. Edit: By standard MLE asymptotics I mean using the multivariate normal approximation to the distribution of the MLE with covariance matrix the inverse Fisher information. The Fisher information is by definition the covariance matrix of the gradient of $l$. It depends in general on the parameters. If you can find an analytic expression for this quantity you can try plugging in the MLE. Alternatively, you can estimate the Fisher information by the observed Fisher information, which is the Hessian of $l$ at the MLE. Your parameter of interest is a linear combination of the parameters in the two $\beta$-vectors, hence from the approximating multivariate normal of the MLE you can find a normal approximation of the estimator's distribution as described here. This gives you an approximate standard error and you can compute confidence intervals. It's well described in many (mathematical) statistics books, but a reasonably accessible presentation I can recommend is In All Likelihood by Yudi Pawitan. Anyway, the formal derivation of the asymptotic theory is fairly complicated, relies on a number of regularity conditions, and only gives valid asymptotic distributions. 
Hence, if in doubt I would always do some simulations with a new model to check if I can trust the results for realistic parameters and sample sizes. Simple, non-parametric bootstrapping where you sample the triples $(y_i,x_i,z_i)$ from the observed data set with replacement can be a useful alternative if the fitting procedure is not too time consuming.
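To make the likelihood expression and suggestion 2 concrete, here is a small sketch (Python; $g=\exp$ is just one assumed choice of positive function, and the data values are made up). Setting $\partial l/\partial\sigma = 0$ for fixed $(\beta_1,\beta_2)$ gives the closed-form profile $\hat\sigma^2 = \frac{1}{n}\sum_i (y_i - x_i^T\beta_1)^2/g(z_i^T\beta_2)^2$:

```python
import math

def neg_log_lik(y, X, Z, beta1, beta2, sigma, g=math.exp):
    # l = sum (y_i - x_i'b1)^2 / (2 sigma^2 g(z_i'b2)^2)
    #     + n log(sigma) + sum log g(z_i'b2)
    n = len(y)
    quad, logg = 0.0, 0.0
    for yi, xi, zi in zip(y, X, Z):
        mu = sum(b * x for b, x in zip(beta1, xi))
        gi = g(sum(b * z for b, z in zip(beta2, zi)))
        quad += (yi - mu) ** 2 / gi ** 2
        logg += math.log(gi)
    return quad / (2 * sigma ** 2) + n * math.log(sigma) + logg

def profile_sigma(y, X, Z, beta1, beta2, g=math.exp):
    # Setting dl/dsigma = 0 gives sigma^2 = (1/n) sum (y_i - mu_i)^2 / g_i^2.
    n = len(y)
    quad = sum((yi - sum(b * x for b, x in zip(beta1, xi))) ** 2
               / g(sum(b * z for b, z in zip(beta2, zi))) ** 2
               for yi, xi, zi in zip(y, X, Z))
    return math.sqrt(quad / n)

# Tiny illustrative data set (all values assumed):
y = [1.0, 2.0, 0.5, 3.0]
X = [[1.0, 0.0], [1.0, 1.0], [1.0, -1.0], [1.0, 2.0]]
Z = [[0.0], [0.5], [1.0], [1.5]]
b1, b2 = [1.0, 0.5], [0.2]

s_hat = profile_sigma(y, X, Z, b1, b2)
print(round(s_hat, 4), round(neg_log_lik(y, X, Z, b1, b2, s_hat), 4))
```

In an alternating scheme, this closed form replaces the numerical optimisation over $\sigma$, leaving only the two $\beta$-updates.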
32,754
Averages of averages (of averages, of averages...)
This is not a direct answer to your question ('Which type of averaging to choose'), but rather a recommendation to avoid calculating averages at all: your scenario looks like a case for hierarchical/multilevel models (MLM), as the data are perfectly nested. The data are nested on four levels — pixels (Level 1) nested in cells (L2), nested in fields (L3), nested in wells (L4) — giving random effects at the cell, field, and well levels. Treatments should be treated as fixed effects. You are only interested in the effect of treatment; the MLM method takes care of the different variances at each level and also gives you an estimate of how much variance is explained by which level. So you do not 'lose' any variance by treating an averaged value as 'the measurement'; instead, you estimate the model at the level of the raw data. This method, however, calls for a sufficient number of groups for each random effect (i.e., enough pixels, enough cells, enough fields, enough wells). As you are not interested in cross-level interactions, general recommendations say something like 10 to 30 units minimum (of course, depending on the specific scenario, etc.; see, e.g., here).
32,755
Inter-rater reliability with many non-overlapping raters
Check out Krippendorff's alpha. It has several advantages over some other measures such as Cohen's Kappa, Fleiss's Kappa, and Cronbach's alpha: it is robust to missing data (which I gather is the main concern you have); it can deal with more than 2 raters; it can handle different types of scales (nominal, ordinal, etc.); and it accounts for chance agreement better than some other measures like Cohen's Kappa. Calculation of Krippendorff's alpha is supported by several statistical software packages, including R (by the irr package), SPSS, etc. Below are some relevant papers that discuss Krippendorff's alpha, including its properties and its implementation, and compare it with other measures:

Hayes, A. F., & Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1), 77-89.

Krippendorff, K. (2004). Reliability in Content Analysis: Some Common Misconceptions and Recommendations. Human Communication Research, 30(3), 411-433. doi: 10.1111/j.1468-2958.2004.tb00738.x

Chapter 3 in Krippendorff, K. (2013). Content Analysis: An Introduction to Its Methodology (3rd ed.). Sage.

There are some additional technical papers on Krippendorff's website.
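For nominal data, the coincidence-matrix computation behind Krippendorff's alpha can be sketched in a few lines. This is a hand-rolled illustration only; for real analyses use an established implementation such as R's irr package.

```python
import numpy as np

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of lists: all ratings each unit received.  Units may
    have different numbers of raters, so missing ratings are simply absent.
    """
    units = [u for u in units if len(u) >= 2]        # only pairable units
    cats = sorted({c for u in units for c in u})
    idx = {c: i for i, c in enumerate(cats)}
    o = np.zeros((len(cats), len(cats)))             # coincidence matrix
    for u in units:
        m = len(u)
        for i, a in enumerate(u):
            for j, b in enumerate(u):
                if i != j:
                    o[idx[a], idx[b]] += 1.0 / (m - 1)
    n = o.sum()
    nc = o.sum(axis=1)                               # value marginals
    d_obs = n - np.trace(o)                          # observed disagreement
    d_exp = (n * n - (nc ** 2).sum()) / (n - 1)      # expected disagreement
    return 1.0 - d_obs / d_exp

print(krippendorff_alpha_nominal([[1, 1], [1, 1], [2, 2], [2, 2], [1, 2]]))  # ~0.64
```

Note how a unit rated by only one coder simply drops out, which is how the method tolerates the non-overlapping-raters problem.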
32,756
Inter-rater reliability with many non-overlapping raters
If you just need to convince yourself (rather than report a number for another party), you could fit a cross-classified hierarchical/mixed model, with items and raters being two random effects. Then the intraclass correlation for the raters is [variance of the raters' random effect]/[variance of the raters' random effect + variance of the items' random effect + (variance of the logistic distribution = $\pi^2/3$)]. A specific implementation depends on the computational platform you are using; the default on CV is R, so you'd be using lme4 with it (its glmer function handles crossed random effects in a logistic model), but you may have something different like SPSS or Stata.
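Given the fitted variance components, the intraclass correlation above is a one-line computation (the variance values below are made up purely for illustration):

```python
import math

def rater_icc(var_rater, var_item):
    """Rater ICC in a logistic cross-classified mixed model:
    var_rater / (var_rater + var_item + pi^2/3)."""
    return var_rater / (var_rater + var_item + math.pi ** 2 / 3)

print(rater_icc(0.5, 1.0))  # ~0.104: raters account for ~10% of latent variance
```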
32,757
What is the distribution of maximum of a pair of iid draws, where the minimum is an order statistic of other minima?
I answer this: "Arbitrarily group the draws into n groups with m values in each group. Look at the minimum value in each group. Take the group that has the greatest of these minima. Now, what is the distribution that defines the maximum value in that group?" Let $X_{i,j}$ be the $i$-th random variable in group $j$ and $f(x_{i,j})$ ($F(x_{i,j})$) its density (cdf) function. Let $X_{\max,j}, X_{\min,j}$ be the maximum and minimum in group $j$. Let $X_{final}$ be the variable that results at the end of the whole process. We want to calculate $P(X_{final}<x)$, which is $$P(X_{\max,j_0}<x \hbox{ and } X_{\min,j_0}=\max_j{X_{\min,j}} \hbox { and } 1\leq j_0\leq n)$$ $$=nP(X_{\max,1}<x \hbox{ and } X_{\min,1}=\max_j{X_{\min,j}})$$ $$=nmP(X_{1,1}<x\hbox{ and } X_{1,1}=\max_i(X_{i,1})\hbox{ and } X_{\min,1}=\max_j{X_{\min,j}})$$ $$=nmP(X_{1,1}<x,\ X_{1,1}>X_{2,1}>\max_{j=2\ldots n} X_{\min,j},\ldots,X_{1,1}>X_{m,1}>\max_{j=2\ldots n} X_{\min,j})$$ Now, let $Y=\max_{j=2\ldots n} X_{\min,j}$ and $W=X_{1,1}$. A reminder: if $X_1,\ldots, X_n$ are iid with pdf (cdf) $h$ ($H$), then $X_{\min}$ has pdf $h_{\min}=nh(1-H)^{n-1}$ and $X_{\max}$ has pdf $h_{\max}=nhH^{n-1}$. Using this, we get that the pdf of $Y$ is $$g(y)=(n-1)mf(1-F)^{m-1}\left[\int_0^y mf(z)(1-F(z))^{m-1} dz\right]^{n-2},\quad n\geq 2$$ Note that $Y$ is a statistic that is independent of group 1, so its joint density with any variable in group 1 is the product of densities. Now the above probability becomes $$nm\int_0^x f(w)\left[\int_0^w \int_y^w f(x_{2,1})dx_{2,1}\ldots\int_y^w f(x_{m,1})dx_{m,1}\, g(y)dy\right]dw$$ $$=nm\int_0^x f(w)\left[\int_0^w (F(w)-F(y))^{m-1}g(y)dy\right]dw$$ By taking the derivative of this integral with respect to $x$ and using the binomial formula, we obtain the pdf of $X_{final}$. Example: $X$ is uniform, $n=4$, $m=3$. Then $$g(y)=9(1-y)^2(3y+y^3-3y^2)^2,$$ $$P(X_{final}<x)=(1/55)x^{12}-(12/55)x^{11}+(6/5)x^{10}-(27/7)x^9+(54/7)x^8-(324/35)x^7+(27/5)x^6.$$ The mean of $X_{final}$ is $374/455=0.822$ and its s.d. is $0.145$.
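A quick Monte Carlo check of the uniform example ($n=4$ groups, $m=3$ values each) reproduces the stated mean and s.d.:

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, m, reps = 4, 3, 200_000
draws = rng.random((reps, n_groups, m))     # uniform(0,1) draws

mins = draws.min(axis=2)                    # minimum within each group
best = mins.argmax(axis=1)                  # group with the largest minimum
x_final = draws[np.arange(reps), best].max(axis=1)

print(x_final.mean(), x_final.std())        # ~0.822 and ~0.145
```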
32,758
What is the distribution of maximum of a pair of iid draws, where the minimum is an order statistic of other minima?
Since the draws are from an iid sample, we can just consider the pair that is selected. Consider $f(x) = \frac{d F(x)}{dx}$. Now we know that $b$ is from $f(x)$ and that $b>a$. So, $$p(b|a) = \frac{f(b)}{\int_a^1 f(y) dy} \ \forall b>a, \ 0 \text{ otherwise}.$$ The minimum $m$ in a draw of two has density $$p_2(m) = 2f(m)\int_m^1f(y) dy.$$ The largest minimum among 4 such pairs has density $$p(a) = 4\,p_2(a)\left[\int_0^a p_2(z) dz\right]^3 = 8f(a)\int_a^1f(x) dx \left[\int_0^a 2f(y)\left(\int_y^1f(z)dz\right) dy \right]^3.$$ So finally, $$p(b) = \int_0^b \frac{f(b)}{\int_a^1 f(y)dy}\; 8f(a)\int_a^1f(x) dx \left[\int_0^a 2f(y)\left(\int_y^1f(z)dz\right) dy \right]^3 da.$$
32,759
Factor analysis on mixed (continuous/ordinal/nominal) data?
Particularly if you have nominal indicators along with the ordinal & continuous ones, this is probably a good candidate for latent class factor analysis. Take a look at this -- http://web.archive.org/web/20130502181643/http://www.statisticalinnovations.com/articles/bozdogan.pdf
32,760
Factor analysis on mixed (continuous/ordinal/nominal) data?
FactoMineR is a nice package for Factor Analysis on mixed variables.
32,761
Categorization/Segmentation techniques
It sounds like any linear classifier will do what you need. Suppose you have $N$ features and the value of feature $i$ is $f_i$. Then a linear classifier will compute a score $$s = \sum_i w_i f_i + o$$ (where $o$ is the offset). Then, if $s > t$ (where $t$ is some threshold), the example belongs to a class (a group), and if $s < t$, then it doesn't. Note that there is a single threshold applied to the entire score (rather than to individual feature values), so indeed a deficiency in one parameter can be compensated for by abundance in another. The weights are intuitively interpretable, in the sense that the higher the weight is, the more important (or more decisive) that feature is.

There are a lot of off-the-shelf linear classifiers that can do that, including SVM, LDA (linear discriminant analysis), linear neural networks, and many others. I'd start by running linear SVM because it works well in a lot of cases and can tolerate limited training data. There are also a lot of packages in many environments (like Matlab and R), so you can easily try it. The downside of SVM is that it can be computationally heavy, so if you need to learn a lot of classes, it might be less appropriate.

If you want to preserve some of the threshold behavior you currently have, you can pass the feature values through a sigmoid with the threshold in the right location. E.g. for a feature $i$ for which you currently use a threshold of $t_i$, first compute $$g_i = \frac{1}{1 + \exp(f_i - t_i)},$$ and then learn a linear classifier using $g$'s rather than $f$'s. This way, the compensating behavior will only happen near the threshold, and things that are too far away from the threshold cannot be compensated for (which is sometimes desirable).

Another thing that you could try is to use probabilistic classifiers like Naive Bayes or TAN. Naive Bayes is almost like a linear classifier, except it computes $$s = \sum_i w^i_{f_i}.$$ So there is still a sum of weights. These weights depend on the feature values $f_i$, but not by multiplication as in a usual linear classifier. The score in this case is the log-probability, and the weights are the contributions of the individual features to that log-probability. The disadvantage of using this in your case is that you will need many bins for your feature values, and then learning may become difficult. There are ways around that (for example, using priors), but since you have no experience with this, it might be more difficult.

Regarding terminology: what you called a 'test set' is usually called a 'training set' in this context, and what you called 'new data' is called the 'test set'. For a book, I'd read "Pattern Classification" by Duda, Hart, and Stork. The first chapter is a very good introduction for beginners.
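To illustrate the compensation behaviour, here is a from-scratch logistic regression (standing in for any off-the-shelf linear classifier); the data are simulated so that the class depends on the sum of two features, meaning a surplus in one can offset a deficit in the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the class is positive when the *sum* of two features is large,
# so a surplus in one feature can compensate for a deficit in the other.
X = rng.normal(size=(400, 2))
y = (X.sum(axis=1) > 0).astype(float)

# Plain logistic regression fit by gradient descent; it stands in for any
# off-the-shelf linear classifier (linear SVM, LDA, ...).
w, o = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + o)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    o -= 0.5 * (p - y).mean()

s = X @ w + o                                # the single linear score
acc = ((s > 0) == (y == 1)).mean()
print(acc)   # close to 1: one threshold on the summed score suffices
```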
32,762
How to use/interpret empirical distribution?
Empirical distributions are used all the time for inference, so you're definitely on the right track! One of the most common uses of empirical distributions is for bootstrapping. In fact, you don't even have to use any of the machinery you've described above. In a nutshell, you make many draws (with replacement) from the original sample in a uniform fashion, and the results can be used to calculate confidence intervals for your previously calculated statistical quantities. Furthermore, these samples have well-developed theoretical convergence properties. Check out the wikipedia article on the topic here.
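A minimal sketch of the resampling step (the data, the statistic, and the 95% level are all just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=100)   # stand-in for observed data

# Resample from the empirical distribution: uniform draws, with replacement
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5000)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])     # 95% percentile interval
print(lo, hi)   # brackets the sample mean of about 5
```

The same recipe works for any statistic you can compute from a resample, not just the mean.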
32,763
Discussing binomial regression and modeling strategies
It does sound like you are in a bit of a quandary because you only have 1 response variable for each individual measurement. I was initially going to recommend a multi-level approach. But in order for that to work you need to observe the response at the lowest level - which you do not - you observe your response at the individual level (which would be level 2 in a MLM).

1) Taking the average of x means losing information in the within-individual variability of x. You are losing variability of the covariate x, but this only matters if the other information contained in X is related to the response. There is nothing stopping you from putting the variance of X in as a covariate either.

2) The mean is itself a statistic, so by putting it in the model we end up doing statistics on statistics. A statistic is a function of the observed data. So any covariate is a "statistic". So you are already doing "statistics on statistics" whether you like it or not. However, it does make a difference to how you should interpret the slope coefficient - as an average value, and not a value in the individual birth. If you don't care about the individual births, then this matters little. If you do, then this approach can be misleading.

3) The number of offspring an individual had is in the model, but it is also used to calculate the mean of variable x, which I think could cause trouble. It would only matter if the mean of X was functionally/deterministically related to the number of offspring. One way this can happen is if the value of X is the same for each individual who had the same number of births. Usually this isn't the case.

You could specify a model which includes each value of X as a covariate. But this would probably involve some new methodological research on your part, I would imagine. Your likelihood function would be different for different individuals, due to the different number of measurements within individuals.
I don't think multi-level modeling applies in this case conceptually. This is simply because the births are not a subset or sample within individuals, although the maths may be the same. One way you could incorporate this structure is to create a model like: $$(Y_{ij}|x_{ij}) \sim Bin(Y_{ij}|n_{ij},p_{ij})$$ where $Y_{ij}$ is the binomial response for individual $i$ and $j$ denotes the number of births, $x_{ij}$ is the covariate vector, and $n_{ij}$ is the number of individuals with the same covariate values who also had the same number of births. $p_{ij}$ is the probability, which you normally model as: $$g(p_{ij}) = x_{ij}^{T}\beta$$ for some monotonic/invertible function $g(.)$. The "tricky" part comes in because the dimension of $x_{ij}$ varies with $j$. The log-likelihood in this case is: $$L=L(\beta)=\sum_{j\in B}\Bigg[\sum_{i=1}^{N_{j}} \log[Bin(Y_{ij}|n_{ij},g^{-1}(x_{ij}^{T}\beta))]\Bigg]$$ where $B$ is just the set of the numbers of births which you have available in your data set.

Maximising it is likely to be a nontrivial task, and you probably won't get the usual IRLS equations from doing a Taylor series expansion about the current estimate. A Taylor series is the way I would go from here - I just don't have the energy to run through the process at this time. I would suggest you try to re-arrange your answer so that it looks like an "ordinary" binomial GLM. This will allow you to take advantage of the standard software available. What I can tell you is that when you differentiate with respect to a beta which depends on $j$ (e.g. the coefficient for the metabolic rate for the third birth), some of the terms in this summation will drop out. This is basically the likelihood "telling you" that certain observations contribute nothing to estimating certain parameters (e.g. individuals who give birth to two or fewer offspring contribute nothing to the estimated slope for the metabolic rate for the third birth).
So in summary, your intuition is spot on when you suggest that something is being lost. However, the price for "purity" could be high - especially if you need to write your own algorithm to get your estimates.
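For a fixed number of births (so that the covariate dimension is fixed), the inner term of that log-likelihood is straightforward to evaluate directly; below is a sketch with a logit link and made-up data:

```python
import numpy as np
from math import comb, log

def binom_loglik(beta, X, y, n):
    """Sum of log Bin(y_i | n_i, p_i) with logit link p_i = 1/(1+exp(-x_i'beta))."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return sum(log(comb(ni, yi)) + yi * log(pi) + (ni - yi) * log(1 - pi)
               for yi, ni, pi in zip(y, n, p))

X = np.array([[1.0], [1.0]])    # intercept-only design (toy example)
print(binom_loglik(np.array([0.0]), X, [1, 2], [2, 3]))  # = log(3/16), about -1.674
```

The full likelihood in the answer is a sum of such terms over the groups $j \in B$, each with its own design matrix.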
Discussing binomial regression and modeling strategies
It does sound like you are in a bit of a quandary because you only have 1 response variable for each individual measurement. I was initially going to recommend a multi-level approach. But in order f
Discussing binomial regression and modeling strategies It does sound like you are in a bit of a quandary because you only have 1 response variable for each individual measurement. I was initially going to recommend a multi-level approach. But in order for that to work you need to observe the response at the lowest level - which you do not - you observe your response at the individual level (which would be level 2 in a MLM) 1)Taking the average of x means losing information in the within-individual variability of x. You are losing variability of the covariate x, but this only matters if the other information contained in X is related to the response. There is nothing from stopping you from putting the variance of X in as a covariate either. 2) The mean is itself a statistic, so by putting it in the model we end up doing statistics on statistics. A statistic is a function of the observed data. So any covariate is a "statistic". So you are already doing "statistics on statistics" whether you like it or not. However, it does make a difference to how you should interpret the slope coefficient - as an average value, and not a value in the individual birth. If you don't care about the individual births, then this matters little. If you do, then this approach can be misleading. 3) The number of offspring an individual had is in the model, but it is also used to calculate the mean of variable x, which I think could cause trouble. It would only matter if the mean of X was functionally/deterministically related to number of offspring. One way this can happen is if the value of X is the same for each individual who had the same number of births. Usually this isn't the case. You could specify a model which includes each value of X as a covariate. But this would probably involve some new methodological research on your part I would imagine. Your likelihood function would be different for different individuals, due to the different number of measurements within individuals. 
I don't think multi-level modeling applies in this case conceptually. This is simply because the births are not a subset or sample within individuals. Although the maths may be the same. One way you could incorporate this structure is to create a model like: $$(Y_{ij}|x_{ij}) \sim Bin(Y_{ij}|n_{ij},p_{ij})$$ Where $Y_{ij}$ is the binomial response for individual $i$ and $j$ denotes the number of births, $x_{ij}$ is the covariates, and $n_{ij}$ is the number of individuals with the same covariate values, and also had the same number of births. $p_{ij}$ is the probability, which you normally model as: $$g(p_{ij}) = x_{ij}^{T}\beta$$ For some monotonic/invertible function $g(.)$. The "tricky" part comes in because the dimension of $x_{ij}$ varies with $j$. The log-likelihood in this case is: $$L=L(\beta)=\sum_{j\in B}\Bigg[\sum_{i=1}^{N_{j}} log[Bin(Y_{ij}|n_{ij},g^{-1}(x_{ij}^{T}\beta))]\Bigg]$$ Where $B$ is just the set of the number of births which you have available in your data set. To maximise it is likely to be a nontrivial task, and you probably won't get the usual IRLS equations from doing a taylor series expansions about the current estimate. Taylor series is the way I would go from here - I just don't have the energy to run through the process at this time. I would suggest you try to re-arrange your answer so that it looks like an "ordinary" binomial GLM. This will allow you to take advantage of the standard software available. What I can tell you is that when you differentiate with respect to a beta which depends on $j$ (e.g. the coefficient for the metabolic rate for the third birth), some of terms in this summation will drop out. This is basically the likelihood "telling you" that certain observations contribute nothing to estimating certain parameters (e.g. individuals who give birth to two or less offspring contribute nothing to the estimated slope for the metabolic rate for the third birth). 
So in summary, your intuition is spot on when you suggest that something is being lost. However, the price for "purity" could be high - especially if you need to write your own algorithm to get your estimates.
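As an illustration of the likelihood approach sketched above, here is a simplified numerical maximisation of a binomial log-likelihood with a logit link, restricted to a fixed covariate dimension (the varying-dimension case would need one such term per number of births $j$). The data, link choice, and sizes are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit          # inverse logit, a common choice for g^{-1}
from scipy.stats import binom

rng = np.random.default_rng(0)

# Hypothetical data with a FIXED covariate dimension (intercept + one slope);
# the varying-dimension case in the answer needs one such block per j.
X = np.column_stack([np.ones(200), rng.normal(size=200)])
n = np.full(200, 10)                     # group sizes n_ij
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(n, expit(X @ beta_true))

def neg_log_lik(beta):
    """Negative binomial log-likelihood with a logit link."""
    return -np.sum(binom.logpmf(y, n, expit(X @ beta)))

res = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
print(res.x)                             # should land near beta_true
```

Direct numerical maximisation like this sidesteps the IRLS derivation entirely, at the cost of writing the likelihood by hand.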
Discussing binomial regression and modeling strategies
I think you could explore a nonlinear mixed model; this should allow you to use the data you have effectively. But if relatively few subjects have multiple measures, it won't matter much and may not work well (I think there could be convergence problems). If you are using SAS you could use PROC GLIMMIX; if using R I think lme4 should be useful.
Can I estimate the frequency of an event based on random samplings of its occurrence?
Your data will give partial answers by means of the Hansen-Hurwitz or Horvitz-Thompson estimators. The model is this: represent this individual's attendance as a sequence of indicator (0/1) variables $(q_i)$, $i=1, 2, \ldots$. You randomly observe a two-element subset out of each weekly block $(q_{5k+1}, q_{5k+2}, \ldots, q_{5k+5})$. (This is a form of systematic sampling.) How often does he train? You want to estimate the weekly mean of the $q_i$. The statistics you gather tell you the mean observation is 0.9. Let's suppose this was collected over $w$ weeks. Then the Horvitz-Thompson estimator of the total number of the individual's visits is $\sum{\frac{q_i}{\pi_i}}$ = ${5\over2} \sum{q_i}$ = ${5\over2} (2 w) 0.9$ = $4.5 w$ (where $\pi_i$ is the chance of observing $q_i$ and the sum is over your actual observations.) That is, you should estimate he trains 4.5 days per week. See the reference for how to compute the standard error of this estimate. As an extremely good approximation you can use the usual (Binomial) formulas. Does he train randomly? There is no way to tell. You would need to maintain totals by day of week.
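A quick numerical restatement of the estimate above (the number of weeks $w$ is hypothetical; any value gives the same weekly rate):

```python
import math

# Hypothetical observation window: w weeks, two randomly chosen days
# observed out of each 5-day week.
w = 20
n_obs = 2 * w
mean_obs = 0.9          # observed attendance rate from the question
pi = 2 / 5              # inclusion probability of any given training day

# Horvitz-Thompson estimate of the total number of visits, and weekly rate
total_hat = mean_obs * n_obs / pi
weekly_rate = total_hat / w          # 4.5 days per week, as derived above

# Approximate (binomial) standard error of the observed proportion
se_prop = math.sqrt(mean_obs * (1 - mean_obs) / n_obs)
print(weekly_rate, se_prop)
```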
How bad are the last 11 dry years in a row?
The last 11 years have very low ranks. If the data is such that every year is independently distributed, then there is a strongly statistically significant effect that the recent 11 years are lower than the 50 years before that. Aside from comparing the numbers above/below the mean (whose worst case gives the p-value 0.000488) you could also use a rank test, which gives an even lower p-value.
Wilcoxon rank sum test
data: y[1:50] and y[51:61]
W = 492, p-value = 4.686e-05
alternative hypothesis: true location shift is not equal to 0
An important question is, "does it make sense to assume that the years are independent?". Clearly the hypothesis that you have a steady-state distribution where every year is independent is wrong. However, this does not need to mean that 'something structural have changed'. It can be that you have random fluctuations over larger time scales that influence multiple years. It can be normal to have longer periods of years that are high or low. "or it is just too "cherry picking" as an assertion" This is always a risk with observational studies. Black swans happen and will be cherry-picked. More data, experiments, and theory can improve your beliefs. "then the probability p=0.0488% is an upper bound for the "true probability", so considering all the years registered I have that at least with a probability of 99.95% something structural have change on the final 11 years" The p-value indicates the probability of a type-I error (the probability of falsely rejecting the null hypothesis when it is actually true). It is not the probability that a certain effect is present.
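The rank test can be reproduced in Python with SciPy's Mann-Whitney U (the same test as R's wilcox.test rank-sum); since the rainfall data itself isn't shown, this sketch uses the worst-case rank configuration as stand-in data:

```python
from scipy.stats import mannwhitneyu

# The rainfall data isn't reproduced here, so as stand-in data use the
# worst-case rank configuration: the last 11 years hold the 11 lowest
# ranks of all 61 years.
first_50 = list(range(12, 62))   # ranks 12..61
last_11 = list(range(1, 12))     # ranks 1..11

u, p = mannwhitneyu(first_50, last_11, alternative="two-sided")
print(u, p)                      # U = 550, p well below 0.001

# For comparison, the above/below-mean worst-case bound quoted above:
print(0.5 ** 11)                 # 0.5^11 ≈ 0.000488
```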
How bad are the last 11 dry years in a row?
This will be a two-part answer, the first part a direct answer to the question, and the second part a commentary. Part 1: A simple, and exact, way to do it is to use the Hypergeometric distribution, as follows. I am going to translate your problem into an "urn" model. We have 61 balls, corresponding to the 61 years of observations. 32 of these balls are "above" the average, and 29 are "below" the average. If I choose 11 balls without replacement - corresponding to the last 11 observations - what is the probability that they are all "below" balls? The probability is easily calculated using any number of stat packages, or a calculator, as approximately $0.01\%$. Part 2: However, this isn't really telling you what you want to know, in a formal statistical sense. To see this, consider whether you even would have done this test had, say, 5 of the last 11 observations been below the average, or what test you would have done if it had been the last 9 observations below average instead of the last 11 observations. The fact that you observed what appeared to be a highly unusual result, then tested the significance of exactly that result, pretty much cancels out the value of the significance test - as it's based on "samples from a finite population that I think are highly unusual" rather than "random samples from a finite population" as the test calculation assumes. In an informal sense, it's OK to say "I thought this was a highly unusual result, and it is!" But it shouldn't be cited as a formal statistical test result. Edit in response to comments: To lend support to the validity of the Hypergeometric, I've constructed a simple example in code. We have 61 observations, 29 of which are "below" and 32 "above". 
We randomly rearrange them a million times, count the number of times that the last 11 observations have $0, 1, 2, \dots, 11$ "below" values, and compare to what the Hypergeometric distribution tells us to expect:
obs <- c(rep("above",32), rep("below", 29))
p0_to_11 <- rep(0,12)
for (i in 1:1e6) {
  x <- sample(obs)  # randomly rearranges the elements of "obs"
  nbelow <- sum(x[51:61] == "below")
  p0_to_11[nbelow+1] <- p0_to_11[nbelow+1] + 1
}
p0_to_11 <- p0_to_11 / 1e6
plot(p0_to_11 ~ c(0:11), type="b", pch=16, lwd=2, col=2,
     ylab = "Probabilities & frequencies",
     xlab = "# of 'below' observations")
lines(dhyper(0:11,29,32,11) ~ c(0:11), type="l", lwd=2, col=1)
The red dots indicate the observed frequencies, and the black lines are the Hypergeometric probabilities. There would be red lines too, except that the black lines overlay them. This is at least supporting evidence for the statement that the Hypergeometric is indeed the distribution to use in this circumstance.
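For reference, the exact Hypergeometric probability from Part 1 can also be computed directly from binomial coefficients (a Python sketch; no simulation needed):

```python
from math import comb

# P(all 11 most recent years are "below"): drawing 11 balls without
# replacement from an urn of 61 (29 "below", 32 "above") and getting
# all 11 "below".
p = comb(29, 11) / comb(61, 11)
print(p)   # ≈ 8.3e-05, i.e. the "approximately 0.01%" quoted above
```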
How bad are the last 11 dry years in a row?
The easiest Excel-friendly way to show the trend is to do a polynomial interpolation; a more sophisticated version of this would be to use a digital filter. However, apart from the approach @TickaJules suggested, I don't think there is a sensible way to define or interpret the probability of $11$ specific below-average data points.
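As a sketch of the first suggestion, here is a least-squares polynomial trend in NumPy (the rainfall series below is simulated, since the original data isn't shown):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 61-year rainfall series with a drier final 11 years
years = np.arange(61)
rain = 100 + rng.normal(0, 5, size=61)
rain[50:] -= 30

# Low-order least-squares polynomial trend (degree 3)
coeffs = np.polyfit(years, rain, deg=3)
trend = np.polyval(coeffs, years)

# The fitted trend should sit lower over the final 11 years
print(trend[:50].mean(), trend[50:].mean())
```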
"Unbiased" (at least ballpark) Estimate of Condition Number of True Covariance Matrix being Estimated & other Symmetric Matrices (e.g.,Hessian)
I would like to preface my answer by clarifying that I fully understand that the question requests estimating the condition number of the true unknown matrix, and not the condition number of the estimate of the matrix which is available to us (from the available samples). My answer does not directly address this requirement; instead, I propose 2 things: An answer to the last part of the question: an estimate of the largest eigenvalue of the true matrix being estimated. My answer uses the available matrix as a proxy for the unknown true matrix, and my assumption is that this estimate will apply to the true matrix. Links to 3 papers which directly deal with estimating large covariance matrices using shrinkage, including a recent paper by Ledoit and Wolf: Quadratic Shrinkage for Large Covariance Matrices (published 2022). First, an answer for the last part of the question: as an alternative to the condition number, it would be useful to get a ballpark (within a small number of orders of magnitude) estimate of the largest eigenvalue of the true matrix being estimated. I would like to emphasize that my answer uses the available matrix as a proxy for the unknown true matrix, and my assumption is that this estimate will apply to the true matrix. A justification for this assumption (although not proof) has to do with the radii in the Gershgorin circle theorem, which are directly affected by the "sample variation" between the samples which are available to us, from which the available sample covariance matrix is created, and the true unknown covariance matrix. I'm contemplating a proof of this, but do not yet have one; meanwhile I'd like to present my unproven answer for now. The Gershgorin circle theorem (https://en.wikipedia.org/wiki/Gershgorin_circle_theorem) can provide estimates of the locations of eigenvalues of a given square matrix.
To summarize, this theorem says: the eigenvalues of any matrix are located within discs whose centers are the diagonal elements, and radii are row or column sums of absolute-values of the non-diagonal elements corresponding to each diagonal element (the minimum of the row or column sum can be used because the same theorem applies also to the transpose of the given matrix). A nice property of this method is that it gives an $O(n^2)$ algorithm for an $n$-by-$n$ matrix, which is much faster than a general eigensolver $O(n^3)$, when the covariance matrix $C$ does not have any known structure to exploit. In the current case of a covariance matrix $C$, it is symmetric, so the eigenvalues are all real and non-negative. Symmetry also means row sums are identical to column sums, of course. Therefore, a reasonable estimate for the largest eigenvalue $\lambda_{max} = \lambda_1$ can be obtained as: $$\hat{\lambda_{max}} := \max_i (c_i + r_i)$$ where $c_i = C_{ii}$ are the diagonal elements, and $r_i = \sum_{j \neq i}{|C_{ij}|}$ are the radii corresponding to each diagonal element. A different method for estimating the largest eigenvalue is using a few iterations of the Power iteration (https://en.wikipedia.org/wiki/Power_iteration) from a random initialization, where each iteration requires only a matrix-by-vector product, so is $O(n^2)$, but the number of iterations for a reasonable estimate is unknown in advance, since it is determined by the ratio of the unknown 2nd-largest to largest eigenvalues $\lambda_2 / \lambda_1$. Instead of several iterations of the Power iteration, one could try several different random initializations, doing a single iteration in each, and picking the maximum estimated eigenvalue, with an appropriate scaling. This has been proposed at: End of chapter 2 (p. 58) in Numerical Recipes in C, 2nd edition, by Press, Teukolsky, Vetterling, and Flannery. Returning to the original problem of estimating the condition number. 
It would be tempting to use the Gershgorin circle theorem to also estimate the smallest eigenvalue $\lambda_{min} = \lambda_n$ as: $$\hat{\lambda_{min}} := \max \{ \min_i (c_i - r_i), 0 \} \geq 0$$ However, in a few simple cases I tried, this is an extremely disappointing estimate. Since the motivation of the OP was estimation of covariance matrices, not just their condition numbers, additional material which may be useful is the following pair of papers, which deal with a covariance matrix having a so-called "spiked" model, which roughly means a signal subspace of low rank within a full-rank noise subspace. Donoho, Gavish, and Johnstone: Optimal Shrinkage of Eigenvalues in the Spiked Covariance Model, https://arxiv.org/abs/1311.0851 Donoho and Ghorbani: Optimal Covariance Estimation for Condition Number Loss in the Spiked Model, https://arxiv.org/abs/1810.07403 The OP already mentioned the 2004 paper by Ledoit and Wolf, which is also referenced in the papers above. Olivier Ledoit's home-page (http://www.ledoit.net) presents updated research in this area, including the most recent paper which claims an improvement over Stein's shrinkage: Ledoit and Wolf: Quadratic Shrinkage for Large Covariance Matrices, submitted 2021: http://www.ledoit.net/BEJ1911-021R1A0.pdf, published 2022: https://www.econstor.eu/bitstream/10419/228874/1/1743676301.pdf
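The $\hat{\lambda}_{max} = \max_i (c_i + r_i)$ estimate described above can be checked numerically; here is a NumPy sketch on a hypothetical sample covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample covariance matrix (symmetric, positive semi-definite)
X = rng.normal(size=(200, 8))
C = np.cov(X, rowvar=False)

# Gershgorin-based estimate of the largest eigenvalue:
# max over i of (diagonal element c_i + radius r_i = sum of |off-diagonals|)
c = np.diag(C)
r = np.abs(C).sum(axis=1) - np.abs(c)
lam_max_hat = (c + r).max()

lam_max_true = np.linalg.eigvalsh(C)[-1]   # exact, O(n^3)
print(lam_max_hat, lam_max_true)           # the estimate is an upper bound
```

For a symmetric matrix, Gershgorin guarantees $\lambda_{max} \leq \max_i (c_i + r_i)$, so this estimate never understates the largest eigenvalue.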
Finding the MVUE of the center of a circle of unknown location
Here is a homework problem from Mark Schervish's Theory of Statistics that addresses a similar question: Let $(X_1, Y_1),\dots,(X_n, Y_n)$ be conditionally IID with uniform distribution on the disk of radius $r$ centered at $(\theta_1, \theta_2)$ in $\mathbb R^2$ given $(\Theta_1, \Theta_2, R)=(\theta_1, \theta_2, r)$. a. If $(\Theta_1, \Theta_2)$ is known, find a minimal sufficient statistic for $R$. b. If all parameters are unknown, show that the convex hull of the sample points is a sufficient statistic. When only the center $\theta$ is unknown, the likelihood is constant over the set $$\mathfrak O = \bigcap_{i=1}^n \{\theta; d(x_i,\theta)\le r\}$$ which is therefore a (set-valued) "sufficient statistic". There is however no sufficient statistic in the classical sense and any value in $\mathfrak O$ is an MLE. The problem can be considered from a different perspective, namely as a location model, since $$Z_1,\ldots,Z_n\sim\mathcal U(\mathcal B(\theta,r))$$ is the translation by $\theta$ of a $$U_1,\ldots,U_n\sim\mathcal U(\mathcal B(0,r))$$ [ancillary] sample. Therefore one could consider the best equivariant estimator¹ attached to this problem (and squared error loss), $$\hat\theta^\text P = \dfrac{\int \theta\,\mathbb I_{\max_i \vert\theta-z_i\vert<r}\,\text d\theta}{\int \mathbb I_{\max_i \vert\theta-z_i\vert<r}\,\text d\theta}$$ as established by Pitman (1933). This best equivariant estimator is
1. unbiased
2. unique
3. the center of the "sufficient" region mentioned above
4. with constant risk (by construction)
5. minimax (because of 4.)
6. the MVUE if the latter exists
7. admissible under squared error loss (as the Stein phenomenon only occurs in dimension 3 and more for spherically symmetric distributions), which means that it cannot be dominated everywhere by another estimator
8. but not sufficient.
Both 5. and 7. indicate that this estimator is minimum variance in the weak sense that there is no other estimator with a strictly everywhere smaller maximal MSE.
¹See Theory of Point Estimation, by Lehmann and Casella, for a textbook entry to equivariance and best equivariant estimators.
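The Pitman estimator integrals above can be approximated crudely on a grid: the estimator is then just the centroid of the feasible region. A NumPy sketch with a hypothetical sample:

```python
import numpy as np

# Hypothetical sample assumed drawn from a uniform disk of known radius r = 1
r = 1.0
z = np.array([[0.3, 0.1], [-0.2, 0.0], [0.0, 0.35]])

# Replace both integrals by sums over a grid: the estimator becomes the
# centroid of the feasible region { theta : max_i |theta - z_i| < r }
g = np.linspace(-2.0, 2.0, 401)
tx, ty = np.meshgrid(g, g)
theta = np.stack([tx.ravel(), ty.ravel()], axis=1)

dists = np.linalg.norm(theta[:, None, :] - z[None, :, :], axis=2)
inside = dists.max(axis=1) < r            # the indicator in both integrals

theta_hat = theta[inside].mean(axis=0)    # Pitman estimate (grid approximation)
print(theta_hat)
```

Because the feasible region is an intersection of disks (hence convex), its centroid lies inside it, so the estimate is within distance $r$ of every observation.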
Formula to compute approximate memory requirements of Transformer models
I was surprised that afaik there are no good answers for this (and similar) questions on the internet. I'm going to derive the following approximate formula for GPT: $M \approx M_{activations} \approx M_{model}\,\frac{BT^2}{4ND^2}$

M = memory
B = batch size
T = sequence length
N = # of attention heads
D = dimension per head
L = # of layers

Let's get started. The GPT transformer block has the following form:

Multi-head Attention -> LayerNorm -> MLP -> LayerNorm

To simplify the problem, let's exclude the layer norm and bias terms from our parameter count. Assume we have $N$ heads, a hidden dimension of $D$ per head, and data of batch size $B$ and sequence length $T$. Let's represent the total dimension as $C = N * D$ and assume the MLP has dimension $C$ also. We want to express the memory footprint in terms of $C, B, T$. There are three components that will contribute to the overall footprint:

Storing the model, $M_{model}$
Storing the activations, $M_{activations}$
Storing the gradients, $M_{gradients}$

So the total memory is $M = M_{model} + M_{activations} + M_{gradients}$. Unless you are computing higher-order gradients, $M_{model} \geq M_{gradients}$. For transformers $M_{activations} \gg M_{model}$, so the term we care about most is $M_{activations}$. I'll derive both though to show you why:

The model: Each transformer block will have query, key, value networks and an MLP. We're ignoring layer norms and biases, so the total parameter count per block is $3C^2 + C^2 = 4C^2$. If the transformer has $L$ layers this means: $M_{model} = 4LC^2 = 4 L N^2 D^2$

The activations: Attention is the following operation: $\text{Attention}(Q, K, V) = \text{softmax}(Q K^T / \sqrt{d})\, V$. The $Q K^T$ operation has the following shape:

[B, N, T, D] @ [B, N, D, T] = [B, N, T, T]

Then the multiplication by $V$ and the MLP both output [B, N, T, D] activations. So the total activation memory per block is $BNT^2 + 2 BNTD = BNT(T + 2D)$. This happens at each layer, so $M_{activations} = BNLT(T+2D)$.

The activation-to-model memory ratio is $M_{activations} / M_{model} = BT(T+2D)/(4N D^2)$. Now let's assume we're modelling long sequences; then $T \gg D$ and we have $M_{activations} / M_{model} \approx \frac{BT^2}{4ND^2}$, meaning that $M_{activations} \gg M_{model}$, so the total memory is dominated by the activations: $M \approx M_{activations} \approx M_{model}\frac{BT^2}{4ND^2}$
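The bookkeeping above can be wrapped in a small sanity-check function (a sketch; the 4-bytes-per-float assumption and the concrete shape values are my own choices, and layer norms/biases are excluded as in the derivation):

```python
def transformer_memory(B, T, N, D, L, bytes_per_float=4):
    """Approximate GPT memory (bytes), ignoring layer norms and biases."""
    C = N * D
    m_model = 4 * L * C**2               # 4C^2 parameters per block, L blocks
    m_act = B * N * L * T * (T + 2 * D)  # attention scores + two [B,N,T,D] outputs
    return m_model * bytes_per_float, m_act * bytes_per_float

# For long sequences (T >> D) the activation/model ratio approaches BT^2 / (4ND^2)
m_model, m_act = transformer_memory(B=8, T=4096, N=12, D=64, L=12)
```

For these toy shapes the activations dominate the parameters by a factor of several hundred, consistent with the $BT(T+2D)/(4ND^2)$ ratio.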
Difference between exchangeability and independence in causal inference
My question is, why is this assumption called the "exchangeability" assumption when it's a statement about independence? Exchangeability is the assumption of being able to exchange groups without changing the outcome of the study. Why? Because the relationship between treatment and outcome is not confounded. Why? Because treatment assignment is independent of everything else. If you have people with a more severe version of the disease in one group, if you exchange the treatment group with the control group, your results will be different, so exchangeability, in this case, would have been violated. You can also run into this assumption with a different name, depending on what source you're checking. One example is unconfoundedness. If you made treatment and outcome independent by adjusting for the appropriate variables $Z$ (check backdoor criterion, for example), you can also see this as conditional exchangeability, e.g., $Y^a \perp\!\!\!\perp A \mid Z $.
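A toy simulation of the "more severe disease in one group" scenario (all numbers here are made up for illustration): severity $Z$ raises the chance of treatment and lowers the outcome, so the naive group difference misses the true effect, while stratifying on $Z$ (conditional exchangeability) recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.binomial(1, 0.5, n)                   # disease severity (confounder)
a = rng.binomial(1, 0.3 + 0.4 * z)            # sicker patients treated more often
y = 2.0 * a - 3.0 * z + rng.normal(0, 1, n)   # true treatment effect is +2

naive = y[a == 1].mean() - y[a == 0].mean()
# Standardize over Z (here P(Z=0) = P(Z=1) = 0.5, so a simple average works)
adjusted = np.mean([y[(a == 1) & (z == s)].mean() - y[(a == 0) & (z == s)].mean()
                    for s in (0, 1)])
```

The naive contrast is pulled well below +2 because the treated group contains more severe cases, while the $Z$-adjusted contrast sits near the true effect.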
Convergence in distribution to a degenerate distribution
Your answer is correct (assuming that you have accurately transcribed the question). The proof: Let $F_n(c)$ be the cdf of $(1 - X_{(n)})$, where $X_{(n)}$ is the greatest element in a sample of size $n$. Let $F(c)$ be the cdf for the constant-0 distribution. For $c < 0$, of course $F_n(c) = 0 = F(c)$. For $c > 1$, of course $F_n(c) = 1 = F(c)$. For $0 < c \le 1$: $$ \begin{align} F_n(c) &= P(1 - X_{(n)} \le c) \\ &= P(X_{(n)} \ge 1-c) \\ &= 1-P(X_1 < 1-c, ..., X_n < 1-c) \\ &= 1 - P(X_1 < 1-c)^n \to 1 = F(c), \end{align} $$ since $P(X_1 < 1-c) < 1$ whenever $c > 0$. And the case $c = 0$ doesn't matter, because $F$ isn't continuous at $0$. If the people you are arguing with don't realise that convergence in distribution to a constant is a thing, you could point them to e.g. Wikipedia's Proofs of convergence of random variables article.
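For a concrete check, take $X_i \sim \text{Uniform}(0,1)$ (an assumption for illustration; the argument only needs $P(X_1 < 1-c) < 1$). Then $F_n(c) = 1 - (1-c)^n$ can be evaluated directly and seen to approach $1$ for any fixed $c > 0$:

```python
def F_n(c, n):
    # CDF of 1 - max(X_1, ..., X_n) for X_i ~ Uniform(0, 1), with 0 <= c <= 1
    return 1 - (1 - c) ** n

# F_n(c) -> 1 for every fixed c in (0, 1] as n grows
values = [F_n(0.05, n) for n in (10, 100, 1000)]
```

Even at $c = 0.05$ the CDF is essentially 1 by $n = 1000$, which is the pointwise convergence the proof establishes.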
Convergence in distribution to a degenerate distribution
Presumably the real issue here is that the TA does not like your answer because the purpose of scaling when seeking convergence results is to find an asymptotic distribution that is non-degenerate. However, having said that, your answer is technically correct (the best kind of correct?). You can easily tighten up your argument by giving the explicit distribution of the quantity of interest and showing the limit of this distribution as $n \rightarrow \infty$. I will show you how to do this here. Take $v=0$ and denote the resulting quantity as $Y_n \equiv 1 - X_{(n)} = 1 - \max (X_1,...,X_n)$. Now observe that for all $0 \leqslant y \leqslant 1$ we have: $$\begin{align} F_{Y_n}(y) \equiv \mathbb{P}(Y_n \leqslant y) &= 1 - \mathbb{P}(Y_n > y) \\[12pt] &= 1 - \mathbb{P}(\max (X_1,...,X_n) < 1-y) \\[6pt] &= 1 - \prod_{i=1}^n \mathbb{P}(X_i < 1-y) \\[4pt] &= 1 - F_X(1-y)^n \\[12pt] &= 1 - (3(1-y)-3(1-y)^2+(1-y)^3)^n \\[12pt] &= 1 - ((3 - 3y) + (-3 + 6y - 3y^2) + (1 - 3y + 3y^2 - y^3))^n \\[12pt] &= 1 - (1 - y^3)^n. \\[12pt] \end{align}$$ Taking the limit (and now considering the broader range $y \in \mathbb{R}$) then gives: $$\lim_{n \rightarrow \infty} F_{Y_n}(y) = \mathbb{I}(y > 0),$$ which agrees with the CDF of the point-mass distribution at zero at every continuity point of the latter, which is all that convergence in distribution requires. So, you are correct that the distribution of $Y_{n}$ converges to a point-mass distribution at zero. Of course, this is a degenerate distribution, and ideally we would like an asymptotic result giving convergence to a non-degenerate distribution. I recommend you see if you can also derive the latter, which was presumably the intended goal of the exercise.
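The finite-$n$ CDF can be verified by simulation. Since $F_X(x) = 1 - (1-x)^3$, inverse-transform sampling gives $X = 1 - (1-U)^{1/3}$ for $U \sim \text{Uniform}(0,1)$ (a quick sketch; the particular $n$, $y$ and replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_X(size):
    # Inverse transform for F_X(x) = 3x - 3x^2 + x^3 = 1 - (1 - x)^3
    u = rng.uniform(size=size)
    return 1 - (1 - u) ** (1 / 3)

n, reps, y = 5, 200_000, 0.5
x = sample_X((reps, n))
y_n = 1 - x.max(axis=1)

empirical = (y_n <= y).mean()          # Monte Carlo estimate of F_{Y_n}(y)
theoretical = 1 - (1 - y**3) ** n      # closed form 1 - (1 - y^3)^n
```

The empirical CDF matches the closed form to within Monte Carlo error.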
How can I determine that a forecast is significantly more accurate than another one? (time series)
Forecasters (those who do worry about statistical significance, which is still not all of us; compare Diebold's 2015 recollection of "bewilderment as to why anyone would care about the subject" in the referee report to their initial submission) will often happily summarize multiple steps ahead to obtain a mean error per series and method across time, then compare these summaries using the Diebold-Mariano test, even if it is in principle only intended to compare a single time step. However, the reason the DM test is not very helpful here is that it compares only two forecasts, and you have many, so you have a multiple comparisons problem. In such a case, the standard approach is the "multiple comparisons to the best" (MCB) test originally proposed by Koning et al. (2005) for a re-analysis of the M3 forecasting competition. Most recently it has been applied to submissions in the M5 forecasting competition as well. It is rank-based, so it works with any accuracy measure (and appropriate point forecasts, Kolassa, 2020, SCNR). A related alternative would be the Friedman-Nemenyi test (Demsar, 2006). Both the MCB and the Nemenyi test are implemented in the TStools package for R. An empirical comparison between the two is given by Hibon et al.'s 2012 ISF presentation.
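For the pairwise comparison itself, a minimal one-step-ahead Diebold-Mariano test is short enough to code directly (a simplified sketch of my own: it omits the autocorrelation correction needed for horizons $h > 1$ and uses a standard normal reference; R users can instead reach for forecast::dm.test):

```python
import math
import numpy as np

def dm_test(e1, e2, power=2):
    """One-step-ahead Diebold-Mariano test on two forecast-error series.

    Returns (DM statistic, two-sided p-value) against a N(0, 1) reference.
    Negative DM favours the first forecast under the chosen loss.
    """
    d = np.abs(np.asarray(e1)) ** power - np.abs(np.asarray(e2)) ** power
    dm = d.mean() / math.sqrt(d.var(ddof=0) / len(d))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(dm) / math.sqrt(2))))
    return dm, p

# Toy example: the second forecaster's errors are twice as dispersed
rng = np.random.default_rng(0)
dm, p = dm_test(rng.normal(0, 1, 500), rng.normal(0, 2, 500))
```

Here the test strongly favours the first forecaster, as expected.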
How can I determine that a forecast is significantly more accurate than another one? (time series)
Best can be defined many ways. More accurate is the way I define it as a practitioner, not a researcher. I just compare the predicted results to the actual results with a MAPE, though even for accuracy there are many alternatives. There is of course no test of statistical significance this way. Since the results are real, I don't really think that matters: no sample and population issues are involved.
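The MAPE comparison described here is a one-liner (assuming no zero actuals, which would break the percentage):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

mape([100, 200], [110, 180])  # -> 10.0
```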
Why is the EM algorithm well suited for exponential families?
I'm not sure that the method necessarily works any more effectively for exponential families (though I'm open to being convinced to the contrary). I think more likely what is meant here is that the method is simpler to apply to exponential families since the maximisation step leads to a relatively simple form. You are essentially already seeing the advantage here; you just aren't comparing it with anything to see how much nicer it is to have this form instead of groping around in the darkness with functions of unspecified form. If you try to apply the method to distributions outside the exponential family you will find that you have to proceed ad hoc for the particular functional form at issue, instead of skipping right to using a sufficient statistic. Depending on the complexity of the distribution you are using you might get a reasonable maximising step or you might get a nasty one. In the worst case scenario, the form of the function to be maximised will be complicated enough that you might need to do difficult numerical computations or even a grid search.
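To see the "relatively simple form" concretely, here is a bare-bones EM for a two-component 1-D Gaussian mixture (an exponential-family example of my own choosing, not from the source): the M-step reduces to closed-form updates from the weighted sufficient statistics $\sum_i r_{ik}$, $\sum_i r_{ik} x_i$, $\sum_i r_{ik} x_i^2$, with no numerical maximisation needed.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data from a well-separated two-component mixture
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

w = np.array([0.5, 0.5])     # mixture weights
mu = np.array([-1.0, 1.0])   # component means
var = np.array([1.0, 1.0])   # component variances

for _ in range(100):
    # E-step: responsibilities r[i, k] = P(component k | x_i)
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = w * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed form thanks to the exponential-family sufficient statistics
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    w = nk / len(x)
```

For a distribution outside the exponential family, the analogue of this M-step would generally require an inner numerical optimisation at every iteration.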
Gaussian process - what am I doing wrong?
Covariance matrix of Gaussian process $K$ is defined in terms of evaluations of the kernel function $k$ over the pairs of datapoints, i.e. $K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$. For train $X$ and test $X_*$ datasets, we have submatrices $K = K(X, X)$ and $K_* = K(X, X_*)$. In such case, the predictive mean of the Gaussian process is $$ \mu = K_*^\top K^{-1} y $$ Eyeballing the code, I don't see any obvious bug. You need to do standard debugging, so for every step check if the outputs match what you would expect from processing the inputs (values, shapes, etc). Also, I'd recommend starting with simple, unoptimized code, as premature optimization is the root of all evil. For example: for evaluating the kernel, use old-fashioned for-loops rather than vectorized code; moreover, you seem to use $K_* = K(X_*, X)$ to avoid transposing; instead, write it exactly as in the equation, and only once it works as expected, optimize the code. Finally, write unit tests.
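A minimal noiseless GP-regression sketch following the predictive-mean equation above (RBF kernel; the lengthscale, jitter and toy target function are my own choices, and `solve` is used rather than forming an explicit inverse):

```python
import numpy as np

def rbf(a, b, lengthscale=0.3):
    """Squared-exponential kernel matrix between 1-D input vectors a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 30))
y_train = np.sin(2 * np.pi * x_train)
x_test = np.linspace(0, 1, 50)

K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))  # jitter for stability
K_star = rbf(x_train, x_test)                            # K(X, X_*)
mu = K_star.T @ np.linalg.solve(K, y_train)              # K_*^T K^{-1} y
```

With noiseless data, the mean should essentially interpolate the training targets; that is a cheap unit test to add.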
Gaussian process - what am I doing wrong?
I think with a large N you are sampling densely from [0, 1]. Some data points in x_train are very close to each other, making K nearly singular. Consequently, np.linalg.inv(K) will give you unstable results. Using np.linalg.pinv(K) should solve your problem.
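A quick demonstration of the issue (RBF kernel with a made-up lengthscale): two nearly coincident inputs drive the Gram matrix towards singularity, and np.linalg.pinv sidesteps this by truncating tiny singular values. (Adding a small "jitter" to the diagonal of K is another common fix.)

```python
import numpy as np

x = np.array([0.5, 0.5 + 1e-12, 0.9])   # two near-duplicate inputs
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.3**2)

cond = np.linalg.cond(K)    # huge: K is numerically singular
K_pinv = np.linalg.pinv(K)  # pseudo-inverse remains well behaved
```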
Omitted Variable Bias (OVB) and multicollinearity
This is a good question. The confusion stems from the "assumption" of no multicollinearity. From the Wikipedia page on multicollinearity: Note that in statements of the assumptions underlying regression analyses such as ordinary least squares, the phrase "no multicollinearity" usually refers to the absence of perfect multicollinearity, which is an exact (non-stochastic) linear relation among the predictors. In such case, the data matrix $X$ has less than full rank, and therefore the moment matrix $X^TX$ cannot be inverted. Under these circumstances, for a general linear model $y = X\beta + \epsilon$ , the ordinary least squares estimator $\hat\beta_{OLS} = (X^TX)^{-1} X^T y $ does not exist. Multicollinearity in the sense that you describe will inflate the variance of the OLS estimator, but unless you include $X_2$ in the regression, the OLS estimator is biased. In short, if you have to worry about OVB, you should not be worrying about multicollinearity. Why would we want a more precise but biased estimator? At more length, I am not sure that multicollinearity (or variance inflation) is at all meaningful to consider when we are concerned with OVB. Assume $$ Y = 5X_1 + X_2 + \epsilon $$ $$ X_1 = -0.1X_2 + u $$ If $\text{Cov}(X_2, u) = 0$, the correlation between $X_1$ and $X_2$ is $$ \rho = \frac{\sigma_{x_1x_2}}{\sigma_{x_1}\sigma_{x_2}} = \frac{-0.1\sigma_{x_2}}{\sqrt{0.01\sigma_{x_2}^2 + \sigma_u^2}} $$ If we let $\sigma_{x_2} = \sigma_{x_1}$, then $\rho \approx -0.1$ (which is a case where we would not worry about multicollinearity). Simulating in R, we see that an OLS regression of $Y$ on $X_1$ controlling for $X_2$ is unbiased. However, the bias that we get by excluding $X_2$ is pretty small. 
iter <- 10000  # NUMBER OF ITERATIONS
n <- 100       # NUMBER OF OBSERVATIONS PER SAMPLE
sigma_e = sigma_u = sigma_x2 = 5
mu_e = mu_u = mu_x2 = 0
res0 = res1 = list()  # LISTS FOR SAVING RESULTS

for(i in 1:iter) {
  #print(i)
  x2 <- rnorm(n, mu_x2, sigma_x2)
  u <- rnorm(n, mu_u, sigma_u)
  e <- rnorm(n, mu_e, sigma_e)
  x1 <- -0.1*x2 + u
  y <- 5*x1 + x2 + e
  res0[[i]] <- lm(y ~ x1 + x2)$coef
  res1[[i]] <- lm(y ~ x1)$coef
}
res0 <- as.data.frame(do.call("rbind", res0))
res1 <- as.data.frame(do.call("rbind", res1))

If we increase the variance of $X_2$ so that $\rho \approx -0.95$

sigma_x2 <- 150

and repeat the simulation, we see that this does not affect the precision of the estimator for $X_1$ (but the precision for $X_2$ increases). However, the bias is now pretty big, which means that there is a big difference between the association between $X_1$ and $Y$, where other factors (that is, $X_2$) are not held constant, and the effect of $X_1$ on $Y$ ceteris paribus. As long as there is some variation in $X_1$ that does not depend on $X_2$ (i.e., $\sigma_u^2 > 0$), we can retrieve this effect by OLS; the precision of the estimator will depend on the size of $\sigma_u^2$ compared to $\sigma_\epsilon^2$. We can illustrate the effect of variance inflation by simulating with and without correlation between $X_1$ and $X_2$ and regressing $Y$ on $X_1$ and $X_2$ for both the correlated and uncorrelated case.
install.packages("mvtnorm")
library(mvtnorm)

sigma_x2 <- 5  # RESET STANDARD DEVIATION FOR X2
sigma_x1 <- 5  # NEEDED FOR THE COVARIANCE MATRICES BELOW (NOT DEFINED EARLIER)
mu_x1 <- 0
res0 = res1 = list()

Sigma <- matrix(c(sigma_x1^2, sigma_x1*sigma_x2*-0.95, 0,
                  sigma_x1*sigma_x2*-0.95, sigma_x2^2, 0,
                  0, 0, sigma_e^2), ncol = 3)
Sigma0 <- matrix(c(sigma_x1^2, 0, 0,
                   0, sigma_x2^2, 0,
                   0, 0, sigma_e^2), ncol = 3)

for(i in 1:iter) {
  #print(i)
  tmp <- rmvnorm(n, mean = c(mu_x1, mu_x2, mu_e), sigma = Sigma0)
  x1 <- tmp[,1]
  x2 <- tmp[,2]
  e <- tmp[,3]
  y <- 5*x1 + x2 + e
  res0[[i]] <- lm(y ~ x1 + x2)$coef

  tmp <- rmvnorm(n, mean = c(mu_x1, mu_x2, mu_e), sigma = Sigma)
  x1 <- tmp[,1]
  x2 <- tmp[,2]
  e <- tmp[,3]
  y <- 5*x1 + x2 + e
  res1[[i]] <- lm(y ~ x1 + x2)$coef
}
res0 <- as.data.frame(do.call("rbind", res0))
res1 <- as.data.frame(do.call("rbind", res1))

This shows that the precision of the estimator would be better if $X_1$ and $X_2$ were uncorrelated, but if they are not, there is nothing we can do about it. It seems about as valuable as knowing that if our sample size were greater, then the precision would be better. I can think of one example in which we could potentially care about both OVB and multicollinearity. Say that $X_2$ is a theoretical construct and you are unsure about how to measure it. You could use $X_{2A}$, $X_{2B}$, and/or $X_{2C}$. In this case, you might choose to include just one of these measures of $X_2$ rather than all of them to avoid too much multicollinearity. However, if you are primarily interested in the effect of $X_1$, this is not a major concern.
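The size of the bias can also be checked against the textbook omitted-variable-bias formula, plim $\hat\beta_1 = \beta_1 + \beta_2\,\text{Cov}(X_1,X_2)/\text{Var}(X_1)$. Here is a Python re-rendering of the same data-generating process (my own sample size; a single large sample instead of repeated draws):

```python
import numpy as np

rng = np.random.default_rng(0)

def short_regression_slope(sigma_x2, n=200_000, sigma_u=5.0, sigma_e=5.0):
    # Same DGP as above: y = 5*x1 + x2 + e, x1 = -0.1*x2 + u
    x2 = rng.normal(0, sigma_x2, n)
    u = rng.normal(0, sigma_u, n)
    e = rng.normal(0, sigma_e, n)
    x1 = -0.1 * x2 + u
    y = 5 * x1 + x2 + e
    b_short = np.cov(x1, y)[0, 1] / x1.var(ddof=1)      # regress y on x1 alone
    predicted = 5 + np.cov(x1, x2)[0, 1] / x1.var(ddof=1)  # OVB formula, beta2 = 1
    return b_short, predicted

b_small, p_small = short_regression_slope(5.0)    # rho ~ -0.1: mild bias
b_big, p_big = short_regression_slope(150.0)      # rho ~ -0.95: severe bias
```

With $\sigma_{x_2}=150$ the formula gives a population slope of $5 - 2250/250 = -4$, matching the "pretty big" bias seen in the simulation above.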
Omitted Variable Bias (OVB) and multicollinearity
This is a good question. The confusion stems from the "assumption" of no multicollinearity. From the Wikipedia page on multicollinearity: Note that in statements of the assumptions underlying regress
Omitted Variable Bias (OVB) and multicollinearity This is a good question. The confusion stems from the "assumption" of no multicollinearity. From the Wikipedia page on multicollinearity: Note that in statements of the assumptions underlying regression analyses such as ordinary least squares, the phrase "no multicollinearity" usually refers to the absence of perfect multicollinearity, which is an exact (non-stochastic) linear relation among the predictors. In such case, the data matrix $X$ has less than full rank, and therefore the moment matrix $X^TX$ cannot be inverted. Under these circumstances, for a general linear model $y = X\beta + \epsilon$ , the ordinary least squares estimator $\hat\beta_{OLS} = (X^TX)^{-1} X^T y $ does not exist. Multicollinearity in the sense that you describe will inflate the variance of the OLS estimator, but unless you include $X_2$ in the regression, the OLS estimator is biased. In short, if you have to worry about OVB, you should not be worrying about multicollinearity. Why would we want a more precise but biased estimator? At more length, I am not sure that multicollinearity (or variance inflation) is at all meaningful to consider when we are concerned with OVB. Assume $$ Y = 5X_1 + X_2 + \epsilon $$ $$ X_1 = -0.1X_2 + u $$ If $\text{Cov}(X_2, u) = 0$, the correlation between $X_1$ and $X_2$ is $$ \rho = \frac{\sigma_{x_1x_2}}{\sigma_{x_1}\sigma_{x_2}} = \frac{-0.1\sigma_{x_2}}{\sqrt{0.01\sigma_{x_2}^2 + \sigma_u^2}} $$ If we let $\sigma_{x_2} = \sigma_{x_1}$, then $\rho \approx -0.1$ (which is a case where we would not worry about multicollinearity). Simulating in R, we see that an OLS regression of $Y$ on $X_1$ controlling for $X_2$ is unbiased. However, the bias that we get by excluding $X_2$ is pretty small. 
iter <- 10000  # NUMBER OF ITERATIONS
n <- 100       # NUMBER OF OBSERVATIONS PER SAMPLE
sigma_e = sigma_u = sigma_x2 = 5
mu_e = mu_u = mu_x2 = 0
res0 = res1 = list()  # LISTS FOR SAVING RESULTS
for(i in 1:iter) {
  #print(i)
  x2 <- rnorm(n, mu_x2, sigma_x2)
  u  <- rnorm(n, mu_u, sigma_u)
  e  <- rnorm(n, mu_e, sigma_e)
  x1 <- -0.1*x2 + u
  y  <- 5*x1 + x2 + e
  res0[[i]] <- lm(y ~ x1 + x2)$coef
  res1[[i]] <- lm(y ~ x1)$coef
}
res0 <- as.data.frame(do.call("rbind", res0))
res1 <- as.data.frame(do.call("rbind", res1))

If we increase the variance of $X_2$ so that $\rho \approx -0.95$

sigma_x2 <- 150

and repeat the simulation, we see that this does not affect the precision of the estimator for $X_1$ (but the precision for $X_2$ increases). However, the bias is now pretty big, which means that there is a big difference between the association between $X_1$ and $Y$, where other factors (that is, $X_2$) are not held constant, and the effect of $X_1$ on $Y$ ceteris paribus. As long as there is some variation in $X_1$ that does not depend on $X_2$ (i.e., $\sigma_u^2 > 0$), we can retrieve this effect by OLS; the precision of the estimator will depend on the size of $\sigma_u^2$ compared to $\sigma_\epsilon^2$. We can illustrate the effect of variance inflation by simulating with and without correlation between $X_1$ and $X_2$ and regressing $Y$ on $X_1$ and $X_2$ for both the correlated and uncorrelated case.
install.packages("mvtnorm")
library(mvtnorm)
sigma_x2 <- 5  # RESET STANDARD DEVIATION FOR X2
sigma_x1 <- 5  # SD AND MEAN FOR X1 (assumed equal to X2's; not set in the original snippet)
mu_x1 <- 0
res0 = res1 = list()
Sigma <- matrix(c(sigma_x1^2, sigma_x1*sigma_x2*-0.95, 0,
                  sigma_x1*sigma_x2*-0.95, sigma_x2^2, 0,
                  0, 0, sigma_e^2), ncol = 3)
Sigma0 <- matrix(c(sigma_x1^2, 0, 0,
                   0, sigma_x2^2, 0,
                   0, 0, sigma_e^2), ncol = 3)
for(i in 1:iter) {
  print(i)
  tmp <- rmvnorm(n, mean = c(mu_x1, mu_x2, mu_e), sigma = Sigma0)
  x1 <- tmp[,1]
  x2 <- tmp[,2]
  e  <- tmp[,3]
  y  <- 5*x1 + x2 + e
  res0[[i]] <- lm(y ~ x1 + x2)$coef
  tmp <- rmvnorm(n, mean = c(mu_x1, mu_x2, mu_e), sigma = Sigma)
  x1 <- tmp[,1]
  x2 <- tmp[,2]
  e  <- tmp[,3]
  y  <- 5*x1 + x2 + e
  res1[[i]] <- lm(y ~ x1 + x2)$coef
}
res0 <- as.data.frame(do.call("rbind", res0))
res1 <- as.data.frame(do.call("rbind", res1))

This shows that the precision of the estimator would be better if $X_1$ and $X_2$ were uncorrelated, but if they are not, there is nothing we can do about it. It seems about as valuable as knowing that if our sample size were greater, then the precision would be better. I can think of one example in which we could potentially care about both OVB and multicollinearity. Say that $X_2$ is a theoretical construct and you are unsure about how to measure it. You could use $X_{2A}$, $X_{2B}$, and/or $X_{2C}$. In this case, you might choose to just include one of these measures of $X_2$ rather than all of them to avoid too much multicollinearity. However, if you are primarily interested in the effect of $X_1$ this is not a major concern.
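The size of the omitted-variable bias in the simulation above can also be checked against its probability limit, $5 + \text{Cov}(X_1,X_2)/\text{Var}(X_1)$, without fitting thousands of regressions. A minimal Python sketch (the function name, seed, and sample size here are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000  # large n, so the OLS slope is close to its probability limit

def ovb_slope(sigma_x2, sigma_u=5.0, sigma_e=5.0):
    """Slope from regressing y on x1 alone, omitting x2, for the model above."""
    x2 = rng.normal(0.0, sigma_x2, n)
    u = rng.normal(0.0, sigma_u, n)
    e = rng.normal(0.0, sigma_e, n)
    x1 = -0.1 * x2 + u
    y = 5.0 * x1 + x2 + e
    return np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)

# plim of the short-regression slope:
#   5 + Cov(x1, x2)/Var(x1) = 5 - 0.1*sigma_x2^2 / (0.01*sigma_x2^2 + sigma_u^2)
print(ovb_slope(5.0))    # approx 4.90: small bias, as in the text
print(ovb_slope(150.0))  # approx -4.0: large bias once rho is near -0.95
```

With $\sigma_{x_2}=5$ the slope is about $5 - 2.5/25.25 \approx 4.90$ (bias of $-0.1$), while with $\sigma_{x_2}=150$ it is about $5 - 2250/250 = -4$, matching the "pretty small" versus "pretty big" bias described above.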
32,781
Probability of selecting maximum in bivariate correlated order statistics?
There is an analytic answer for a slightly different distribution, the Ali-Mikhail-Haq copula. If $0\le r \le \frac12$, we can choose this copula to have:

- the same standard normal distribution of $X$'s as the bivariate normal,
- the same standard normal distribution of $Y$'s as the bivariate normal,
- the same Kendall's tau measure of correlation between the two variables.

First we choose the parameter $\theta$ to get Kendall's tau to agree for the two distributions: $$1 - \frac{2((1-\theta)^2\log(1-\theta)+\theta)}{3\theta^2}=\frac{2}{\pi}\arcsin(r)$$ Then we can calculate the probability of $X$ and $Y$ being maximized at the same observation:

- In the limiting case where $r=\frac{1}{2}$, we have $\theta=1$, and the probability is $\frac{2}{1+n}$.
- In the limiting case of high $n$, the probability tends to $\frac{1+\theta}{n}$, and more precisely $\lim_{n\rightarrow \infty}np(n,\theta)=1+\theta$.
- In general, the probability is $$p(n,\theta)=t\frac{\, _2F_1\left(1,n+1;n+2;t\right)}{n+1} - 2nt^2\frac{\, _2F_1\left(1,n+2;n+3;t\right)}{(n+1)(n+2)} -\frac{2nt}{(n+1)^2}+\frac{1}{n}$$ where $t=\theta/(\theta-1)$ and $\,_2F_1$ is the hypergeometric function.

The following plot of pdfs shows the closeness of this approximation: the orange is the AMH copula with $\theta=1$, and the blue is the standard bivariate normal with $r=\frac{1}{2}$. The copula is defined by the formula $$P_{copula}[X\le u,\ Y\le v]=\frac{\Phi(u)\Phi(v)}{1-\theta(1-\Phi(u))(1-\Phi(v))}$$ The advantage of such a copula for order statistics is that we can also do the analysis on the simpler distribution where $X$ and $Y$ are uniform variables between 0 and 1.
Then $$P[X\le x,\ Y\le y]=F[x,y]=\frac{xy}{1-\theta(1-x)(1-y)}$$ The pdf for this distribution is: $$f(x,y)=\frac{1-\theta(2-x-y-xy)+\theta^2(1-x-y+xy)}{(1-\theta(1-x)(1-y))^3}$$ If there are $n$ samples from the distribution, the pdf for the maximal $X$ is just $$nx^{n-1}$$ If the maximum of the $X$'s is $x$, then the maximal $Y$ occurs at the same observation with probability $$q(x)=\int_0^1 \frac{F[x,y]^{n-1}}{F[x,1]^{n-1}}f(x,y)\, dy = \frac{1+n-\theta+n\theta(2x-1)}{n(1+n)(1+\theta x-\theta)}$$ So the overall probability that the maximal $X$ and maximal $Y$ occur at the same observation is $\int_0^1 nx^{n-1}q(x)\,dx$, which gives the expression for $p(n,\theta)$ at the beginning.
32,782
Which index is preferred in GAM, "R-sq" or "Deviance explained"?
In the most updated version of mgcv (1.8-37), Wood elaborated on the r.sq definition by stating "The proportion null deviance explained is probably more appropriate for non-normal errors." Therefore, deviance explained should be a more general measure of goodness of fit, especially for non-Gaussian models. A more detailed explanation of deviance explained can be found at How I can interpret GAM results?
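To make "deviance explained" concrete outside of mgcv: for a GLM it is $1 - D_{\text{model}}/D_{\text{null}}$, where $D_{\text{null}}$ is the deviance of the intercept-only model. Below is a numpy-only sketch for a Poisson model fit by Fisher scoring (the simulated data and coefficients are our own toy example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.5 + 0.8 * x))

# fit the Poisson GLM log(mu) = b0 + b1*x by Newton-Raphson / Fisher scoring
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
mu_hat = np.exp(X @ beta)

def poisson_deviance(y, mu):
    # D = 2 * sum( y*log(y/mu) - (y - mu) ), using the convention 0*log(0) = 0
    term = np.zeros_like(mu)
    pos = y > 0
    term[pos] = y[pos] * np.log(y[pos] / mu[pos])
    return 2.0 * np.sum(term - (y - mu))

dev_model = poisson_deviance(y, mu_hat)
dev_null = poisson_deviance(y, np.full(n, y.mean()))  # intercept-only fit
dev_explained = 1.0 - dev_model / dev_null            # "deviance explained"
```

For Gaussian errors this quantity reduces to the usual $R^2$, which is why deviance explained is the natural generalisation for non-normal families.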
32,783
MLE, regularity conditions, finite and infinite parameter spaces
You don't want a proof showing the MLE is never consistent with an infinite parameter space, because that's not true. There are many settings with countably infinite parameter spaces that have consistent MLEs. There are even many settings with uncountable parameter spaces that have consistent MLEs -- the usual $N(\mu,\sigma^2)$ model with real $\mu$ and positive $\sigma^2$, for example. The issue is that you need some extra conditions when the parameter space is infinite; it's not automatic.

You're correct that the question is whether $\inf_n \epsilon(n)>0$, and that it's not (in general) obvious. In fact, it might be zero or might not be, and whether it is will depend on details of the model; there isn't a general result from set theory.

If you look at proofs of consistency when the parameter space is an interval of the reals (or of $\mathbb{R}^d$), they typically assume some smoothness for the dependence of the density on $\theta$, and then have some way to ensure that $\hat\theta$ is eventually in some compact neighbourhood of $\theta_0$. Compactness + smoothness acts as a substitute for finiteness, meaning that you don't have to consider each $A_{jn}$ separately.
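The consistency of the MLE in the $N(\mu,\sigma^2)$ example mentioned above is easy to see by simulation: the MLE of $\mu$ is the sample mean, and its error shrinks at the usual $\sqrt{n}$ rate. A small sketch (sample sizes, true parameters, and replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, reps = 2.0, 3.0, 200

def mean_abs_error(n):
    """Average |mu_hat - mu| over replications; mu_hat is the MLE (the sample mean)."""
    return np.mean([abs(rng.normal(mu, sigma, n).mean() - mu) for _ in range(reps)])

err_small = mean_abs_error(100)
err_large = mean_abs_error(10_000)
# err_large is roughly err_small / 10, consistent with the 1/sqrt(n) rate
```

Of course this demonstrates consistency in one particular model; the point of the answer is that such behaviour requires extra conditions and is not automatic for arbitrary infinite parameter spaces.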
32,784
Would an "importance Gibbs" sampling method work?
This is an interesting idea, but I see several difficulties with it:

- contrary to standard importance sampling, or even Metropolised importance sampling, the proposal is not acting in the same space as the target distribution, but in a space of smaller dimension, so validation is unclear [and may impose keeping weights across iterations, hence facing degeneracy]
- the missing normalising constants in the full conditionals change at each iteration but are not accounted for [see below]
- the weights are not bounded, in that along iterations there will eventually be simulations with a very large weight, unless one keeps track of the last occurrence of an update for the same index $j$, which may clash with the Markovian validation of the Gibbs sampler. Running a modest experiment with $n=2$ and $T=10^3$ iterations shows a range of weights from 7.656397e-07 to 3.699364e+04.

To get into more details, consider a two-dimensional target $p(\cdot,\cdot)$, including the proper normalising constant, and implement the importance Gibbs sampler with proposals $q_X(\cdot|y)$ and $q_Y(\cdot|x)$. Correct importance weights [in the sense of producing the correct expectation, i.e., an unbiased estimator, for an arbitrary function of $(X,Y)$] for successive simulations are either $$\dfrac{p(x_t,y_{t-1})}{q_X(x_t|y_{t-1})m_Y(y_{t-1})}\qquad\text{or}\qquad\dfrac{p(x_{t-1},y_{t})}{q_Y(y_t|x_{t-1})m_X(x_{t-1})}$$ where $m_X(\cdot)$ and $m_Y(\cdot)$ are the marginals of $p(\cdot,\cdot)$. Or equivalently $$\dfrac{p_X(x_t|y_{t-1})}{q_X(x_t|y_{t-1})}\qquad\text{or}\qquad\dfrac{p_Y(y_{t}|x_{t-1})}{q_Y(y_t|x_{t-1})}$$ In either case, this requires the [intractable] marginal densities of $X$ and $Y$ under the target $p(\cdot,\cdot)$. It is worthwhile to compare what happens here with the parallel importance weighted Metropolis algorithm. (See for instance Schuster and Klebanov, 2018.)
If the target is again $p(\cdot,\cdot)$ and the proposal is $q(\cdot,\cdot|x,y)$, the importance weight $$\dfrac{p(x',y')}{q(x',y'|x,y)}$$ is correct [towards producing an unbiased estimate] and does not update the earlier weight but starts from scratch at each iteration. (C.) A correction to the original importance Gibbs proposal is to propose a new value for the entire vector, e.g., $(x,y)$, from the Gibbs proposal $q_X(x_t|y_{t-1})q_Y(y_t|x_{t})$, because then the importance weight $$\dfrac{p(x_t,y_t)}{q_X(x_t|y_{t-1})q_Y(y_t|x_{t})}$$ is correct [missing a possible normalising constant that is now truly constant and does not carry over from previous Gibbs iterations]. A final note: for the random walk target considered in the code, direct simulation is feasible by cascading: simulate $X_1$, then $X_2$ given $X_1$, etc.
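The corrected scheme in (C.) can be illustrated numerically. The sketch below uses a standard bivariate normal target with correlation $\rho=\tfrac12$ and deliberately overdispersed Gaussian conditional proposals; all tuning constants (proposal sd, iteration count, seed) are our own choices, and the self-normalised estimate of $E[X^2]=1$ serves as the check:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, s, T = 0.5, 1.2, 20_000  # target correlation, proposal sd, iterations

def log_p(x, y):
    # log density of the standard bivariate normal with correlation rho, constant included
    return (-(x**2 - 2.0*rho*x*y + y**2) / (2.0*(1.0 - rho**2))
            - np.log(2.0*np.pi*np.sqrt(1.0 - rho**2)))

def log_q(z, m):
    # log density of the N(m, s^2) proposal
    return -0.5*((z - m)/s)**2 - np.log(s*np.sqrt(2.0*np.pi))

y_prev = 0.0
xs = np.empty(T)
logw = np.empty(T)
for t in range(T):
    x = rng.normal(rho*y_prev, s)  # x_t ~ q_X(. | y_{t-1})
    y = rng.normal(rho*x, s)       # y_t ~ q_Y(. | x_t)
    # weight for the whole proposed vector, as in (C.)
    logw[t] = log_p(x, y) - log_q(x, rho*y_prev) - log_q(y, rho*x)
    xs[t] = x
    y_prev = y

w = np.exp(logw - logw.max())
est = np.sum(w * xs**2) / np.sum(w)  # self-normalised estimate of E[X^2] = 1
```

Because the whole vector $(x_t,y_t)$ is re-proposed and weighted at once, no normalising constants from earlier Gibbs iterations are carried along, which is exactly the point of the correction.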
32,785
Reproduce figure of "Computer Age Statistical Inference" from Efron and Hastie
In the website of the book Computer Age Statistical Inference, there is a discussion section where Trevor Hastie and Brad Efron often reply to several questions. So, I posted this question there (as below) and received from Trevor Hastie the confirmation that there is an error in the book that will be fixed (in other words, my simulations and calculations - as implemented in Python in this question - are correct). When Trevor Hastie replied "In fact c=.75 for that plot", he meant that in the figure below (original Figure 2.2 from the book) the cutoff $c$ should be $c=0.75$ instead of $c=0.4$.

So, using my functions alpha_simulation(.), beta_simulation(.), alpha_calculation(.) and beta_calculation(.) (the full Python code is available in this question) I got $\alpha=0.10$ and $\beta=0.38$ for a cutoff $c=0.75$ as a confirmation that my code is correct.

alpha_simulated_c075 = alpha_simulation(0.75, f0_density, f1_density, sample_size, replicates)
beta_simulated_c075 = beta_simulation(0.75, f0_density, f1_density, sample_size, replicates)
alpha_calculated_c075 = alpha_calculation(0.75, 0.0, 0.5, 1.0, sample_size)
beta_calculated_c075 = beta_calculation(0.75, 0.0, 0.5, 1.0, sample_size)
print("Simulated: c=0.75, alpha={0:.2f}, beta={1:.2f}".format(alpha_simulated_c075, beta_simulated_c075))
print("Calculated: c=0.75, alpha={0:.2f}, beta={1:.2f}".format(alpha_calculated_c075, beta_calculated_c075))

Finally, when Trevor Hastie replied that "... resulting in a threshold for x of .4", he meant that $k=0.4$ in the equation below (see section B from this question): $$ \bar{x} \ge k \text{, where } k = \frac{c\sigma^2}{n\left(\mu_1-\mu_0\right)} + \frac{\left(\mu_1+\mu_0\right)}{2} $$ resulting in $$ t_c(x) = \left\{ \begin{array}{ll} 1\enspace\text{if } \bar{x} \ge k\\ 0\enspace\text{if } \bar{x} \lt k\end{array} \right. \enspace\enspace \text{, where } k = \frac{c\sigma^2}{n\left(\mu_1-\mu_0\right)} + \frac{\left(\mu_1+\mu_0\right)}{2} $$ So, in Python we can get $k=0.4$ for a cutoff $c=0.75$ as below:

n = 10
m_0 = 0.0
m_1 = 0.5
variance = 1.0
c = 0.75
k = (c*variance)/(n*(m_1-m_0)) + (m_1+m_0)/2.0
threshold_for_x = k
print("threshold for x (when cutoff c=0.75) = {0:.1f}".format(threshold_for_x))
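The quoted $\alpha=0.10$ and $\beta=0.38$ can also be verified in closed form: with the rule $\bar{x}\ge k$, we have $\alpha = 1-\Phi\big((k-\mu_0)\sqrt{n}/\sigma\big)$ and $\beta = \Phi\big((k-\mu_1)\sqrt{n}/\sigma\big)$. A self-contained check using only the standard library (the helper name phi is ours):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, m0, m1, sigma2, c = 10, 0.0, 0.5, 1.0, 0.75
k = c * sigma2 / (n * (m1 - m0)) + (m1 + m0) / 2.0  # threshold for x-bar, = 0.4

alpha = 1.0 - phi((k - m0) * math.sqrt(n) / math.sqrt(sigma2))
beta = phi((k - m1) * math.sqrt(n) / math.sqrt(sigma2))
print(round(k, 1), round(alpha, 2), round(beta, 2))  # 0.4 0.1 0.38
```

This reproduces exactly the values obtained above for the cutoff $c=0.75$.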
32,786
Why do we need the temperature in Gumbel-Softmax trick?
one way to sample is to apply argmax(softmax($\alpha_j$))

That is hardly "sampling", given that you deterministically pick the largest $\alpha_j$ every time. (Also, you said that $\alpha$ is the unnormalized probability, but that doesn't make sense seeing as log probabilities go into the softmax.) The correct way to sample would be sample(softmax($x$)), where $x$ are the logits. Indeed, the goal of gumbel-softmax is not to replace the softmax operation as you've written it, but the sampling operation: we can replace sample($p$), where $p$ is a vector of probabilities, with argmax($\log p + g$), where $g$ is the gumbel noise. Of course, this is equivalent to argmax($x + g$) where $x$ are again the logits. To conclude, sample(softmax($x$)) and argmax($x+g$) are equivalent procedures.

Then, if the goal was to have the full distribution over possible outcomes for $z_j$, we can use softmax transformation on top of the perturbation with Gumbel noise.

In fact you already have a distribution over all possible outcomes. However, argmax($x+g$) is not differentiable wrt $x$, therefore to backpropagate we replace its gradient with the gradient of softmax($(x+g)\tau^{-1}$). When $\tau \rightarrow 0$, the expression approaches argmax. Picking a reasonable, small value of $\tau$ will ensure a good estimate of the gradient while ensuring that the gradients are numerically well behaved.

and $\tau=1$ just makes the two equations identical

In fact, there is no special significance to $\tau = 1$. Rather, $\tau \rightarrow 0$ makes the gradient estimate unbiased but high in variance, whereas larger values of $\tau$ add more bias to the gradient estimate but lower the variance.
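The claimed equivalence of sample(softmax($x$)) and argmax($x+g$) is easy to check empirically with numpy (the logits and sample count below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([1.0, 2.0, 3.0])
probs = np.exp(logits) / np.exp(logits).sum()  # softmax of the logits

N = 100_000
u = rng.uniform(size=(N, 3))
g = -np.log(-np.log(u))                        # Gumbel(0, 1) noise
samples = np.argmax(logits + g, axis=1)        # the Gumbel-max trick
freq = np.bincount(samples, minlength=3) / N   # empirical distribution of argmax(x + g)
# freq matches probs up to Monte Carlo error
```

The empirical frequencies of argmax($x+g$) match softmax($x$) to within sampling error, which is exactly the Gumbel-max property that the gumbel-softmax relaxation then makes differentiable.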
32,787
Can I combine many gradient boosting trees using bagging technique
Yes, you can. Bagging as a technique does not rely on a single classification or regression tree being the base learner; you can do it with anything, although many base learners (e.g., linear regression) are of less value than others. The bootstrap aggregating article on Wikipedia contains an example of bagging LOESS smoothers on ozone data.

If you were to do so, however, you would almost certainly not want to use the same parameters as a fully-tuned single GBM. A large part of the point of tuning a GBM is to prevent overfitting; bagging reduces overfitting through a different mechanism, so if your tuned GBM doesn't overfit much, bagging probably won't help much either - and, since you're likely to need hundreds of trees to bag effectively, your runtime will go up by a factor of several hundred as well. So now you have two problems - how to tune your GBM given that it's embedded in a random forest (although it likely isn't so important to get it right, given that it's embedded in a random forest) and the runtime issue.

Having written all that, it is true that bagging-type thinking can be profitably integrated with GBM, although in a different manner. H2O, for example, provides the option to have each tree of the GBM tree sequence developed on a random sample of the training data. This sample is done without replacement, as sampling with replacement is thought to cause the resultant tree to overfit those parts of the sample that were repeated. This approach was explicitly motivated by Breiman's "adaptive bagging" procedure; see Friedman's 1999 Stochastic Gradient Boosting paper for details.
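For concreteness, here is a toy numpy sketch of bagging boosted ensembles: each base learner is a small squared-loss GBM built from decision stumps, each fit on a bootstrap resample, with predictions averaged. This is our own minimal illustration (all hyperparameters are arbitrary), not any library's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, y):
    """Best single split on x minimizing squared error: (threshold, left_mean, right_mean)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_sse, best = np.inf, (xs[0] - 1.0, ys.mean(), ys.mean())
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        lm, rm = ys[:i].mean(), ys[i:].mean()
        sse = ((ys[:i] - lm) ** 2).sum() + ((ys[i:] - rm) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, ((xs[i] + xs[i - 1]) / 2, lm, rm)
    return best

def predict_stump(stump, x):
    t, lm, rm = stump
    return np.where(x <= t, lm, rm)

def fit_gbm(x, y, n_trees=50, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the current residuals."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_trees):
        s = fit_stump(x, y - pred)
        stumps.append(s)
        pred += lr * predict_stump(s, x)
    return (y.mean(), lr, stumps)

def predict_gbm(model, x):
    base, lr, stumps = model
    return base + lr * sum(predict_stump(s, x) for s in stumps)

# toy data
n = 200
x = rng.uniform(-3, 3, n)
y = np.sin(x) + rng.normal(0, 0.3, n)

# bagging: fit each GBM on a bootstrap resample and average the predictions
models = []
for _ in range(20):
    idx = rng.integers(0, n, n)
    models.append(fit_gbm(x[idx], y[idx]))

x_grid = np.linspace(-3, 3, 101)
bagged = np.mean([predict_gbm(m, x_grid) for m in models], axis=0)
```

Note how the runtime point above shows up directly: 20 bags multiply the work of a single GBM by 20, and a production-sized bag would multiply it by hundreds.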
32,788
sampling cost of $O(d)$ versus $O(2^d)$
Here's a fairly obvious recursive sampler that's $O(d)$ in the best case (in terms of the weights $\omega_i$), but exponential in the worst case. Suppose we've already selected $x_1, \dots, x_{i-1}$, and wish to choose $x_{i}$. We need to compute $$w(x_1, \dots, x_{i-1}, x_i) = \sum_{x_{i+1} \in \{-1, 1\}} \cdots \sum_{x_{d} \in \{-1, 1\}} \left( \sum_{j=1}^d \omega_j x_j \right)_+$$ and choose $x_i = 1$ with probability $$\frac{w(x_1, \dots, x_{i-1}, 1)}{w(x_1, \dots, x_{i-1}, 1) + w(x_1, \dots, x_{i-1}, -1)}.$$ The denominator will be nonzero for any valid choice of samples $x_1, \dots, x_{i-1}$. Now, of course, the question is how to compute $w(x_1, \dots, x_i)$. If we have that $C := \sum_{j=1}^{i} \omega_j x_j \ge \sum_{j=i+1}^{d} \lvert \omega_j \rvert$, then $\omega \cdot x \ge 0$ for any $x$ with leading entries $x_{1:i}$, and so $w$ becomes: \begin{align} \sum_{x_{i+1}} \cdots \sum_{x_d} \omega \cdot x &= \omega \cdot \left( \sum_{x_{i+1}} \cdots \sum_{x_d} x \right) \\&= \sum_{j=1}^i \omega_j \underbrace{\left( \sum_{x_{i+1}} \cdots \sum_{x_d} x_j \right)}_{2^{d-i} x_j} + \sum_{j=i+1}^d \omega_j \underbrace{\left( \sum_{x_{i+1}} \cdots \sum_{x_d} x_j \right)}_{0} \\&= 2^{d-i} C .\end{align} In the opposite case, $C \le - \sum_{j=i+1}^{d} \lvert \omega_j \rvert$, we have that $\omega \cdot x \le 0$ and so $w(x_1, \dots, x_i) = 0$. Otherwise, we must recurse, using $w(x_1, \dots, x_i) = w(x_1, \dots, x_i, 1) + w(x_1, \dots, x_i, -1)$. Assume that memory isn't an issue and that we can cache all sub-computations in $w(1)$, $w(-1)$ in a tree – up to the point that we hit one of the "nice" cases, after which any calls take constant time. (We'll need to compute this whole tree anyway to select $x_1$.) Then, once this tree of $w$ computations is built, the sampler will take only $O(d)$ time. The question is how long it takes to build the tree, or equivalently how large it is. 
We will of course hit the "nice" cases faster if the $\omega_i$ are sorted, $\omega_1 \ge \omega_2 \ge \dots \ge \omega_d$. In the best case, $\lvert \omega_1 \rvert > \sum_{j=2}^d \lvert \omega_j \rvert$. Then we hit a "nice" case immediately for either $w(1)$ or $w(-1)$, so $w$ tree construction takes constant time, and the whole sampler takes $O(d)$ time. In the worst (sorted) case, $\omega_1 = \omega_2 = \dots = \omega_d$. Then the question is: how big is the total tree? Well, the first paths to terminate are of course $(1, 1, \dots, 1)$ and $(-1, -1, \dots, -1)$ of length $\lceil d/2 \rceil$. The tree is therefore complete up to that depth, and so contains at least $O(2^{d/2})$ nodes. (It has more; you can probably find it with an argument like the ones used in gambler's ruin problems, but I couldn't find it in two minutes of Googling and don't particularly care – $2^{d/2}$ is bad enough....) If your setting has only a few very large $\omega_i$, this is probably a reasonably practical approach. If the $\omega_i$ are all of similar magnitude, it's probably still exponential and too expensive for large $d$.
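A minimal Python sketch of this recursive sampler (names such as `make_sampler` are my own; this is an illustration, not an optimized implementation). It memoizes the $w$ tree with `lru_cache` and applies the two shortcut cases above:

```python
import random
from functools import lru_cache

def make_sampler(omega):
    """Sample x in {-1,1}^d with probability proportional to max(omega . x, 0)."""
    d = len(omega)
    # tail[i] = sum of |omega_j| for j >= i (0-indexed), for the shortcut tests
    tail = [0.0] * (d + 1)
    for j in range(d - 1, -1, -1):
        tail[j] = tail[j + 1] + abs(omega[j])

    @lru_cache(maxsize=None)
    def w(prefix):
        i = len(prefix)
        C = sum(o * x for o, x in zip(omega, prefix))
        if C >= tail[i]:        # omega . x >= 0 for every completion: w = 2^(d-i) C
            return 2 ** (d - i) * C
        if C <= -tail[i]:       # omega . x <= 0 for every completion: w = 0
            return 0.0
        return w(prefix + (1,)) + w(prefix + (-1,))

    def sample(rng=random):
        x = ()
        for _ in range(d):
            wp, wm = w(x + (1,)), w(x + (-1,))
            x += (1,) if rng.random() * (wp + wm) < wp else (-1,)
        return list(x)

    return sample, w
```

For example, with $\omega = (3, 1)$ the best case triggers at depth one (total weight $w = 6$, and $x_1 = 1$ with probability 1), whereas equal weights force the cache to grow before the shortcuts apply.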
32,789
Do random variables follow the same algebraic rules as ordinary numbers?
The algebra of random variables (ARV) is an extension of the usual algebra of numbers ("high school algebra"). This must be so because numbers can be embedded in the ARV as rvs equal to a constant with probability 1. So there cannot be any inconsistency, but there could well be new properties which don't say anything about numbers. In the ARV equality is equality in distribution, so it is really an algebra of distributions. But for rvs constant with probability 1, this is an extension of equality of numbers in the usual sense. About the given example from Wikipedia, there is no inconsistency there, only a (maybe for someone) surprising possibility that arises because there are many random variables such that $X$ and $X^{-1}$ have the same distribution, while there are only two numbers with this property, $-1$ and 1. The Cauchy distribution has this property, see What can we say about distributions of random variables $X$ such that $X$ and its inverse $1/X$ have the same distribution?.
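As a quick numerical illustration of that last point (my own sketch, not from the linked thread): for a standard Cauchy variable, $X$ and $1/X$ have the same distribution, so in particular $P(|X|<1) = P(|1/X|<1) = 1/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_cauchy(size=100_000)
inv = 1.0 / x

# For a standard Cauchy, 1/X is again standard Cauchy and P(|X| < 1) = 1/2,
# so both empirical fractions should be close to 0.5
frac_x = np.mean(np.abs(x) < 1)
frac_inv = np.mean(np.abs(inv) < 1)
print(frac_x, frac_inv)
```

Of course $X \ne 1/X$ as numbers on almost every draw; the equality is only in distribution.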
32,790
Do random variables follow the same algebraic rules as ordinary numbers?
Random variables are actually functions (measurable functions on a sample space), so they follow the rules of functions. Confusion comes from what "=" means, since in practice it is often used to mean "same distribution" rather than truly identical functions.
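A small sketch of that distinction (my own example): $X$ standard normal and $Y = -X$ are equal in distribution, yet they are different functions on the sample space:

```python
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(50_000)]
ys = [-x for x in xs]          # Y = -X: same N(0,1) distribution by symmetry

mean_x = sum(xs) / len(xs)     # both sample means are near 0
mean_y = sum(ys) / len(ys)

# "equal in distribution" does not mean equal as functions:
# X(omega) != Y(omega) on almost every outcome omega
pointwise_equal = sum(x == y for x, y in zip(xs, ys))
print(mean_x, mean_y, pointwise_equal)
```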
32,791
How can the AIC or BIC be used instead of the train/test split?
In chapter 5.5 of this book, they discuss how a lot of these model selection criteria arise. They start with Akaike's FPE criterion for AR models, and then go on to discuss AIC, AICc and BIC. They walk through the derivations pretty thoroughly. What these have in common is that they investigate what happens when you use some observed in-sample data $\{X_t\}$ to estimate the model parameters, and then look at some loss function (mean square prediction error or KL divergence) on some unobserved/hypothetical out-of-sample data $\{Y_t\}$ that arises from using the estimated model on this new data. The main ideas are that (1) you take the expectation with respect to all of the data, and (2) use some asymptotic results to get expressions for some of the expectations. The quantity from (1) gives you expected overall performance, but (2) assumes you have a lot more data than you actually do. I am no expert, but I assume that cross-validation approaches target these measures of performance as well; but instead of considering the out-of-sample data hypothetical, they use real data that was split off from the training data. The simplest example is the FPE criterion. Assume you estimate your AR model on the entire data (which plays the role of the training set), and obtain $\{\hat{\phi}_i\}_i$. 
Then the expected loss on the unobserved data $\{Y_t\}$ (it's hypothetical, not split apart like in cross-validation) is \begin{align*} & E(Y_{n+1} -\hat{\phi}_1Y_n -\cdots - \hat{\phi}_p Y_{n+1-p} )^2 \\ &= E(Y_{n+1} -\phi_1Y_n -\cdots - \phi_p Y_{n+1-p} - \\ & \hspace{30mm} (\hat{\phi}_1 - \phi_1)Y_n - \cdots - (\hat{\phi}_p - \phi_p) Y_{n+1-p} )^2 \\ &= E( Z_{n+1} - (\hat{\phi}_1 - \phi_1)Y_n - \cdots - (\hat{\phi}_p - \phi_p) Y_{n+1-p} )^2 \\ &= \sigma^2 + E[E[((\hat{\phi}_1 - \phi_1)Y_n + \cdots + (\hat{\phi}_p - \phi_p) Y_{n+1-p} )^2 | \{X_t\} ]] \\ &= \sigma^2 + E\left[ \sum_{i=1}^p \sum_{j=1}^p (\hat{\phi}_i - \phi_i)(\hat{\phi}_j - \phi_j)E\left[ Y_{n+1-i}Y_{n+1-j} |\{X_t\} \right] \right] \\ &= \sigma^2 + E[({\hat{\phi}}_p -{\phi}_p )' \Gamma_p ({\hat{\phi}}_p -{\phi}_p )] \\ &\approx \sigma^2 ( 1 + \frac{p}{n}) \tag{typo in book: $n^{-1/2}$ should be $n^{1/2}$} \\ &\approx \frac{n \hat{\sigma}^2}{n-p} ( 1 + \frac{p}{n}) = \hat{\sigma}^2 \frac{n+p}{n-p} \tag{$n \hat{\sigma}^2/\sigma^2$ approx. $\chi^2_{n-p}$ }. \\ \end{align*} I don't know of any papers off the top of my head that compare empirically the performance of these criteria with cross-validation techniques. However, this book does give a lot of resources about how FPE, AIC, AICc and BIC compare with each other.
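To make the criterion concrete, here is a small numerical sketch (my own illustration, not from the book): fit AR($p$) models by least squares to a simulated AR(2) series and evaluate $\hat{\sigma}^2 \frac{n+p}{n-p}$ for each candidate order.

```python
import numpy as np

rng = np.random.default_rng(0)
# simulate an AR(2) series: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + z_t
n_total, burn = 500, 100
z = rng.normal(size=n_total + burn)
y = np.zeros(n_total + burn)
for t in range(2, n_total + burn):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + z[t]
y = y[burn:]

def fpe(y, p):
    """Least-squares AR(p) fit, then FPE = sigma_hat^2 * (n + p) / (n - p)."""
    n = len(y) - p
    # lag matrix: row t holds (y_{t-1}, ..., y_{t-p})
    X = np.column_stack([y[p - k : len(y) - k] for k in range(1, p + 1)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    sigma2 = resid @ resid / n
    return sigma2 * (n + p) / (n - p)

for p in range(1, 6):
    print(f"p={p}: FPE={fpe(y, p):.4f}")
```

The penalty factor $(n+p)/(n-p)$ is what keeps the in-sample fit honest: $\hat{\sigma}^2$ alone can only decrease as $p$ grows.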
32,792
Combined distribution of beta and uniform variables
There is no closed form for the density. Its integral form can be obtained as follows. If we condition on $X=x$ we have $Y = x + (1-2x) C$ where $C$ ranges over the unit interval. The support under this condition is: $$\text{supp}(Y|X=x) = \begin{cases} [x,1-x] & & & \text{for } 0 \leqslant x < \tfrac{1}{2}, \\[6pt] [1-x,x] & & & \text{for } \tfrac{1}{2} < x \leqslant 1. \\[6pt] \end{cases}$$ (We can ignore the case where $x=\tfrac{1}{2}$ since this occurs with probability zero.) Over this support we have the conditional density: $$\begin{aligned} p_{Y|X}(y|x) &= \frac{1}{|1-2x|} \cdot p_C \bigg( \frac{y-x}{1-2x} \bigg) \\[6pt] &= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \bigg( 1 - \bigg( \frac{y-x}{1-2x} \bigg)^2 \bigg)^{-1/2} \\[6pt] &= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \bigg( \frac{(1-2x)^2 - (y-x)^2}{(1-2x)^2} \bigg)^{-1/2} \\[6pt] &= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \bigg( \frac{(1-4x+4x^2) - (y^2-2xy+x^2)}{(1-2x)^2} \bigg)^{-1/2} \\[6pt] &= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \bigg( \frac{1 - 4x + 3x^2 + 2xy - y^2}{(1-2x)^2} \bigg)^{-1/2} \\[6pt] &= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \cdot \frac{|1-2x|}{\sqrt{1 - 4x + 3x^2 + 2xy - y^2}} \\[6pt] &= \frac{2}{\pi} \cdot \frac{1}{\sqrt{1 - 4x + 3x^2 + 2xy - y^2}}. \\[6pt] \end{aligned}$$ Inverting the support we have $\text{supp}(X|Y=y) = [0,\min(y,1-y)] \cup [\max(y,1-y),1]$. Thus, applying the law of total probability gives you: $$\begin{aligned} p_Y(y) &= \int \limits_0^1 p_{Y|X}(y|x) p_X(x) \ dx \\[6pt] &= \int \limits_0^{\min(y,1-y)} p_{Y|X}(y|x) p_X(x) \ dx + \int \limits_{\max(y,1-y)}^1 p_{Y|X}(y|x) p_X(x) \ dx \\[6pt] &= \frac{2}{\pi} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \Bigg[ \quad \int \limits_0^{\min(y,1-y)} \frac{x^{\alpha-1} (1-x)^{\beta-1}}{\sqrt{1 - 4x + 3x^2 + 2xy - y^2}} \ dx \\ &\quad \quad \quad \quad \quad \quad \quad \quad + \int \limits_{\max(y,1-y)}^1 \frac{x^{\alpha-1} (1-x)^{\beta-1}}{\sqrt{1 - 4x + 3x^2 + 2xy - y^2}} \ dx \Bigg]. 
\\[6pt] \end{aligned}$$ There is no closed form for this integral so it must be evaluated using numerical methods.
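As a sanity check, the integral can be evaluated numerically and compared against Monte Carlo simulation (my own sketch, assuming $\theta \sim \text{Uniform}(0, \pi/2)$, which matches the stated density of $C$; the midpoint rule is used so the integrable inverse-square-root singularity at the support edge is never evaluated):

```python
import math

import numpy as np

def p_y(y, a, b, n=20_000):
    """Density of Y = X + (1-2X)C, X ~ Beta(a,b), by midpoint rule over both support pieces."""
    const = (2.0 / math.pi) * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    lo, hi = min(y, 1.0 - y), max(y, 1.0 - y)

    def piece(u, v):
        if v <= u:
            return 0.0
        h = (v - u) / n
        x = u + h * (np.arange(n) + 0.5)  # midpoints dodge the edge singularity
        disc = 1 - 4 * x + 3 * x**2 + 2 * x * y - y**2
        return np.sum(x ** (a - 1) * (1 - x) ** (b - 1) / np.sqrt(disc)) * h

    return const * (piece(0.0, lo) + piece(hi, 1.0))

# Monte Carlo cross-check of P(0.15 < Y < 0.25) for alpha = beta = 2
rng = np.random.default_rng(0)
N = 200_000
x = rng.beta(2, 2, N)
c = np.cos(rng.uniform(0.0, np.pi / 2, N))  # gives p_C(c) = (2/pi)(1-c^2)^(-1/2)
ysamp = x + (1 - 2 * x) * c
p_mc = np.mean((ysamp > 0.15) & (ysamp < 0.25))

# the same probability from the density, via the midpoint rule in y
ygrid = 0.15 + 0.1 * (np.arange(50) + 0.5) / 50
p_int = np.mean([p_y(t, 2, 2) for t in ygrid]) * 0.1
print(p_mc, p_int)
```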
32,793
Combined distribution of beta and uniform variables
First of all note that given the support of $\theta$, the function $\theta\to\cos(\theta)=:C$ is a bijection. So once you fix $Y=y,X=x$, your variable $C$ has value $\frac{y-x}{1-2x}$ and there is a unique $\theta$ giving such value of $C$. Given $Y:=X+(1-2X)C$ you can see that the support of $Y$ given $X=x$ is $[x,1-x]$ if $x<\tfrac{1}{2}$ and $[1-x,x]$ if $x>\tfrac{1}{2}$ (since $C\in[0,1]$). Conversely, fixing $Y=y$, $X\in[0,\min(y,1-y)]\cup[\max(y,1-y),1]$. So, including the Jacobian $\frac{1}{|1-2x|}$ of the map $c\mapsto x+(1-2x)c$, $$p_Y(y)=\int_{0}^{\min(y,1-y)}p_X(x)\,p_C\!\left(\frac{y-x}{1-2x}\right)\frac{dx}{|1-2x|}+\int_{\max(y,1-y)}^{1}p_X(x)\,p_C\!\left(\frac{y-x}{1-2x}\right)\frac{dx}{|1-2x|}$$
32,794
Validation metrics (R2 and Q2) for Partial Least Squares (PLS) Regression
I was also looking for information on these parameters and found a good explanation in the book Eriksson et al., Multi- and Megavariate Data Analysis: Principles and Applications. In general, I think you have the right idea. According to Eriksson et al., the fit tells us how well we are able to mathematically reproduce the data of the training set. The $R^2$ parameter is known as the "goodness of fit", or explained variation. The $Q^2$ parameter is termed "goodness of prediction", or predicted variation. The following points are emphasised: In PLS, the terms $R^2$ and $Q^2$ generally refer to the model performance of the Y-data, the responses, rather than that of the X-data, the predictors. The two parameters vary differently with increasing model complexity. $R^2$ is inflationary and rapidly approaches unity as model complexity (number of model parameters) increases. Therefore, it is not sufficient only to have a high $R^2$. $Q^2$, on the other hand, is not inflationary: at a certain degree of complexity it will not improve any further and will then degrade. There is a trade-off between fit and predictive ability, so it is the zone where we have a balance between good fit and predictive power that we wish to identify. For your side question, I find no specific recommendations and no particular reason to scale the Y variable (I'm assuming there is only one). X-variables are scaled to give them the same variance and thus equal weight in the model. The model should be mathematically equivalent whether the response is scaled or not. If there is more than one Y-variable, a more important issue would be to test whether they are correlated and whether to fit one model or separate models for each response.
32,795
Bias-variance decomposition: term for expected squared forecast error less irreducible error
I propose reducible error. This is also the terminology adopted in paragraph 2.1.1 of James, Witten, Hastie & Tibshirani, An Introduction to Statistical Learning, a book which is basically a simplification of ESL + some very cool R code laboratories (except for the fact that they use attach, but, hey, nobody's perfect). I'll list below the pros and cons of this terminology. First of all, we must recall that we not only assume $\epsilon$ to have mean 0, but to also be independent of $X$ (see paragraph 2.6.1, formula 2.29 of ESL, 2nd edition, 12th printing). Then of course $\epsilon$ cannot be estimated from $X$, no matter which hypothesis class $\mathcal{H}$ (family of models) we choose, and how large a sample we use to learn our hypothesis (estimate our model). This explains why $\sigma^2_{\epsilon}$ is called irreducible error. By analogy, it seems natural to define the remaining part of the error, $\text{Err}(x_0)-\sigma^2_{\epsilon}$, the reducible error. Now, this terminology may sound somewhat confusing: as a matter of fact, under the assumption we made for the data generating process, we can prove that $$ f(x)=\mathbb{E}[Y\vert X=x]$$ Thus, the reducible error can be reduced to zero if and only if $\mathbb{E}[Y\vert X=x]\in \mathcal{H}$ (assuming of course we have a consistent estimator). If $\mathbb{E}[Y\vert X=x]\notin \mathcal{H}$, we cannot drive the reducible error to 0, even in the limit of an infinite sample size. However, it's still the only part of our error which can be reduced, if not eliminated, by changing the sample size, introducing regularization (shrinkage) in our estimator, etc. In other words, by choosing another $\hat{f}(x)$ in our family of models. Basically, reducible is meant not in the sense of zeroable (yuck!), but in the sense of that part of the error which can be reduced, even if not necessarily made arbitrarily small. 
Also, note that in principle this error can be reduced to 0 by enlarging $\mathcal{H}$ until it includes $\mathbb{E}[Y\vert X=x]$. In contrast, $\sigma^2_{\epsilon}$ cannot be reduced, no matter how large $\mathcal{H}$ is, because $\epsilon\perp X$.
Bias-variance decomposition: term for expected squared forecast error less irreducible error
In a system for which all of the physical occurrences have been properly modeled, the leftover would be noise. However, there is generally more structure in the error of a model fitted to data than just noise. For example, modelling bias and noise alone do not explain curvilinear residuals, i.e., unmodelled data structure. The total unexplained fraction is $1-R^2$, which can consist of misrepresentation of the physics as well as bias and noise of known structure.

If by bias we mean only the error in estimating mean $y$, by "irreducible error" we mean noise, and by variance we mean the systemic physical error of the model, then the sum of bias (squared) and systemic physical error is not anything special; it is merely the error that is not noise. The term (squared) misregistration might be used for this in a specific context; see below. If you want to say error independent of $n$ versus error that is a function of $n$, say that. IMHO, neither error is irreducible, so the irreducibility property misleads to such an extent that it confuses more than it illuminates.

Why do I not like the term "reducibility"? It smacks of a self-referential tautology, as in the Axiom of reducibility. I agree with Russell 1919 that "I do not see any reason to believe that the axiom of reducibility is logically necessary, which is what would be meant by saying that it is true in all possible worlds. The admission of this axiom into a system of logic is therefore a defect ... a dubious assumption."

Below is an example of structured residuals due to incomplete physical modelling. It shows residuals from ordinary least squares fitting of a scaled gamma distribution, i.e., a gamma variate (GV), to blood plasma samples of radioactivity of a renal glomerular filtered radiopharmaceutical [1]. Note that the more early data is discarded ($n=36$ for each time-sample), the better the model becomes; that is, the supposedly "reducible" fit error grows as the sample range grows.

It is notable that as one drops the first sample at five minutes, the physics improves, as it does sequentially as one continues to drop early samples out to 60 min. This shows that although the GV eventually forms a good model for plasma concentration of the drug, something else is going on during early times. Indeed, if one convolves two gamma distributions, one for the early-time circulatory delivery of the drug and one for organ clearance, this type of error, physical modeling error, can be reduced to less than $1\%$ [2]. Next is an illustration of that convolution.

From that latter example, for a square-root-of-counts-versus-time graph, the $y$-axis deviations are standardized deviations in the sense of Poisson noise error. Such a graph is an image for which errors of fit are image misregistration from distortion or warping. In that context, and only that context, misregistration is bias plus modelling error, and total error is misregistration plus noise error.
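The convolution of a delivery density with a clearance density can be sketched numerically (Python; the shape and scale parameters below are hypothetical placeholders for illustration, not the values fitted in [2]):

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(t, shape, scale):
    """Density of a Gamma(shape, scale) variate, zero for t <= 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] ** (shape - 1) * np.exp(-t[pos] / scale)
                / (gamma_fn(shape) * scale ** shape))
    return out

dt = 0.01
t  = np.arange(0.0, 60.0, dt)            # minutes
delivery  = gamma_pdf(t, shape=2.0, scale=1.5)   # early circulatory delivery
clearance = gamma_pdf(t, shape=3.0, scale=4.0)   # organ clearance

# numerical convolution: the composite model for plasma concentration shape
model = np.convolve(delivery, clearance)[:t.size] * dt

mass = model.sum() * dt   # the convolution is again a density (mass ~ 1 on this grid)
```

The composite curve starts at zero, rises during the delivery phase, and only later decays like the clearance gamma variate alone, which is exactly the early-time behaviour the single GV fit misses.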
Practical usefulness of pointwise convergence without uniform convergence
It's hard to give a definitive answer, because "useful" and "useless" are not mathematical terms and are in many situations subjective (in some other situations one could try to formalise usefulness, but such formalisations are then again open to discussion). Here are some thoughts.

(a) Uniform convergence is clearly much stronger than pointwise convergence; with pointwise convergence there is no guarantee, if you don't know the true parameter value, that for any given $n$ you are anywhere near where you want to be.

(b) Pointwise convergence is still stronger than not having any convergence at all.

(c) If you have a given $n$ that is not huge and uniform convergence, the uniform bound that you can actually show with the $n$ you have may not be any good. This doesn't mean that your estimator is bad; it rather means that the uniform convergence bound doesn't guarantee that you're close enough to the true value. You may still be.

(d) In case we don't have a uniform convergence result, there are various possibilities:

i) Uniform convergence may in fact hold, but nobody has managed to prove it yet.

ii) Uniform convergence may be violated, but only in areas of the parameter space that are not realistic, so the actual convergence behaviour may be alright. As in (c), just because you don't have a theorem that guarantees that you're close to the true value doesn't mean you are far.

iii) Uniform convergence may be violated and you may encounter irregular behaviour in all kinds of realistic situations. Tough luck.

iv) There may even be small-$n$ situations in which, for the $n$ actually available in practice, something that is not convergent at all is better than something that is pointwise or uniformly convergent.

(e) Now you may say: so uniform convergence is clearly useful, because it gives us a guarantee with a clear practical value, and without it we have no guarantee. But apart from the fact that an estimator may be good even if we can't guarantee that it is good, in fact we never have a guarantee that really applies in practice, because in practice model assumptions don't hold, and the situation is more complicated than saying "OK, model P is wrong, but there is a true model Q that is just too complicated and may be tamed by a nonparametric uniform convergence result"; no, all these models are idealisations, and nothing is i.i.d. or follows any regular dependence or non-identity pattern in the first place (not even the random numbers that we use in simulations are in fact random numbers). So the uniform convergence guarantee, too, applies to an idealised situation, and practice is a different story.

We use theory like uniform convergence to make quality statements about estimators in idealised situations, because these are the situations we can handle. We can really only say that, in such idealised situations, constructed artificially by us for the sake of making theory possible, uniform convergence is stronger than pointwise convergence and pointwise convergence is stronger than nothing; but other ways of reasoning (such as simulation studies or even weaker theory) play a role too, and may in some situations dominate what we know from asymptotic theory.

Sorry, no specific examples, but in any setup in which you cannot find a uniformly convergent estimator but only a pointwise convergent one, chances are the pointwise convergent one will help you (sometimes an estimator of which you cannot even show pointwise convergence may help you as well, or even more). Then again it may not, but for some practical reason (an issue with model assumptions, small $n$, measurement, whatever) the uniformly convergent one may be misleading in a specific situation as well.
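A minimal textbook illustration of the gap in (a), not taken from the answer itself: $f_n(x) = x^n$ on $[0,1)$ converges pointwise to 0, yet $\sup_{x\in[0,1)} |f_n(x)| = 1$ for every $n$, so no single $n$ gives a bound that holds uniformly over the whole domain (read: over all parameter values at once).

```python
import numpy as np

ns = (10, 100, 1000)

# pointwise: at any fixed x < 1, x**n -> 0 as n grows
x_fixed = 0.9
pointwise = [x_fixed ** n for n in ns]

# uniform: sup over [0, 1) of |x**n - 0| stays near 1 for every n
grid = np.linspace(0.0, 1.0, 100001)[:-1]   # dense grid on [0, 1)
sup_norms = [float(np.max(grid ** n)) for n in ns]
```

At `x_fixed = 0.9` the values collapse toward zero, while the sup over the grid stays essentially at 1 for every `n`: the guarantee at each point gives no guarantee that holds everywhere at a given `n`.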
Statistical inference under model misspecification
The way out is literally an out-of-sample test, a true one. Not the one where you split the sample into training and hold-out sets as in cross-validation, but true prediction. This works very well in the natural sciences. In fact, it's the only way it works: you build a theory on some data, and then you're expected to come up with a prediction of something that has not been observed yet. Obviously, this doesn't work in most social (so-called) sciences, such as economics.

In industry this works as in the sciences. For instance, if a trading algorithm doesn't work, you're going to lose money eventually, and then you abandon it. Cross-validation and training data sets are used extensively in development and in the decision to deploy the algorithm, but after it's in production it's all about making or losing money. A very simple out-of-sample test.
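Operationally, a true out-of-sample test can be sketched as a walk-forward evaluation (Python; the AR(1) toy series and one-step forecasts are hypothetical stand-ins for a deployed algorithm): at every step the model is fitted only on data that was available strictly before the observation it is judged on, so no future information can leak in.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "live" series we try to forecast one step ahead: an AR(1) process
n = 500
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal(0.0, 1.0)

# walk-forward: at each step t, refit on the past only, then predict y[t]
errs = []
for t in range(100, n):
    past_x, past_y = y[: t - 1], y[1:t]          # pairs (y[s], y[s+1]) with s+1 < t
    phi = past_x @ past_y / (past_x @ past_x)    # least-squares AR(1) coefficient
    errs.append(y[t] - phi * y[t - 1])

oos_mse = float(np.mean(np.square(errs)))        # honest one-step forecast error
```

Here `oos_mse` lands near the noise variance (1.0), which is the best any one-step forecaster can do; an algorithm that looked good in a random train/test split but much worse in this ordered evaluation would be a red flag.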
Statistical inference under model misspecification
You could define a "combined procedure" and investigate its characteristics. Let's say you start from a simple model and allow one, two or three more complex (or nonparametric) models to be fitted in case the simple model doesn't fit. You need to specify a formal rule according to which you decide not to fit the simple model but one of the others (and which one). You also need tests for your hypothesis of interest that can be applied under all the involved models (parametric or nonparametric).

With such a setup you can simulate the characteristics, i.e., the percentage with which your null hypothesis is finally rejected in case it is true, and in case of several deviations of interest. You can also simulate from all involved models, and look at things such as the conditional level and conditional power given that the data came from model X, Y, or Z, or given that the model misspecification test procedure selected model X, Y, or Z.

You may find that model selection doesn't do much harm, in the sense that the achieved level is still very close to the level you were after and the power is OK if not excellent. Or you may find that data-dependent model selection really screws things up; it'll depend on the details (if your model selection procedure is very reliable, chances are level and power won't be affected very strongly). Now this isn't quite the same as specifying one model, then looking at the data and deciding "oh, I need another one", but it's probably as close as you can get to investigating the characteristics of such an approach. It's not trivial, because you need to make a number of choices to get this going.

General remark: I think it is misleading to classify applied statistical methodology binarily into "valid" and "invalid". Nothing is ever 100% valid, because model assumptions never hold precisely in practice. On the other hand, although you may find valid (!) reasons for calling something "invalid", if one investigates the characteristics of the supposedly invalid approach in depth, one may find that it still works fairly well.
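The kind of simulation described above can be sketched as follows (Python, plain numpy; the pretest rule, its threshold, and the pair of tests are illustrative choices of mine, not prescriptions). Data are generated under the null from the "nice" model, the whole data-dependent procedure (pretest, then the selected test) is run on each sample, and the achieved level of the combined procedure is recorded:

```python
import numpy as np
from math import erf, comb, sqrt

rng = np.random.default_rng(2)

def z_p(x):
    """Two-sided test of mean 0, normal approximation (adequate for n = 50)."""
    z = x.mean() / (x.std(ddof=1) / sqrt(x.size))
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

def sign_p(x):
    """Two-sided exact sign test of median 0 (nonparametric fallback)."""
    n, k = x.size, int((x > 0).sum())
    tail = sum(comb(n, j) for j in range(min(k, n - k) + 1)) * 0.5 ** n
    return min(1.0, 2.0 * tail)

def misspecified(x):
    """Crude formal model check: flag clearly skewed samples as 'non-normal'."""
    s = x - x.mean()
    skew = (s ** 3).mean() / (s ** 2).mean() ** 1.5
    return abs(skew) > 1.0

def combined_test(x):
    # the combined procedure: check the simple model, switch tests if it fails
    return sign_p(x) if misspecified(x) else z_p(x)

reps, n, alpha = 4000, 50, 0.05
level = float(np.mean([combined_test(rng.normal(0.0, 1.0, n)) < alpha
                       for _ in range(reps)]))   # achieved level of the whole procedure
```

Repeating the loop with skewed or heavy-tailed generators (and with a nonzero mean) would give the conditional level and power pieces the answer mentions.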
R PCA: principal (psych package) vs prcomp loadings
As I can see from your data, the difference is only in scaling. For instance, PC3 is scaled as psych = 2.099188243053083 * prcomp, and some components are scaled by a negative number. So both algorithms are correct: a principal direction doesn't change under positive scaling, and negative scaling encodes the same space.

In order to see the whole picture, you can check out the eigenvalues (sdev is the square root of an eigenvalue):

pca_results <- prcomp(df, center = TRUE, scale. = TRUE)

> pca_results$rotation[,1:4]
                    PC1         PC2        PC3        PC4
Sepal.Length  0.5210659 -0.37741762  0.7195664  0.2612863
Sepal.Width  -0.2693474 -0.92329566 -0.2443818 -0.1235096
Petal.Length  0.5804131 -0.02449161 -0.1421264 -0.8014492
Petal.Width   0.5648565 -0.06694199 -0.6342727  0.5235971
> pca_results$sdev
[1] 1.7083611 0.9560494 0.3830886 0.1439265

Compare it with psych's loadings ("eigenvalues" are called SS loadings here):

> pca_fit <- principal(df, nfactors = 4, rotate = "none")
> pca_fit$loadings

Loadings:
                PC1     PC2     PC3     PC4
Sepal.Length  0.890   0.361  -0.276
Sepal.Width  -0.460   0.883
Petal.Length  0.992                   0.115
Petal.Width   0.965           0.243

                 PC1   PC2   PC3   PC4
SS loadings    2.918 0.914 0.147 0.021
Proportion Var 0.730 0.229 0.037 0.005
Cumulative Var 0.730 0.958 0.995 1.000

You can see that they're the same after squaring prcomp's sdev: 1.7083611^2 = 2.918, 0.9560494^2 = 0.914, 0.3830886^2 = 0.147, 0.1439265^2 = 0.021. So the algorithms output the same factors; they just represent the principal directions differently: eigenvectors in prcomp (unit length), loadings in psych (non-unit length). So the only "problem" with psych is that its vectors don't have unit length. The prcomp vector has length 1, for instance for PC1:

prcomp: 0.5210659^2 + 0.2693474^2 + 0.5804131^2 + 0.5648565^2 = 0.99999992627343
psych:  0.890^2 + 0.460^2 + 0.992^2 + 0.965^2 = 2.918989

Note that psych's sum of squares for PC1 equals its SS loading, i.e., the eigenvalue 2.918.

P.S. Yes, the psych package doesn't print small loading values, which is why there are some empty cells in its table.

Edit: As @William Revelle pointed out, "the reason the small loadings are dropped is that the loadings object is of class 'loadings' and R's print drops values less than .3. Unclass the loadings with unclass(pca_fit$loadings) and then print them." As a matter of fact, loadings in psych are not unit eigenvectors because it uses them for factor rotation. That's why the loadings are the eigenvectors scaled by the square roots of the respective eigenvalues (even if you specify no rotation). See https://stats.stackexchange.com/a/137003/137805
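The same relationship can be checked outside R, e.g. with numpy on synthetic data (an illustrative stand-in for iris): prcomp-style rotation columns are unit eigenvectors of the correlation matrix, psych-style loadings are those eigenvectors scaled by sdev, and the squared column sums of the loadings recover the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical correlated data standing in for iris; the identities are general
X = rng.normal(size=(150, 4)) @ rng.normal(size=(4, 4))
Z = (X - X.mean(0)) / X.std(0, ddof=1)   # center and scale, as prcomp(..., scale. = TRUE)

n = Z.shape[0]
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

eigvecs  = Vt.T                 # prcomp-style rotation: unit-length columns
sdev     = s / np.sqrt(n - 1)   # prcomp's sdev = sqrt of the eigenvalues
loadings = eigvecs * sdev       # psych-style loadings

col_norms   = np.linalg.norm(eigvecs, axis=0)   # all 1: eigenvectors are unit length
ss_loadings = (loadings ** 2).sum(axis=0)       # psych's "SS loadings" = eigenvalues
```

For scaled data with 4 variables, `ss_loadings` sums to 4 (the trace of the correlation matrix), mirroring the Cumulative Var column reaching 1.000 in the psych output.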