Can I do a t-test Power Analysis for Unequal Size Groups which Produces 2 Different Minimum n's?
You can do sample size calculations for unequal sample sizes. For example, you can decide the n's are in some ratio (such as in proportion to the populations, perhaps). It's then possible to do power calculations (at the least you can simulate to obtain the power under any particular set of circumstances, whether or not you are able to do the algebra). The problem is that an unequal split is relatively inefficient at finding differences compared to the same total number of observations at equal sample sizes. Imagine you had a total sample of $n=n_1 + n_2$, with equal variance in the population and close to equal sample variance, and that your choice was between a 50-50 split and a 90-10 split ($n_1 = 0.5n$ vs $n_1=0.9n$). The two-sample t-statistic is: $t = \frac{\bar {X}_1 - \bar{X}_2}{s_{\text{pooled}} \cdot \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}$ The impact of the sample size is in the term $1/{\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}$. With the 50-50 split this factor is $\sqrt{n}/2$, while with the 90-10 split it is only $\sqrt{n}/3.33$, so relative to the 90-10 split the even split is like having a 40% smaller standard deviation; at a given $n_1+n_2$ you can pick up a substantially smaller effect with the even split. If the combined sample size is not an effective constraint, however, this calculation may be pointless. It matters most in cases where every observation carries roughly the same marginal cost, which is not always the case.
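Since the answer notes that you can always simulate to obtain the power, here is a minimal simulation sketch in R; the true mean difference of 0.5, unit variances, and total n = 100 are purely illustrative assumptions, not values from the question.

    # Simulated power of a two-sample t-test at a fixed total n = 100,
    # comparing a 50-50 split against a 90-10 split.
    sim_power <- function(n1, n2, delta = 0.5, nsim = 10000, alpha = 0.05) {
      rejections <- replicate(nsim, {
        x <- rnorm(n1, mean = delta)  # group 1, shifted by the true effect
        y <- rnorm(n2, mean = 0)      # group 2
        t.test(x, y, var.equal = TRUE)$p.value < alpha
      })
      mean(rejections)                # proportion of rejections = estimated power
    }

    set.seed(1)
    sim_power(n1 = 50, n2 = 50)  # even split
    sim_power(n1 = 90, n2 = 10)  # same total n, noticeably lower power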
Can I do a t-test Power Analysis for Unequal Size Groups which Produces 2 Different Minimum n's?
First off, why are you assuming equal variances in the two groups? Please don't say, "Because it's convenient." I seriously doubt that the group variances are equal, although in the case of equal sample sizes that isn't crucial. Your degrees of freedom will be off, but you know you have at least 130, so who cares? There are much bigger questions to address. If you are going to permit (or require) unequal group sample sizes, the problem will not have a unique solution. There are two unknowns ($n_1$ and $n_2$) and only one constraint (the power must be at least $\phi$). I don't think the problem can be solved without an additional constraint. There are two obvious possibilities. The first is to fix one of the sample sizes (e.g., the sponsors want at least 300 observations from Group I). The other is to fix the ratio (e.g., because Group I is ten times the count of Group II, we want $n_1 = 10\, n_2$). Now proceed with your power analysis.
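Once a constraint pins the problem down, the power analysis does have a unique answer. A sketch in R under the ratio constraint $n_1 = 10\,n_2$, using the noncentral t for the power; the effect size of 0.5 SDs and the 80% power target are assumed for illustration.

    # Two-sided power of the pooled two-sample t-test via the noncentral t.
    power_unequal <- function(n1, n2, delta, sd = 1, alpha = 0.05) {
      df  <- n1 + n2 - 2
      ncp <- delta / (sd * sqrt(1 / n1 + 1 / n2))   # noncentrality parameter
      tc  <- qt(1 - alpha / 2, df)                  # critical value
      pt(tc, df, ncp = ncp, lower.tail = FALSE) + pt(-tc, df, ncp = ncp)
    }

    # Ratio constraint n1 = 10 * n2: smallest n2 reaching 80% power.
    n2 <- 2
    while (power_unequal(10 * n2, n2, delta = 0.5) < 0.80) n2 <- n2 + 1
    c(n1 = 10 * n2, n2 = n2)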
Stable distributions that can be multiplied?
A "stable distribution" is a particular kind of location-scale family of distributions. The class of stable distributions is parameterized by two real numbers, the stability $\alpha\in(0,2]$ and skewness $\beta\in[-1,1]$. A result quoted in the Wikipedia article resolves this question about closure under products of density functions. When $f$ is the density of a stable distribution with $\alpha \lt 2$, then asymptotically $$f(x) \sim |x|^{-(1+\alpha)} g(\operatorname{sgn}(x), \alpha, \beta)$$ for an explicitly given function $g$ whose details do not matter. (In particular, $g$ will be nonzero either for all positive $x$ or all negative $x$ or both.) The product of any two such densities therefore will be asymptotically proportional to $|x|^{-2(1+\alpha)}$ in at least one tail. Since $2(1+\alpha)\ne 1+\alpha$, this product (after renormalization) cannot correspond to any distribution in the same stable family. (Indeed, because $3(1+\alpha) \ne 1+\alpha^\prime$ for any possible $\alpha^\prime\in(0,2]$, the product of any three such density functions cannot even be the density function of any stable distribution. That destroys any hope of extending the idea of product closure from a single stable distribution to a set of stable distributions.) The only remaining possibility is $\alpha=2$. These are the Normal distributions, with densities proportional to $\exp(-(x-\mu)^2/(2\sigma^2))$ for the location and scale parameters $\mu$ and $\sigma$. It is straightforward to check that a product of two such expressions is of the same form (because the sum of two quadratic forms in $x$ is another quadratic form in $x$). The unique answer, then, is that the Normal distribution family is the only product-of-density-closed stable distribution.
Stable distributions that can be multiplied?
I know this is a partial answer and I'm not an expert, but this might help: if one of two unimodal pdfs is log-concave, then their convolution is unimodal. Due to Ibragimov (1956), via these notes. Apparently, if both are log-concave, then the convolution is also log-concave. As far as product closure goes, the only "clean" result I know of for product distributions is the limit theorem described in this math.se answer. How about a truncated version of these? The bounded uniform distribution is a limiting case of its shape parameter, and as far as I'm aware they're unimodal and log-concave, so they have unimodal, log-concave convolutions. I have no clue about their products. When I have more time later this week I could try to run some simulations to see if I get log-concave products of truncated error distributions. Maybe Govindarajulu (1966) would help. I'm not sure what the policy on crossposting is, but it seems like the math.se people might be able to help you as well. Out of curiosity, are you trying to build an algebraic structure out of probability distributions?
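For what it's worth, the kind of simulation proposed here is short to set up. A sketch in R that checks log-concavity numerically via second differences of the log density on a grid; normal densities stand in for the truncated error distributions the answer has in mind (a density is log-concave exactly when these second differences are all non-positive, and a product of log-concave densities is automatically log-concave, since the log of the product is a sum of concave functions).

    # Numerical log-concavity check for a product of two densities.
    x  <- seq(-3, 3, length.out = 2001)
    f1 <- dnorm(x)                      # log-concave density
    f2 <- dnorm(x, mean = 1, sd = 0.5)  # another log-concave density
    lp <- log(f1 * f2)                  # log of the (unnormalized) product
    all(diff(lp, differences = 2) <= 1e-9)  # TRUE: second differences <= 0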
Why do non-normally distributed errors compromise the validity of our significance statements?
Why, when we have non-normally distributed errors, is the validity of our significance statements compromised? Why will confidence intervals be too wide or narrow? The confidence intervals are based on the way that the numerator and denominator are distributed in a t-statistic. With normal data the numerator of a t-statistic has a normal distribution and the distribution of the square of the denominator (which is then a variance) is a particular multiple of a chi-squared distribution. When the numerator and denominator are also independent (as will only be the case with normal data, given the observations themselves are independent), the whole statistic has a t-distribution. This then means that a t-statistic like $\frac{\hat \beta - \beta}{s_{\hat\beta}}$ will be a pivotal quantity (its distribution doesn't depend on what the true slope coefficient is, and it's a function of the unknown $\beta$), which makes it suitable for constructing confidence intervals ... and these intervals will then use $t$-quantiles in their construction to get the desired coverage. If the data were from some other distribution, the statistic wouldn't have a t-distribution. For example, if the errors were heavy-tailed, the distribution of the statistic would tend to be a bit lighter-tailed than the t (the outlying observations affect the denominator more than the numerator). Here's an example. In both cases, the histogram is for 10,000 regressions: The histogram on the left is for when the data are conditionally normal, n=30 (and where in this case, $\beta=0$). The distribution looks as it should. The histogram on the right is for the case when the conditional distribution is right skewed and heavy-tailed, and the histogram shows very few values outside $(-2,2)$ - the distribution isn't much like the theoretical distribution for normal data, because the statistic no longer has the t-distribution. A 95% t-interval (which should bracket 95% of the t-statistics) runs from -2.048 to 2.048. For the normal data, it actually included 95.15% of the 10,000 sample statistics. For the skewed data it included 99.91%.
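A sketch in R of the experiment described (the lognormal is used here as a stand-in for "right skewed and heavy-tailed" errors, so the exact percentages will not match the answer's):

    # 10,000 simple regressions with n = 30 and true slope beta = 0,
    # recording the slope t-statistic under two error distributions.
    set.seed(1)
    n <- 30; nsim <- 10000
    x <- runif(n)
    t_stat <- function(y) summary(lm(y ~ x))$coefficients["x", "t value"]

    t_normal <- replicate(nsim, t_stat(rnorm(n)))   # conditionally normal
    t_skewed <- replicate(nsim, t_stat(rlnorm(n)))  # right-skewed, heavy-tailed

    crit <- qt(0.975, df = n - 2)    # 2.048 for df = 28
    mean(abs(t_normal) <= crit)      # close to 0.95
    mean(abs(t_skewed) <= crit)      # typically above 0.95: lighter tails than the t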
Can a narrow confidence interval around a non-significant effect provide evidence for the null?
In short: Yes. As Andy W wrote, concluding that the parameter equals a specified value (in your case, that the effect size equals zero) is a matter of equivalence testing. In your case, this narrow confidence interval may in fact indicate that the effect is practically zero; that is, the null hypothesis of the equivalence test (that the effect lies outside the equivalence interval) may be rejected. Equivalence at significance level $\alpha$ is typically shown by an ordinary $1-2\alpha$ confidence interval that lies completely within a prespecified equivalence interval. This equivalence interval takes into account that you are able to neglect really tiny deviations, i.e. all effect sizes within this equivalence interval can be considered practically equivalent. (Statistical tests of exact equality are not possible.) Please see Stefan Wellek's "Testing Statistical Hypotheses of Equivalence and Noninferiority" for further reading; it is the most comprehensive book on this matter.
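A minimal sketch in R of the confidence-interval inclusion rule, with a made-up equivalence margin of ±0.3 (the margin is a domain-specific choice, not something statistics can supply):

    # Equivalence at level 0.05 via a 90% (= 1 - 2*alpha) confidence interval.
    set.seed(1)
    delta <- 0.3                      # prespecified equivalence margin
    x <- rnorm(200, mean = 0.02)      # group 1
    y <- rnorm(200, mean = 0.00)      # group 2
    ci <- t.test(x, y, conf.level = 0.90)$conf.int
    ci                                # 90% CI for the mean difference
    all(ci > -delta & ci < delta)     # TRUE -> reject non-equivalence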
Can a narrow confidence interval around a non-significant effect provide evidence for the null?
Null hypotheses exemplify the meaning of "All models are wrong, but some are useful." They're probably most useful if not taken literally and out of context – that is, it's important to remember the epistemic purpose of the null. If it can be falsified, which is the intended objective, then the alternative becomes more useful by comparison, albeit still rather uninformative. If you reject the null, you're saying the effect is probably not zero (or whatever – null hypotheses can specify other values for falsification too)...so what is it then? The effect size you calculate is your best point estimate of the population parameter. Generally, chances should be equally good that it's an overestimate or underestimate, but the chances that it's a dead-center bulls-eye are infinitesimal, as @Glen_b's comment implies. If by some bizarre twist of fate (or by construction – either way, I assume we're speaking hypothetically?) your estimate falls directly on $0.\bar 0$, this is still not much evidence that the parameter is not a different value within the confidence interval. The meaning of the confidence interval does not change based on the significance of any hypothesis test, except in as much as it may change location and width in a related way. In case you're not familiar with what effect size estimates look like for samples from a (simulated) population of which the null hypothesis is literally true (or in case you haven't seen it yet and are just here for a little statistical entertainment), check out Geoff Cumming's Dance of the $p$ Values. In case those confidence intervals aren't narrow enough for your taste, I've tried simulating some of my own in R using randomly generated samples just shy of $n=1\rm M$ each from $\mathcal N(0,1)$. I forgot to set a seed, but set x=c() and then ran x=append(x,replicate(500,cor(rnorm(999999),rnorm(999999)))) as many times as I cared to before finishing this answer, which gave me 6000 samples in the end. Here's a histogram and a density plot using hist(x,n=length(x)/100) and plot(density(x)), respectively (plots omitted). As one would expect, there's evidence for a variety of nonzero effects arising from these random samples of a population with literally zero effect, and these estimates are distributed more or less normally around the true parameter (skew(x)= -.005, kurtosis(x)= 2.85). Imagine you only knew the value of your estimate from a sample of $n=1\rm M$, not the true parameter: why would you expect the parameter to be closer to zero than your estimate instead of further? Your confidence interval might include the null, but the null isn't really any more plausible than the value of equivalent distance from your sample effect size in the opposite direction, and other values may be more plausible than that, especially your point estimate! If, in practice, you want to demonstrate that an effect is more or less zero, you need to define how much more or less you're inclined to ignore. With these huge samples I've simulated, the estimate of largest magnitude I generated was $|r|=.004$. With more realistic samples of $n=999$, the largest I find among $1\rm M$ samples is $|r|=.14$. Again, these estimates are distributed roughly normally, so such extremes are unlikely, but the point is they're not implausible. A CI is probably more useful for inference than a NHST in general. It doesn't just represent how bad an idea it might be to assume the parameter is negligibly small; it represents a good idea of what the parameter actually is. One can still decide whether this is negligible, but one can also get a sense of how non-negligible it could be. For further advocacy of confidence intervals, see Cumming (2014, 2013).
References:
- Cumming, G. (2013). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. Routledge.
- Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25(1), 7–29. Retrieved from http://pss.sagepub.com/content/25/1/7.full.pdf+html
SD larger than mean, non-negative scale
It's easily possible for the standard deviation to exceed the mean with non-negative or strictly positive data. I'd describe the case for your data as the standard deviation being close to the mean (not every value is larger, and the ones that are larger are generally close). For non-negative data, it does pretty clearly indicate that the data are skew (for example, a gamma distribution with coefficient of variation = 1 would be the exponential distribution, so if the data were gamma, they'd look somewhere near exponential). However, with that sort of sample size, the ANOVA may not be particularly badly affected by that; the uncertainty in the estimate of pooled variance will be pretty small, so we might consider that between the CLT (for the means) and Slutsky's theorem (for the variance estimate in the denominator), an ANOVA will probably work reasonably well, since you'll have an asymptotic chi-square, for which the ANOVA F with its large denominator degrees of freedom will be a good approximation (i.e. it should have reasonable level-robustness, and since the means are not so very far from constant, the power shouldn't be too badly impacted by the heteroskedasticity). That said, if your study will have a smaller sample size, you may be better off using a different test (perhaps a permutation test, or one more suitable for skewed data, perhaps one based on a GLM). The change in test may require a somewhat larger sample size than you'd get for a straight ANOVA. With the original data you could do a power analysis under a suitable model/analysis. Even in the absence of the original data, one could make more plausible assumptions about the distribution (perhaps a variety of them) and investigate the entire power curve (or, more simply, just the type I error rate and the power at whatever effect size is of interest). A variety of reasonable assumptions could be used, which gives some idea of what power may be achieved under plausible circumstances, and how much larger the sample size might need to be.
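That kind of type-I-error check is quick to simulate. A sketch in R with exponential data (SD = mean within each group); three groups of 50 are assumed purely for illustration:

    # Level-robustness of one-way ANOVA under the null with skewed data.
    set.seed(1)
    k <- 3; n_per_group <- 50
    g <- gl(k, n_per_group)                     # group factor
    p_values <- replicate(5000, {
      y <- rexp(k * n_per_group)                # identical groups: null is true
      anova(lm(y ~ g))[["Pr(>F)"]][1]
    })
    mean(p_values < 0.05)                       # should sit near the nominal 0.05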
SD larger than mean, non-negative scale
You are correct in concluding that the data are non-normal. If the data were normal then we would expect about 16% of observations to be less than the mean minus the standard deviation. With an SD larger than the mean, that cutoff is negative, and you state that there cannot be negative values, so what you are seeing is not consistent with normally distributed data. The SD values are possible, but only if the distribution is very right skewed (which is common in durations). I agree that choosing a sample size based on assuming the data will be normal is not a good idea, but if you can find out more about the process and find a right-skewed distribution (a gamma distribution is one possibility) that is a reasonable assumption, then you could use that to help determine sample size.
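A quick numerical illustration in R of both points (the gamma parameters are arbitrary, chosen only so that the SD exceeds the mean):

    # The 16% figure: for a normal distribution, P(X < mean - SD) = pnorm(-1).
    pnorm(-1)                                # ~0.159

    # A strictly positive, right-skewed distribution with SD > mean:
    set.seed(1)
    x <- rgamma(1e5, shape = 0.5, rate = 1)  # mean 0.5, SD = sqrt(0.5) ~ 0.71
    c(mean = mean(x), sd = sd(x))            # SD exceeds the mean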
Variance of sample proportion decreases with n but of a count increases with n - why?
Very roughly, imagine that we are tossing a fair coin. Success is defined as heads. If we toss the coin once $(n=1)$, you will count either $1$ success or $0$ successes. Both have an equal positive probability of happening $(1/2)$. Now imagine we toss the coin $10$ times ($n=10$). Now you can still get $0$ or $1$ successes (though both are much less likely), but you can also get anything from $2$ through $10$ (values near $5$ being the most likely). If variance measures how far a set of numbers is spread out, you can see that with $10$ tosses the spread is wider than with $1$ toss or trial. This explains why the variance of the number of successes increases with $n$. With the proportion (number of successes divided by number of tosses), you are trying to approximate the true value of $p$. As you get more information with more trials, your uncertainty about $p$ goes down, and so that variance shrinks. With one toss that comes up heads, you don't know very much (only that $p \ne 0$). With $10$ tosses that all turn out to be heads, you're pretty sure that $p$ is near one.
Variance of sample proportion decreases with n but of a count increases with n - why?
Let's start by assuming the binomial distribution standard deviation is correct (it is). This is the standard deviation of the distribution of the number of successes out of $n$ trials given constant probability of success $p$. Call the number of successes $X$. So $Var(X) = np(1-p)$, which is what you have (standard deviation squared). Since a proportion is the number of successes over the number of trials, we have: $Var(\frac{X}{n}) = \frac{Var(X)}{n^2} = \frac{np(1-p)}{n^2} = \frac{p(1-p)}{n}$. And thus the standard deviation is of course $\sqrt{\frac{p(1-p)}{n}}$. In one case you are looking at counts, in the other you are looking at counts divided by sample size. Intuitively, the counts of the number of successes can range much more widely ($X = 0, 1, 2, \ldots, n$) than a proportion ($0 \leq X/n \leq 1$). As $n$ increases, $X$ can take many different (and larger) integer values and has more variability; $X/n$, on the other hand, stays restricted between 0 and 1. So $X$ has more variability.
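A quick numerical check of both formulas in R (the simulation sizes are arbitrary):

    # Variance of the count grows with n; variance of the proportion shrinks.
    set.seed(1)
    p <- 0.5
    for (n in c(10, 100, 1000)) {
      x <- rbinom(1e5, size = n, prob = p)       # 100,000 simulated counts
      cat("n =", n,
          "| count var:", round(var(x), 1),      # ~ n * p * (1 - p)
          "| prop var:", signif(var(x / n), 3),  # ~ p * (1 - p) / n
          "\n")
    }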
Variance of sample proportion decreases with n but of a count increases with n - why?
Okay! I'll make it very easy. When using the SD and variance, usually you are looking backwards, trying to see what is going on and then projecting the future. As you look backwards, more trials usually get you more info. More and more trials help narrow down what happened, and you home in better on the mean; SD and variance measure spread around the mean, so you get closer and closer to what will happen. The binomial count is different! We already know what's up, we know the probability, so looking backwards isn't as useful because, well, we already know the probability. More and more trials don't help us understand better and better how things cluster around the mean; they just give us a wider and wider distribution. Increasing the trials really only gives more room for variance. Imagine two scenarios. One: you want to know how tall everyone is in a room. More measurements = closer to the real average height in the room; you are thankful for every new measurement. Two: you have a coin. You already know what the average is, it's 50/50, I mean at that point you are done. So let's pretend you start flipping; well, every new flip is only more room for error. You flip 10 times and you get all 10 heads, and you say to your friend, what the heck! What were the odds of that, that's so dumb! Well, if you had only flipped it once you would have had only one chance for some crazy outliers. More flips don't really give you more info; they just give more room for crazy results. Zero math and zero formulas, hope that helps.
Variance of sample proportion decreases with n but of a count increases with n - why?
If you're looking for some intuition on this result, ask yourself which of the following is more variable: the proportion of females in a household, or the proportion of females in a whole country? And: the number of females in a household, or the number of females in a whole country?
Comparison of CPH, accelerated failure time model or neural networks for survival analysis
It depends on why you are making models. Two main reasons to construct survival models are (1) to make predictions or (2) to model effect sizes of covariates. If you are in a predictive setting in which you want to obtain an expected survival time given a set of covariates, neural networks are likely the best choice because they are universal approximators and make fewer assumptions than the usual (semi-)parametric models. Another option which is less popular but no less powerful is support vector machines. If you are modelling to quantify effect sizes, neural networks won't be of much use. Both Cox proportional hazards and accelerated failure time models can be used for this goal. Cox PH models are by far the most widely used in clinical settings, in which the hazard ratio gives a measure of effect size for each covariate/interaction. In engineering settings, however, accelerated failure time (AFT) models are the weapon of choice.
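For the effect-size route, both model families are a few lines in R's survival package; a sketch on the bundled lung dataset, with age and sex as illustrative covariates:

    # Cox proportional hazards: effect sizes as hazard ratios (exp(coef)).
    library(survival)
    cox_fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
    summary(cox_fit)

    # Weibull accelerated failure time model: coefficients act
    # multiplicatively on survival time (time ratios after exponentiation).
    aft_fit <- survreg(Surv(time, status) ~ age + sex, data = lung,
                       dist = "weibull")
    summary(aft_fit)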
How to evaluate/select cross validation method?
There have been a bunch of similar questions; please browse through the threads on [cross-validation], e.g. Cross-validation or bootstrapping to evaluate classification performance? Here's the gist: You need to worry only if you are in a small sample size situation. For estimating proportions (like accuracy), I'd say anything that leads to a denominator of the proportion < 100 - 300 independent cases (depending on the precision you need) is small. For the model itself, it also depends on how difficult the problem is, but a sample size that will give a decent estimate of the performance often allows you to train a decent model as well. Choosing among iterated/repeated $k$-fold cross validation, out-of-bootstrap and iterated/repeated set validation is, from my personal experience, largely a matter of taste in practice. The important thing is to calculate enough surrogate models, so you can have a good estimate of model instability. How many you need will depend on your data and the model (complexity). Leave-one-out, however, I cannot recommend, as it neither allows you to measure model stability nor to reduce the variance uncertainty on the validation result caused by model instability. In addition, there are situations where it is subject to a large pessimistic bias (as opposed to the minimal pessimistic bias that is expected). The PhD thesis of Ron Kohavi, "Wrappers for Performance Enhancement and Oblivious Decision Graphs", contains an excellent discussion. Also have a look at the work of Dougherty and Braga-Neto, e.g. Dougherty, E. R. et al.: Performance of Error Estimators for Classification, Current Bioinformatics, 2010, 5, 53-67. Statisticians like the bootstrap (resampling with replacement) because from a theory point of view it has nicer properties than cross validation (resampling without replacement). That may mean that you'd need to do more iterations with cross validation (more precisely: calculate more surrogate models) than with the bootstrap. There is useful information, however, that is easier to obtain from iterated cross validation, e.g. model stability expressed as stability of the predictions: Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). DOI: 10.1007/s00216-007-1818-6. For my type of data (spectroscopic data, wide matrices), we found similar overall performance for out-of-bootstrap and iterated $k$-fold cross validation with equal numbers of surrogate models; .632-bootstrap was overoptimistic, as the models easily overfit: Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G.: Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91-100 (2005). Kim, J.-H.: Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap, Computational Statistics & Data Analysis, 53, 3735-3745 (2009). DOI: 10.1016/j.csda.2009.04.009 reports similar findings. Update: the papers above deal with classifier validation. Validation of regression models tends to be easier, in that in my experience it is easier there to get stable models (= less variance due to model instability), and the variance due to the finite test sample size also tends to be less problematic. I forgot to link Esbensen, K. H. & Geladi, P.: Principles of Proper Validation: use and abuse of re-sampling for validation, J Chemom, 24, 168-187 (2010). DOI: 10.1002/cem.1310, which discusses important limits of resampling validation, namely that it cannot be used to measure error caused by (instrumental) drift. Update: @alfa asks about time complexity: time complexity is linear in the number of surrogate models. The bootstrap is said to be somewhat more efficient (i.e. fewer iterations are needed than for cross validation), so it may have a slight edge here. I don't think this matters in practice (at least for my data, as the variance uncertainty due to having only a few test cases is the limiting factor for my applications). For linear models, leave-one-out estimators can be calculated using the "hat matrix". This means that the LOO estimate can be computed without refitting $n$ surrogate models from the fit of all data points. Approximations to this are known for some other models. BUT a) this is possible only if each row in the data set is an independent case, and b) the problem that you cannot iterate/repeat, and thus cannot check model stability or reduce the impact of the associated variance, is not solved by that approach. Choice of $k$: Choice of K in K-fold cross-validation
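A sketch in R of the two computational points just mentioned: repeated k-fold CV (the spread across repetitions giving a rough look at instability) and the hat-matrix shortcut for leave-one-out in a linear model; mtcars and the mpg ~ wt + hp formula are stand-ins, not from the answer.

    # Repeated 5-fold CV for a linear model, collecting one RMSE per repetition.
    set.seed(1)
    k <- 5; reps <- 20
    rmse_reps <- replicate(reps, {
      folds <- sample(rep(1:k, length.out = nrow(mtcars)))  # random fold labels
      sq_err <- unlist(lapply(1:k, function(i) {
        fit <- lm(mpg ~ wt + hp, data = mtcars[folds != i, ])
        (predict(fit, mtcars[folds == i, ]) - mtcars$mpg[folds == i])^2
      }))
      sqrt(mean(sq_err))
    })
    c(mean = mean(rmse_reps), sd = sd(rmse_reps))  # sd: split noise + instability

    # Leave-one-out without refitting, via the hat matrix: e_i / (1 - h_ii).
    fit <- lm(mpg ~ wt + hp, data = mtcars)
    sqrt(mean((residuals(fit) / (1 - hatvalues(fit)))^2))  # LOO RMSE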
How to evaluate/select cross validation method?
There have been a bunch of similar questions; please browse through the threads on [cross-validation], e.g. Cross-validation or bootstrapping to evaluate classification performance? Here's the gist:
How to evaluate/select cross validation method? There have been a bunch of similar questions; please browse through the threads on [cross-validation], e.g. Cross-validation or bootstrapping to evaluate classification performance? Here's the gist: You need to worry only if you are in a small sample size situation. For estimating proportions (like accuracy), I'd say anything that leads to a denominator of the proportion < 100 - 300 independent cases (depending on the precision you need) is small. For the model itself, it also depends on how difficult the problem is, but a sample size that will give a decent estimate of the performance often allows training a decent model as well. Choosing among iterated/repeated $k$-fold cross validation, out-of-bootstrap and iterated/repeated set validation is, from my personal experience, largely a matter of taste in practice. The important thing is to calculate enough surrogate models, so you can have a good estimate of model instability. How many you need will depend on your data and the model (complexity). Leave-one-out, however, I cannot recommend, as it neither allows measuring model stability nor reducing the variance uncertainty of the validation result caused by model instability. In addition, there are situations where it is subject to a large pessimistic bias (as opposed to the minimal pessimistic bias that is expected). The PhD thesis of Ron Kohavi, "Wrappers for Performance Enhancement and Oblivious Decision Graphs", contains an excellent discussion. Also have a look at the work of Dougherty and Braga-Neto, e.g. Dougherty, E. R. et al.: Performance of Error Estimators for Classification, Current Bioinformatics, 2010, 5, 53-67. Statisticians like the bootstrap (resampling with replacement) as from a theory point of view it has nicer properties than cross validation (resampling without replacement). That may mean that you'd need to do more iterations with cross validation (more precisely: calculate more surrogate models) than with the bootstrap. There is useful information, however, that is easier to obtain from iterated cross validation, e.g. model stability expressed as stability of the predictions. Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). DOI: 10.1007/s00216-007-1818-6. For my type of data (spectroscopic data, wide matrices), we found similar overall performance for out-of-bootstrap and iterated $k$-fold cross validation with equal numbers of surrogate models. .632-bootstrap was overoptimistic, as the models easily overfit. Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G.: Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91-100 (2005) and Kim, J.-H.: Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap, Computational Statistics & Data Analysis, 53, 3735-3745 (2009). DOI: 10.1016/j.csda.2009.04.009 report similar findings. update: The papers deal with classifier validation. Validation of regression models tends to be easier: in my experience it is easier there to get stable models (= less variance due to model instability), and the variance due to the finite test sample size also tends to be less problematic. I forgot to link Esbensen, K. H. & Geladi, P.: Principles of Proper Validation: use and abuse of re-sampling for validation, J Chemom, 24, 168-187 (2010). DOI: 10.1002/cem.1310, which discusses important limits of resampling validation, namely, that it cannot be used to measure error caused by (instrumental) drift. update: @alfa asks about time complexity: time complexity is linear in the number of surrogate models. The bootstrap is said to be somewhat more efficient (i.e. fewer iterations needed than for cross validation), so it may have a slight edge here. I don't think this matters in practice (at least for my data, as the variance uncertainty due to having few test cases is the limiting factor for my applications). For linear models, leave-one-out estimators can be calculated using the "hat matrix". This means that the LOO estimator can be computed from the fit of all data points, without refitting $n$ surrogate models. Approximations to this are known for some further models. BUT a) this is possible only if each row in the data set is an independent case, and b) the problem that you cannot iterate/repeat, and thus can neither check model stability nor reduce the impact of the associated variance, is not solved by that approach. choice of $k$: Choice of K in K-fold cross-validation
How to evaluate/select cross validation method? There have been a bunch of similar questions; please browse through the threads on [cross-validation], e.g. Cross-validation or bootstrapping to evaluate classification performance? Here's the gist:
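As an aside to the hat-matrix remark in the answer above: for an ordinary linear model the leave-one-out residuals follow directly from the diagonal of the hat matrix, so no refitting is needed. A minimal R sketch with made-up data and only base-R functions:
# LOO residuals for a linear model via the hat matrix: e_i / (1 - h_ii)
set.seed(1)
x <- rnorm(30); y <- 1 + 2 * x + rnorm(30)
fit <- lm(y ~ x)
h   <- hatvalues(fit)                 # diagonal of the hat matrix
loo <- residuals(fit) / (1 - h)       # LOO residuals without refitting
# brute-force check: refit n times, predict the held-out point each time
brute <- sapply(1:30, function(i)
  y[i] - predict(lm(y ~ x, subset = -i), newdata = data.frame(x = x[i])))
all.equal(unname(loo), brute)         # TRUE: identical up to rounding
Note that, exactly as the answer says, this shortcut gives the LOO point estimate but cannot be iterated, so it tells you nothing about model stability.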
28,916
How to evaluate/select cross validation method?
If you have few samples, one approach you can take is leave-one-out. You would definitely need to combine it with some sort of resampling technique like bootstrap or jackknife, in order to have a sense of the stability of the results. If you have enough data then you can go for K-fold. The choice of K depends on the stability of the results. If results are stable across the K folds you are fine. The problems begin when you don't have enough data for training in each of the K folds, or when there is too much noise, etc. If you have a LOT of samples you can simply split into training and test sets by some proportion (e.g. 70/30). Then it is a matter of choosing the split wisely (e.g. randomly, or with respect to timestamps if that's relevant, etc.). In practice it may be hard to train e.g. 5 times when each training run can take days. That said, in all cases, if you want to do a proper evaluation you should make three splits, i.e. training/validation/test.
How to evaluate/select cross validation method?
If you have few samples, one approach you can take is leave-one-out. You would definitely need to combine it with some sort of resampling technique like bootstrap or jackknife, in order to have
How to evaluate/select cross validation method? If you have few samples, one approach you can take is leave-one-out. You would definitely need to combine it with some sort of resampling technique like bootstrap or jackknife, in order to have a sense of the stability of the results. If you have enough data then you can go for K-fold. The choice of K depends on the stability of the results. If results are stable across the K folds you are fine. The problems begin when you don't have enough data for training in each of the K folds, or when there is too much noise, etc. If you have a LOT of samples you can simply split into training and test sets by some proportion (e.g. 70/30). Then it is a matter of choosing the split wisely (e.g. randomly, or with respect to timestamps if that's relevant, etc.). In practice it may be hard to train e.g. 5 times when each training run can take days. That said, in all cases, if you want to do a proper evaluation you should make three splits, i.e. training/validation/test.
How to evaluate/select cross validation method? If you have few samples, one approach you can take is leave-one-out. You would definitely need to combine it with some sort of resampling technique like bootstrap or jackknife, in order to have
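To make the splits in this answer concrete, here is a minimal base-R sketch (hypothetical n and proportions) of a 70/30 train/test split plus a k-fold assignment within the training part:
set.seed(42)
n   <- 1000
idx <- sample(n)                                      # random permutation of row indices
train <- idx[1:round(0.7 * n)]                        # 70% for training
test  <- idx[(round(0.7 * n) + 1):n]                  # 30% held out for the final test
k    <- 5
fold <- sample(rep(1:k, length.out = length(train)))  # k-fold labels for training rows
val_rows   <- train[fold == 1]                        # fold 1 plays validation set once
build_rows <- train[fold != 1]                        # remaining folds are used for fitting
Rotating fold from 1 to k gives the k train/validation pairs; the test rows are touched only once, at the very end.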
28,917
How to choose the split in Random forest for categorical predictors (features)?
The usual vanilla implementation tries all possible combinations of your categories. It expresses these combinations as an integer which represents which categories are selected and which are left out at the split. It goes from left to right. For example if you have a variable with the classes "Cat", "Dog", "Cow", "Rat" it would sweep through possible splits, meaning something like: Dog vs the rest = 0100 (remember, read from left to right) Cat vs the rest = 1000 By themselves, but also Dog and Cat vs Cow and Rat = 1100 Cow and Cat vs Dog and Rat = 1010 And then, as mentioned, it uses integers to handle this, to represent the split: library(R.utils) > intToBin(12) [1] "1100"
How to choose the split in Random forest for categorical predictors (features)?
The usual vanilla implementation tries all possible combinations of your categories. It expresses these combinations as an integer which represents which categories are selected and which are left out
How to choose the split in Random forest for categorical predictors (features)? The usual vanilla implementation tries all possible combinations of your categories. It expresses these combinations as an integer which represents which categories are selected and which are left out at the split. It goes from left to right. For example if you have a variable with the classes "Cat", "Dog", "Cow", "Rat" it would sweep through possible splits, meaning something like: Dog vs the rest = 0100 (remember, read from left to right) Cat vs the rest = 1000 By themselves, but also Dog and Cat vs Cow and Rat = 1100 Cow and Cat vs Dog and Rat = 1010 And then, as mentioned, it uses integers to handle this, to represent the split: library(R.utils) > intToBin(12) [1] "1100"
How to choose the split in Random forest for categorical predictors (features)? The usual vanilla implementation tries all possible combinations of your categories. It expresses these combinations as an integer which represents which categories are selected and which are left out
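A small base-R sketch of the integer-coded sweep this answer describes (illustrative only: here the lowest bit corresponds to the first category, whereas the answer prints the string left to right, and a real implementation would additionally score each assignment by its impurity decrease):
cats <- c("Cat", "Dog", "Cow", "Rat")
m <- length(cats)
for (s in 1:(2^(m - 1) - 1)) {                 # fixing one category avoids mirror-image splits
  left <- cats[bitwAnd(s, 2^(0:(m - 1))) > 0]  # decode the bits of s into a subset
  cat(sprintf("%2d: %s vs %s\n", s,
              paste(left, collapse = " + "),
              paste(setdiff(cats, left), collapse = " + ")))
}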
28,918
How to choose the split in Random forest for categorical predictors (features)?
Forest is an ensemble method of trees. So I think your question is really about the algorithm trees use to split on variables. There are two kinds of categorical predictors: ordered factors and unordered factors. An ordered factor is treated like a numeric variable and the random forest will find a cut point, while an unordered factor is handled by another algorithm, as follows. It tries taking the first level of the factor out as the split, fits the model, and evaluates the performance with the loss function. Then it tries the second level, fits again, evaluates the performance, and so on. In the end, it picks the best combination of splitting levels according to the best performance. So you will find that tree or random forest models take much longer (and much more memory) to fit factors than numeric variables.
How to choose the split in Random forest for categorical predictors (features)?
Forest is an ensemble method of trees. So I think your question is really about the algorithm trees use to split on variables. There are two kinds of categorical predictors: ordered factors and uno
How to choose the split in Random forest for categorical predictors (features)? Forest is an ensemble method of trees. So I think your question is really about the algorithm trees use to split on variables. There are two kinds of categorical predictors: ordered factors and unordered factors. An ordered factor is treated like a numeric variable and the random forest will find a cut point, while an unordered factor is handled by another algorithm, as follows. It tries taking the first level of the factor out as the split, fits the model, and evaluates the performance with the loss function. Then it tries the second level, fits again, evaluates the performance, and so on. In the end, it picks the best combination of splitting levels according to the best performance. So you will find that tree or random forest models take much longer (and much more memory) to fit factors than numeric variables.
How to choose the split in Random forest for categorical predictors (features)? Forest is an ensemble method of trees. So I think your question is really about the algorithm trees use to split on variables. There are two kinds of categorical predictors: ordered factors and uno
28,919
How to choose the split in Random forest for categorical predictors (features)?
If your features are categorical, the first idea that comes to my mind is to create a binary feature for every possible value in the category. Thus, if you have a feature corresponding to "mobile phone brand" which can only be "Samsung, Apple, HTC or Nokia", I would represent it with four binary indicators (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0) and (0, 0, 0, 1) respectively. This way the threshold will select between one brand and all the others at each split, without strange ordering effects. Hope this helps!
How to choose the split in Random forest for categorical predictors (features)?
If your features are categorical, the first idea that comes to my mind is to create a binary feature for every possible value in the category. Thus, if you have a feature corresponding to "mobile pho
How to choose the split in Random forest for categorical predictors (features)? If your features are categorical, the first idea that comes to my mind is to create a binary feature for every possible value in the category. Thus, if you have a feature corresponding to "mobile phone brand" which can only be "Samsung, Apple, HTC or Nokia", I would represent it with four binary indicators (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0) and (0, 0, 0, 1) respectively. This way the threshold will select between one brand and all the others at each split, without strange ordering effects. Hope this helps!
How to choose the split in Random forest for categorical predictors (features)? If your features are categorical, the first idea that comes to my mind is to create a binary feature for every possible value in the category. Thus, if you have a feature corresponding to "mobile pho
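In R, such indicator columns can be generated with model.matrix; a minimal sketch using the hypothetical brand variable from the answer:
brand <- factor(c("Samsung", "Apple", "HTC", "Nokia", "Apple"))
model.matrix(~ brand - 1)   # one 0/1 column per brand, no reference level dropped
Each row then contains a single 1, so any numeric threshold on one of these columns separates that brand from all the others.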
28,920
How to choose the split in Random forest for categorical predictors (features)?
Either choose some random categories and use the category which gives the best split, or choose some random combinations of categories and use the combination which gives the best split. I think it doesn't really matter which of the two methods you choose since splitting on a combination of categories at a single node can be simulated by splitting on a single category at multiple nodes.
How to choose the split in Random forest for categorical predictors (features)?
Either choose some random categories and use the category which gives the best split, or choose some random combinations of categories and use the combination which gives the best split. I think it do
How to choose the split in Random forest for categorical predictors (features)? Either choose some random categories and use the category which gives the best split, or choose some random combinations of categories and use the combination which gives the best split. I think it doesn't really matter which of the two methods you choose since splitting on a combination of categories at a single node can be simulated by splitting on a single category at multiple nodes.
How to choose the split in Random forest for categorical predictors (features)? Either choose some random categories and use the category which gives the best split, or choose some random combinations of categories and use the combination which gives the best split. I think it do
28,921
Choosing a classification performance metric for model selection, feature selection, and publication
It seems that k-fold cross-validation error is very sensitive to the type of performance measure. It also has an error in itself because the training and validation sets are chosen randomly. I think you've discovered the high variance of performance measures that are proportions of case counts such as $\frac{\text{# correct predictions}}{\text{# test cases}}$. You try to estimate e.g. the probability that your classifier returns a correct answer. From a statistics point of view, that is described as a Bernoulli trial, leading to a binomial distribution. You can calculate confidence intervals for binomial distributions and will find that they are very wide. This of course limits your ability to do model comparison. With resampling validation schemes such as cross validation you have an additional source of variation: the instability of your models (as you build $k$ surrogate models during each CV run). Moreover, changing the number of folds gives me different optimal parameter values. That is to be expected due to the variance. You may have an additional effect here: libSVM splits the data only once if you use their built-in cross validation for tuning. Due to the nature of SVMs, if you build the SVM with identical training data and slowly vary the parameters, you'll find that the support vectors (and consequently the accuracy) jump: as long as the SVM parameters are not too different, it will still choose the same support vectors. Only when the parameters are changed enough will suddenly different support vectors result. So evaluating the SVM parameter grid with exactly the same cross validation splits may hide variability which you would see between different runs. IMHO the basic problem is that you do a grid search, which is an optimization that relies on a reasonably smooth behaviour of your target functional (accuracy or whatever else you use). Due to the high variance of your performance measurements, this assumption is violated. The "jumpy" dependence of the SVM model also violates this assumption. Accuracy metrics for cross validation may be overly optimistic. Usually anything over a 2-fold cross-validation gives me 100% accuracy. Also, the error rate is discretized due to small sample size. Model selection will often give me the same error rate across all or most parameter values. That is to be expected given the general problems of the approach. However, usually it is possible to choose really extreme parameter values where the classifier breaks down. IMHO the parameter ranges where the SVMs work well are important information. In any case you absolutely need an external (double/nested) validation of the performance of the model you choose as 'best'. I'd probably do a number of runs/repetitions/iterations of an outer cross validation or an outer out-of-bootstrap validation and give the distribution of hyperparameters for the "best" model, the reported performance of the tuning, and the observed performance of the outer validation. The difference between the last two is an indicator of overfitting (e.g. due to "skimming" the variance). When writing a report, how would I know that a classification is 'good' or 'acceptable'? In the field, it seems like we don't have something like a goodness of fit or p-value threshold that is commonly accepted. Since I am adding to the data iteratively, I would like to know when to stop: what is a good N where the model does not significantly improve? (What are you adding? Cases or variates/features?)
First of all, if you do iterative modeling, you need to report that, due to your fitting procedure, your performance is not to be taken at face value as it is subject to an optimistic bias. The better alternative is to validate the final model. However, the test data for that must be independent of all data that ever went into training or into your decision process for the modeling (so you may not have any such data left).
Choosing a classification performance metric for model selection, feature selection, and publication
It seems that k-fold cross-validation error is very sensitive to the type of performance measure. It also has an error in itself because the training and validation sets are chosen randomly. I think
Choosing a classification performance metric for model selection, feature selection, and publication It seems that k-fold cross-validation error is very sensitive to the type of performance measure. It also has an error in itself because the training and validation sets are chosen randomly. I think you've discovered the high variance of performance measures that are proportions of case counts such as $\frac{\text{# correct predictions}}{\text{# test cases}}$. You try to estimate e.g. the probability that your classifier returns a correct answer. From a statistics point of view, that is described as a Bernoulli trial, leading to a binomial distribution. You can calculate confidence intervals for binomial distributions and will find that they are very wide. This of course limits your ability to do model comparison. With resampling validation schemes such as cross validation you have an additional source of variation: the instability of your models (as you build $k$ surrogate models during each CV run). Moreover, changing the number of folds gives me different optimal parameter values. That is to be expected due to the variance. You may have an additional effect here: libSVM splits the data only once if you use their built-in cross validation for tuning. Due to the nature of SVMs, if you build the SVM with identical training data and slowly vary the parameters, you'll find that the support vectors (and consequently the accuracy) jump: as long as the SVM parameters are not too different, it will still choose the same support vectors. Only when the parameters are changed enough will suddenly different support vectors result. So evaluating the SVM parameter grid with exactly the same cross validation splits may hide variability which you would see between different runs. IMHO the basic problem is that you do a grid search, which is an optimization that relies on a reasonably smooth behaviour of your target functional (accuracy or whatever else you use). Due to the high variance of your performance measurements, this assumption is violated. The "jumpy" dependence of the SVM model also violates this assumption. Accuracy metrics for cross validation may be overly optimistic. Usually anything over a 2-fold cross-validation gives me 100% accuracy. Also, the error rate is discretized due to small sample size. Model selection will often give me the same error rate across all or most parameter values. That is to be expected given the general problems of the approach. However, usually it is possible to choose really extreme parameter values where the classifier breaks down. IMHO the parameter ranges where the SVMs work well are important information. In any case you absolutely need an external (double/nested) validation of the performance of the model you choose as 'best'. I'd probably do a number of runs/repetitions/iterations of an outer cross validation or an outer out-of-bootstrap validation and give the distribution of hyperparameters for the "best" model, the reported performance of the tuning, and the observed performance of the outer validation. The difference between the last two is an indicator of overfitting (e.g. due to "skimming" the variance). When writing a report, how would I know that a classification is 'good' or 'acceptable'? In the field, it seems like we don't have something like a goodness of fit or p-value threshold that is commonly accepted. Since I am adding to the data iteratively, I would like to know when to stop: what is a good N where the model does not significantly improve? (What are you adding? Cases or variates/features?) First of all, if you do iterative modeling, you need to report that, due to your fitting procedure, your performance is not to be taken at face value as it is subject to an optimistic bias. The better alternative is to validate the final model. However, the test data for that must be independent of all data that ever went into training or into your decision process for the modeling (so you may not have any such data left).
Choosing a classification performance metric for model selection, feature selection, and publication It seems that k-fold cross-validation error is very sensitive to the type of performance measure. It also has an error in itself because the training and validation sets are chosen randomly. I think
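To see how wide the binomial confidence intervals mentioned in the answer above really are, a one-line check in R (hypothetical counts; exact Clopper-Pearson intervals):
binom.test(90, 100)$conf.int   # 90% accuracy on 100 test cases: CI roughly 0.82 to 0.95
binom.test(22, 25)$conf.int    # 88% accuracy on 25 test cases: CI roughly 0.69 to 0.97
With only 25 test cases the interval spans almost 30 percentage points, which is why accuracy differences between tuning candidates are largely noise at these sample sizes.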
28,922
Choosing a classification performance metric for model selection, feature selection, and publication
Simpler than BIR are the logarithmic and quadratic (Brier) scoring rules. These are proper scores that, unlike the proportion classified correctly, will not give rise to a bogus model upon optimization.
Choosing a classification performance metric for model selection, feature selection, and publication
Simpler than BIR are the logarithmic and quadratic (Brier) scoring rules. These are proper scores that, unlike the proportion classified correctly, will not give rise to a bogus model upon optimization
Choosing a classification performance metric for model selection, feature selection, and publication Simpler than BIR are the logarithmic and quadratic (Brier) scoring rules. These are proper scores that, unlike the proportion classified correctly, will not give rise to a bogus model upon optimization.
Choosing a classification performance metric for model selection, feature selection, and publication Simpler than BIR are the logarithmic and quadratic (Brier) scoring rules. These are proper scores that, unlike the proportion classified correctly, will not give rise to a bogus model upon optimization
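Both rules are one-liners; a minimal sketch with simulated labels and predicted probabilities (all names and parameter values are arbitrary):
set.seed(7)
y <- rbinom(200, 1, 0.3)                            # observed 0/1 outcomes
p <- runif(200, 0.01, 0.99)                         # predicted probabilities
brier <- mean((p - y)^2)                            # quadratic (Brier) score, lower is better
logsc <- -mean(y * log(p) + (1 - y) * log(1 - p))   # logarithmic score, lower is better
c(brier = brier, log = logsc)
Because both are proper scoring rules, they are minimized in expectation by reporting the true probabilities, which is exactly the property the proportion classified correctly lacks.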
28,923
Choosing a classification performance metric for model selection, feature selection, and publication
As you point out, predictive accuracy and AUC are limited in certain aspects. I would give the Bayesian Information Reward (BIR) a go, which should give a more sensitive assessment of how well or badly your classifier is doing and how that changes as you tweak your parameters (number of validation folds, etc.). The intuition of BIR is as follows: a bettor is rewarded not just for identifying the ultimate winners and losers (0's and 1's), but more importantly for identifying the appropriate odds. Furthermore, it goes a step further and compares all predictions with the prior probabilities. Let's say you have a list of 10 Arsenal (football team in England) games with possible outcomes: $Win$ or $Lose$. The per-game reward for binary classification is $1 - \frac{\log p}{\log p'}$ when the prediction is correct and $1 - \frac{\log(1-p)}{\log(1-p')}$ when it is incorrect, where $p$ is your model's prediction for a particular Arsenal game, and $p'$ is the prior probability of Arsenal winning a game. The catch is: if I know beforehand that $p'=0.6$, and my predictor model produced $p=0.6$, even if its prediction was correct it is rewarded 0 since it is not conveying any new information. As a note, you treat the correct and incorrect classifications differently, as shown in the equations. As a result, based on whether the prediction is correct or incorrect, the BIR for a single prediction can take a value in $(-\infty, 1]$. BIR is not limited to binary classifications but is generalised for multinomial classification problems as well.
Choosing a classification performance metric for model selection, feature selection, and publication
As you point out, predictive accuracy and AUC are limited in certain aspects. I would give the Bayesian Information Reward (BIR) a go, which should give a more sensitive assessment of how well or badl
Choosing a classification performance metric for model selection, feature selection, and publication As you point out, predictive accuracy and AUC are limited in certain aspects. I would give the Bayesian Information Reward (BIR) a go, which should give a more sensitive assessment of how well or badly your classifier is doing and how that changes as you tweak your parameters (number of validation folds, etc.). The intuition of BIR is as follows: a bettor is rewarded not just for identifying the ultimate winners and losers (0's and 1's), but more importantly for identifying the appropriate odds. Furthermore, it goes a step further and compares all predictions with the prior probabilities. Let's say you have a list of 10 Arsenal (football team in England) games with possible outcomes: $Win$ or $Lose$. The per-game reward for binary classification is $1 - \frac{\log p}{\log p'}$ when the prediction is correct and $1 - \frac{\log(1-p)}{\log(1-p')}$ when it is incorrect, where $p$ is your model's prediction for a particular Arsenal game, and $p'$ is the prior probability of Arsenal winning a game. The catch is: if I know beforehand that $p'=0.6$, and my predictor model produced $p=0.6$, even if its prediction was correct it is rewarded 0 since it is not conveying any new information. As a note, you treat the correct and incorrect classifications differently, as shown in the equations. As a result, based on whether the prediction is correct or incorrect, the BIR for a single prediction can take a value in $(-\infty, 1]$. BIR is not limited to binary classifications but is generalised for multinomial classification problems as well.
Choosing a classification performance metric for model selection, feature selection, and publication As you point out, predictive accuracy and AUC are limited in certain aspects. I would give the Bayesian Information Reward (BIR) a go, which should give a more sensitive assessment of how well or badl
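A hedged sketch of the per-prediction reward as reconstructed above (the formula was rebuilt from the constraints stated in the answer, so verify it against the original BIR literature, e.g. Hope & Korb, before relying on it; all names are made up):
bir <- function(p, won, prior) {
  # p: predicted P(win); won: observed 0/1 outcome; prior: p' in the text
  reward <- ifelse(won == 1,
                   1 - log(p) / log(prior),          # outcome occurred
                   1 - log(1 - p) / log(1 - prior))  # outcome did not occur
  mean(reward)
}
bir(p = c(0.6, 0.9, 0.2), won = c(1, 1, 0), prior = 0.6)  # the p = p' = 0.6 game scores 0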
28,924
Heteroskedasticity - residual plot interpretation
Concerning heteroscedasticity, you are interested in understanding how the vertical spread of the points varies with the fitted values. To do this, you must slice the plot into thin vertical sections, find the central elevation (y-value) in each section, evaluate the spread around that central value, then connect everything up. Here are some possible slices: Ordinarily this would be done using robust estimates of location and spread, such as a median and interquartile range. If we had the data, we might generate a wandering schematic plot. Data are difficult to extract numerically from a graphical image with overplotted points. However, in this case the vertical spreads tend to be compact, symmetrical, and without outliers, so we are safe using means and standard deviations instead--and these are easily computed using image processing software. In fact, what I did was to smear the dots horizontally and then compute the mean and variance of their locations for every vertical column of pixels in the image. (This processing will be a little inaccurate due to overplotting of some points, but it's not likely to bias the relative SDs much.) There is a definite wedge shape to the smeared points, narrowing from left to right. (Squinting at a graphic can sometimes help bring out such an overall gestalt impression of a scatterplot, provided it has many points.) The mean (shown below in blue) and the mean plus or minus a suitable multiple of the square root of the variance (in red and gold) will trace out the location and typical limits of the residuals. I chose a multiple designed to place about 5% of the points above the upper trace and another 5% below the lower trace. With practice you can see such traces by closely examining the plot itself--no calculations are necessary. Scanning across from left to right, estimate the middle of each vertical column of dots. Estimate their spread. Inflate your estimates of spread a little where there are relatively fewer dots--they haven't had a chance to show the full amount of their dispersion. At the same time, discount your estimates (that is, don't give them much credence) in areas where there are very few dots, because your estimates are highly uncertain there. Look for clear consistent patterns of changes in spread. In the preceding figure, the upper trace (red) and lower trace (gold) appear to draw a little closer together from left to right, as the fitted value increases. This can be made more apparent by plotting the standard deviation. The units don't matter, but the vertical axis should start at zero to give an accurate rendition of relative sizes of the spreads: This confirms the initial impression of a decreasing SD with increasing fitted value. Overall, the SD is halved as we scan from left to right. (The slight upward increase at the very right can be discounted since it is associated with few data points.) This is a classic form of heteroscedasticity: the spread changes systematically with the fitted value. The use of dummy variables in a multiple regression will not introduce heteroscedasticity. Often it will reduce it, by resolving overlapping groups of residuals into separate ones. Whether heteroscedasticity is actually a problem depends on the purpose of the analysis, the regression method employed, what information is being extracted from the results, and the nature of the data.
Heteroskedasticity - residual plot interpretation
Concerning heteroscedasticity, you are interested in understanding how the vertical spread of the points varies with the fitted values. To do this, you must slice the plot into thin vertical sections
Heteroskedasticity - residual plot interpretation Concerning heteroscedasticity, you are interested in understanding how the vertical spread of the points varies with the fitted values. To do this, you must slice the plot into thin vertical sections, find the central elevation (y-value) in each section, evaluate the spread around that central value, then connect everything up. Here are some possible slices: Ordinarily this would be done using robust estimates of location and spread, such as a median and interquartile range. If we had the data, we might generate a wandering schematic plot. Data are difficult to extract numerically from a graphical image with overplotted points. However, in this case the vertical spreads tend to be compact, symmetrical, and without outliers, so we are safe using means and standard deviations instead--and these are easily computed using image processing software. In fact, what I did was to smear the dots horizontally and then compute the mean and variance of their locations for every vertical column of pixels in the image. (This processing will be a little inaccurate due to overplotting of some points, but it's not likely to bias the relative SDs much.) There is a definite wedge shape to the smeared points, narrowing from left to right. (Squinting at a graphic can sometimes help bring out such an overall gestalt impression of a scatterplot, provided it has many points.) The mean (shown below in blue) and the mean plus or minus a suitable multiple of the square root of the variance (in red and gold) will trace out the location and typical limits of the residuals. I chose a multiple designed to place about 5% of the points above the upper trace and another 5% below the lower trace. With practice you can see such traces by closely examining the plot itself--no calculations are necessary. Scanning across from left to right, estimate the middle of each vertical column of dots. Estimate their spread. Inflate your estimates of spread a little where there are relatively fewer dots--they haven't had a chance to show the full amount of their dispersion. At the same time, discount your estimates (that is, don't give them much credence) in areas where there are very few dots, because your estimates are highly uncertain there. Look for clear consistent patterns of changes in spread. In the preceding figure, the upper trace (red) and lower trace (gold) appear to draw a little closer together from left to right, as the fitted value increases. This can be made more apparent by plotting the standard deviation. The units don't matter, but the vertical axis should start at zero to give an accurate rendition of relative sizes of the spreads: This confirms the initial impression of a decreasing SD with increasing fitted value. Overall, the SD is halved as we scan from left to right. (The slight upward increase at the very right can be discounted since it is associated with few data points.) This is a classic form of heteroscedasticity: the spread changes systematically with the fitted value. The use of dummy variables in a multiple regression will not introduce heteroscedasticity. Often it will reduce it, by resolving overlapping groups of residuals into separate ones. Whether heteroscedasticity is actually a problem depends on the purpose of the analysis, the regression method employed, what information is being extracted from the results, and the nature of the data.
Heteroskedasticity - residual plot interpretation Concerning heteroscedasticity, you are interested in understanding how the vertical spread of the points varies with the fitted values. To do this, you must slice the plot into thin vertical sections
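The slicing procedure whuber describes is easy to automate when the residuals and fitted values are available; a minimal sketch on simulated wedge-shaped data (all parameter values invented for illustration):
set.seed(3)
fitted_vals <- runif(500, 10, 30)
resid_vals  <- rnorm(500, sd = 6 - 0.15 * fitted_vals)  # spread shrinks from left to right
slice  <- cut(fitted_vals, breaks = 10)                 # thin vertical sections
spread <- tapply(resid_vals, slice, sd)                 # residual SD within each slice
plot(spread, type = "b", ylim = c(0, max(spread)),
     xlab = "slice (left to right)", ylab = "residual SD")
A clear monotone trend in this plot corresponds to the narrowing wedge described in the answer; starting the vertical axis at zero keeps the relative sizes of the spreads honest, as the answer recommends.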
28,925
Heteroskedasticity - residual plot interpretation
There is no doubt that these plots indicate heteroscedasticity. If an exact test is needed, my recent study provides one: "A new test to detect monotonic and non-monotonic types of heteroscedasticity", Journal of Applied Statistics, 2016.
Heteroskedasticity - residual plot interpretation
There is no doubt that these plots indicate heteroscedasticity. If an exact test is needed, my recent study provides one: "A new test to detect monotonic and non-monotonic types of heteroscedastic
Heteroskedasticity - residual plot interpretation There is no doubt that these plots indicate heteroscedasticity. If an exact test is needed, my recent study provides one: "A new test to detect monotonic and non-monotonic types of heteroscedasticity", Journal of Applied Statistics, 2016.
Heteroskedasticity - residual plot interpretation There is no doubt that these plots indicate heteroscedasticity. If an exact test is needed, my recent study provides one: "A new test to detect monotonic and non-monotonic types of heteroscedastic
28,926
Recommendations for mathematical multivariate statistics with exercises
Anderson is probably the most mathematical of the existing textbooks, and as such is orthogonal to the material in HTF. So this could be a good fit for you. I wrote an Amazon list on multivariate books when I was looking for one to teach both an elective doctorate course, and an applied master's course. Mardia, Kent and Bibby and Rencher top that list, so either one could be a decent alternative if Anderson proves to be too heavy.
Recommendations for mathematical multivariate statistics with exercises
Anderson is probably the most mathematical of the existing textbooks, and as such is orthogonal to the material in HTF. So this could be a good fit for you. I wrote an Amazon list on multivariate book
Recommendations for mathematical multivariate statistics with exercises Anderson is probably the most mathematical of the existing textbooks, and as such is orthogonal to the material in HTF. So this could be a good fit for you. I wrote an Amazon list on multivariate books when I was looking for one to teach both an elective doctorate course, and an applied master's course. Mardia, Kent and Bibby and Rencher top that list, so either one could be a decent alternative if Anderson proves to be too heavy.
Recommendations for mathematical multivariate statistics with exercises Anderson is probably the most mathematical of the existing textbooks, and as such is orthogonal to the material in HTF. So this could be a good fit for you. I wrote an Amazon list on multivariate book
28,927
Recommendations for mathematical multivariate statistics with exercises
On the other hand, if you want something still more theoretical than Anderson's book, you can go with Muirhead: "Aspects of Multivariate Statistical Theory". This is if you really want to enter the theory of the Wishart distribution. A book that focuses on the mathematics behind multivariate statistics is Farrell: "Multivariate calculation: use of the continuous groups" which I also have found useful.
Recommendations for mathematical multivariate statistics with exercises
On the other hand, if you want something still more theoretical than Anderson's book, you can go with Muirhead: "Aspects of Multivariate Statistical Theory". This is if you really want to enter the t
Recommendations for mathematical multivariate statistics with exercises On the other hand, if you want something still more theoretical than Anderson's book, you can go with Muirhead: "Aspects of Multivariate Statistical Theory". This is if you really want to enter the theory of the Wishart distribution. A book that focuses on the mathematics behind multivariate statistics is Farrell: "Multivariate calculation: use of the continuous groups" which I also have found useful.
Recommendations for mathematical multivariate statistics with exercises On the other hand, if you want something still more theoretical than Anderson's book, you can go with Muirhead: "Aspects of Multivariate Statistical Theory". This is if you really want to enter the t
28,928
Recommendations for mathematical multivariate statistics with exercises
My graduate program at UCSB taught with An Introduction to Multivariate Analysis, and it is a very good book for learning the basic tools; however, it is not overly theoretical or filled with too many proofs (I always felt like it was an undergrad text). That said, I do think it's an excellent book for solidifying the concepts, and the exercises are very good at actually allowing you to use the methods/concepts as if you were applying them to the real world and not just deriving theory. Just my two cents.
Recommendations for mathematical multivariate statistics with exercises
My graduate program at UCSB taught with An Introduction to Multivariate Analysis, and it is a very good book for learning the basic tools; however, it is not overly theoretical or filled with too many p
Recommendations for mathematical multivariate statistics with exercises My graduate program at UCSB taught with An Introduction to Multivariate Analysis, and it is a very good book for learning the basic tools; however, it is not overly theoretical or filled with too many proofs (I always felt like it was an undergrad text). That said, I do think it's an excellent book for solidifying the concepts, and the exercises are very good at actually allowing you to use the methods/concepts as if you were applying them to the real world and not just deriving theory. Just my two cents.
Recommendations for mathematical multivariate statistics with exercises My graduate program at UCSB taught with An Introduction to Multivariate Analysis, and it is a very good book for learning the basic tools; however, it is not overly theoretical or filled with too many p
28,929
Intuition for recursive least squares
It is roughly reminiscent of a Kalman Filter (where the "state variable" is the LS-estimator), and in any case is a weighted average (and possibly a convex combination) of past estimation and current data (and in that it is an adaptive estimator). I will use a hat to denote the estimator. Re-write the basic equation $$\hat \beta_t = \hat \beta_{t-1} +\frac{1}{t}R_t^{-1}x_t'(y_t-x_t\hat \beta_{t-1}) $$ as $$\hat \beta_t = \left(1- \frac{1}{t}R_t^{-1}x_t'x_t\right) \hat \beta_{t-1} +\frac{1}{t}R_t^{-1}x_t'y_t $$ To use standard Kalman filter notation, define $$F_t = \left(1- \frac{1}{t}R_t^{-1}x_t'x_t\right)$$ Then you arrive at $$\hat \beta_t = F_t \hat \beta_{t-1} +(1-F_t)y_t $$ If $F_t$ lies in $(0,1)$ then this weighted average becomes a convex combination, and hence exhibits exponential smoothing with variable smoothing factor. Whatever you call it, this is highly intuitive: I give some weight to my previous result, and some weight to new data. And the intuition doesn't stop there. Re-write the second equation as $$R_t = \left(1-\frac{1}{t}\right)R_{t-1}+\frac{1}{t}x_t'x_t$$ This is always a convex combination of past data and current data, with more weight given to past data as it accumulates (i.e. as $t$ increases). REFERENCES RLS is a stochastic approximation algorithm, the seminal paper about which is Ljung, L. (1977). Analysis of recursive stochastic algorithms. Automatic Control, IEEE Transactions on, 22(4), 551-575. Recursive Least Squares has seen extensive use in the context of the Adaptive Learning literature in the Economics discipline. A clear exposition on the mechanics of the matter and the relation with recursive stochastic algorithms can be found in ch. 6 of Evans, G. W., Honkapohja, S. (2001). Learning and Expectations in Macroeconomics. Princeton University Press.
Intuition for recursive least squares
It is roughly reminiscent of a Kalman Filter (where the "state variable" is the LS-estimator), and in any case is a weighted average (and possibly a convex combination) of past estimation and current
Intuition for recursive least squares It is roughly reminiscent of a Kalman Filter (where the "state variable" is the LS-estimator), and in any case is a weighted average (and possibly a convex combination) of past estimation and current data (and in that it is an adaptive estimator). I will use a hat to denote the estimator. Re-write the basic equation $$\hat \beta_t = \hat \beta_{t-1} +\frac{1}{t}R_t^{-1}x_t'(y_t-x_t\hat \beta_{t-1}) $$ as $$\hat \beta_t = \left(1- \frac{1}{t}R_t^{-1}x_t'x_t\right) \hat \beta_{t-1} +\frac{1}{t}R_t^{-1}x_t'y_t $$ To use standard Kalman filter notation, define $$F_t = \left(1- \frac{1}{t}R_t^{-1}x_t'x_t\right)$$ Then you arrive at $$\hat \beta_t = F_t \hat \beta_{t-1} +(1-F_t)y_t $$ If $F_t$ lies in $(0,1)$ then this weighted average becomes a convex combination, and hence exhibits exponential smoothing with variable smoothing factor. Whatever you call it, this is highly intuitive: I give some weight to my previous result, and some weight to new data. And the intuition doesn't stop there. Re-write the second equation as $$R_t = \left(1-\frac{1}{t}\right)R_{t-1}+\frac{1}{t}x_t'x_t$$ This is always a convex combination of past data and current data, with more weight given to past data as it accumulates (i.e. as $t$ increases). REFERENCES RLS is a stochastic approximation algorithm, the seminal paper about which is Ljung, L. (1977). Analysis of recursive stochastic algorithms. Automatic Control, IEEE Transactions on, 22(4), 551-575. Recursive Least Squares has seen extensive use in the context of the Adaptive Learning literature in the Economics discipline. A clear exposition on the mechanics of the matter and the relation with recursive stochastic algorithms can be found in ch. 6 of Evans, G. W., Honkapohja, S. (2001). Learning and Expectations in Macroeconomics. Princeton University Press.
Intuition for recursive least squares It is roughly reminiscent of a Kalman Filter (where the "state variable" is the LS-estimator), and in any case is a weighted average (and possibly a convex combination) of past estimation and current
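A minimal R sketch of the two recursions above (x is treated as the column vector $x_t'$; seeding $R$ with the first observation is an ad-hoc choice of mine to keep the early updates invertible):
rls_step <- function(beta, R, x, y, t) {
  R    <- (1 - 1/t) * R + (1/t) * tcrossprod(x)                       # R_t recursion
  beta <- beta + (1/t) * solve(R, x) * drop(y - crossprod(x, beta))   # beta_t recursion
  list(beta = beta, R = R)
}
set.seed(2)
beta_true <- c(1, -2)
x1 <- c(1, rnorm(1))
state <- list(beta = c(0, 0), R = tcrossprod(x1))   # seed R with x_1' x_1
for (t in 2:500) {
  x <- c(1, rnorm(1))
  y <- sum(x * beta_true) + rnorm(1)
  state <- rls_step(state$beta, state$R, x, y, t)
}
state$beta   # close to beta_true: old estimate and new data are blended at each step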
28,930
Intuition for recursive least squares
I thought I'd add my intuition for it a few months later. First of all, Alecos' intuition is great in qualitative terms! My post is more of a mathematical intuition (as in, how one would rediscover the formulation of recursive least squares, using only linear algebra). Denote by $\hat{Y}_t=X_t\beta_t$ the vector (in $\mathbb{R}^t$) of fitted values of $y$. Since the least squares procedure is a projection, we have that $\langle \hat{Y}_t, X^i_t \rangle = \langle Y_t, X^i_t \rangle$ for $i\in\{1, \ldots, k\}$ and $\hat{Y}_t\in span\{X^1_t, \ldots, X^k_t\}$ ($\langle \cdot, \cdot \rangle$ denotes the inner product of $\mathbb{R}^t$). Similarly, we have $\langle \hat{Y}_{t-1}, X^i_{t-1} \rangle = \langle Y_{t-1}, X^i_{t-1} \rangle$ for $i\in\{1, \ldots, k\}$ and $\hat{Y}_{t-1}\in span\{X^1_{t-1}, \ldots, X^k_{t-1}\}$ ($\langle \cdot, \cdot \rangle$ denotes the inner product of $\mathbb{R}^{t-1}$). We want to relate the two inner products somehow. From the definition of the inner product, it is clear that $\langle Y_t, X^i_t\rangle = \langle Y_{t-1}, X^i_{t-1}\rangle + x^i_t y_t$. Therefore, we get $\langle \hat{Y}_t, X_t^i\rangle = \langle \hat{Y}_{t-1}, X^i_{t-1}\rangle + x^i_t y_t$. This, I think, is the essential formula. To express everything in terms of the betas, we substitute in $X_t\beta_t$ for $\hat{Y}_t$ and bear in mind that $X_t = (X_t^1|\ldots| X_t^k)$ and get $X_t'X_t\beta_t = X_{t-1}'X_{t-1}\beta_{t-1} + x_t'y_t$. Now we are led to want to relate $X_t'X_t$ to $X_{t-1}'X_{t-1}$. This is easy: $X_t'X_t = X_{t-1}'X_{t-1} + x_t'x_t$. Therefore, we get $\beta_t = \beta_{t-1} + (X_t'X_t)^{-1}x_t'(y_t-x_t\beta_{t-1})$ (with several intuitions, see Alecos' answer). We conclude that we have a recursive formulation: $\beta_t = \beta_{t-1} + (X_t'X_t)^{-1}x_t'(y_t-x_t\beta_{t-1})$. $X_t'X_t = X_{t-1}'X_{t-1} + x_t'x_t$ The super crisp geometric intuition (why the formula is obvious without computation) is still evading me. Maybe I will return in another six months.
Intuition for recursive least squares
I thought I'd add my intuition for it a few months later. First of all, Alecos' intuition is great in qualitative terms! My post is more of a mathematical intuition (as in, how one would rediscover th
Intuition for recursive least squares I thought I'd add my intuition for it a few months later. First of all, Alecos' intuition is great in qualitative terms! My post is more of a mathematical intuition (as in, how one would rediscover the formulation of recursive least squares, using only linear algebra). Denote by $\hat{Y}_t=X_t\beta_t$ the vector (in $\mathbb{R}^t$) of fitted values of $y$. Since the least squares procedure is a projection, we have that $\langle \hat{Y}_t, X^i_t \rangle = \langle Y_t, X^i_t \rangle$ for $i\in\{1, \ldots, k\}$ and $\hat{Y}_t\in span\{X^1_t, \ldots, X^k_t\}$ ($\langle \cdot, \cdot \rangle$ denotes the inner product of $\mathbb{R}^t$). Similarly, we have $\langle \hat{Y}_{t-1}, X^i_{t-1} \rangle = \langle Y_{t-1}, X^i_{t-1} \rangle$ for $i\in\{1, \ldots, k\}$ and $\hat{Y}_{t-1}\in span\{X^1_{t-1}, \ldots, X^k_{t-1}\}$ ($\langle \cdot, \cdot \rangle$ denotes the inner product of $\mathbb{R}^{t-1}$). We want to relate the two inner products somehow. From the definition of the inner product, it is clear that $\langle Y_t, X^i_t\rangle = \langle Y_{t-1}, X^i_{t-1}\rangle + x^i_t y_t$. Therefore, we get $\langle \hat{Y}_t, X_t^i\rangle = \langle \hat{Y}_{t-1}, X^i_{t-1}\rangle + x^i_t y_t$. This, I think, is the essential formula. To express everything in terms of the betas, we substitute in $X_t\beta_t$ for $\hat{Y}_t$ and bear in mind that $X_t = (X_t^1|\ldots| X_t^k)$ and get $X_t'X_t\beta_t = X_{t-1}'X_{t-1}\beta_{t-1} + x_t'y_t$. Now we are led to want to relate $X_t'X_t$ to $X_{t-1}'X_{t-1}$. This is easy: $X_t'X_t = X_{t-1}'X_{t-1} + x_t'x_t$. Therefore, we get $\beta_t = \beta_{t-1} + (X_t'X_t)^{-1}x_t'(y_t-x_t\beta_{t-1})$ (with several intuitions, see Alecos' answer). We conclude that we have a recursive formulation: $\beta_t = \beta_{t-1} + (X_t'X_t)^{-1}x_t'(y_t-x_t\beta_{t-1})$. $X_t'X_t = X_{t-1}'X_{t-1} + x_t'x_t$ The super crisp geometric intuition (why the formula is obvious without computation) is still evading me. Maybe I will return in another six months.
Intuition for recursive least squares I thought I'd add my intuition for it a few months later. First of all, Alecos' intuition is great in qualitative terms! My post is more of a mathematical intuition (as in, how one would rediscover th
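The recursion at the end of this derivation is easy to verify numerically against a full refit; a short sketch with simulated data (dimensions and coefficients are arbitrary):
set.seed(1)
k <- 3; n_obs <- 50
X <- cbind(1, matrix(rnorm(n_obs * (k - 1)), n_obs))
y <- drop(X %*% c(1, 2, -1) + rnorm(n_obs))
XtX  <- crossprod(X[1:k, ])                        # start from the first k rows
beta <- solve(XtX, crossprod(X[1:k, ], y[1:k]))
for (t in (k + 1):n_obs) {
  xt   <- X[t, ]
  XtX  <- XtX + tcrossprod(xt)                     # X_t'X_t = X_{t-1}'X_{t-1} + x_t'x_t
  beta <- beta + solve(XtX, xt) * drop(y[t] - crossprod(xt, beta))
}
max(abs(beta - coef(lm(y ~ X - 1))))               # essentially zero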
28,931
How do I interpret the Mann-Whitney U when using R's formula interface
Technically, the reference category and the direction of the test depend on the way the factor variable is encoded. With your toy data:
> wilcox.test(x ~ y, data=data, alternative="greater")
        Wilcoxon rank sum test with continuity correction
data:  x by y
W = 52, p-value = 1
alternative hypothesis: true location shift is greater than 0
> wilcox.test(x ~ y, data=data, alternative="less")
        Wilcoxon rank sum test with continuity correction
data:  x by y
W = 52, p-value < 2.2e-16
alternative hypothesis: true location shift is less than 0
Notice that the W statistic is the same in both cases but the test uses opposite tails of its sampling distribution. Now let's look at the factor variable:
> levels(data$y)
[1] "A" "B"
We can recode it to make "B" the first level:
> data$y <- factor(data$y, levels=c("B", "A"))
Now we have:
> levels(data$y)
[1] "B" "A"
Note that we did not change the data themselves, just the way the categorical variable is encoded “under the hood”:
> head(data)
          x y
1 0.4395244 A
2 0.7698225 A
3 2.5587083 A
4 1.0705084 A
5 1.1292877 A
6 2.7150650 A
> aggregate(data$x, by=list(data$y), mean)
  Group.1        x
1       B 5.292817
2       A 1.034404
But the directions of the test are now inverted:
> wilcox.test(x ~ y, data=data, alternative="greater")
        Wilcoxon rank sum test with continuity correction
data:  x by y
W = 2448, p-value < 2.2e-16
alternative hypothesis: true location shift is greater than 0
The W statistic is different but the p-value is the same as for the alternative="less" test with the categories in the original order. With the original data, it could be interpreted as “the location shift from B to A is less than 0” and with the recoded data it becomes “the location shift from A to B is greater than 0”, but this is really the same hypothesis (but see Glen_b's comments to the question for the correct interpretation). In your case, it therefore seems that the test you want is alternative="less" (or, equivalently, alternative="greater" with the recoded data). Does that help?
How do I interpret the Mann-Whitney U when using R's formula interface
Technically, the reference category and the direction of the test depend on the way the factor variable is encoded. With your toy data: > wilcox.test(x ~ y, data=data, alternative="greater") Wilc
How do I interpret the Mann-Whitney U when using R's formula interface Technically, the reference category and the direction of the test depend on the way the factor variable is encoded. With your toy data:
> wilcox.test(x ~ y, data=data, alternative="greater")
        Wilcoxon rank sum test with continuity correction
data:  x by y
W = 52, p-value = 1
alternative hypothesis: true location shift is greater than 0
> wilcox.test(x ~ y, data=data, alternative="less")
        Wilcoxon rank sum test with continuity correction
data:  x by y
W = 52, p-value < 2.2e-16
alternative hypothesis: true location shift is less than 0
Notice that the W statistic is the same in both cases but the test uses opposite tails of its sampling distribution. Now let's look at the factor variable:
> levels(data$y)
[1] "A" "B"
We can recode it to make "B" the first level:
> data$y <- factor(data$y, levels=c("B", "A"))
Now we have:
> levels(data$y)
[1] "B" "A"
Note that we did not change the data themselves, just the way the categorical variable is encoded “under the hood”:
> head(data)
          x y
1 0.4395244 A
2 0.7698225 A
3 2.5587083 A
4 1.0705084 A
5 1.1292877 A
6 2.7150650 A
> aggregate(data$x, by=list(data$y), mean)
  Group.1        x
1       B 5.292817
2       A 1.034404
But the directions of the test are now inverted:
> wilcox.test(x ~ y, data=data, alternative="greater")
        Wilcoxon rank sum test with continuity correction
data:  x by y
W = 2448, p-value < 2.2e-16
alternative hypothesis: true location shift is greater than 0
The W statistic is different but the p-value is the same as for the alternative="less" test with the categories in the original order. With the original data, it could be interpreted as “the location shift from B to A is less than 0” and with the recoded data it becomes “the location shift from A to B is greater than 0”, but this is really the same hypothesis (but see Glen_b's comments to the question for the correct interpretation). In your case, it therefore seems that the test you want is alternative="less" (or, equivalently, alternative="greater" with the recoded data). Does that help?
How do I interpret the Mann-Whitney U when using R's formula interface Technically, the reference category and the direction of the test depend on the way the factor variable is encoded. With your toy data: > wilcox.test(x ~ y, data=data, alternative="greater") Wilc
28,932
Sum of Binomial and Poisson random variables
You will end up with two different formulas for $p_{X_1+X_2}(k)$, one for $0 \leq k < n$, and one for $k \geq n$. The easiest way of doing this problem is to compute the product of $\sum_{i=0}^n p_{X_1}(i)z^i$ and $\sum_{j=0}^{\infty}p_{X_2}(j)z^j$. Then, $p_{X_1+X_2}(k)$ is the coefficient of $z^k$ in the product. No simplification of the sums is possible.
Sum of Binomial and Poisson random variables
You will end up with two different formulas for $p_{X_1+X_2}(k)$, one for $0 \leq k < n$, and one for $k \geq n$. The easiest way of doing this problem is to compute the product of $\sum_{i=0}^n p_{X
Sum of Binomial and Poisson random variables You will end up with two different formulas for $p_{X_1+X_2}(k)$, one for $0 \leq k < n$, and one for $k \geq n$. The easiest way of doing this problem is to compute the product of $\sum_{i=0}^n p_{X_1}(i)z^i$ and $\sum_{j=0}^{\infty}p_{X_2}(j)z^j$. Then, $p_{X_1+X_2}(k)$ is the coefficient of $z^k$ in the product. No simplification of the sums is possible.
Sum of Binomial and Poisson random variables You will end up with two different formulas for $p_{X_1+X_2}(k)$, one for $0 \leq k < n$, and one for $k \geq n$. The easiest way of doing this problem is to compute the product of $\sum_{i=0}^n p_{X
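The coefficient extraction can be done mechanically in R with convolve, truncating the Poisson series far enough out that the discarded tail is negligible (parameter values below are arbitrary):
n <- 10; p <- 0.3; lambda <- 5; kmax <- 40
a <- dbinom(0:n, n, p)                      # binomial PGF coefficients
b <- dpois(0:kmax, lambda)                  # truncated Poisson PGF coefficients
pmf <- convolve(a, rev(b), type = "open")   # polynomial product: pmf[k + 1] = P(X1 + X2 = k)
sum(pmf)                                    # close to 1, up to the truncated tail
pmf[11]                                     # P(X1 + X2 = 10)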
28,933
Sum of Binomial and Poisson random variables
Dilip Sarwate stated 7 years ago that no simplification is possible, although this has been challenged in comments. However, I think it is useful to note that even without any simplification the computation is quite straightforward in any spreadsheet or programming language. Here is an implementation in R:
# example parameters
n <- 10
p <- .3
lambda <- 5
# probability for just a single value
x <- 10 # example value
sum(dbinom(0:x, n, p) * dpois(x:0, lambda))
# probability function for all values
x0 <- 0:30 # 0 to the maximum value of interest
x <- outer(x0, x0, "+")
db <- dbinom(x0, n, p)
dp <- dpois(x0, lambda)
dbp <- outer(db, dp)
aggregate(as.vector(dbp), by=list(as.vector(x)), sum)[1:(max(x0)+1),]
Sum of Binomial and Poisson random variables
Dilip Sarwate stated 7 years ago that no simplification is possible, although this has been challenged in comments. However, I think it is useful to note that even without any simplification the compu
Sum of Binomial and Poisson random variables Dilip Sarwate stated 7 years ago that no simplification is possible, although this has been challenged in comments. However, I think it is useful to note that even without any simplification the computation is quite straightforward in any spreadsheet or programming language. Here is an implementation in R:
# example parameters
n <- 10
p <- .3
lambda <- 5
# probability for just a single value
x <- 10 # example value
sum(dbinom(0:x, n, p) * dpois(x:0, lambda))
# probability function for all values
x0 <- 0:30 # 0 to the maximum value of interest
x <- outer(x0, x0, "+")
db <- dbinom(x0, n, p)
dp <- dpois(x0, lambda)
dbp <- outer(db, dp)
aggregate(as.vector(dbp), by=list(as.vector(x)), sum)[1:(max(x0)+1),]
Sum of Binomial and Poisson random variables Dilip Sarwate stated 7 years ago that no simplification is possible, although this has been challenged in comments. However, I think it is useful to note that even without any simplification the compu
28,934
Sum of Binomial and Poisson random variables
Giving the closed formula in terms of generalized hypergeometric functions (GHF) hinted at in other answers (the GHF in this case is really only a finite polynomial, so it is shorthand for the finite sum). I used Maple to sum the convolution, with this result: $$ \DeclareMathOperator{\P}{\mathbb{P}} \P(X_1+X_2=k)= \sum_{x_1=0}^{\min(n,k)} \binom{n}{x_1} p^{x_1}(1-p)^{n-x_1} e^{-\lambda} \frac{\lambda^{k-x_1}}{(k-x_1)!}= {\frac { \left( 1-p \right) ^{n}{{\rm e}^{-\lambda}}{\lambda}^{k}}{ \Gamma \left( k+1 \right) } {\mbox{$_2$F$_0$}(-k,-n;\,\ ;\,-{\frac {p}{ \left( p-1 \right) \lambda}})} } $$
Sum of Binomial and Poisson random variables
Giving the closed formula in terms of generalized hypergeometric functions (GHF) hinted at in other answers (the GHF in this case is really only a finite polynomial, so it is shorthand for the finite s
Sum of Binomial and Poisson random variables Giving the closed formula in terms of generalized hypergeometric functions (GHF) hinted at in other answers (the GHF in this case is really only a finite polynomial, so it is shorthand for the finite sum). I used Maple to sum the convolution, with this result: $$ \DeclareMathOperator{\P}{\mathbb{P}} \P(X_1+X_2=k)= \sum_{x_1=0}^{\min(n,k)} \binom{n}{x_1} p^{x_1}(1-p)^{n-x_1} e^{-\lambda} \frac{\lambda^{k-x_1}}{(k-x_1)!}= {\frac { \left( 1-p \right) ^{n}{{\rm e}^{-\lambda}}{\lambda}^{k}}{ \Gamma \left( k+1 \right) } {\mbox{$_2$F$_0$}(-k,-n;\,\ ;\,-{\frac {p}{ \left( p-1 \right) \lambda}})} } $$
Sum of Binomial and Poisson random variables Giving the closed formula in terms of generalized hypergeometric functions (GHF) hinted at in other answers (the GHF in this case is really only a finite polynomial, so it is shorthand for the finite s
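Since $_2$F$_0$(-k,-n;;z) terminates after min(k, n) + 1 terms, the closed form above can be checked against the direct convolution in a few lines of R (arbitrary parameter values; the series is written out by hand rather than via a special-functions package):
poch <- function(a, j) if (j == 0) 1 else prod(a + 0:(j - 1))   # rising factorial (a)_j
f20 <- function(k, n, z)                                        # terminating 2F0 series
  sum(sapply(0:min(k, n), function(j) poch(-k, j) * poch(-n, j) / factorial(j) * z^j))
n <- 10; p <- 0.3; lambda <- 5; k <- 7
closed <- (1 - p)^n * exp(-lambda) * lambda^k / gamma(k + 1) *
          f20(k, n, -p / ((p - 1) * lambda))
direct <- sum(dbinom(0:min(n, k), n, p) * dpois(k - (0:min(n, k)), lambda))
c(closed = closed, direct = direct)                             # the two agree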
28,935
Interpreting coefficients of an interaction between categorical and continuous variable
Your interpretation of the model’s coefficients is not completely accurate. Let me first summarize the terms of the model.

Categorical variables (factors): $race$, $sex$, and $educa$. The factor race has four levels: $race = \{white, black, mexican, multi/other\}$. The factor sex has two levels: $sex = \{male, female\}$. The factor educa has five levels: $educa = \{1, 2, 3, 4, 5\}$. By default, R uses treatment contrasts for categorical variables. In these contrasts, the first value of the factor is used as the reference level and the remaining values are tested against the reference. The maximum number of contrasts for a categorical variable equals the number of levels minus one. The contrasts for race allow testing the following differences: $race = black\ vs. race = white$, $race = mexican\ vs. race = white$, and $race = multi/other\ vs. race = white$. For the factor $educa$, the reference level is $1$; the pattern of contrasts is analogous. These effects can be interpreted as differences in the dependent variable. In your example, the mean value of cog is $13.8266$ units higher for $educa = 2$ compared to $educa = 1$ (as.factor(educa)2). One important note: if treatment contrasts for a categorical variable are present in a model, the estimation of further effects is based on the reference level of that categorical variable whenever interactions between those effects and the categorical variable are included too. If a variable is not part of an interaction, its coefficient corresponds to the average of the individual slopes of this variable across the levels of the remaining categorical variables. The effects of $race$ and $educa$ therefore correspond to average effects with respect to the factor levels of the other variables. To test overall effects of $race$, you would need to leave $educa$ and $sex$ out of the model.

Numeric variables: $lg\_hag$ and $pdg$. Both lg_hag and pdg are numeric variables, hence the coefficients represent the change in the dependent variable associated with an increase of $1$ in the predictor. In principle, the interpretation of these effects is straightforward. But note that if interactions are present, the estimation of the coefficients is based on the reference categories of the factors (if treatment contrasts are employed). Since $pdg$ is not part of an interaction, its coefficient corresponds to the average slope of the variable across the factor levels. The variable $lg\_hag$, however, is part of an interaction with $educa$. Therefore, its main effect holds for $educa = 1$, the base level; it is not a test of an overall influence of the numeric variable $lg\_hag$ irrespective of the levels of the factors.

Interactions between categorical and numeric variables: $lg\_hag \times educa$. The model does not only include main effects but also interactions between the numeric variable $lg\_hag$ and the four contrasts associated with $educa$. These effects can be interpreted as the difference in the slopes of $lg\_hag$ between a certain level of $educa$ and the reference level ($educa = 1$). For example, the coefficient of lg_hag:as.factor(educa)2 (-21.2224) means that the slope of $lg\_hag$ is $21.2224$ units lower for $educa = 2$ compared to $educa = 1$.
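To make the slope arithmetic concrete, here is a small simulated sketch (my own code with made-up data; it mirrors, but does not reproduce, the model in the question):

set.seed(1)
d <- data.frame(lg_hag = rnorm(500),
                educa  = factor(sample(1:5, 500, replace = TRUE)))
# true slope is 5 at the reference level, 5 - 3 = 2 for educa = 2
d$cog <- 10 + 5 * d$lg_hag - 3 * d$lg_hag * (d$educa == "2") + rnorm(500)

fit <- lm(cog ~ lg_hag * educa, data = d)
b <- coef(fit)

b["lg_hag"]                        # slope of lg_hag for educa = 1 (reference)
b["lg_hag"] + b["lg_hag:educa2"]   # slope of lg_hag for educa = 2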
28,936
Covariance of a linear and quadratic form of a multivariate normal
This is straightforward in the case you're interested in (${\boldsymbol \mu} = 0$) without using matrix algebra. To clarify the notation: ${\bf y} = (y_{1}, ..., y_{n})$ is a multivariate normal random vector, ${\bf a} = (a_{1}, ..., a_{n})$ is a vector of constants, and ${\bf H}$ is an $n \times n$ matrix with entries $\{ h_{jk} \}_{j,k=1}^{n}$. By definition (see e.g. page 3 here) you can re-write this covariance as $$ {\rm cov}({\bf a}'{\bf y}, {\bf y}' {\bf H} {\bf y}) = {\rm cov} \left( \sum_{i=1}^{n} a_i y_i, \sum_{j=1}^{n} \sum_{k=1}^{n} h_{jk} y_{j} y_{k} \right) = \sum_{i,j,k} {\rm cov}( a_i y_i, h_{jk} y_{j} y_{k} ) $$ where the second equality follows from bilinearity of covariance. When ${\boldsymbol \mu} = 0$, each term in the sum is $0$ because $${\rm cov}( a_i y_i, h_{jk} y_{j} y_{k} ) \propto E(y_i y_j y_k) - E(y_i) E(y_j y_k) = 0.$$ The second term is zero because $E(y_i) = 0$. The first term is zero because the third-order mean-centered moments of a multivariate normal random vector are 0; this can be seen more clearly by looking at each case:

- when $i,j,k$ are distinct, $E(y_i y_j y_k)=0$ by Isserlis' theorem;
- when $i\neq j = k$, we have $E(y_i y_j y_k) = E(y_i y_{j}^2)$. First we can deduce from here that $E(y_i | y_j=y) = y \cdot \Sigma_{ij}/\Sigma_{jj}$, so $E(y_{i} y_{j}^2 | y_{j} = y) = y^3 \cdot \Sigma_{ij}/\Sigma_{jj}$. Therefore, by the law of total expectation, $$E(y_i y_{j}^2) = E\big( E(y_{i} y_{j}^2 \mid y_{j}) \big) = E(y_{j}^3) \cdot \Sigma_{ij}/\Sigma_{jj} = 0,$$ where $E(y_{j}^3) = 0$ because $y_j$ being symmetrically distributed with mean 0 implies that $y_{j}^3$ is also symmetrically distributed with mean 0;
- when $i=j=k$, $E(y_i y_j y_k) = E(y_{i}^3) = 0$ by the same rationale just given.
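As a quick numerical sanity check of the $\mu = 0$ case (my addition, assuming the mvtnorm package; the other answer below covers the general-$\mu$ formula):

library(mvtnorm)
set.seed(1)
n <- 5
A <- matrix(rnorm(n^2), n); Sigma <- crossprod(A)   # a valid covariance matrix
H <- matrix(rnorm(n^2), n)
a <- rnorm(n)
y <- rmvnorm(2e5, mean = rep(0, n), sigma = Sigma)
lin  <- y %*% a                    # a'y for each draw
quad <- rowSums((y %*% H) * y)     # y'Hy for each draw
cov(lin, quad)                     # close to 0, as derived above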
28,937
Covariance of a linear and quadratic form of a multivariate normal
This can be proved with multivariate Stein's Lemma. Letting $x=a'y\sim \mathcal{N}(a'\mu,a'\Sigma a)$, we have that $Cov(x,y) = a'\Sigma$. Let $h(y) = y'Hy$. Stein's lemma then tells us $$ Cov(x,h(y)) = Cov(x,y)E\left[\nabla h(y)\right] = a'\Sigma E\left[\left(H + H'\right) y\right] = a'\Sigma \left(H + H'\right) \mu. $$ Some simulations:

library(mvtnorm)
p <- 9
nsim <- 100000
set.seed(1234)
mu <- rnorm(p)
X <- matrix(rnorm(3*p^2), ncol=p)
H <- matrix(rnorm(p^2), ncol=p)
Sigma <- cov(X)
a <- rnorm(p)
Y <- rmvnorm(nsim, mean=mu, sigma=Sigma)
Ya <- Y %*% a
YHY <- rowSums((Y %*% H) * Y)
# empirical
emp <- cov(Ya, YHY)
# theoretical
thr <- a %*% Sigma %*% (H + t(H)) %*% mu
cat('empirical: ', emp, ' theoretical: ', thr, ' \n')

empirical: -2.39 theoretical: -2.385
28,938
Relevancy of order statistics to the roll-and-keep dice mechanic?
The relationship is simple: $S = Z_{(x-y+1):x} + ... + Z_{x:x}$. This should make logical sense: the sum of the top $y$ dice is the highest die plus the second highest die, and so on down to the $y$th highest die. If you still believe $S \ne Z_{(x-y+1):x} + ... + Z_{x:x}$, try to exhibit a roll of the dice so that the sum of the top $y$ dice is not equal to the highest die plus the second highest, etc., plus the $y$th highest. Order statistics are only independent for trivial constant distributions. They are not independent here. Since the summands are not independent, you can't use TransformedDistribution in Mathematica: its documentation states that the random variables in a TransformedDistribution are assumed to be independent and to follow the given distributions. This is why the distribution you calculate for the right-hand side is not correct. Because of the dependence, you can't determine the distribution of the sum from the distributions of the summands alone. The same is true in much simpler cases. If $X_0$ and $X_1$ are both $1$ with probability $1/2$ and $0$ with probability $1/2$, then it is possible that $X_0 = 1-X_1$, so that $X_0 + X_1$ is the constant $1$. It is also possible that $X_0 = X_1$, so that $X_0+X_1$ is $0$ with probability $1/2$ and $2$ with probability $1/2$. It is also possible that $X_0$ and $X_1$ are independent, so that $X_0+X_1$ takes the values $0,1,2$ with probabilities $1/4,1/2,1/4$, respectively. Nevertheless, $S = Z_{(x-y+1):x} + ... + Z_{x:x}$. Expectation is linear regardless of whether the terms are independent, so $E(S) = E(Z_{(x-y+1):x}) + ... + E(Z_{x:x})$. I've thought about expressing the distribution of $S$, and I keep getting the same expression that techmologist did (with the corrected upper bound I edited in). Order statistics for discrete distributions are messy, so I don't expect to find a big simplification by using them.
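For readers who want to see this numerically, here is a small Monte Carlo sketch (my own code; $x = 5$ dice keeping $y = 3$ are made-up example values). It shows that the means agree by linearity of expectation while the distributions differ because of the dependence:

set.seed(1)
x <- 5; y <- 3; sides <- 6; nsim <- 1e5

# correct: sum the top y order statistics within each roll of x dice
rolls <- matrix(sample(sides, x * nsim, replace = TRUE), ncol = x)
S <- apply(rolls, 1, function(r) sum(sort(r, decreasing = TRUE)[1:y]))

# "wrong": draw each top order statistic from an independent set of rolls
S_indep <- rowSums(sapply(1:y, function(j)
  apply(matrix(sample(sides, x * nsim, replace = TRUE), ncol = x), 1,
        function(r) sort(r, decreasing = TRUE)[j])))

c(mean(S), mean(S_indep))   # nearly equal: expectations add regardless of dependence
c(var(S),  var(S_indep))    # clearly different: the joint distribution matters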
28,939
Response-distribution-dependent bias in random forest regression
It is exactly as you suspect -- the fact that leaf nodes contain means over some set of objects makes any regression-tree model tighten the response distribution and makes extrapolation impossible. Ensembling of course does not help with that, and in fact makes the situation worse. The naive solution (dangerous because of overfitting) is to wrap the model in some kind of classical regression that rescales the response toward its desired distribution. The better solution is one of the model-in-leaf tree models, for instance MOB in the party package. The idea here is that the partitioning of the feature space should end when the problem has been simplified not to a single value (as in a regular tree) but to a simple relation (say, a linear one) between the response and some predictors. Such a relation can then be resolved by fitting a simple model that won't distort the distribution or trim extreme values, and that is able to extrapolate.
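For illustration, here is a hedged sketch using lmtree() from the partykit package (the successor implementation of MOB; mob() in party works along the same lines), on simulated data with a leaf-dependent linear relation:

library(partykit)
set.seed(1)
d <- data.frame(x = runif(500, -2, 2), z = runif(500))
d$y <- ifelse(d$z < 0.5, 1 + 3 * d$x, 4 - 2 * d$x) + rnorm(500, sd = 0.3)

# partition on z, fit y ~ x by least squares within each leaf
fit <- lmtree(y ~ x | z, data = d)

# the leaf-level linear models can extrapolate in x,
# unlike the constant leaves of a regular regression tree or forest
predict(fit, newdata = data.frame(x = 5, z = 0.1))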
28,940
Response-distribution-dependent bias in random forest regression
I had exactly the same issue with conditional random forests accessed via the rattle package. I emailed Graham Williams (author of rattle) about it, who kindly forwarded my query to the cforest authors. They responded and suggested playing with two parameters that do not actually seem to be referenced anywhere in the cforest documentation, but which nonetheless seemed to address the problem, namely minsplit = 2 and minbucket = 1.
28,941
Response-distribution-dependent bias in random forest regression
You should estimate the optimal values of mtry and sampsize by minimizing the out-of-sample ("cross-validated") error over a grid of candidate mtry and sampsize values, for the response variable of interest and a fixed set of features, and only then draw conclusions from the results. You can create the grid of parameter combinations using expand.grid.
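A minimal sketch of that grid search (my assumption: a data frame train with response y exists, and I use the randomForest package's OOB error as the criterion; a proper k-fold CV loop would slot in the same way):

library(randomForest)
grid <- expand.grid(mtry = c(2, 4, 6), sampsize = c(100, 250, 500))

grid$oob_mse <- apply(grid, 1, function(g) {
  fit <- randomForest(y ~ ., data = train, mtry = g["mtry"],
                      sampsize = g["sampsize"], ntree = 500)
  tail(fit$mse, 1)   # OOB mean squared error after all trees
})
grid[which.min(grid$oob_mse), ]   # best parameter combination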
28,942
AUC in ordinal logistic regression
I only like the area under the ROC curve ($c$-index) because it happens to be a concordance probability. $c$ is a building block of rank correlation coefficients. For example, Somers' $D_{xy} = 2\times (c - \frac{1}{2})$. For ordinal $Y$, $D_{xy}$ is an excellent measure of predictive discrimination, and the R rms package provides easy ways to get bootstrap overfitting-corrected estimates of $D_{xy}$. You can backsolve for a generalized $c$-index (generalized AUROC). There are reasons not to consider each level of $Y$ separately because this does not exploit the ordinal nature of $Y$. In rms there are two functions for ordinal regression: lrm and orm, the latter handling continuous $Y$ and providing more distribution families (link functions) than proportional odds.
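As an illustration (my own sketch with simulated data, not from the original answer): lrm() reports the apparent Somers' $D_{xy}$, and validate() bootstraps the overfitting-corrected version; the generalized $c$-index is then $c = D_{xy}/2 + 1/2$.

library(rms)
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- cut(d$x1 + 0.5 * d$x2 + rnorm(200), breaks = 4, labels = FALSE)  # ordinal outcome

fit <- lrm(y ~ x1 + x2, data = d, x = TRUE, y = TRUE)  # proportional-odds model
fit$stats["Dxy"]        # apparent Dxy
validate(fit, B = 200)  # bootstrap overfitting-corrected indexes, incl. Dxy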
28,943
AUC in ordinal logistic regression
AUC for ordinal regression is somewhat tricky. You might want to calculate the AUC for each class by creating a dummy that takes the value 1 for the class whose AUC you are calculating and 0 for all of the other classes. If you have 4 classes, you will create 4 AUCs and can plot them on the same graph. The main problem with this method is that it penalizes every misclassification equally, whereas intuitively misclassifying class 1 as class 3 should be worse than misclassifying class 1 as class 2.
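A concrete version of this recipe using the pROC package (my own sketch; the factor y and numeric score pred are simulated stand-ins for a fitted model's output):

library(pROC)
set.seed(1)
y <- factor(sample(1:4, 300, replace = TRUE))   # ordinal classes
pred <- as.integer(y) + rnorm(300)              # stand-in model score

# one-vs-rest AUC per class, using 0/1 dummies as described above
aucs <- sapply(levels(y), function(cl) auc(roc(as.integer(y == cl), pred)))
aucs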
28,944
Error when running glmnet in multinomial [closed]
There is a subtle bug. What is happening is the following: In your artificial data set, the three group means are on a line, and with the relatively small standard deviation used, the three groups become linearly separable in your 10-dimensional space. As a consequence, all parameters related to the second group are estimated to 0 for all $\lambda$. Check

coef(glm)

Internally in cv.glmnet there is a call to predict to determine for each $\lambda$ the number of non-zero coefficients. Try

predict(glm, type = "nonzero")

The structure is, from reading the cv.glmnet code, supposed to be a list of lists, but the second entry in the list is NULL, and not a list! This causes the error. It happens in this block of code from cv.glmnet:

if (inherits(glmnet.object, "multnet")) {
    nz = predict(glmnet.object, type = "nonzero")
    nz = sapply(nz, function(x) sapply(x, length))
    nz = ceiling(apply(nz, 1, median))
}

The result returned from the two nested sapply calls is not a matrix as expected in the last call of apply. This generates the error. It might be very unlikely to run into the error in practice, but the code should of course be robust to extreme cases. You should report the problem to the maintainer, Trevor Hastie (his email is listed at the link).
28,945
Error when running glmnet in multinomial [closed]
First convert your predictor matrix (for example x, without the response) into a numeric matrix. After that, find the significant coefficient(s) contributing to the model by searching the colnames or rownames, according to how the variables are arranged in the data structure.
28,946
Is it possible to specify a lmer model without any fixed effects?
As @Mike Lawrence mentioned, the obvious thing to do when defining a model without fixed effects is something of the form lmer(y ~ -1 + (1|GroupIndicator)), which is actually quite straightforward; one defines no intercept or X matrix. The basic reason why this doesn't work out is that, as @maxTC pointed out, the "lme4 package is dedicated to mixed models only". In particular, what lmer() fitting does is calculate the profiled deviance by solving the penalized least squares regression between $\hat{y}$ and ${y}$ as well as between the spherical random effects $u$ and $0$ (Eq. (11), Ref. (2)). Computationally, this optimization procedure computes the Cholesky decomposition of the corresponding system, exploiting the system's block structure (Eq. (5), Ref. (1)). Setting no global fixed effects practically distorts that block structure in a way the code of lmer() can't cope with. Among other things, the conditional expected value of $u$ is based on the $\hat{\beta}$'s, but solving for $\hat{\beta}$ requires the solution of a matrix system that never existed (the matrix $R_{XX}$ in Ref. (1), or $L_X$ in Ref. (2)). So you get an error like:

Error in mer_finalize(ans) : Cholmod error 'invalid xtype' at file:../Cholesky/cholmod_solve.c, line 970

because, after all, there was nothing to solve for in the first place. Assuming you don't want to re-write lmer()'s profiled-deviance cost function, the easiest solution is based on the CS-101 axiom: garbage in, garbage out.

N <- length(y)
Garbage <- rnorm(N)
lmer(y ~ -1 + Garbage + (1|GroupIndicator))

So what we do is define a variable $Garbage$ that is just noise; as before, lmer() is instructed to use no fixed intercept but only the X matrix defined by us (in this case the single-column matrix Garbage). This extra Gaussian noise variable will in expectation be uncorrelated with our sample measurement errors as well as with your random-effects variance. Needless to say, the more structure your model has, the smaller the probability of getting unwanted but statistically significant random correlations. So lmer() has a placebo $X$ variable (matrix) to play with; in expectation the associated $\beta$ will be zero, and you didn't have to normalize your data in any way (centring, whitening, etc.). Trying a couple of random initializations of the placebo $X$ matrix probably won't hurt either. A final note on the "Garbage": using Gaussian noise wasn't accidental; among all random variables of equal variance it has the largest entropy, so it has the least chance of providing an information gain. Clearly, this is more a computational trick than a solution, but it allows the user to effectively specify an lmer model without global fixed effects. Apologies for hopping around the two references. In general I think Ref. (1) is the best bet for anyone wanting to understand what lmer() is doing, but Ref. (2) is closer to the spirit of the actual code.
Here's a bit of code showcasing the idea above:

library(lme4)
N <- 500                                       # number of samples
nlevA <- 25                                    # number of levels in the random effect
set.seed(0)                                    # set the seed
e <- rnorm(N); e <- (e - mean(e))/sd(e)        # some errors
GroupIndicator <- sample(nlevA, N, replace=T)  # random level classes
Q <- lmer(rnorm(N) ~ (1|GroupIndicator))       # dummy regression to get the matrix Zt easily
Z <- t(Q@Zt)                                   # Z matrix
RA <- rnorm(nlevA)                             # random normal vector
gammas <- c(3*RA/sd(RA))                       # colour this a bit
y <- as.vector(Z %*% gammas + e)               # measurements = measurement error (e) plus group-specific variance
lmer_native <- lmer(y ~ -1 + (1|GroupIndicator))            # no luck here
Garbage <- rnorm(N)                                         # prepare the garbage
lmer_fooled <- lmer(y ~ -1 + Garbage + (1|GroupIndicator))  # OK...
summary(lmer_fooled)                                        # hey, it sort of works!

References: (1) Linear mixed models and penalized least squares by D.M. Bates and S. DebRoy, Journal of Multivariate Analysis, Volume 91, Issue 1, October 2004 (link to preprint). (2) Computational methods for mixed models by Douglas Bates, June 2012 (link to source).
28,947
How to report asymmetrical confidence intervals of a proportion?
You should report the lower and upper intervals and also the method used to calculate the interval. It turns out that there is no 'right' way to calculate confidence intervals for proportions, but instead many competing methods, each with advantages and disadvantages. The lack of a universally correct method stands in contrast to many statistical things that you might put numbers to, like means and standard deviations. For your interval to be fully specified you have to say how you calculated it.
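For example, in R the same count data give noticeably different intervals depending on the method (base R functions, plus the binom package if it is installed):

x <- 8; n <- 10                        # 8 successes out of 10 trials

binom.test(x, n)$conf.int              # Clopper-Pearson ("exact") interval
prop.test(x, n)$conf.int               # Wilson score with continuity correction
binom::binom.confint(x, n, methods = "wilson")   # plain Wilson score interval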
28,948
What is a propensity weighting sampling / RIM?
You may know that weighting generally aims at ensuring that a given sample is representative of its target population. If in your sample some attributes (e.g., gender, SES, type of medication) are less well represented than in the population from which the sample comes, then we may adjust the weights of the incriminated statistical units to better reflect the hypothetical target population. RIM weighting (or raking) means that we equate the sample marginal distributions to the theoretical marginal distributions. It shares ideas with post-stratification, but allows one to account for many covariates. I found a good overview in this handout about Weighting Methods, and here is an example of its use in a real study: Raking Fire Data. Propensity weighting is used to compensate for unit non-response in a survey, for example by increasing the sampling weights of the respondents in the sample using estimates of the probabilities that they responded to the survey. This is in spirit the same idea as the use of propensity scores to adjust for treatment-selection bias in observational clinical studies: based on external information, we estimate the probability of patients being included in a given treatment group and compute weights based on factors hypothesized to influence treatment selection. Here are some pointers I found to go further: The propensity score and estimation in nonrandom surveys - an overview; A Simulation Study to Compare Weighting Methods for Nonresponses in the National Survey of Recent College Graduates; A Comparison of Propensity Score and Linear Regression Analysis of Complex Survey Data. As for a general reference, I would suggest Kalton G, Flores-Cervantes I. Weighting Methods. J. Off. Stat. (2003) 19: 81-97. Available at http://www.jos.nu/
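A hedged sketch of raking in R with the survey package (the data frame mysample, the variables, and the population margins are made up for illustration):

library(survey)
des <- svydesign(ids = ~1, data = mysample)   # unweighted design to start from

# known population margins, one data frame per raking variable
pop.sex <- data.frame(sex = c("F", "M"), Freq = c(5100, 4900))
pop.age <- data.frame(agegrp = c("18-34", "35-54", "55+"),
                      Freq = c(3000, 4000, 3000))

raked <- rake(des, sample.margins = list(~sex, ~agegrp),
              population.margins = list(pop.sex, pop.age))
svymean(~outcome, raked)   # estimates under the raked weights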
28,949
Does the variable order matter in linear regression [duplicate]
It surely does (actually, it even matters with regard to the assumptions on your data - you only make assumptions about the distribution of the outcome given the covariate). In this light, you might look up a term like "inverse prediction variance". Either way, linear regression says nothing about causation! At best, you can say something about causation through careful design.
28,950
Does the variable order matter in linear regression [duplicate]
To make the case symmetrical, one may regress the difference between the two variables ($\Delta x$) vs their average value.
28,951
Does the variable order matter in linear regression [duplicate]
Standard regression minimizes the vertical distance between the points and the line, so switching the 2 variables will instead minimize the horizontal distance (given the same scatterplot). Another option (which goes by several names) is to minimize the perpendicular distance; this can be done using principal components. Here is some R code that shows the differences:

library(MASS)
tmp <- mvrnorm(100, c(0,0), rbind(c(1,.9), c(.9,1)))
plot(tmp, asp=1)

fit1 <- lm(tmp[,1] ~ tmp[,2])   # horizontal residuals
segments(tmp[,1], tmp[,2], fitted(fit1), tmp[,2], col='blue')
o <- order(tmp[,2])
lines(fitted(fit1)[o], tmp[o,2], col='blue')

fit2 <- lm(tmp[,2] ~ tmp[,1])   # vertical residuals
segments(tmp[,1], tmp[,2], tmp[,1], fitted(fit2), col='green')
o <- order(tmp[,1])
lines(tmp[o,1], fitted(fit2)[o], col='green')

fit3 <- prcomp(tmp)             # perpendicular residuals via principal components
b <- -fit3$rotation[1,2]/fit3$rotation[2,2]
a <- fit3$center[2] - b*fit3$center[1]
abline(a, b, col='red')
segments(tmp[,1], tmp[,2],
         tmp[,1] - fit3$x[,2]*fit3$rotation[1,2],
         tmp[,2] - fit3$x[,2]*fit3$rotation[2,2], col='red')

legend('bottomright', legend=c('Horizontal','Vertical','Perpendicular'),
       lty=1, col=c('blue','green','red'))

To look for outliers you can just plot the results of the principal components analysis. You may also want to look at: Bland and Altman (1986), Statistical methods for assessing agreement between two methods of clinical measurement. Lancet, pp. 307-310.
28,952
Does the variable order matter in linear regression [duplicate]
Your x1 and x2 variables are collinear. In the presence of multicollinearity, your parameter estimates are still unbiased, but their variance is large, i.e., your inference on the significance of the parameter estimates is not valid, and your predictions will have large confidence intervals. Interpretation of the parameter estimates is also difficult. In the linear regression framework, the parameter estimate on x1 is the change in Y for a unit change in x1, given that every other exogenous variable in the model is held constant. In your case, x1 and x2 are highly correlated, and you cannot hold x2 constant while x1 is changing.
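A quick illustration (my own simulation) of how collinearity inflates the standard errors of the individual coefficients while the fit itself remains fine:

set.seed(1)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.05)   # nearly collinear with x1
y  <- 1 + 2 * x1 + rnorm(100)

summary(lm(y ~ x1 + x2))$coefficients   # huge SEs on x1 and x2
summary(lm(y ~ x1))$coefficients        # precise estimate without x2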
28,953
Do confidence intervals apply to quota sampling?
As whuber says, the short answer is that quota samples are the "poster child for outmoded, known-bad sampling methods" and "have long been discredited." The longer answer is that there may be conditions under which "quota-like" samples can work reasonably well. Exhibit A here is recent work on reconstructing representative results from opt-in Internet panels. This paper gives the statistical grounding for this approach. To make a long story short, typical sampling schemes 1) draw a random sample, 2) attempt to recruit subjects, and then 3) add post-stratification weights to compensate for differences in who responds. In the opt-in approach, you 1) recruit subjects non-randomly, 2) compare responses to a representative baseline, and 3) add weights to compensate for the differences. In terms of practice, opt-in sampling is similar to quota sampling, but the statistical foundation is more developed. The upside is that you can make claims about representative sampling, confidence intervals, etc. The downside is that your claims are based on difficult-to-verify assumptions about how people self-select into your sample. A lot of people are skeptical about these methods -- they sound too much like quota sampling. But some evidence suggests that opt-in sampling can work well at least some of the time. So despite the controversy, Polimetrix/YouGov (an early adopter of the opt-in sampling model) seems to be doing reasonably well. Among other things, they've done all the data collection for the Cooperative Congressional Election Study, a series of recent academic U.S. national election studies. (I'm pretty sure ICPSR carries this data. If not, Harvard's social science dataverse certainly does. Lots of academics are using data from these samples.) Anyway, you asked about quota sampling. As you can see already in the comment thread here, any well-trained pollster will tell you that quota sampling is bunk. The jury is still out on opt-in sampling. For the time being, if you want to draw confidence intervals around quota samples, I'd say these methods are your best bet.
28,954
Do confidence intervals apply to quota sampling?
In most non-compulsory survey contexts, there is a substantial problem with nonresponse. Consider this, from 2002: "the recently reported estimate of survey cooperation rates from CMOR, the Council for Market and Opinion Research [USA], averaged only 14.7 percent." And from Paul Gerhold: "I believe that it is still possible to draw random samples. I just don't believe that it is possible to execute them." In this context, the fact that the SAMPLE is random isn't very relevant, because the resulting data isn't. This makes bias adjustment the major issue in valid estimation, and field method design is an important component. The ways in which one might want to do this, and the resulting confidence estimates, are well beyond what can be discussed here.
28,955
RNG, R, mclapply and cluster of computers
The snow package has explicit support for initialising a given number of RNG streams in a cluster computation. It can employ one of two RNG implementations: rsprng and rlecuyer. Otherwise you have to do the coordination by hand.
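A minimal sketch of the snow route, assuming a local socket cluster (clusterSetupRNG() is snow's wrapper around the rlecuyer "RNGstream" and rsprng "SPRNG" backends):

library(snow)
cl <- makeCluster(4, type = "SOCK")
clusterSetupRNG(cl, type = "RNGstream", seed = rep(42, 6))  # one independent L'Ecuyer stream per worker
res <- clusterApply(cl, 1:4, function(i) rnorm(3))          # draws come from each worker's own stream
stopCluster(cl)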
28,956
RNG, R, mclapply and cluster of computers
You need to use an RNG specifically designed for parallel computing. See the "Parallel computing: Random numbers" section of the High Performance Computing Task View.
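Since the question mentions mclapply, here is a minimal sketch using base R's parallel package, whose L'Ecuyer-CMRG generator is designed for exactly this purpose (mclapply forks, so this runs on Unix-alikes only):

library(parallel)
RNGkind("L'Ecuyer-CMRG")
set.seed(123)   # with mc.set.seed = TRUE, each child gets its own reproducible stream
res <- mclapply(1:4, function(i) rnorm(3), mc.cores = 2, mc.set.seed = TRUE)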
28,957
What are the practical & interpretation differences between alternatives and logistic regression?
Disclaimer: this is certainly far from being a full answer to the question! I think there are at least two levels to consider before establishing a distinction between all such methods:

whether a single model is fitted or not: This helps to oppose methods like logistic regression vs. RF or Gradient Boosting (or more generally Ensemble methods), and also puts the emphasis on parameter estimation (with associated asymptotic or bootstrap confidence intervals) vs. classification or prediction accuracy computation;

whether all variables are considered or not: This is the basis of feature selection, in the sense that penalization or regularization allows us to cope with "irregular" data sets (e.g., large $p$ and/or small $n$) and improve the generalizability of the findings.

Here are a few other points that I think are relevant to the question. In cases where we consider several models (the same model fitted on different subsets, individuals and/or variables, of the available data, or different competing models fitted on the same data set), cross-validation can be used to avoid overfitting and to perform model or feature selection, although CV is not limited to these particular cases (it can be used with GAMs or penalized GLMs, for instance). Also, there is the traditional interpretation issue: more complex models often imply more complex interpretation (more parameters, more stringent assumptions, etc.).

Gradient boosting and RFs overcome the limitations of a single decision tree, thanks to Boosting, whose main idea is to combine the output of several weak learning algorithms in order to build a more accurate and stable decision rule, and Bagging, where we "average" results over resampled data sets. Altogether, they are often viewed as some kind of black box in comparison to more "classical" models where clear specifications for the model are provided (I can think of three classes of models: parametric, semi-parametric, non-parametric), but I think the discussion held under this other thread, The Two Cultures: statistics vs. machine learning?, provides interesting viewpoints. A minimal cross-validation comparison is sketched after the references below.

Here are a few papers about feature selection and some ML techniques:

Saeys, Y, Inza, I, and Larrañaga, P. A review of feature selection techniques in bioinformatics, Bioinformatics (2007) 23(19): 2507-2517.
Dougherty, ER, Hua, J, and Sima, C. Performance of Feature Selection Methods, Current Genomics (2009) 10(6): 365-374.
Boulesteix, A-L and Strobl, C. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction, BMC Medical Research Methodology (2009) 9:85.
Caruana, R and Niculescu-Mizil, A. An Empirical Comparison of Supervised Learning Algorithms. Proceedings of the 23rd International Conference on Machine Learning (2006).
Friedman, J, Hastie, T, and Tibshirani, R. Additive logistic regression: A statistical view of boosting, Ann. Statist. (2000) 28(2): 337-407. (With discussion)
Olden, JD, Lawler, JJ, and Poff, NL. Machine learning methods without tears: a primer for ecologists, Q Rev Biol. (2008) 83(2): 171-93.

And of course, The Elements of Statistical Learning, by Hastie and colleagues, is full of illustrations and references. Also be sure to check the Statistical Data Mining Tutorials from Andrew Moore.
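To make the cross-validation point concrete, here is a minimal, self-contained sketch comparing a logistic regression with a random forest by k-fold CV accuracy (simulated data; the 0.5 threshold and the fold count are arbitrary illustrative choices):

library(randomForest)
set.seed(1)
n <- 300
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- rbinom(n, 1, plogis(dat$x1 + dat$x2))
k <- 5
folds <- sample(rep(1:k, length.out = n))
acc <- matrix(NA, k, 2, dimnames = list(NULL, c("glm", "rf")))
for (i in 1:k) {
  train <- dat[folds != i, ]
  test  <- dat[folds == i, ]
  p_glm <- predict(glm(y ~ x1 + x2, binomial, train), test, type = "response") > 0.5
  fit_rf <- randomForest(factor(y) ~ x1 + x2, data = train)
  p_rf <- predict(fit_rf, test) == "1"
  acc[i, ] <- c(mean(p_glm == (test$y == 1)), mean(p_rf == (test$y == 1)))
}
colMeans(acc)   # CV accuracy of each model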
28,958
The distribution of the linear combination of Gamma random variables [duplicate]
See Theorem 1 given in Moschopoulos (1985) for the distribution of a sum of independent gamma variables. You can extend this result using the scaling property for linear combinations.
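As a quick sanity check on the scaling property (a Monte Carlo sketch with arbitrary parameter values): if $X \sim \text{Gamma}(k, \text{scale} = s)$ and $a > 0$, then $aX \sim \text{Gamma}(k, \text{scale} = as)$, so a positive linear combination is again a sum of independent gammas, to which Moschopoulos' Theorem 1 applies directly.

set.seed(1)
a <- 2.5; k <- 3; s <- 1.7
x <- a * rgamma(1e5, shape = k, scale = s)
y <- rgamma(1e5, shape = k, scale = a * s)
ks.test(x, y)   # p-value should be non-small: the two samples agree in distribution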
28,959
Why is median not a sufficient statistic? [duplicate]
A tricky part of this question is that the median sounds like a good estimator, and mathematically it is not so clear what the mean is going to improve. So imagine the following simpler case: say we have a sample $X_1, X_2, \dots , X_n$ where each $X_i$ follows (independently) a normal distribution $X_i \sim N(\theta,1)$, and we want to estimate $\theta$. We could make an estimate $\hat{\theta} = X_1$, and it is obvious that this is not a sufficient statistic: the other $n-1$ values can provide information about $\theta$ as well. A statistic is sufficient when the distribution of the data is independent of the parameter $\theta$ conditional on that statistic. This is not true for $X_1$. For example, the distribution of $\bar{X}$ conditional on $X_1$ is not independent of $\theta$.
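To spell out that last claim (a short derivation, not in the original answer): conditional on $X_1 = x_1$, the sample mean averages $x_1$ with the $n-1$ remaining independent normals, so

$$\bar{X} \mid X_1 = x_1 \;\sim\; N\!\left(\frac{x_1 + (n-1)\theta}{n},\; \frac{n-1}{n^2}\right),$$

which still depends on $\theta$.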
28,960
Why is median not a sufficient statistic? [duplicate]
One obvious way to check if a statistic is sufficient is to identify a minimal sufficient statistic (if it exists) and check whether the minimal sufficient statistic is a function of your proposed statistic. Here we use the fact that a minimal sufficient statistic is a function of any sufficient statistic. Take $n=3$ for example. A minimal sufficient statistic for $\mu$ is the sample mean $\overline x=\frac13(x_1+x_2+x_3)$. But $\overline x$ is not a function of the sample median $x_{(2)}$ only. You have to argue this formally. In essence, sufficiency is concerned with data reduction (see What does it mean that a statistic $T(X)$ is sufficient for a parameter?). So the statement that $\overline x$ is sufficient means that all the information about $\mu$ in the sample can be condensed into $\overline x$. This happens to be the maximum possible data condensation in this model, which makes the sample mean minimal sufficient. Given $\overline x$, you can make inference on $\mu$. But knowing $x_{(2)}$ alone does not give you enough information about $\mu$. You also need to know $x_{(1)}$ and $x_{(3)}$ to avoid any loss of information. A numerical example might help to make this point clear.
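For instance (a numerical illustration along the lines the answer suggests):

s1 <- c(-1, 0, 1)    # median 0, mean 0
s2 <- c(-1, 0, 10)   # median 0, mean 3
median(s1) == median(s2)   # TRUE: the median cannot tell these samples apart
mean(s1); mean(s2)         # yet the minimal sufficient statistic differs

Two samples with the same median can carry quite different information about $\mu$, so the median alone loses information.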
28,961
Why is median not a sufficient statistic? [duplicate]
This is a great technical question. First I want to point out that your argument

$T$ is a sufficient statistic if $$ \frac{f_X(x|\theta)}{q(t|\theta)} $$ does not depend on $\theta$, where $f_X$ is the probability density function of $X$ and $q$ is the probability density function of $T$.

does not hold. To see it, note that $T = \bar{X} \sim N(\mu, \frac{1}{n})$, whence \begin{align} \frac{f_\mu(x_1, \ldots, x_n)}{q_\mu(t)} = \frac{(2\pi)^{-n/2}\exp(-\frac{1}{2}\sum_{i = 1}^n(x_i - \mu)^2)} {(2\pi/n)^{-1/2}\exp(-\frac{n}{2}(t - \mu)^2)} = h(x, t)e^{n(\bar{x} - t)\mu}, \end{align} which depends on $\mu$ (you may argue that it does not depend on $\mu$ if you substitute $\bar{x}$ for $t$; however, that is not what the proposed ratio formula says), but $\bar{X}$ is of course sufficient.

It looks like your "criterion" tries to match the definition of sufficiency of $T$ (Section 1.6, Theory of Point Estimation):

A statistic $T$ is said to be sufficient for $X$, or for the family $\mathcal{P} = \{P_\theta, \theta \in \Omega\}$ of possible distributions of $X$, or for $\theta$, if the conditional distribution of $X$ given $T = t$ is independent of $\theta$ for all $t$.

A naive interpretation of the above definition is that the conditional density of the data $X = (X_1, \ldots, X_n)$ given $T = t$ is independent of $\theta$, which may be formally written as (note how the numerator in $(1)$ differs from that in your proposed ratio) \begin{align} f_{X|T; \theta}(x|t) = \frac{f_\theta(x, t)}{q_\theta(t)} \text{ is independent of $\theta$.} \tag{1} \end{align} However, a technical difficulty of the interpretation $(1)$ is that the "joint density" $f_\theta(x, t)$ of $(X, T)$ is degenerate (i.e., it is not a probability density on $\mathbb{R}^{n + 1}$; in general, the conditional-density defining relation $f_{X|Y = y}(x|y) = \frac{f(x, y)}{f_Y(y)}$ only makes sense when $f(x, y)$ is a valid, non-degenerate density), hence $(1)$ is actually impossible to check. For a relevant discussion, see this question. To resolve this difficulty, some 1-1 transformation between $(X_1, \ldots, X_n)$ and $(Y_1, \ldots, Y_{n - 1}, T)$ needs to be defined, and sufficiency needs to be restated accordingly in terms of the conditional density of $Y$ given $T$. For details, refer to Eq (1.17) -- Eq (1.19) in Section 1.9, Testing Statistical Hypotheses.

Having clarified this, one way to show that $M$ is not a sufficient statistic for $\mu$ is to find a subset $Y$ (possibly after transformation) of $(X_1, \ldots, X_n)$ such that the conditional density of $Y$ given $T$ is well-defined (i.e., non-degenerate) and does depend on $\theta$. For simplicity and without loss of generality, assume $n = 3$. One choice of $Y$ is $(X_{(1)}, X_{(3)}) = (\min(X_1, X_2, X_3), \max(X_1, X_2, X_3))$. It is well known from order statistic theory that the joint density of $(Y, M) = (X_{(1)}, X_{(3)}, X_{(2)})$ is \begin{align} f(x_{(1)}, x_{(3)}, x_{(2)}) = 6\varphi_\mu(x_{(1)})\varphi_\mu(x_{(2)})\varphi_\mu(x_{(3)}), \quad x_{(1)} < x_{(2)} < x_{(3)}, \tag{2} \end{align} where $\varphi_\mu(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{(x - \mu)^2}{2}}$. And the marginal density of $X_{(2)}$ is (where $\Phi_\mu(x) = \int_{-\infty}^x \varphi_\mu(t)dt$) \begin{align} f_{X_{(2)}}(x_{(2)}) = 6\Phi_\mu(x_{(2)})(1 - \Phi_\mu(x_{(2)}))\varphi_\mu(x_{(2)}). \tag{3} \end{align}

$(2)$ and $(3)$ then give the conditional density of $(X_{(1)}, X_{(3)})$ given $X_{(2)}$ as \begin{align} f_{(X_{(1)}, X_{(3)})|X_{(2)} = x_{(2)}}(x_{(1)}, x_{(3)}|x_{(2)}) = \frac{\varphi_\mu(x_{(1)})\varphi_\mu(x_{(3)})} {\Phi_\mu(x_{(2)})(1 - \Phi_\mu(x_{(2)}))}, \end{align} which depends on $\mu$. In contrast, you can verify that \begin{align} f_{(X_{(1)}, X_{(3)})|\bar{X} = \bar{x}}(x_{(1)}, x_{(3)}|\bar{x}) = \frac{3\sqrt{3}}{\pi}\exp\left(\frac{3}{2}\bar{x}^2 - \frac{1}{2}(x_{(1)}^2 + x_{(3)}^2 + (3\bar{x} - x_{(1)} - x_{(3)})^2)\right), \end{align} which is independent of $\mu$.
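A Monte Carlo illustration of the same point (a sketch I am adding; the tolerance and simulation size are arbitrary): approximate the conditional distribution of the maximum given that the median lies near $m$, for $n = 3$, and see that it shifts with $\mu$.

cond_max <- function(mu, m = 0, tol = 0.05, nsim = 2e5) {
  x <- matrix(rnorm(3 * nsim, mean = mu), ncol = 3)
  med <- apply(x, 1, median)
  apply(x[abs(med - m) < tol, , drop = FALSE], 1, max)
}
set.seed(1)
mean(cond_max(0)); mean(cond_max(1))   # clearly different: the median alone is not sufficient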
28,962
Why is median not a sufficient statistic? [duplicate]
Let us assume that $n = 2k+1$ is odd, and let $J_i$ be a ternary variable showing whether $X_i$ is equal to the median $M$, below it or above it. That is, $$ J_i = \begin{cases} 1 & \text{$X_i > M$} \\ 0 & \text{$X_i = M$} \\ -1 & \text{$X_i < M$} \end{cases} $$ If $M$ is sufficient, then $(M,J_1,\dots,J_n)$ has to be sufficient as well. So it is enough to show that $(M,J_1,\dots,J_n)$ is not sufficient. What does sufficiency mean? That the distribution of $(X_1,\dots,X_n)$ given $(M,J_1,\dots,J_n)$ is independent of the parameter, $\mu$ in this case. That this independence fails here is intuitive, but a bit cumbersome to write down. With probability 1, exactly one of $J_1,\dots,J_n$ is equal to 0, and the rest are divided equally between +1 and -1 (since $N(\mu,1)$ is continuous w.r.t. the Lebesgue measure, the chance of seeing repeated values in a finite sequence of independent draws from it is zero). Without loss of generality, let us condition on \begin{align} A = \{M = m,\;\; J_1 = 0, \;\;&J_2 = +1, \dots, J_{k+1} = +1, \\ &J_{k+2} = -1, \dots, J_{2k+1} = -1\}, \end{align} that is, $X_1$ is the median, the next $k$ observations are above the median and the next $k$ after that are below it. Then, $X_1 = m$ is completely determined, and by independence $X_2,\dots,X_{k+1}$ are i.i.d. from $N(\mu,1)$ truncated to $(m,\infty)$, which we denote as $N(\mu,1; m, \infty)$. Similarly, $X_{k+2},\dots,X_{2k+1}$ are i.i.d. from $N(\mu,1)$ truncated to $(-\infty,m)$, which we denote as $N(\mu,1; -\infty, m)$. Clearly, the distribution of $(X_1,\dots,X_n)$ given $A$ depends on $\mu$ and hence sufficiency fails. If you want to write it down technically, $$ (X_1,\dots,X_n) \, |\, A \;\;\sim \;\; \delta_{m} \otimes \Bigl(\prod_{i=1}^k N(\mu,1; m, \infty) \Bigr) \otimes \Bigl(\prod_{i=1}^k N(\mu,1; -\infty, m) \Bigr) $$ where $\delta_m$ is the point mass measure at $m$, and $\otimes$ and $\prod$ denote products of measures. You can also consider other combinations of values for $\{J_i\}$ and it would be similar (you just get a different permutation of the same terms), but that is not necessary to establish the failure of sufficiency (one combination is enough). You can also answer the question about ancillarity this way: a statistic is ancillary if its distribution does not depend on the parameter $\mu$.
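To make the truncated-normal description tangible, here is a small sketch (mine, not the answerer's) that samples from the conditional law given $A$ via inverse-CDF truncated normals; note how $\mu$ enters every draw:

rtruncnorm1 <- function(n, mu, lo = -Inf, hi = Inf) {
  u <- runif(n, pnorm(lo, mean = mu), pnorm(hi, mean = mu))
  qnorm(u, mean = mu)   # inverse-CDF sampling from N(mu, 1) truncated to (lo, hi)
}
m <- 0; k <- 2; mu <- 1
above <- rtruncnorm1(k, mu, lo = m)   # the k observations from N(mu, 1; m, Inf)
below <- rtruncnorm1(k, mu, hi = m)   # the k observations from N(mu, 1; -Inf, m)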
28,963
Use R to generate random positive definite matrix with zero constraints
Every $d\times d$ symmetric positive (semi)definite matrix $\Sigma$ can be factored as $$\Sigma = \Lambda^\prime\, Q^\prime \,Q\,\Lambda$$ where $Q$ is a matrix with unit-norm columns and $\Lambda$ is a diagonal matrix with non-negative (positive) entries $\lambda_1, \ldots, \lambda_d.$ ($\Sigma$ is always the covariance matrix of some $d$-variate distribution and $Q^\prime Q$ will be its correlation matrix; the $\lambda_i$ are the standard deviations of the marginal distributions.)

Let's interpret this formula. The $(i,j)$ entry $\Sigma_{i,j}$ is the dot product of columns $i$ and $j$ of $Q$, multiplied by $\lambda_i\lambda_j.$ Thus, the zero-constraints on $\Sigma$ are orthogonality constraints on the dot products of the columns of $Q.$ (Notice that all diagonal entries of a positive-definite matrix must be nonzero, so I assume the zero-constraints are all off the diagonal. I also extend any constraint on the $(i,j)$ entry to a constraint on the $(j,i)$ entry, to assure symmetry of the result.)

One (completely general) way to impose such constraints is to generate the columns of $Q$ sequentially. Use any method you please to create a $d\times d$ matrix of initial values. At step $i=1,2,\ldots, d,$ alter column $i$ by regressing it on all the columns $1, 2, \ldots, i-1$ of $Q$ that need to be orthogonal to it and retaining the residuals. Normalize those residuals so their dot product (sum of squares) is unity. That is column $i$ of $Q.$ Having created an instance of $Q,$ randomly generate the diagonal of $\Lambda$ any way you please (as discussed in the closely related answer at https://stats.stackexchange.com/a/215647/919).

The following R function rQ uses iid standard Normal variates for the initial values by default. I have tested it extensively with dimensions $d=1$ through $200,$ checking systematically that the intended constraints hold. I also tested it with Poisson$(0.1)$ variates, which--because they are likely to be zero--generate highly problematic initial solutions.

The principal input to rQ is a logical matrix indicating where the zero-constraints are to be applied. Here is an example with the constraints specified in the question.

set.seed(17)
Q <- matrix(c(FALSE, TRUE, TRUE, FALSE,
              TRUE, FALSE, FALSE, FALSE,
              TRUE, FALSE, FALSE, FALSE,
              FALSE, FALSE, FALSE, FALSE), 4)
Lambda <- rexp(4)
zapsmall(rQ(Q, Lambda))

         [,1]      [,2]      [,3]      [,4]
[1,] 2.646156  0.000000  0.000000  2.249189
[2,] 0.000000  0.079933  0.014089 -0.360013
[3,] 0.000000  0.014089  0.006021 -0.055590
[4,] 2.249189 -0.360013 -0.055590  4.167296

As a convenience, you may pass the diagonal of $\Lambda$ as the second argument to rQ. Its third argument, f, must be a random number generator (or any other function for which f(n) returns a numeric vector of length n).

rQ <- function(Q, Lambda, f = rnorm) {
  normalize <- function(x) {
    v <- zapsmall(c(1, sqrt(sum(x * x))))[2]
    if (v == 0) v <- 1
    x / v
  }
  Q <- Q | t(Q)                      # Force symmetry by applying all constraints
  d <- nrow(Q)
  if (missing(Lambda)) Lambda <- rep(1, d)
  R <- matrix(f(d^2), d, d)          # An array of column vectors
  for (i in seq_len(d)) {
    j <- which(Q[seq_len(i - 1), i]) # Indices of the preceding orthogonal vectors
    R[, i] <- normalize(residuals(.lm.fit(R[, j, drop = FALSE], R[, i])))
  }
  R <- R %*% diag(Lambda)
  crossprod(R)
}
28,964
Use R to generate random positive definite matrix with zero constraints
I'm not sure if this is what you want, but for the specific example you gave (this doesn't necessarily generalize easily to arbitrary zero constraints, as the algebra can get messy!): if $L$ is a lower-triangular matrix with positive values on the diagonal then $\Omega = L L^\top$ is positive definite (requiring the diagonal to be positive is not necessary for positive definiteness, but makes the decomposition unique: see Pinheiro and Bates 1996, "Unconstrained parametrizations for variance-covariance matrices"). $\Omega_{12} = L_{11} L_{21}$ and $\Omega_{13} = L_{11} L_{31}$. Thus, I think that without any further loss of generality, a lower-triangular matrix with a positive diagonal and $L_{21} = L_{31} = 0$ will give you the constraint pattern you want. (Setting $L_{11}=0$ would give you a singular matrix.)

"Random" is pretty vague. (You didn't say "uniform" ...) We could for example pick $\theta_{ii} \sim U(0,20)$, $\theta_{ij} \sim U(-10,10)$ (for $i \neq j$ and $\{i,j\}$ not equal to $\{2,1\}$ or $\{3,1\}$).

set.seed(101)
m <- matrix(0, 4, 4)
diag(m) <- runif(4, max = 20)
m[lower.tri(m)] <- runif(6, min = -10, max = 10)
m[2, 1] <- m[3, 1] <- 0
S <- m %*% t(m)
S

         [,1]       [,2]       [,3]       [,4]
[1,] 55.41265  0.0000000   0.000000  12.634888
[2,]  0.00000  0.7682458  -2.919309   2.138861
[3,]  0.00000 -2.9193087 212.553839   4.881917
[4,] 12.63489  2.1388607   4.881917 182.698471

eigen(S)$values

[1] 213.387898 183.174454  54.170033   0.700823
28,965
Use R to generate random positive definite matrix with zero constraints
First, generate a random symmetric matrix. Second, apply Ledoit-Wolf-style regularization to make it SPD.
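This answer is terse, so here is a sketch of the shrinkage idea it gestures at (my illustration, not the original author's code; note that the Ledoit-Wolf estimator proper is defined for covariance estimation from data). Shrinking toward a multiple of the identity only changes the diagonal, so any off-diagonal zero constraints are preserved:

set.seed(1)
A <- matrix(rnorm(16), 4, 4)
S <- (A + t(A)) / 2                                  # random symmetric matrix
S[1, 2] <- S[2, 1] <- S[1, 3] <- S[3, 1] <- 0        # impose zero constraints
lambda <- max(0, -min(eigen(S, only.values = TRUE)$values)) + 0.1
S_pd <- S + lambda * diag(4)                         # shift the spectrum to be positive
min(eigen(S_pd, only.values = TRUE)$values) > 0      # TRUE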
28,966
Reference Request: Book on Unit Root Theory
In addition to the references by Richard Hardy, the following may be helpful:

Bierens, H. Unit Roots, Ch. 29 in "A Companion to Theoretical Econometrics", https://onlinelibrary.wiley.com/doi/10.1002/9780470996249.ch30

Patterson, K. A Primer for Unit Root Testing (Palgrave Texts in Econometrics), https://www.amazon.de/Primer-Testing-Palgrave-Texts-Econometrics/dp/1403902046
28,967
Reference Request: Book on Unit Root Theory
Johansen "Likelihood-based inference in cointegrated vector autoregressive models" (1995), Oxford University Press. This is a pretty technical and theoretical treatment. Juselius "The Cointegrated VAR Model: Methodology and Applications" (2006), Oxford University Press. This is a more applied threatment. While both books have "cointegration/cointegrated" in their titles, they do discuss unit roots, too, as that is a prerequisite for cointegration analysis which you seem to be interested in as well.
28,968
How do Lawson and Hanson solve the unconstrained least squares problem?
TNT-NN

For TNT-NN see: Myre, Joe M., et al. "TNT-NN: a fast active set method for solving large non-negative least squares problems." Procedia Computer Science 108 (2017): 755-764. https://doi.org/10.1016/j.procs.2017.05.194

To form the active set, TNT-NN first solves an unconstrained least squares problem. Variables that violate the non-negativity constraint are added to the active set. Once a feasible solution is found, where none of the non-negativity constraints are violated, the 2-norm of the residual is used as a measure of fitness and the solution is saved as the current "best" solution. TNT-NN attempts to modify the active set by iteratively moving some of the variables from the active set back into the unconstrained set. The active set variables are sorted based on their components of the gradient. Variables that show the largest positive gradient components are tested by moving some of them from the active set into the unconstrained set. It is important to note that initially large groups of variables can be moved in a single test. If the new solution does not improve in fitness, then the solution is rejected and a smaller set of variables is tested. If a group of the variables can be removed from the active set and a new feasible solution is found that is "better", the solution is saved and the algorithm begins a new iteration. The algorithm reaches convergence when the active set can no longer be modified.

Your implementation is only part of this solution. It is the part where a new feasible set is searched in order to test the removal of a variable from the active set. This is demonstrated in the code below: I have copied your fastnnls function and turned it into feasible_set, which is only one component of the algorithm. In the article, Myre et al. note that "initially large groups of variables can be moved in a single test". But I am not sure how they do that, so in the code I add them one by one. Probably there are some additional tricks to make faster selections of large groups to be added at once, instead of my for loop that tries all variables.

The difference between multiway::fnnls and nnls::nnls

You got a difference because of a small error in your comparison. One function requires the matrix $X$ and vector $y$; the other requires the matrix $X^TX$ and vector $X^Ty$. You have used the latter for both functions. In the code below I give an example output.

Example code

### finding the feasible set
feasible_set <- function(a, b, ind){
  x <- rep(0, length(b))
  x[ind] <- solve(a[ind, ind], b[ind])
  while(any(x < 0)){
    ind <- which(x > 0)
    x <- rep(0, length(b))
    x[ind] <- solve(a[ind, ind], b[ind])
  }
  as.vector(x)
}

### finding the gradients
gradients <- function(b, y, X) {
  current_y <- X %*% b
  d_y <- y - current_y
  gradients <- t(X) %*% d_y
  return(gradients)
}

### The algorithm that repeatedly updates the active set
### The updates are done by removing the variable with the highest positive gradient
fastnnls <- function(y, X) {
  ### Initiation
  a <- crossprod(X)
  b <- as.vector(crossprod(X, y))
  current_active <- rep(TRUE, length(X[1, ]))  ### start with all variables in the active set
  current_s <- rep(0, length(X[1, ]))          ### initial conditions
  current_y <- X %*% current_s
  current_loss <- sum((y - current_y)^2)
  ### loop until no improvement can be made
  cont <- TRUE
  while (cont) {
    ### add variables based on gradients
    ### in these four lines the gradients are found and ordered
    gradients <- gradients(current_s, y, X)
    testing <- which(gradients * current_active > 0) ### variables that are active and have positive gradients
    ord <- order(gradients, decreasing = TRUE)
    ord <- ord[ord %in% testing]                     ### strip the negative or non-active variables
    ### keep adding variables in a loop while this improves the solution
    addition <- 0                                    ### iteration variable keeping track of the additions
    new_active <- current_active
    for (i in 1:length(ord)) {
      ### Try out a new active set with one variable removed
      new_active[ord[i]] <- FALSE                    ### remove 'ord[i]' from the active set
      new_s <- feasible_set(a, b, ind = which(new_active == FALSE))
      new_y <- X %*% new_s
      new_loss <- sum((y - new_y)^2)
      ### Update the solution if the new trial is better
      if (new_loss < current_loss) {
        addition <- i
        current_active <- new_active
        current_loss <- new_loss
        current_s <- new_s
        current_y <- new_y
      } else {
        break                                        ### skip to the end of the loop
      }
    }
    if (addition == 0) {                             ### quit the while loop when no addition is made
      cont <- FALSE
    } else {
      new_active <- new_s == 0  ### in the for loop we had only been decreasing the active set,
                                ### but feasible_set also increases it, so we adapt accordingly
    }
    if (sum(current_active) == 0) {                  ### quit if the active set is empty (all variables positive)
      cont <- FALSE
    }
    ### The while loop continues by recomputing the gradients
  }
  return(current_s)
}

set.seed(123)
X <- matrix(rnorm(2000), 100, 20)
y <- X %*% runif(20) + rnorm(100) * 5

library(nnls)
library(multiway)
a <- crossprod(X)                ### fnnls expects the normal equations X'X and X'y ...
b <- as.vector(crossprod(X, y))  ### ... while nnls expects X and y directly
data.frame(multiway = multiway::fnnls(a, b),
           nnls = nnls::nnls(X, y)$x,
           manual = fastnnls(y, X))

Output

      multiway        nnls      manual
1  0.610802720 0.610802720 0.610802720
2  0.146121047 0.146121047 0.146121047
3  0.841809005 0.841809005 0.841809005
4  1.131040740 1.131040740 1.131040740
5  0.000000000 0.000000000 0.000000000
6  1.093652478 1.093652478 1.093652478
7  0.725590111 0.725590111 0.725590111
8  0.211525228 0.211525228 0.211525228
9  0.000000000 0.000000000 0.000000000
10 1.472333600 1.472333600 1.472333600
11 0.005740395 0.005740395 0.005740395
12 2.131277775 2.131277775 2.131277775
13 0.000000000 0.000000000 0.000000000
14 0.590923989 0.590923989 0.590923989
15 0.652530944 0.652530944 0.652530944
16 0.717713755 0.717713755 0.717713755
17 1.115162378 1.115162378 1.115162378
18 0.603304661 0.603304661 0.603304661
19 0.000000000 0.000000000 0.000000000
20 0.218073317 0.218073317 0.218073317
How do Lawson and Hanson solve the unconstrained least squares problem?
TNT-NN For the TNT-NN see: Myre, Joe M., et al. "TNT-NN: a fast active set method for solving large non-negative least squares problems." Procedia Computer Science 108 (2017): 755-764. https://doi.org
How do Lawson and Hanson solve the unconstrained least squares problem? TNT-NN For the TNT-NN see: Myre, Joe M., et al. "TNT-NN: a fast active set method for solving large non-negative least squares problems." Procedia Computer Science 108 (2017): 755-764. https://doi.org/10.1016/j.procs.2017.05.194 To form the active set, TNT-NN first solves an unconstrained least squares problem. Variables that violate the non-negativity constraint are added to the active set. Once a feasible solution is found, where none of the non-negativity constraints are violated, the 2-norm of the residual is used as a measure of fitness and the solution is saved as the current “best” solution. TNT-NN attempts to modify the active set by iteratively moving some of the variables from the active set back into the unconstrained set. The active set variables are sorted based on their components of the gradient. Variables that show the largest positive gradient components are tested by moving some of them from the active set into the unconstrained set. It is important to note that initially large groups of variables can be moved in a single test. If the new solution does not improve in fitness, then the solution is rejected and a smaller set of variables is tested. If a group of the variables can be removed from the active set and a new feasible solution is found that is “better”, the solution is saved and the algorithm begins a new iteration. The algorithm reaches convergence when the active set can no longer be modified. Your implementation is only half part of this solution. It is the part where the new feasible set is searched to test the removal of a variable from the active set. In the code below it is demonstrated. I have copied your fastnnls function and turned it into feasible_set which is only part of the algorithm. In the article Myre et al speak of " by moving some of them from the active set into the unconstrained set. It is important to note that initially large groups of variables can be moved in a single test". But I am not sure how they do that so in the code, I have been adding them one by one. Probably there are some additional tricks to make faster selections of large groups to be added at once instead of my for loop that tries all variables. The difference between multiway::fnnls and nnls::nnls You got a difference because of a small error in your comparison. The one function requires the matrix $X$ and vector $Y$, the other function requires the matrix $X^TX$ and vector $X^Ty$. You have used the latter for both functions. In the code below I give an example output. 
Example code ### finding the feasible set feasible_set <- function(a, b, ind){ x <- rep(0, length(b)) x[ind] <- solve(a[ind, ind], b[ind]) while(any(x < 0)){ ind <- which(x > 0) x <- rep(0, length(b)) x[ind] <- solve(a[ind, ind], b[ind]) } as.vector(x) } ### finding the gradients gradients <- function(b,y,X) { current_y <- X %*% b d_y <- y-current_y gradients <- t(X) %*% d_y return(gradients) } ### The algorithm that repeatedly updates the active set ### The updates are done by removing the variable with the highest positive gradient fastnnls <- function(y,X) { ### Initiation a <- crossprod(X) b <- as.vector(crossprod(X, y)) current_active <- rep(TRUE,length(X[1,])) ### start with all variables in active set current_s <- rep(0,length(X[1,])) ### initial conditions current_y <- X %*% current_s current_loss <- sum((y-current_y)^2) ### algorithm that stops untill no improvement can be made cont <- TRUE while (cont) { ### add variables based on gradients ### in these four lines the gradients are found and ordered gradients <- gradients(current_s,y,X) testing <- which(gradients*current_active>0) ### find out which variables are active and have positive gradients ord <- order(gradients, decreasing = TRUE) ord <- ord[ord %in% testing] ### strip the negative or non-active variables ### keep adding variables in a loop while this improves the solution addition <- 0 ### itterative variable keeping track of the additions new_active <- current_active for (i in 1:length(ord)) { ### Try out a new active set with one variable removed new_active[ord[i]] <- FALSE ### remove 'ord[i]' from active set new_s <- feasible_set(a,b, ind = which(new_active == FALSE)) new_y <- X %*% new_s new_loss <- sum((y-new_y)^2) ### Update the solution if the new trial is better if (new_loss < current_loss) { addition <- i current_active <- new_active current_loss <- new_loss current_s <- new_s current_y <- new_y } else { break ### skip loop to end } } if (addition == 0) { ### quit while when no addition is made cont = FALSE } else { new_active <- new_s == 0 ### in the for loop we had only been decreasing the active set ### but the feasible_set function also increases the active it and we need to adapt accrdingly } if (sum(current_active) == 0) { ### quit if active set is empty (all variables positive) cont = FALSE } ### The while loop continues by recomputing the gradients } return(current_s) } set.seed(123) X <- matrix(rnorm(2000),100,20) y <- X %*% runif(20) + rnorm(100)*5 library(nnls) library(multiway) data.frame(multiway = multiway::fnnls(a, b), nnls = nnls::nnls(X, y)$x, manual = fastnnls(y,X)) Output > data.frame(multiway = multiway::fnnls(a, b), + nnls = nnls::nnls(X, y)$x, + manual = fastnnls(y,X)) multiway nnls manual 1 0.610802720 0.610802720 0.610802720 2 0.146121047 0.146121047 0.146121047 3 0.841809005 0.841809005 0.841809005 4 1.131040740 1.131040740 1.131040740 5 0.000000000 0.000000000 0.000000000 6 1.093652478 1.093652478 1.093652478 7 0.725590111 0.725590111 0.725590111 8 0.211525228 0.211525228 0.211525228 9 0.000000000 0.000000000 0.000000000 10 1.472333600 1.472333600 1.472333600 11 0.005740395 0.005740395 0.005740395 12 2.131277775 2.131277775 2.131277775 13 0.000000000 0.000000000 0.000000000 14 0.590923989 0.590923989 0.590923989 15 0.652530944 0.652530944 0.652530944 16 0.717713755 0.717713755 0.717713755 17 1.115162378 1.115162378 1.115162378 18 0.603304661 0.603304661 0.603304661 19 0.000000000 0.000000000 0.000000000 20 0.218073317 0.218073317 0.218073317
28,969
What is the difference between fitting multinomial logistic regression and fitting multiple logistic regressions?
The problem is how best to display the results, not the multinomial analysis per se. Yes, the intercept and regression coefficients can seem hard to interpret in a multinomial model, but those coefficients simply provide the starting point for data display. Although you have log-probability ratios for the other categories expressed relative to a single reference category, there is nothing to prevent you from combining the implied probabilities in any way you want, along with associated error estimates. Use the multinomial regression probabilities for the categories in ways that display sets of outcome categories that might be of interest as a function of the predictor values. If you want to transform the results into odds ratios in a way that makes your point about an interaction term for a predictor, or display single-category results against all others, just do so, starting with your properly constructed multinomial model. In general, you can display any linear combination of model predictions you want, along with error estimates based on the formula for the variance of a weighted sum of correlated variables. To make your life easier, there are packages like the R emmeans package that will do the calculations for you.
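For instance, here is a minimal sketch (not a prescription) of getting estimated category probabilities, with standard errors, from a multinomial fit via emmeans; the data frame dat and its columns y, x1, and x2 are hypothetical placeholders.

library(nnet)
library(emmeans)

# Fit a multinomial model; `y` is a factor with 3+ levels.
fit <- multinom(y ~ x1 * x2, data = dat)

# Estimated outcome-category probabilities (with SEs) at chosen x1 values;
# linear combinations and contrasts of these can be built with contrast().
emmeans(fit, ~ y | x1, mode = "prob", at = list(x1 = c(0, 1)))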
28,970
Statistics on LOESS smoothing
Your data suggest that all responses may have a common functional form, that they are symmetric with distance, and that they differ only in magnitude. They also exhibit noticeable scatter around the fitted values. This scatter is (a) roughly symmetric, (b) proportional to the fitted value, yet with (c) some irreducible level of "noise."

We may express this mathematically. Let $f:[0,\infty)\to[0,\infty)$ be the common functional form of the responses. I will assume its scale with distance is unknown and has to be estimated, so let $1/\rho$ be that common scale. For a given combination of treatment $t$ and genotype $g,$ let $\beta_{gt}$ be the amplitude of the response. To model the scatter I will suppose its variance is a linear function of the (true underlying) response. Thus, for an observation of genotype $g,$ treatment $t,$ and distance $x$ the response $Y$ is a random variable with $$E[Y; x,g,t] = \beta_{gt} f(\rho\, |x|)$$ and $$\operatorname{Var}[Y; x,g,t] = \sigma^2 + E[Y; x,g,t]\tau^2.$$ This model has $4\times 4 + 3 = 19$ parameters to fit and all may be of some interest, although ultimately you will want to test hypotheses about the $\beta_{gt}.$

There are various ways to fit such a model. A maximum likelihood estimator that assumes the $Y$ have (independent) Normal distributions will behave very much like an adaptively weighted Least Squares estimator, which may represent a good compromise between robustness and simplicity.

As an example, let's take $f$ to be what physical theory suggests for the attenuation of radiation through a homogeneous medium: a decaying exponential. Here is a dataset of $120\times 4\times 4$ data generated according to such a model. (You can see its parameters near the end of this post.) The fits are the default Loess smooths offered by ggplot2. I hope you agree this dataset looks qualitatively like yours in all important respects. One problem with this graphic is that the Loess smooth (and practically any smooth, for that matter) is going to flatten the peaks at a distance of $0;$ it is better to plot the response against the absolute distance.

I fit this model using the "nonlinear minimizer" nlm offered in R; the code appears at the end of this post. As usual with Maximum Likelihood estimation, the inverse of the Hessian of the negative log-likelihood at its minimum estimates the covariance matrix of these 19 parameter estimates. The following table reports the square roots of its diagonal as the standard errors (SE) and uses them to compute $t$ statistics for the comparison of the estimates to the known true values. None differ significantly: that is, the fitting procedure works when the data are generated by the assumed model.
                        Actual    Fit     SE         t
Genotype 1 Treatment 1   1.000  0.981  0.0862  -0.22168
Genotype 2 Treatment 1   0.500  0.395  0.1820  -1.29861
Genotype 3 Treatment 1   1.500  1.500  0.0660   0.00131
Genotype 4 Treatment 1   2.000  1.793  0.0590  -1.85298
Genotype 1 Treatment 2   0.250  0.272  0.2540   0.33021
Genotype 2 Treatment 2   0.750  0.728  0.1085  -0.27759
Genotype 3 Treatment 2   0.500  0.566  0.1367   0.90650
Genotype 4 Treatment 2   1.250  1.160  0.0776  -0.95882
Genotype 1 Treatment 3   0.125  0.143  0.4454   0.29498
Genotype 2 Treatment 3   0.500  0.440  0.1647  -0.77498
Genotype 3 Treatment 3   0.250  0.287  0.2414   0.56512
Genotype 4 Treatment 3   0.250  0.147  0.4336  -1.22782
Genotype 1 Treatment 4   1.000  1.034  0.0837   0.39558
Genotype 2 Treatment 4   1.750  1.577  0.0641  -1.62926
Genotype 3 Treatment 4   0.250  0.232  0.2973  -0.24787
Genotype 4 Treatment 4   0.750  0.804  0.1010   0.68505
tau                      0.250  0.272  0.0982   0.87706
sigma                    0.250  0.249  0.0236  -0.16593
rate                     2.000  1.895  0.0424  -1.26810

The covariance matrix furnishes the information needed for comparing the $\beta_{gt}$ to each other using the usual t-tests and F-tests. Provided the sample sizes for each combination of genotype and treatment are comparable, we can expect the correlations among the estimates of these parameters to be very small. (Because there is a clear tradeoff between $\tau$ and $\sigma$ in this model, their estimates will be strongly negatively correlated.) This lack of correlation simplifies the direct comparison of any two estimates. For instance, to test whether Genotypes 3 and 4 differ on Treatment 1, we compare the difference in estimates to the square root of the sum of their squared standard errors:

$$\frac{1.500 - 1.793}{\sqrt{0.0660^2 + 0.0590^2}} = -3.31.$$

Because a standardized Normal variable has a very small chance of exceeding $|-3.31|$ in magnitude (less than one in a thousand), you would likely conclude there really is a difference. (As the table shows, the actual difference is $1.500 - 2.000 = -0.500$ and so, in this case, that conclusion is correct.)

This model is readily modified to accommodate different functional forms for its response-vs.-distance and variance-vs.-distance components. For instance, $f$ could be replaced by a spline. You could even contemplate using different rates (or scale factors) for each of the genotypes, each of the treatments, or even for each unique combination. If possible, select those functional forms based on theoretical considerations. If that's not possible, hold out a confirmation dataset and select among likely functional forms by exploration or through cross-validation.

Further details can be found in the R code that generated this example.

#
# Specify the parameters.
#
beta <- rbind("Genotype 1" = c(1,   1/4, 1/8, 1),
              "Genotype 2" = c(1/2, 3/4, 1/2, 7/4),
              "Genotype 3" = c(3/2, 1/2, 1/4, 1/4),
              "Genotype 4" = c(2,   5/4, 1/4, 3/4))
colnames(beta) <- paste("Treatment", 1:4)
sigma <- 1/4
tau <- 1/4
rate <- 2
#
# Simulate.
#
ff <- function(x) exp(-abs(x))
# ff <- function(x) (1 + x^2)^(-1)
rf <- function(x, beta, tau, sigma, rate) {
  y <- beta * ff(rate * x)
  rnorm(length(y), y, sqrt(sigma^2 + y * tau^2))
}

n.per.group <- 120*2
set.seed(17)
x <- seq(-2, 2, length.out=n.per.group)
X <- expand.grid(Distance = x,
                 Genotype = rownames(beta),
                 Treatment = colnames(beta))
X$Response <- c(sapply(c(beta), function(b) rf(x, b, tau, sigma, rate)))
#
# Plot.
#
library(ggplot2)
ggplot(X, aes(abs(Distance), Response, color = Treatment)) +
  geom_hline(yintercept=0) +
  geom_point(alpha = 1/4, show.legend = FALSE) +
  stat_smooth(method = "loess", formula = "y ~ x", size=1.5,
              show.legend = FALSE, se=FALSE) +
  labs(x = "Distance", y = "Response") +
  theme_classic() +
  facet_grid(Genotype ~ Treatment) +
  ggtitle("Data", "(Randomly Generated)")

ggplot(X, aes(Distance, Response, color = Treatment)) +
  geom_hline(yintercept=0) +
  geom_point(alpha = 1/4, show.legend = FALSE) +
  stat_smooth(method = "loess", formula = "y ~ x", size=1.5,
              show.legend = FALSE) +
  labs(x = "Distance", y = "Response") +
  theme_classic() +
  facet_grid(Genotype ~ Treatment) +
  ggtitle("Data", "(Randomly Generated)")
#
#
# Fit.
#
f <- function(theta) {
  beta <- matrix(exp(theta[1:16]), 4,
                 dimnames=list(paste("Genotype", 1:4), paste("Treatment", 1:4)))
  tau <- exp(theta[17])
  sigma <- exp(theta[18])
  rate <- exp(theta[19])
  y <- beta[cbind(X$Genotype, X$Treatment)] * ff(X$Distance * rate)
  -sum(dnorm(X$Response, y, sqrt(sigma^2 + y * tau^2), log = TRUE))
}
theta <- rep(0, 19)
fit <- nlm(f, theta, hessian=TRUE)

theta.hat <- fit$estimate
beta.hat <- matrix(exp(theta.hat[1:16]), 4,
                   dimnames=list(paste("Genotype", 1:4), paste("Treatment", 1:4)))
tau.hat <- exp(theta.hat[17])
sigma.hat <- exp(theta.hat[18])
rate.hat <- exp(theta.hat[19])
#
# Plot.
#
X$Prediction <- c(beta.hat[cbind(X$Genotype, X$Treatment)] * ff(X$Distance * rate.hat))
ggplot(X, aes(Prediction, Response)) +
  geom_point(alpha=1/2) +
  geom_abline(intercept=0, slope=1, size=1.5, color="#d01010") +
  ggtitle("Model Fit")

ggplot(X, aes(abs(Distance), Prediction, color = Treatment)) +
  geom_hline(yintercept=0) +
  geom_point(aes(y=Response), alpha=1/4, show.legend=FALSE) +
  geom_line(size=1.5, show.legend=FALSE) +
  labs(x = "Distance", y = "Response") +
  theme_classic() +
  facet_grid(Genotype ~ Treatment) +
  ggtitle("Fitted Values", "(With Original Data)")
#
# Extract the covariance matrix.
#
V <- solve(fit$hessian)
s <- rbind(Actual = c(beta, tau, sigma, rate),
           Fit = exp(fit$estimate),
           SE = sqrt(diag(V)),
           t = (fit$estimate - log(c(beta, tau, sigma, rate))) / sqrt(diag(V)))
colnames(s) <- c(c(outer(rownames(beta), colnames(beta), paste)),
                 "tau", "sigma", "rate")
print(t(s), digits=3)
colnames(V) <- rownames(V) <- colnames(s)
#
# Check the correlation among the estimates.
#
Sigma <- t(V/sqrt(diag(V))) / sqrt(diag(V))
# Shows little correlation among the betas; negative correlation between tau and sigma
image(seq_along(colnames(Sigma)), seq_along(rownames(Sigma)), Sigma,
      main="Variance-Covariance Matrix of Estimates")
Sigma <- pmin(1, pmax(-1, Sigma))
h <- hclust(as.dist(matrix(acos(Sigma), nrow(V))), method="median")
plot(h)
28,971
Statistics on LOESS smoothing
I'm sure there are better ways to do it, but maybe this is an idea to get started. Divide your "Distance" variable into bins, each containing a reasonable number of datapoints. At a glance, bins from -2 to +2 in steps of 0.5 could do [i.e. seq(-2, 2, by= 0.5)]. Your data table now should have columns: "response", "bin", "treatment", "genotype". Then fit the ANOVA model with interaction between bin, treatment, and genotype:

aov1 <- aov(data= dat, response ~ as.factor(bin) * treatment * genotype)
summary.lm(aov1)

This should pick up that bins around 0 in treatment 1, genotype 4 are different from the baseline. You can then check for comparisons between bins, treatments and genotypes with:

TukeyHSD(aov1)

EDIT after whuber's comment: A simple improvement to the above solution may be to use the absolute distance from 0 for the binning, e.g. use abs(seq(-2, 2, by= 0.5)), since bins to the left and right of 0 are assumed to be equivalent with respect to the response. This will halve the number of bins and increase power; a sketch is given below. Bins could either be treated as a nominal variable or as an ordinal variable to reflect that there is an increasing trend moving towards 0.
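Here is a minimal sketch of that absolute-distance binning, assuming the data frame dat from above with a Distance column:

# Bin on |Distance| so mirror-image bins are pooled (doubles points per bin)
dat$bin <- cut(abs(dat$Distance), breaks = seq(0, 2, by = 0.5),
               include.lowest = TRUE)
aov2 <- aov(data = dat, response ~ bin * treatment * genotype)
summary.lm(aov2)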
28,972
Decision rule as a hyper-parameter in LASSO
Start with the decision rule in its ideal sense. It represents the tradeoff of costs between false-positive and false-negative classifications. In that sense the decision rule isn't a function of the data; it's a function of how you want to use your model of the data. So it's not a hyper-parameter; it's a prior choice of a critical parameter value. This report explains this relationship in Section 7.

With 0 cost for a true classification, say that the costs of false positives and false negatives are scaled to sum to 1. Call the cost of a false positive $c$, so that the cost of a false negative is $(1-c)$. Then the optimal probability classification cutoff for minimizing expected cost is at $c$. When you specify a decision rule at 0.6, you are in effect specifying $c = 0.6$, saying that false positives are 1.5 (0.6/0.4) times as costly as false negatives. Changing the decision rule is just changing your estimate of the relative costs. So the decision rule in that sense represents your choice about how to use the data and your model, not something to be learned from the data independent of that choice.

This relationship is, however, based on having the true probability model in hand, notated as $\eta(\boldsymbol{x})$ as a function of the covariates $\boldsymbol{x}$ in the linked report. Instead, what you have is an estimated model, notated as $q(\boldsymbol{x})$. Section 7 of the above report states:

While $\eta(\boldsymbol{x})$ may not be well-approximated by the model $q(\boldsymbol{x})$, it may still be possible for each cost $c$ to approximate $\{\eta(\boldsymbol{x})> c\}$ well with $\{q(\boldsymbol{x})> c\}$, but each $c$ requiring a separate model fit $q(.)$.

So you wish to tune the parameters for the model fit $q(.)$ to come close to $\eta(\boldsymbol{x})$ in the sense that they have similar behaviors with respect to the (ideal) decision-rule value $c$. One way to do something like that is instead to find a cutoff probability value for the mis-specified model $q(\boldsymbol{x})$ other than $c$, say $c^\dagger$, that provides the desired model performance (e.g., accuracy) on your data. That is, you try to approximate $\{\eta(\boldsymbol{x})> c\}$ well with $\{q(\boldsymbol{x})> c^\dagger\}$ in a way that suits your purpose. I'll leave it to others to decide whether one should call such a modification of a mis-specified model a "hyper-parameter" choice and, if so, whether that would be "in the strict sense."

One could argue that the choice of decision rule (in the first sense above) should instead be used to tune the modeling approach. A standard logistic regression, with coefficient values determined by maximum likelihood, represents only one of many ways to fit a linear model to data with binary outcomes. Its solution is equivalent to minimizing a log-loss function. Log-loss is a strictly proper scoring rule in the sense that it is optimized at the true probability distribution. There is, however, a wide universe of strictly proper scoring rules from which one might choose; see Sections 2 and 3 of the report linked above. These rules differ in terms of their weighting along the probability scale. The log-loss rule puts high weight near the extremes. If you have a false-positive cost of $c$ in the above formulation, you might want instead to choose a scoring rule that puts more weight on probabilities around $c$.
The report linked above describes these issues extensively, and shows in Section 9 how to use iteratively weighted least squares to fit a linear model based on any proper scoring rule. This approach can be extended to penalization methods like LASSO; Section 15 of the report suggests that shrinkage of coefficients (as provided by LASSO and other penalization methods) can improve performance with some choices of weight function. That said, I suspect that mis-specification of a linear model typically poses more of a problem than the choice of proper scoring rule in practical applications. Optimizing your model near the probability cutoff associated with your choice of relative false positive/negative costs is nevertheless something to consider seriously. For example, that is the approach used in targeted maximum likelihood estimation, in which models are tuned to focus on a particular prediction region of interest. Combining multiple such models can minimize the dangers posed by any one model being mis-specified.
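As a concrete illustration of choosing an alternative cutoff $c^\dagger$ for a possibly mis-specified model (as discussed above), here is a minimal sketch that grid-searches for the cutoff minimizing the expected misclassification cost on held-out data; p_hat (validation-set predicted probabilities) and y (0/1 outcomes) are assumed names, and the grid search is just one simple approach.

cost_fp <- 0.6          # c in the text: relative cost of a false positive
cost_fn <- 1 - cost_fp  # relative cost of a false negative
expected_cost <- function(cut, p_hat, y) {
  pred <- as.integer(p_hat > cut)
  # average cost: false positives weighted by cost_fp, false negatives by cost_fn
  mean(cost_fp * (pred == 1 & y == 0) + cost_fn * (pred == 0 & y == 1))
}
cuts <- seq(0.01, 0.99, by = 0.01)
c_dagger <- cuts[which.min(sapply(cuts, expected_cost, p_hat = p_hat, y = y))]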
28,973
When can't Cramer-Rao lower bound be reached?
There are several instances of (2), namely the case where the variance of a UMVU estimator exceeds the Cramer-Rao lower bound. Here are some common examples:

Estimation of $e^{-\theta}$ when $X_1,\ldots,X_n$ are i.i.d $\mathsf{Poisson}(\theta)$:

Consider the case $n=1$ separately. Here we are to estimate the parametric function $e^{-\theta}=\delta$ (say) based on $X\sim\mathsf{Poisson}(\theta)$. Suppose $T(X)$ is unbiased for $\delta$. Therefore, $$E_{\theta}[T(X)]=\delta\quad,\forall\,\theta$$ Or, $$\sum_{j=0}^\infty T(j)\frac{\delta(\ln (\frac{1}{\delta}))^j}{j!}=\delta\quad,\forall\,\theta$$ That is, $$T(0)\delta+T(1)\delta\cdot\ln\left(\frac{1}{\delta}\right)+\cdots=\delta\quad,\forall\,\theta$$ Comparing the coefficients of the powers of $\ln(1/\delta)$ on both sides gives $T(0)=1$ and $T(j)=0$ for $j\ge 1$. So we have the unique unbiased estimator (hence also the UMVUE) of $\delta(\theta)$: $$T(X)=\begin{cases}1&,\text{ if }X=0 \\ 0&,\text{ otherwise }\end{cases}$$ Clearly, \begin{align} \operatorname{Var}_{\theta}(T(X))&=P_{\theta}(X=0)(1-P_{\theta}(X=0)) \\&=e^{-\theta}(1-e^{-\theta}) \end{align} The Cramer-Rao bound for $\delta$ is $$\text{CRLB}(\delta)=\frac{\left(\frac{d}{d\theta}\delta(\theta)\right)^2}{I(\theta)}\,,$$ where $I(\theta)=E_{\theta}\left[\left(\frac{\partial}{\partial\theta}\ln f_{\theta}(X)\right)^2\right]=\frac1{\theta}$ is the Fisher information, $f_{\theta}$ being the pmf of $X$. This eventually reduces to $$\text{CRLB}(\delta)=\theta e^{-2\theta}$$ Now take the ratio of the variance of $T$ to the Cramer-Rao bound: \begin{align} \frac{\operatorname{Var}_{\theta}(T(X))}{\text{CRLB}(\delta)}&=\frac{e^{-\theta}(1-e^{-\theta})}{\theta e^{-2\theta}} \\&=\frac{e^{\theta}-1}{\theta} \\&=\frac{1}{\theta}\left[\left(1+\theta+\frac{\theta^2}{2}+\cdots\right)-1\right] \\&=1+\frac{\theta}{2}+\cdots \\&>1 \end{align} Exactly the same calculation shows that this conclusion holds for a sample of $n>1$ observations. In that case the UMVUE of $\delta$ is $\left(1-\frac1n\right)^{\sum_{i=1}^n X_i}$, with variance $e^{-2\theta}(e^{\theta/n}-1)$.

Estimation of $\theta$ when $X_1,\ldots,X_n$ ($n>1$) are i.i.d $\mathsf{Exp}$ with mean $1/\theta$:

Here the UMVUE of $\theta$ is $\hat\theta=\frac{n-1}{\sum_{i=1}^n X_i}$, as shown here. Using the Gamma distribution of $\sum\limits_{i=1}^n X_i$, a straightforward calculation shows $$\operatorname{Var}_{\theta}(\hat\theta)=\frac{\theta^2}{n-2}>\frac{\theta^2}{n}=\text{CRLB}(\theta)\quad,\,n>2$$ Since several distributions can be transformed to this exponential distribution, this example in fact generates many more examples.

Estimation of $\theta^2$ when $X_1,\ldots,X_n$ are i.i.d $N(\theta,1)$:

The UMVUE of $\theta^2$ is $\overline X^2-\frac1n$, where $\overline X$ is the sample mean. Among other drawbacks, this estimator can be shown not to attain the lower bound. See page 4 of this note for details.
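As a quick numerical sanity check of the first ($n=1$ Poisson) example, the sketch below simulates $T=\mathbf 1\{X=0\}$ and compares its variance with the bound; the particular $\theta$ is arbitrary.

theta <- 1.5
x <- rpois(1e6, theta)
var(as.numeric(x == 0))   # close to exp(-theta) * (1 - exp(-theta)) ~ 0.173
theta * exp(-2 * theta)   # Cramer-Rao bound ~ 0.075, strictly smaller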
28,974
Reason for absolute value of Jacobian determinant in change-of-variable formula?
For a specific example, in addition to @whuber's advice, let $y=f(x)=-2x$ and $x=g(y)=-y/2$, with $x \in [0,1]$ (the support). Then $y$ ranges over $[-2,0]$, and we have $g'(y)=-1/2$, $f'(x)=-2$. When using the formula, you would normally take the integral $\int p(g(y))\left|\frac{dx}{dy}\right|dy$ from $-2$ to $0$. Without the absolute value, however, the integral actually runs from $0$ to $-2$, since the $x$ and $y$ directions differ, i.e. $$\int_{0}^{-2} p(g(y))\frac{dx}{dy}\,dy=\int_{-2}^{0} p(g(y))\left(-\frac{dx}{dy}\right)dy=\int_{-2}^{0} p(g(y))\left|\frac{dx}{dy}\right|dy$$ The use of the absolute value removes the need to consider the reversed directions (i.e. the negative orientation of $x$ and $y$, which is reflected by the negative derivative).
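A quick numerical check of this example: assuming $X\sim\mathrm{Uniform}(0,1)$, so that $p\equiv 1$ on the support, the density of $Y=-2X$ should be flat at $|dx/dy|=1/2$ on $(-2,0)$.

set.seed(1)
y <- -2 * runif(1e6)                # y = f(x) = -2x with x ~ Uniform(0, 1)
hist(y, breaks = 50, freq = FALSE)  # histogram is flat at height 1/2 on (-2, 0)
abline(h = 1/2, col = "red")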
28,975
LASSO Regression - p-values and coefficients
To expand on what Ben Bolker notes in a comment on another answer, the issue of what a frequentist p-value means for a regression coefficient in LASSO is not at all easy. What's the actual null hypothesis against which you are testing the coefficient values? How do you take into account the fact that LASSO performed on multiple samples from the same population may return wholly different sets of predictors, particularly with the types of correlated predictors that often are seen in practice? How do you take into account that you have used the outcome values as part of the model-building process, for example in the cross-validation or other method you used to select the level of penalty and thus the number of retained predictors? These issues are discussed on this site. This page is one good place to start, with links to the R hdi package that you mention and also to the selectiveInference package, which is also discussed on this page. Statistical Learning with Sparsity covers inference for LASSO in Chapter 6, with references to the literature as of a few years ago. Please don't simply use the p-values returned by those or any other methods for LASSO as simple plug-and-play results. It's important to think why/whether you need p-values and what they really mean in LASSO. If your main interest is in prediction rather than inference, measures of predictive performance would be much more useful to you and to your audience.
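For orientation only, and with all of the above caveats, here is a minimal sketch of obtaining de-sparsified-lasso p-values from the hdi package mentioned above; the predictor matrix x and outcome vector y are assumed to exist.

library(hdi)
fit_proj <- lasso.proj(x, y)  # de-sparsified (de-biased) lasso
fit_proj$pval                 # per-coefficient p-values
fit_proj$pval.corr            # multiplicity-corrected p-values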
28,976
LASSO Regression - p-values and coefficients
Recall that LASSO functions as an elimination process. In other words, it keeps the "best" feature space using CV. One possible remedy is to select the final feature space and feed it back into an lm call. This way, you would be able to compute the statistical significance of the final selected X variables. For instance, see the following code:

library(ISLR)
library(glmnet)

ds <- na.omit(Hitters)
X <- as.matrix(ds[,1:10])

lM_LASSO <- cv.glmnet(X, y = log(ds$Salary),
                      intercept=TRUE, alpha=1,
                      nfolds=nrow(ds), parallel = T)
opt_lam <- lM_LASSO$lambda.min
lM_LASSO <- glmnet(X, y = log(ds$Salary),
                   intercept=TRUE, alpha=1, lambda = opt_lam)
W <- as.matrix(coef(lM_LASSO))
W

                       1
(Intercept) 4.5630727825
AtBat      -0.0021567122
Hits        0.0115095746
HmRun       0.0055676901
Runs        0.0003147141
RBI         0.0001307846
Walks       0.0069978218
Years       0.0485039070
CHits       0.0003636287

keep_X <- rownames(W)[W!=0]
keep_X <- keep_X[!keep_X == "(Intercept)"]
X <- X[,keep_X]
summary(lm(log(ds$Salary)~X))

Call:
lm(formula = log(ds$Salary) ~ X)

Residuals:
     Min       1Q   Median       3Q      Max
-2.23409 -0.45747  0.06435  0.40762  3.02005

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  4.5801734  0.1559086  29.377  < 2e-16 ***
XAtBat      -0.0025470  0.0010447  -2.438  0.01546 *
XHits        0.0126216  0.0039645   3.184  0.00164 **
XHmRun       0.0057538  0.0103619   0.555  0.57919
XRuns        0.0003510  0.0048428   0.072  0.94228
XRBI         0.0002455  0.0045771   0.054  0.95727
XWalks       0.0072372  0.0026936   2.687  0.00769 **
XYears       0.0487293  0.0206030   2.365  0.01877 *
XCHits       0.0003622  0.0001564   2.316  0.02138 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.6251 on 254 degrees of freedom
Multiple R-squared:  0.5209,    Adjusted R-squared:  0.5058
F-statistic: 34.52 on 8 and 254 DF,  p-value: < 2.2e-16

Note that the coefficients are a little different from the ones derived from the glmnet model. Finally, you can use the stargazer package to output a well-formatted table. In this case, we have

stargazer::stargazer(lm(log(ds$Salary)~X), type = "text")

===============================================
                        Dependent variable:
                    ---------------------------
                              Salary)
-----------------------------------------------
XAtBat                       -0.003**
                             (0.001)
XHits                        0.013***
                             (0.004)
XHmRun                        0.006
                             (0.010)
XRuns                        0.0004
                             (0.005)
XRBI                         0.0002
                             (0.005)
XWalks                       0.007***
                             (0.003)
XYears                       0.049**
                             (0.021)
XCHits                       0.0004**
                             (0.0002)
Constant                     4.580***
                             (0.156)
-----------------------------------------------
Observations                   263
R2                            0.521
Adjusted R2                   0.506
Residual Std. Error      0.625 (df = 254)
F Statistic           34.521*** (df = 8; 254)
===============================================
Note:               *p<0.1; **p<0.05; ***p<0.01

Bootstrap

Using a bootstrap approach, I compare the above standard errors with bootstrapped ones as a robustness check. (Note that the statistic must be computed entirely from the resampled data ds_boot, not the original ds, so the outcome is resampled along with the predictors.)

library(boot)
W_boot <- function(ds, indices) {
  ds_boot <- ds[indices,]
  X <- as.matrix(ds_boot[,1:10])
  y <- log(ds_boot$Salary)  # use the resampled outcome
  lM_LASSO <- glmnet(X, y = y,
                     intercept=TRUE, alpha=1, lambda = opt_lam)
  W <- as.matrix(coef(lM_LASSO))
  return(W)
}
results <- boot(data=ds, statistic=W_boot, R=10000)

se1 <- summary(lm(log(ds$Salary)~X))$coef[,2]
se2 <- apply(results$t, 2, sd)
se2 <- se2[W!=0]
plot(se2 ~ se1)
abline(a=0, b=1)

There seems to be a small bias for the intercept. Otherwise, the ad-hoc approach seems to be justified. In any case, you may want to check this thread for further discussion.
28,977
Alluvial plot vs. Sankey diagram
I'm not sure there's any consensus on this.

Wikipedia says that an alluvial diagram is a type of Sankey diagram "that uses the same kind of representation to depict how items re-group".

RAWGraphs likewise says:

Alluvial diagrams are a specific kind of Sankey diagrams: they use the same logic to show how the same set of items regroups according to different dimensions.

Azavea says:

A Sankey diagram visualizes the proportional flow between variables (or nodes) within a network. The term "alluvial diagram" is generally used interchangeably. However, some argue that an alluvial diagram visualizes the changes in the network over time as opposed to across different variables.

Datasmith says they "are profoundly different types of diagram":

Alluvial plot: Shows how a population of facts is allocated across categorical dimensions. Left/right position has no particular significance; dimensions could be in any order. ‘Nodes’ are lined up in columns. Is useful for showing how features of a population are related — for example, answering questions like ‘how many people have features A and B, compared to how many have B but not A?’

Sankey diagram: Shows how quantities flow from one state to another. Left/right position shows movement or change. ‘Nodes’ could be anywhere, and must be laid out by an algorithm. Is useful for showing flows or processes where the amount, size, or population of something needs to be tracked — for example, answering questions like ‘out of the energy in system A, how much came from systems B and C and where will most of it go?’

This seems to be the opposite of the Azavea definition.

The Data Visualisation Catalogue Blog tries to distinguish between them:

Like with a Flow Chart, a Sankey Diagram can include cycles, which is something that distinguishes them from Parallel Sets and Alluvial Diagrams. Also, the flow paths in a Sankey Diagram can combine or split apart at any stage of the system’s process. Whereas on Parallel Sets and Alluvial Diagrams, you tend to get a group of flow paths only going from one side to the other.

Parallel Sets and Alluvial Diagrams aren’t too different from one another. However, the main difference between these two chart types is that Parallel Sets display a part-to-the-whole relationship, while Alluvial Diagrams only visualise quantities between dimensions. Parallel Sets will have line-sets that are all uniform in length, while Alluvial Diagrams will have a lot more variation.
28,978
Find UMVUE of $\frac{1}{\theta}$ where $f_X(x\mid\theta) =\theta(1 +x)^{−(1+\theta)}I_{(0,\infty)}(x)$
Your reasoning is mostly correct. The joint density of the sample $(X_1,X_2,\ldots,X_n)$ is \begin{align} f_{\theta}(x_1,x_2,\ldots,x_n)&=\frac{\theta^n}{\left(\prod_{i=1}^n (1+x_i)\right)^{1+\theta}}\mathbf1_{x_1,x_2,\ldots,x_n>0}\qquad,\,\theta>0 \\\\\implies \ln f_{\theta}(x_1,x_2,\ldots,x_n)&=n\ln(\theta)-(1+\theta)\sum_{i=1}^n\ln(1+x_i)+\ln(\mathbf1_{\min_{1\le i\le n} x_i>0}) \\\\\implies\frac{\partial}{\partial \theta}\ln f_{\theta}(x_1,x_2,\ldots,x_n)&=\frac{n}{\theta}-\sum_{i=1}^n\ln(1+x_i) \\\\&=-n\left(\frac{\sum_{i=1}^n\ln(1+x_i)}{n}-\frac{1}{\theta}\right) \end{align} Thus we have expressed the score function in the form $$\frac{\partial}{\partial \theta}\ln f_{\theta}(x_1,x_2,\ldots,x_n)=k(\theta)\left(T(x_1,x_2,\ldots,x_n)-\frac{1}{\theta}\right)\tag{1}$$ which is the condition for equality in the Cramér-Rao inequality. It is not difficult to verify that $$E(T)=\frac{1}{n}\sum_{i=1}^n\underbrace{E(\ln(1+X_i))}_{=1/\theta}=\frac{1}{\theta}\tag{2}$$

From $(1)$ and $(2)$ we can conclude that:

- The statistic $T(X_1,X_2,\ldots,X_n)$ is an unbiased estimator of $1/\theta$.
- $T$ satisfies the equality condition of the Cramér-Rao inequality.

These two facts together imply that $T$ is the UMVUE of $1/\theta$. The second bullet actually tells us that the variance of $T$ attains the Cramér-Rao lower bound for $1/\theta$. Indeed, as you have shown, $$E_{\theta}\left[\frac{\partial^2}{\partial\theta^2}\ln f_{\theta}(X_1)\right]=-\frac{1}{\theta^2}$$ This implies that the information function for the whole sample is $$I(\theta)=-nE_{\theta}\left[\frac{\partial^2}{\partial\theta^2}\ln f_{\theta}(X_1)\right]=\frac{n}{\theta^2}$$ So the Cramér-Rao lower bound for $1/\theta$, and hence the variance of the UMVUE, is $$\operatorname{Var}(T)=\frac{\left[\frac{d}{d\theta}\left(\frac{1}{\theta}\right)\right]^2}{I(\theta)}=\frac{1}{n\theta^2}$$ Here we have exploited a corollary of the Cramér-Rao inequality, which says that for a family of distributions $f$ parametrised by $\theta$ (assuming the regularity conditions of the CR inequality hold), if a statistic $T$ is unbiased for $g(\theta)$ for some function $g$ and if it satisfies the condition of equality in the CR inequality, namely $$\frac{\partial}{\partial\theta}\ln f_{\theta}(x)=k(\theta)\left(T(x)-g(\theta)\right),$$ then $T$ must be the UMVUE of $g(\theta)$. Since the equality condition holds only in special cases, this argument does not work in every problem. Alternatively, using the Lehmann-Scheffé theorem you could say that $T=\frac{1}{n}\sum_{i=1}^{n} \ln(1+X_i)$ is the UMVUE of $1/\theta$ as it is unbiased for $1/\theta$ and is a complete sufficient statistic for the family of distributions. That $T$ is complete sufficient is clear from the structure of the joint density of the sample in terms of a one-parameter exponential family. But the variance of $T$ might be a little tricky to find directly.
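As a quick sanity check, here is a minimal Monte Carlo sketch (an illustrative addition, not part of the derivation above; the parameter values are arbitrary) confirming that $T$ is unbiased for $1/\theta$ with variance $1/(n\theta^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.5, 50, 200_000

# Inverse-CDF sampling: F(x) = 1 - (1 + x)^(-theta)  =>  X = U^(-1/theta) - 1
u = rng.uniform(size=(reps, n))
x = u ** (-1.0 / theta) - 1.0

t = np.log1p(x).mean(axis=1)              # T for each replicate
print(t.mean(), 1 / theta)                # both ~ 0.4
print(t.var(ddof=1), 1 / (n * theta**2))  # both ~ 0.0032
```

(Equivalently, note that $\ln(1+X_i)\sim\text{Exp}(\theta)$, so the moments of $T$ can be read off directly from the Gamma distribution of the sum.)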
28,979
What is the probability that $X<Y$ given $\min(X,Y)$?
Using the slightly more explicit notation $P(X<Y|\min(X, Y)=m)$, where $m$ is a real number, not a random variable. The set on which $\min(X,Y) = m$ is an L shaped path with two half-open segments: one going straight up from the point $(m,m)$ and another going straight to the right from this same point. It's clear that on the vertical leg, $x<y$ and on the horizontal leg $x>y$. Given this geometric intuition it's easy to rewrite the problem in an equivalent form, where in the numerator we have only the vertical leg where $x<y$ and in the denominator we have the sum of the two legs. $P(X<Y|\min(X, Y)=m) = \frac{ \displaystyle P(m<Y|X=m) }{ \displaystyle P(m<Y|X=m) + P(m<X|Y=m) } \tag{1}$ So now we need to calculate two expressions of the form $P(m<X|Y=m)$. Such conditional probabilities of the bivariate normal distribution always have a normal distribution $\mathcal{N}\left(\mu_{X|Y=m}, s^2_{X|Y=m}\right)$ with parameters: $\mu_{X|Y=m} = \mu_1+\frac{\displaystyle \sigma_{12}}{\displaystyle \sigma_{22}}({m}-\mu_2) \tag{2}$ $s^2_{X|Y=m} = \sigma_{11}-\frac{\displaystyle \sigma_{12}^2}{\displaystyle \sigma_{22}} \tag{3} $ Note that in the original problem definition, $\sigma_{ij}$ referred to elements of the covariance matrix, contrary to the more common convention of using $\sigma$ for standard deviation. Below, we will find it more convenient to use $s^2$ for the variance and $s$ for the standard deviation of the conditional probability distribution. Knowing these two parameters, we can calculate the probability that $m<X$ from the cumulative distribution function. $P(m<X|Y=m) = \Phi \left(\frac{\displaystyle \mu_{X|Y=m} -m}{\displaystyle s_{X|Y=m}} \right) \tag{4}$ Mutatis mutandis, we have a similar expression for $P(Y>m|X=m)$. Let $ z_{X|Y=m} = \frac{\displaystyle \mu_{X|Y=m} - m}{\displaystyle s_{X|Y=m}} \tag{5} $ and $ z_{Y|X=m} = \frac{\displaystyle \mu_{Y|X=m} -m}{\displaystyle s_{Y|X=m}} \tag{6} $ Then we can write the complete solution compactly in terms of these two $z$ scores: $ P(X<Y|\min(X, Y)=m) = 1 - \frac{ \displaystyle \Phi(z_{X|Y=m}) }{ \displaystyle \Phi(z_{X|Y=m})+\Phi(z_{Y|X=m}) } \tag{7}$ Based on simulation code provided by the question author, this theoretical result can be compared to the simulated results.
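For reference, a minimal Python sketch of equation $(7)$ (an illustrative addition; the function name and argument layout are mine, not part of the original post):

```python
import numpy as np
from scipy.stats import norm

def p_x_less_y_given_min(m, mu, Sigma):
    """Equation (7): P(X < Y | min(X, Y) = m) for a bivariate normal."""
    mu1, mu2 = mu
    s11, s12, s22 = Sigma[0][0], Sigma[0][1], Sigma[1][1]
    # conditional moments, equations (2)-(3) and their X <-> Y counterparts
    mu_x = mu1 + s12 / s22 * (m - mu2)
    mu_y = mu2 + s12 / s11 * (m - mu1)
    s_x = np.sqrt(s11 - s12**2 / s22)
    s_y = np.sqrt(s22 - s12**2 / s11)
    # z-scores, equations (5)-(6), plugged into equation (7)
    zx = (mu_x - m) / s_x
    zy = (mu_y - m) / s_y
    return 1 - norm.cdf(zx) / (norm.cdf(zx) + norm.cdf(zy))

print(p_x_less_y_given_min(0.0, [0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]]))
```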
28,980
What is the probability that $X<Y$ given $\min(X,Y)$?
The question can be rewritten using a modified version of Bayes theorem (and an abuse of notation for $Pr$) \begin{align} Pr(X<Y|min(X,Y) = m) &= \frac{Pr(min(X,Y)=m|X<Y)Pr(X<Y)}{Pr(min(X,Y)=m|X<Y)Pr(X<Y)+Pr(min(X,Y)=m|X\geq Y)Pr(X\geq Y)}\\ &= \frac{Pr(X<Y,min(X,Y)=m)}{Pr(X<Y,min(X,Y)=m)+Pr(X\geq Y,min(X,Y)=m)}. \end{align} Define $f_{X,Y}$ to be the bivariate PDF of $X$ and $Y$, $\phi(x) = \frac{1}{\sqrt{2\pi}}\exp(-\frac{1}{2}x^2)$ and $\Phi(x) = \int_{-\infty}^x\phi(t)dt$. Then \begin{align} Pr(X<Y,min(X,Y)=m) &=Pr(X=m,Y>m) \\ &= \int_m^\infty f_{X,Y}(m,t)dt \end{align} and \begin{align} Pr(X\geq Y,min(X,Y)=m) &=Pr(X\geq m,Y=m) \\ &= \int_m^\infty f_{X,Y}(t,m)dt \end{align} Using normality and the definition of conditional probability, the integrands can be rewritten as $$f_{X,Y}(m,t) = f_{Y|X}(t)f_X(m) = \frac{1}{\sqrt{\sigma_{Y|X}}}\phi\left(\frac{t-\mu_{Y|X}}{\sqrt{\sigma_{Y|X}}}\right)\frac{1}{\sqrt{\sigma_{11}}}\phi\left(\frac{m-\mu_1}{\sqrt{\sigma_{11}}}\right)$$ and $$f_{X,Y}(t,m) = f_{X|Y}(t)f_Y(m) = \frac{1}{\sqrt{\sigma_{X|Y}}}\phi\left(\frac{t-\mu_{X|Y}}{\sqrt{\sigma_{X|Y}}}\right)\frac{1}{\sqrt{\sigma_{22}}}\phi\left(\frac{m-\mu_2}{\sqrt{\sigma_{22}}}\right),$$ where $$\mu_{X|Y} = \mu_1 + \frac{\sigma_{12}}{\sigma_{22}}(m-\mu_2),$$ $$\mu_{Y|X} = \mu_2 + \frac{\sigma_{12}}{\sigma_{11}}(m-\mu_1),$$ $$\sigma_{X|Y} = \left(1-\frac{\sigma_{12}^2}{\sigma_{11}\sigma_{22}}\right)\sigma_{11}$$ and $$\sigma_{Y|X} = \left(1-\frac{\sigma_{12}^2}{\sigma_{11}\sigma_{22}}\right)\sigma_{22}.$$ Thus \begin{equation} Pr(X<Y|min(X,Y) = m) = \frac{\left(1-\Phi\left(\frac{m-\mu_{Y|X}}{\sqrt{\sigma_{Y|X}}}\right)\right)\frac{1}{\sqrt{\sigma_{11}}}\phi\left(\frac{m-\mu_1}{\sqrt{\sigma_{11}}}\right)}{\left(1-\Phi\left(\frac{m-\mu_{Y|X}}{\sqrt{\sigma_{Y|X}}}\right)\right)\frac{1}{\sqrt{\sigma_{11}}}\phi\left(\frac{m-\mu_1}{\sqrt{\sigma_{11}}}\right)+\left(1-\Phi\left(\frac{m-\mu_{X|Y}}{\sqrt{\sigma_{X|Y}}}\right)\right)\frac{1}{\sqrt{\sigma_{22}}}\phi\left(\frac{m-\mu_2}{\sqrt{\sigma_{22}}}\right)}. \end{equation} This final form is very similar to the result @olooney arrived at. The difference is his probabilities are not weighted by the normal densities. An R script for numerical verification can be found here.
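The R script itself is not reproduced here, but a hypothetical Python analogue of the verification might look like this (parameter values are arbitrary; the Monte Carlo part conditions on $\min(X,Y)$ falling in a narrow band around $m$):

```python
import numpy as np
from scipy.stats import norm

mu1, mu2, s11, s22, s12 = 0.0, 0.5, 1.0, 2.0, 0.6
m = -0.3

rho2 = s12**2 / (s11 * s22)
mu_xy = mu1 + s12 / s22 * (m - mu2)   # mean of X | Y = m
mu_yx = mu2 + s12 / s11 * (m - mu1)   # mean of Y | X = m
v_xy = (1 - rho2) * s11               # variance of X | Y = m
v_yx = (1 - rho2) * s22               # variance of Y | X = m

num = norm.sf(m, mu_yx, np.sqrt(v_yx)) * norm.pdf(m, mu1, np.sqrt(s11))
den = num + norm.sf(m, mu_xy, np.sqrt(v_xy)) * norm.pdf(m, mu2, np.sqrt(s22))
print(num / den)                      # closed-form probability

# Monte Carlo: keep draws whose minimum lands near m, then check X < Y
rng = np.random.default_rng(1)
xy = rng.multivariate_normal([mu1, mu2], [[s11, s12], [s12, s22]], size=2_000_000)
mn = xy.min(axis=1)
band = np.abs(mn - m) < 0.01
print((xy[band, 0] < xy[band, 1]).mean())  # agrees up to Monte Carlo error
```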
28,981
Independence of statistics from gamma distribution
There is a cute, simple, intuitively obvious demonstration for integral $\alpha.$ It relies only on well-known properties of the uniform distribution, Gamma distribution, Poisson processes, and random variables and goes like this: Each $X_i$ is the waiting time until $\alpha$ points of a Poisson process occur. The sum $Y = X_1+X_2+\cdots + X_n$ therefore is the waiting time until $n\alpha$ points of that process occur. Let's call these points $Z_1, Z_2, \ldots, Z_{n\alpha}.$ Conditional on $Y$, the first $n\alpha-1$ points are independently uniformly distributed between $0$ and $Y.$ Therefore the ratios $Z_i/Y,\ i=1,2,\ldots, n\alpha-1$ are independently uniformly distributed between $0$ and $1.$ In particular, their distributions do not depend on $Y.$ Consequently, any (measurable) function of the $Z_i/Y$ is independent of $Y.$ Among such functions are $$\eqalign{X_1/Y &= Z_{[\alpha]}/Y\\ X_2/Y &= Z_{[2\alpha]}/Y - Z_{[\alpha]}/Y\\ \ldots\\ X_{n-1}/Y &= Z_{[(n-1)\alpha]}/Y - Z_{[(n-2)\alpha]}/Y\\ X_n/Y &= 1 - Z_{[(n-1)\alpha]}/Y}$$ (where the brackets $[]$ denote the order statistics of the $Z_i$). At this point, simply note that $S^2/\bar X^2$ can be written explicitly as a (measurable) function of the $X_i/Y$ and therefore is independent of $\bar X = Y/n.$
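A quick simulation sanity check of the conclusion (an illustrative addition, not part of the argument above) is to draw Gamma samples and verify that $S^2/\bar X^2$ behaves the same way regardless of $\bar X$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, reps = 3, 5, 400_000
x = rng.gamma(alpha, size=(reps, n))

xbar = x.mean(axis=1)
stat = x.var(axis=1, ddof=1) / xbar**2      # S^2 / Xbar^2

print(np.corrcoef(xbar, stat)[0, 1])        # ~ 0
# the statistic's distribution should not depend on which half Xbar falls in
lo = stat[xbar < np.median(xbar)]
hi = stat[xbar >= np.median(xbar)]
print(lo.mean(), hi.mean())                 # approximately equal
```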
28,982
Independence of statistics from gamma distribution
You want to prove that the mean $\bar{X}$ and the $n$ random variables $X_i/\bar{X}$ are independent, or equivalently that the sum $U := \sum X_i$ and the $n$ ratios $W_i := X_i / U$ are independent. We can prove a slightly more general result by assuming that the $X_i$ have possibly different shapes $\alpha_i$, but the same scale $\beta>0$, which can be assumed to be $\beta = 1$. Consider the joint Laplace transform of $U$ and $\mathbf{W}=[W_i]_{i=1}^n$, i.e., $$\psi(t,\,\mathbf{z}) := \text{E}\left\{\exp\left[-tU - \mathbf{z}^\top \mathbf{W}\right]\right\} = \text{E}\left\{ \exp\left[-t \sum_i X_i - \sum_i z_i \,\frac{X_i}{U} \right] \right\} $$ This can be expressed as an $n$-dimensional integral over $(0, \infty)^n$: $$ \psi(t,\,\mathbf{z}) = \text{Cst} \, \int \exp \left[- (1 + t)(x_1 + \dots + x_n) - \frac{z_1 x_1 + \dots + z_n x_n}{x_1 + \dots + x_n} \right] \, x_1 ^{\alpha_1 - 1} \dots \, x_n^{\alpha_n - 1} \text{d}\mathbf{x} $$ where the constant does not depend on $\mathbf{x}$. If we introduce new variables under the integral sign by setting $\mathbf{y} := (1 + t)\, \mathbf{x}$, we see easily that the integral can be written as a product of two functions, one depending on $t$, the other depending on the vector $\mathbf{z}$. This proves that $U$ and $\mathbf{W}$ are independent. Disclaimer. This question relates to Lukacs' theorem on proportion-sum independence, hence to the article by Eugene Lukacs, A Characterization of the Gamma Distribution. I just extracted here the relevant part of this article (namely p. 324), with some changes in the notations. I also replaced the use of the characteristic function by that of the Laplace transform to avoid changes of variables involving complex numbers.
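An illustrative Monte Carlo check of the factorisation (my addition; the shapes and evaluation points are arbitrary): if $U$ and $\mathbf{W}$ are independent, then $\psi(t,\mathbf{z})$ must equal $\psi(t,\mathbf{0})\,\psi(0,\mathbf{z})$.

```python
import numpy as np

rng = np.random.default_rng(0)
alphas = [0.7, 2.0, 3.5]            # different shapes, common scale 1
x = np.column_stack([rng.gamma(a, size=1_000_000) for a in alphas])
u = x.sum(axis=1)
w = x / u[:, None]

t, z = 0.4, np.array([0.3, 1.1, 0.2])
joint = np.mean(np.exp(-t * u - w @ z))
prod = np.mean(np.exp(-t * u)) * np.mean(np.exp(-(w @ z)))
print(joint, prod)                  # nearly equal
```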
28,983
Independence of statistics from gamma distribution
Let $U=\sum_i X_i$. Note that $(X_i /U)_i$ is an ancillary statistic for $\beta$, i.e. its distribution does not depend on $\beta$. Since $U$ is a complete sufficient statistic for $\beta$, it is independent of $(X_i /U)_i$ by Basu's theorem, so the conclusion follows. One caveat: the statistic is ancillary only with respect to $\beta$ (its distribution is free of $\beta$ but still depends on $\alpha$), so the argument treats $\alpha$ as fixed and known.
28,984
Are time series motifs and the Matrix profile algorithm a good fit for my problem?
Yes, the Matrix Profile allows discord discovery, which is very competitive for anomaly detection (according to multiple independent tests). And yes, while "finding similarities among time series" is a bit too vague to respond to precisely, the Matrix Profile does do that. If you write to the author of the tutorial (me) with some data samples, he will advise more.
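As a minimal illustration (my addition, not from the tutorial), here is a sketch using the stumpy library's implementation of the Matrix Profile; the series, window length, and injected anomaly are placeholders:

```python
import numpy as np
import stumpy  # assumes stumpy is installed (pip install stumpy)

ts = np.random.default_rng(0).normal(size=1000)
ts[500:520] += 5.0                       # inject an anomaly

m = 50                                   # subsequence window length
mp = stumpy.stump(ts, m)                 # column 0 holds the matrix profile

profile = mp[:, 0].astype(float)
print(int(np.argmax(profile)))           # discord: most anomalous subsequence
print(int(np.argmin(profile)))           # motif: best-repeated subsequence
```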
28,985
What is a good proposal distribution for Metropolis-Hastings for strictly positive parameters?
The most natural [and generic] resolution [imo] is to turn $\theta$ into $\eta=\log\theta$ in the original problem so that $\eta$ is unconstrained. This allows for the use of random walk proposals like Metropolis et al.'s. The only warning is that the prior must incorporate the change of variable through a Jacobian: $$\pi_\eta(\eta)=\pi_\theta(\exp\{\eta\})\times\exp\{\eta\}$$ Warning: This proposal is only equivalent to proposing a log-normal new value in the original parameterisation if the proper Metropolis ratio is used [in the original parameterisation, the proposal is no longer a random walk]. (Note that an exponential change of variables turns the likelihood $p(D|θ,ν)$ into $p(D|\exp\{η\},ν)$, without a Jacobian there!) Otherwise, a Uniform $\text{U}(\theta^\text{old}-\epsilon,\theta^\text{old}+\epsilon)$ proposal, as in Hastings (1970), can be used instead, with the potential to propose negative [and hence surely rejected] values.
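A minimal random-walk Metropolis sketch on $\eta=\log\theta$ (an illustrative addition; the Gamma target and step size are placeholders for your posterior):

```python
import numpy as np
from scipy.stats import gamma

def log_post(theta):
    # placeholder target on theta > 0: a Gamma(3, 1) density
    return gamma.logpdf(theta, a=3.0)

rng = np.random.default_rng(0)
eta = 0.0                                # current state on the log scale
draws = []
for _ in range(20_000):
    eta_new = eta + rng.normal(scale=0.5)          # random walk in eta
    # log acceptance ratio includes the Jacobian term exp(eta), i.e. "+ eta"
    log_r = (log_post(np.exp(eta_new)) + eta_new) - (log_post(np.exp(eta)) + eta)
    if np.log(rng.uniform()) < log_r:
        eta = eta_new
    draws.append(np.exp(eta))                      # back-transform to theta

print(np.mean(draws))                              # ~ 3 for a Gamma(3, 1) target
```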
28,986
Does $\mathbb{Cov} \left(f(X),Y\right) = 0 \; \forall \; f(.)$ imply independence of $X$ and $Y$?
Let's begin with the intuition. The slope of the ordinary least squares regression of $Y$ against $h(X)$, for any function $h$, is proportional to the covariance of $h(X)$ and $Y$. The assumption is that all these regressions are zero (not just the linear ones). If you imagine $(X,Y)$ represented by a point cloud (really, a probability density cloud), then no matter how you slice it vertically and reorder the slices (which carries out the mapping $h$), the regression remains zero. This implies the conditional expectations of $Y$ (which are the regression function) are all constant. We could screw around with the conditional distributions while keeping the expectations constant, thereby ruining any chance of independence. We therefore should expect that the conclusion does not always hold. There are simple counterexamples. Consider a sample space of nine abstract elements $$\Omega = \{\omega_{i,j}\mid -1 \le i,j \le 1\}$$ and a discrete measure with probability determined by $$\mathbb{P}(\omega_{0,0})=0;\ \mathbb{P}(\omega_{i,0})=1/5\,(i=\pm 1);\ \mathbb{P}(\omega_{i,j})=1/10\text{ otherwise.}$$ Define $$X(\omega_{i,j})=j,\ Y(\omega_{i,j})=i.$$ We could display these probabilities as an array $$\pmatrix{1&2&1\\1&0&1\\1&2&1}$$ (with all entries multiplied by $1/10$) indexed in both directions by the values $-1,0,1$. The marginal probabilities are $$f_X(-1)=f_X(1)=3/10;\, f_X(0)=4/10$$ and $$f_Y(-1)=f_Y(1)=4/10;\, f_Y(0)=2/10,$$ as computed by the column sums and row sums of the array, respectively. Since $$f_X(0)f_Y(0)=(4/10)(2/10)\ne 0=\mathbb{P}(\omega_{0,0})=f_{XY}(0,0),$$ these variables are not independent. This was constructed to make the conditional distribution of $Y$ when $X=0$ different from the other conditional distributions for $X=\pm 1$. You can see this by comparing the middle column of the matrix to the other columns. The symmetry in the $Y$ coordinates and in all the conditional probabilities immediately shows all conditional expectations are zero, whence all covariances are zero, no matter how the associated values of $X$ might be reassigned to the columns. For those who might remain unconvinced, the counterexample may be demonstrated through direct computation--there are only $27$ functions that have to be considered and for each of them the covariance is zero.
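That direct computation is small enough to script; here is an illustrative brute-force check (my addition) over all $27$ functions $f:\{-1,0,1\}\to\{-1,0,1\}$:

```python
import itertools
import numpy as np

vals = [-1, 0, 1]
# joint pmf: rows are Y = -1, 0, 1 and columns are X = -1, 0, 1
p = np.array([[1, 2, 1],
              [1, 0, 1],
              [1, 2, 1]]) / 10.0

py, px = p.sum(axis=1), p.sum(axis=0)          # marginals of Y and X
print(np.allclose(p, np.outer(py, px)))        # False: X and Y are dependent

ey = np.dot(vals, py)                          # E[Y] = 0 by symmetry
for f in itertools.product(vals, repeat=3):    # f = (f(-1), f(0), f(1))
    efx = np.dot(f, px)
    efxy = sum(p[i, j] * vals[i] * f[j] for i in range(3) for j in range(3))
    assert abs(efxy - efx * ey) < 1e-12        # Cov(f(X), Y) = 0
print("all 27 covariances are zero")
```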
28,987
Expected Fisher's information matrix for Student's t-distribution?
It was brought to my attention that Lange et al. (1989) derived the expected Fisher information for the multivariate t-distribution in Appendix B. Therefore, I got the answer I wanted; you can regard this question as answered! In particular, using the result of Lange et al., I derived the following Fisher information matrix for the univariate t-distribution (with fixed degrees of freedom parameter $v$): \begin{align*} \boldsymbol{I}=\begin{bmatrix} \frac{v+1}{(v+3)\sigma^2} & 0 \\ 0 & \frac{v}{2(v+3)\sigma^4} \end{bmatrix} \end{align*}
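This matrix is easy to verify by Monte Carlo (an illustrative addition, not from Lange et al.; the score derivatives below follow from the location-scale t log-density, and the parameter values are arbitrary):

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(0)
v, mu, s2 = 5.0, 1.0, 2.0
x = student_t.rvs(df=v, loc=mu, scale=np.sqrt(s2), size=2_000_000,
                  random_state=rng)

r = x - mu
d_mu = (v + 1) * r / (v * s2 + r**2)                            # d log f / d mu
d_s2 = -0.5 / s2 + (v + 1) * r**2 / (2 * s2 * (v * s2 + r**2))  # d log f / d sigma^2

print(np.mean(d_mu**2), (v + 1) / ((v + 3) * s2))          # both ~ 0.375
print(np.mean(d_s2**2), v / (2 * (v + 3) * s2**2))         # both ~ 0.078
print(np.mean(d_mu * d_s2))                                # ~ 0 (off-diagonal)
```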
28,988
Expected Fisher's information matrix for Student's t-distribution?
It is not difficult (but a bit tedious) by using the formula $$ \mathcal{I}(\mu, \sigma^2) = \mathbb{E}\left[\begin{pmatrix}{\left(\frac{\partial}{\partial\mu} \log f(Y)\right)}^2 & \left(\frac{\partial}{\partial\mu} \log f(Y)\right)\left(\frac{\partial}{\partial\sigma^2} \log f(Y)\right) \\ \left(\frac{\partial}{\partial\mu} \log f(Y)\right)\left(\frac{\partial}{\partial\sigma^2} \log f(Y)\right) & {\left(\frac{\partial}{\partial\sigma^2} \log f(Y)\right)}^2 \end{pmatrix} \right]. $$ First, observe that by the change of variables $y \mapsto y-\mu$ in any involved integral, one can take $\mu=0$ in the calculations. The calculations rely on the following integral: $$ I(\lambda, a, b) := \int_0^\infty y^{2a-1}\left(1+\frac{1}{\lambda}y^2\right)^{-\frac{2a+b}{2}} \textrm{d}y = \frac{\lambda^a}{2} B\left(a, \frac{b}{2}\right). $$ This equality is obtained by the change of variables $y \mapsto y^2$ and with the help of the density of the Beta prime distribution. Observe that the integrand is an even function when $2a-1$ is an even integer, hence $$ J(\lambda, a, b) := \int_{-\infty}^{+\infty} y^{2a-1}\left(1+\frac{1}{\lambda}y^2\right)^{-\frac{2a+b}{2}} \textrm{d}y = 2 I(\lambda, a, b) = \lambda^a B\left(a, \frac{b}{2}\right). $$ I will detail only the first calculation. Set $$ K(\nu, \sigma)=\frac{1}{B\left(\frac12,\frac{\nu}{2}\right)}\frac{1}{\sqrt{\nu \sigma^2}}, $$ the normalization constant of the density. One has $$ \begin{align} \mathbb{E}\left[{\left(\frac{\partial}{\partial\mu} \log f(Y)\right)}^2 \right] = K(\nu, \sigma){\left(\frac{\nu+1}{\nu\sigma^2}\right)}^2 J\left(\nu\sigma^2, \frac{3}{2}, \nu+2\right). \end{align} $$ Since $\frac{B\left(\frac12,\frac{\nu}{2}\right)}{B\left(\frac{3}{2},\frac{\nu+2}{2}\right)} = \frac{B\left(\frac12,\frac{\nu}{2}\right)}{B\left(\frac{3}{2},\frac{\nu}{2}\right)}\frac{B\left(\frac{3}{2},\frac{\nu}{2}\right)}{B\left(\frac{3}{2},\frac{\nu+2}{2}\right)} = \frac{(\nu+1)}{1}\frac{(\nu+3)}{\nu}$, we find $$ \mathbb{E}\left[{\left(\frac{\partial}{\partial\mu} \log f(Y)\right)}^2 \right] = \frac{\nu}{\nu+3}(\nu+1){(\nu\sigma^2)}^{-1/2-2+3/2} = \frac{\nu+1}{(\nu+3)\sigma^2}. $$ The second calculation is easy: $$ \mathbb{E}\left[ \left(\frac{\partial}{\partial\mu} \log f(Y)\right)\left(\frac{\partial}{\partial\sigma^2} \log f(Y)\right)\right] = 0 $$ because it only involves integrals of odd functions. Finally the calculation of $$ \mathbb{E}\left[{\left(\frac{\partial}{\partial\sigma^2} \log f(Y)\right)}^2 \right] $$ is more tedious and I skip it. Its calculation involves integrals $J(\nu\sigma^2, a, b)$ with $2a-1$ an even integer, whose value is given above. I've done the calculations and I've found $$ \frac{{(\nu+1)}^2}{4{(\nu\sigma^4)}^2} K(\nu, \sigma) J\left(\nu\sigma^2, \frac{5}{2},\nu\right) - \frac{\nu+1}{2\nu\sigma^6} K(\nu, \sigma) J\left(\nu\sigma^2, \frac{3}{2},\nu\right) + \frac{1}{4\sigma^4} $$ and this simplifies to $$ \frac{\nu}{2(\nu+3)\sigma^4}. $$
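The key integral can be checked numerically (an illustrative addition; the values of $\lambda$, $a$, $b$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

lam, a, b = 2.0, 1.5, 4.0
val, _ = quad(lambda y: y**(2*a - 1) * (1 + y**2 / lam)**(-(2*a + b) / 2),
              0, np.inf)
print(val, lam**a / 2 * beta(a, b / 2))  # the two values agree
```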
28,989
Why is the Optimal Discriminator $D^{*}_G(x) = \frac{p_\text{data}(x)}{p_\text{data}(x) + p_g(x)}$ in Generative Adversarial Networks?
To understand the change of the variables, we can first take a look at Figure 1 in Generative Adversarial Networks, Goodfellow et al (2014), eprint arXiv:1406.2661. According to the paper, the lower horizontal line is the domain from which $z$ is sampled and the upper horizontal line is part of the domain of $x$. The upward arrows show the transformation $x = g(z)$. Back to the equation, it's clear that: $$\int_z p_Z(z)\log(1-D(g(z)))\,dz=E_{p_Z}[\log(1-D(g(z)))]$$ Since $x = g(z)$, we can replace $g(z)$ with the variable $x$. Also notice that, in this case, $p_g$ is the distribution of $x$. As a result, we have this: $$E_{p_Z}[\log(1-D(g(z)))] = E_{p_g}[\log(1-D(x))]$$ Then we expand the expectation to an integral form: $$E_{p_g}[\log(1-D(x))] = \int_x p_g(x)\log(1-D(x))\,dx$$
28,990
Why is the Optimal Discriminator $D^{*}_G(x) = \frac{p_\text{data}(x)}{p_\text{data}(x) + p_g(x)}$ in Generative Adversarial Networks?
You've basically gotten it. So the definition of $p_g$ (see the first paragraph of Section 4, Theoretical Results) is the distribution of samples $G(z)$ obtained when $z$ comes from distribution $p_z$. Thus $$\int_z p_Z(z)\log(1-D(g(z)))dz=E_{p_Z}[\log(1-D(g(z)))]=E_{p_g}[\log(1-D(x))]$$
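This is just the law of the unconscious statistician, which is easy to check numerically (an illustrative addition; here $g(z)=e^z$ with $z\sim\mathcal N(0,1)$, so $p_g$ is lognormal, and the fixed "discriminator" is a sigmoid):

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)
z = rng.normal(size=1_000_000)
x_from_z = np.exp(z)                       # g(z) = exp(z)
x_direct = lognorm.rvs(s=1.0, size=1_000_000, random_state=rng)  # draws from p_g

h = lambda x: -np.logaddexp(0.0, x)        # log(1 - sigmoid(x)), computed stably
print(np.mean(h(x_from_z)))                # E_{p_Z}[log(1 - D(g(z)))]
print(np.mean(h(x_direct)))                # E_{p_g}[log(1 - D(x))]; agree up to MC error
```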
28,991
Why is the Optimal Discriminator $D^{*}_G(x) = \frac{p_\text{data}(x)}{p_\text{data}(x) + p_g(x)}$ in Generative Adversarial Networks?
Q1: Why can the first line of $V(G,D)$ be changed to the second line of $V(G,D)$? The task is to find the maximum value of $V(G,D)$, so perhaps better notation for the first line would be $$\max[V(G,D)] = \max\left[\int_x p_\text{data}(x)\log(D(x))\,dx + \int_z p_Z(z) \log(1-D(g(z)))\,dz\right]$$ The second line $$\max[V(G,D)]= \max \left[ \int_x p_\text{data}(x)\log (D(x)) + p_g(x) \log(1-D(x)) \, dx\right]$$ has the form $y \mapsto a \log(y) + b \log(1 - y)$ inside the integral, which achieves its maximum in $[0, 1]$ at $\frac a {a+b}$. The change of variables $x = g(z)$, under which the $z$-integral becomes an integral over $x$ against $p_g$, is what lets the first line be rewritten as the second. Q2: Is the condition $g'(z)=1$ appropriate? I think the problem suggests that $V(G,D) \neq \max[V(G,D)]$ except when $D(x) = D^{*}_G(x) = \frac{p_\text{data}(x)}{p_\text{data}(x) + p_g(x)}$. The maximiser has the form $\frac a {a+b}$, with $a = p_\text{data}(x)$ and $b = p_g(x)$.
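A one-liner confirms the maximiser of $a\log(y)+b\log(1-y)$ (an illustrative addition; $a$ and $b$ are arbitrary positive numbers):

```python
import numpy as np

a, b = 0.7, 0.2
y = np.linspace(1e-6, 1 - 1e-6, 100_001)
idx = np.argmax(a * np.log(y) + b * np.log(1 - y))
print(y[idx], a / (a + b))  # both ~ 0.7778
```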
28,992
Why is the Optimal Discriminator $D^{*}_G(x) = \frac{p_\text{data}(x)}{p_\text{data}(x) + p_g(x)}$ in Generative Adversarial Networks?
Since $z \mapsto G(z)$ is a deterministic mapping from $\mathcal{Z}$ to $\mathcal{X}$, let $y = G(z)$, then $p(y|z) = \delta(y - G(z))$. Therefore $$\begin{split} \int_{\mathcal{X}} p_g(y)\log(1 - D(y)) dy & = \int_{\mathcal{X}} \left[\int_{\mathcal{Z}}p(z,y)dz\right]\log(1-D(y))dy \\ & = \int_{\mathcal{X}} \left[\int_{\mathcal{Z}}p(z)p(y|z)dz\right]\log(1-D(y))dy \\ & = \int_{\mathcal{Z}}p(z)\left[\int_{\mathcal{X}}p(y|z)\log(1-D(y))dy\right]dz \\ & = \int_{\mathcal{Z}}p(z)\left[\int_{\mathcal{X}}\delta(y - G(z))\log(1 - D(y))dy\right]dz \\ & = \int_{\mathcal{Z}}p(z)\left[\delta(y-G(z)) * \log(1-D(y))\right]dz \\ & = \int_{\mathcal{Z}}p(z)\log(1 - D(G(z)))dz. \end{split}$$ The move from the second to the third right-hand side swaps the order of integration; the last step uses the sifting (convolution) property of the Dirac delta function.
28,993
Why is the Optimal Discriminator $D^{*}_G(x) = \frac{p_\text{data}(x)}{p_\text{data}(x) + p_g(x)}$ in Generative Adversarial Networks?
The only thing you seem to be missing is the change of variable formula for probabilities, which states that the distribution of a random variable, $X$, that is transformed to $Y = f(X)$ by the function $f$ is given by $$p_Y(y) = p_X(f^{-1}(y)) \left|f'(f^{-1}(y))\right|^{-1}.$$ Therefore, if we write out the substitution $x = g(z)$ in the integral, this change of variables formula magically appears: $$\int_z p_Z(z)\log(1-D(g(z)))\,dz = \int_x \underbrace{\frac{1}{g'(g^{-1}(x))} p_Z(g^{-1}(x))}_{=p_{g(z)}(x)} \log(1 - D(x))\,dx$$ Note that this ignores the absolute value. I am not quite sure whether/how much it matters in this case (since the gradient of the generator is definitely not guaranteed to be positive).
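As a concrete check of this formula (an illustrative addition), take $g(z)=e^z$ with $z\sim\mathcal N(0,1)$; the formula then reproduces the lognormal density:

```python
import numpy as np
from scipy.stats import norm, lognorm

x = np.linspace(0.1, 5.0, 50)
# p_X(x) = p_Z(g^{-1}(x)) * |g'(g^{-1}(x))|^{-1};  here g^{-1}(x) = log(x)
# and g'(z) = exp(z), so |g'(g^{-1}(x))| = x
p_formula = norm.pdf(np.log(x)) / x
print(np.allclose(p_formula, lognorm.pdf(x, s=1.0)))  # True
```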
28,994
Why is the Optimal Discriminator $D^{*}_G(x) = \frac{p_\text{data}(x)}{p_\text{data}(x) + p_g(x)}$ in Generative Adversarial Networks?
Q1: Why can the first line of V(G,D) be changed to the second line of V(G,D)?

The change of random variable is a standard result from probability theory (see e.g. Papoulis' book Probability, Random Variables, and Stochastic Processes). In the 1-D case, for some arbitrary function $f(z)$ the expectation is given by: ${\rm E}_z[f(z)] = \int_{-\infty}^{\infty} f(z){\rm p}_z(z) dz$. If we apply an invertible change of variable by letting $x=g(z)$, where $x$ is a scalar random variable and the PDFs satisfy ${\rm p}_x(x) dx={\rm p}_z(z) dz$, we obtain: ${\rm E}_z[f(z)]=\int_{-\infty}^{\infty} f(g^{-1}(x)) {\rm p}_x(x) dx={\rm E}_x[f(g^{-1}(x))]$ The result generalises to (i) many-to-one scalar transformations; (ii) invertible vector transformations $G(\cdot)$ with ${\rm dim}(x)={\rm dim}(z)$; and (iii) vector transformations where ${\rm dim}(x)<{\rm dim}(z)$. In case (i), the PDF is composed of a sum of terms, each of which corresponds to a root of the equation $g(z)=x$. In case (ii), the two PDFs are linked via the determinant of the Jacobian. In the case where ${\rm dim}(x)>{\rm dim}(z)$, the PDF ${\rm p}_x(x)$ is non-unique and degenerate (it contains delta functions). This situation is illustrated by Example 2.1 in the paper (IEEE Access, Dec. 2021) https://www.researchgate.net/publication/356815736_Convergence_and_Optimality_Analysis_of_Low-Dimensional_Generative_Adversarial_Networks_using_Error_Function_Integrals

Q2: In my own trial to change the V(G,D), the above condition was needed. Is it an appropriate condition?!

You need to pay attention to the dimensions of $x$ and $z$. For a scalar change of variables $x=g(z)$, clearly $dx/dz=g'(z)$ so ${\rm p}_x(x)={\rm p}_z(z) dz/dx$, which is the same as your equation. This requires $x$ and $z$ to be scalar random variables. For vectors of the same dimension, you can use the Jacobian. However, as explained above, when ${\rm dim}(x)>{\rm dim}(z)$, the PDF of $x$ is degenerate. Although the expectation result still holds, the method used to obtain the optimal discriminator does not. This is because the latter uses calculus of variations, which requires continuously differentiable integrands. The integrand as a function of $x$ and $D$ is not continuously differentiable when ${\rm dim}(x)>{\rm dim}(z)$, so the optimal discriminator does not exist in this case. This case is actually the one of practical interest and the counter-examples provided in the reference (based on the arguments in this post) invalidate the theoretical results in Goodfellow et al's 2014 GAN paper where they are based on Proposition 1 [Optimal Discriminator]. This is not to say the algorithms don't work: they clearly do in many cases and are extremely useful for machine learning; however the theory does not stand up to scrutiny when the data dimension ${\rm dim}(x)$ exceeds the latent variable dimension ${\rm dim}(z)$. These points are explained further in the paper "Convergence and Optimality Analysis of Low-Dimensional Generative Adversarial Networks using Error Function Integrals" for which the link was given above. A practical demonstration appeared in a paper by Qin et al. (NIPS 2020) "Training Generative Adversarial Networks by Solving Ordinary Differential Equations." They showed on CIFAR-10 with ${\rm dim}(x)=3072>{\rm dim}(z)=256$ that neither the discriminator nor the generator losses converged to the predicted "Nash equilibrium" values, whereas the Nash equilibrium values were obtained on a ${\rm dim}(x)=2\leq {\rm dim}(z)=32$ Gaussian mixture simulation.
28,995
Example of how Bayesian Statistics can estimate parameters that are very challenging to estimate through frequentist methods
I have objections to that quote.

"Frequentism" is an approach to inference that is based on the frequency properties of the chosen estimators. This is a vague notion, in that it does not even state that the estimators must converge or, if they do, how they must converge. For instance, unbiasedness is a frequentist notion, but it cannot hold for any and every function [of the parameter $\theta$] of interest, since the collection of transforms of $\theta$ that allow for an unbiased estimator is very restricted. Further, a frequentist estimator is not produced by the paradigm but must first be chosen before being evaluated. In that sense, a Bayesian estimator is a frequentist estimator if it satisfies some frequentist property.

The inference produced by a Bayesian approach is based on the posterior distribution, represented by its density $\pi(\theta|\mathfrak{D})$. I do not understand how the term "exact" can be attached to $\pi(\theta|\mathfrak{D})$. It is uniquely associated with a prior distribution $\pi(\theta)$ and it is exactly derived by Bayes' theorem. But it does not return exact inference, in that the point estimate is not the true value of the parameter $\theta$, and it produces exact probability statements only within the framework provided by the pair prior × likelihood. Changing one term in the pair modifies the posterior and the inference, while there is no generic argument for defending a single prior or likelihood. Similarly, other probability statements like "the true parameter has a probability of 0.95 of falling in a 95% credible interval", found in the same page of this SAS documentation, have a meaning relative to the framework of the posterior distribution but not in absolute value.

From a computational perspective, it is true that a Bayesian approach may often return exact or approximate answers in cases where a standard classical approach fails. This is for instance the case for latent [or missing] variable models$$f(x|\theta)=\int g(x,z|\theta)\,\text{d}z$$where $g(x,z|\theta)$ is a joint density for the pair $(X,Z)$ and where $Z$ is not observed. Producing estimates of $\theta$ and of its posterior by simulation of the pair $(\theta,\mathfrak{Z})$ may prove much easier than deriving a maximum likelihood [frequentist?] estimator. A practical example of this setting is Kingman's coalescent model in population genetics, where the evolution of populations from a common ancestor involves latent events on binary trees. This model can be handled by [approximate] Bayesian inference through an algorithm called ABC, even though there also exist non-Bayesian software resolutions.

However, even in such cases, I do not think that Bayesian inference is the only possible resolution. Machine-learning techniques like neural nets, random forests, and deep learning can be classified as frequentist methods, since they train on a sample by cross-validation, minimising an error or distance criterion that can be seen as an expectation [under the true model] approximated by a sample average.

A final point is that, for point estimation, the Bayesian approach may well produce plug-in estimates. For some loss functions that I called intrinsic losses, the Bayes estimator of the transform $\mathfrak{h}(\theta)$ is the transform $\mathfrak{h}(\hat\theta)$ of the Bayes estimator of $\theta$.
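As an illustration of the latent-variable setting above, here is a minimal rejection-ABC sketch in R on a hypothetical toy model (the model, flat prior, tolerance and summary statistic are all my own illustrative choices, not Kingman's coalescent): $x_i = \theta z_i + e_i$ with latent $z_i \sim \text{Exp}(1)$ and $e_i \sim N(0,1)$, so $f(x|\theta)$ requires integrating $z$ out, yet simulating the pair $(z, x)$ is trivial.

set.seed(42)
n <- 50
theta_true <- 2
x_obs <- theta_true * rexp(n) + rnorm(n)

abc <- function(n_sim = 1e5, eps = 0.05) {
  theta <- runif(n_sim, 0, 5)                  # flat prior on (0, 5)
  keep  <- logical(n_sim)
  s_obs <- mean(x_obs)                         # summary statistic
  for (i in seq_len(n_sim)) {
    x_sim   <- theta[i] * rexp(n) + rnorm(n)   # simulate (z, x) jointly
    keep[i] <- abs(mean(x_sim) - s_obs) < eps  # accept if summaries match
  }
  theta[keep]                                  # approximate posterior draws
}

post <- abc()
mean(post)    # posterior mean, close to theta_true = 2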
28,996
Are a zero-truncated Poisson and basic Poisson nested or non-nested?
Just came across this now. To avoid confusion, I am the Wilson of Wilson (2015), referenced in the original question, which asks whether the Poisson and truncated Poisson models are nested, non-nested, etc.

Slightly simplifying: a smaller model is nested in a larger model if the larger model reduces to the smaller one when a subset of its parameters is fixed at stated values; two models are overlapping if they both reduce to the same model when subsets of their respective parameters are fixed to certain values; they are non-nested if, no matter how the parameters are fixed, one cannot reduce to the other. According to this definition, the truncated Poisson and standard Poisson are non-nested.

HOWEVER, and this is a point that seems to have been overlooked by many, Vuong's distributional theory refers to STRICTLY nested, STRICTLY non-nested, and STRICTLY overlapping models, "STRICTLY" referring to the addition of six restrictions to the basic definitions of nested etc. These restrictions are not exactly simple, but they do, among other things, mean that Vuong's results about the distribution of log likelihood ratios are not applicable in cases where models/distributions are nested at a boundary of a parameter space (as is the case with the Poisson/zero-inflated Poisson with an identity link for the zero-inflation parameter), or when one model tends to the other as a parameter tends to infinity (as is the case with the Poisson/zero-inflated Poisson when a logit link is used to model the zero-inflation parameter). Vuong advances no theory about the distribution of log likelihood ratios in these circumstances.

Unfortunately, this is the case with the Poisson and truncated Poisson distributions: one tends to the other as the parameter tends to infinity. To see this, note that the ratio of the pmfs of the Poisson and truncated Poisson distributions is $1-e^{-\lambda}$, which tends to 1 as $\lambda$ tends to infinity. Thus the two distributions are not strictly non-nested, or strictly anything for that matter, and Vuong's theory is not applicable.

The following R code will simulate the distribution of the Poisson and truncated Poisson log likelihood ratios. It requires the VGAM package.

library(VGAM)

n <- 30
lambda1 <- 1
H <- rep(NA_real_, 10000)
for (i in 1:10000) {
  y <- rpospois(n, lambda1)                           # zero-truncated Poisson sample
  fit1 <- vglm(y ~ 1, pospoisson)                     # truncated Poisson fit
  fit2 <- glm(y ~ 1, family = poisson(link = "log"))  # standard Poisson fit
  H[i] <- logLik(fit1) - logLik(fit2)                 # log likelihood ratio
}
hist(H, col = "lemonchiffon")
28,997
Are a zero-truncated Poisson and basic Poisson nested or non-nested?
The basic Poisson can be thought of as nested inside a more general form:
$$f(x) = (1-p)\frac{e^{-\lambda}\lambda^x}{x!} + p\,\mathbf{1}(x=0)$$
When $p = 0$, we have the basic Poisson. When $p = -e^{-\lambda}/(1 -e^{-\lambda})$, we have the zero-truncated Poisson. When $-e^{-\lambda}/(1 -e^{-\lambda}) < p < 0$, we have a zero-reduced Poisson. When $0 < p < 1$, we have a zero-inflated Poisson, and at $p = 1$ we have a distribution degenerate at zero. So it seems to me that the nested version of the Vuong test, or the chi-square as you suggest, would be appropriate in your case. Note, though, that the chi-square can have problems due to the small probabilities of "large" (relative to $\lambda$) observations. You'd probably want to use a bootstrap to get the p-value for the chi-square statistic instead of relying on the asymptotics, unless you've got rather a lot of data.
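Since the parametric-bootstrap suggestion is easy to get wrong, here is a minimal R sketch under my own illustrative assumptions (fake data, a bin cut-off at "5 or more", and 2000 bootstrap replicates):

set.seed(1)
y <- rpois(40, 2)                       # example data; replace with your own
lam_hat <- mean(y)                      # MLE under the basic Poisson

chisq_stat <- function(y, lam) {
  obs <- tabulate(pmin(y, 5) + 1, nbins = 6)         # counts in bins 0..4 and 5+
  p   <- c(dpois(0:4, lam), ppois(4, lam, lower.tail = FALSE))
  sum((obs - length(y) * p)^2 / (length(y) * p))
}

t_obs  <- chisq_stat(y, lam_hat)
t_boot <- replicate(2000, {
  y_b <- rpois(length(y), lam_hat)      # simulate under the fitted model
  chisq_stat(y_b, mean(y_b))            # re-estimate lambda each replicate
})
mean(t_boot >= t_obs)                   # bootstrap p-value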
28,998
What is causing autocorrelation in MCMC sampler?
When using Markov chain Monte Carlo (MCMC) algorithms in Bayesian analysis, the goal is often to sample from the posterior distribution. We resort to MCMC when other, independent sampling techniques (like rejection sampling) are not possible. The problem with MCMC, however, is that the resulting samples are correlated: each subsequent sample is drawn by using the current sample. Two main MCMC sampling methods are Gibbs sampling and the Metropolis-Hastings (MH) algorithm.

Autocorrelation in the samples is affected by many things. For example, when using MH algorithms, you can to some extent reduce or increase the autocorrelation by adjusting the step size of the proposal distribution. In Gibbs sampling, however, no such adjustment is possible. The autocorrelation is also affected by the starting values of the Markov chain: there is generally an (unknown) optimum starting value that leads to comparatively less autocorrelation. Multi-modality of the target distribution can also greatly affect the autocorrelation of the samples. Thus there are attributes of the target distribution that can definitely dictate autocorrelation.

But most often autocorrelation is dictated by the sampler used. Broadly speaking, if an MCMC sampler jumps around the state space more, it is probably going to have smaller autocorrelation. Thus, it is often desirable to choose samplers that make large accepted moves (like Hamiltonian Monte Carlo).

I am unfamiliar with JAGS. If you have decided on the sampler already, and do not have the option of playing around with other samplers, then the best bet would be to do some preliminary analysis to find good starting values and step sizes. Thinning is not generally suggested, since it is argued that throwing away samples is less efficient than using correlated samples.

A universal solution is to run the sampler for a long time, so that your effective sample size (ESS) is large. Look at the R package mcmcse. In the package vignette (page 8), the author proposes a calculation of the minimum number of effective samples one would need for their estimation process. You can find that number for your problem, and let the Markov chain run until you have that many effective samples.
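A sketch of that workflow in R, assuming the mcmcse package is installed (the AR(1) series below is a stand-in for real MCMC output, and the default minESS tolerances are used):

library(mcmcse)

set.seed(1)
# A deliberately autocorrelated chain: AR(1) with lag-1 correlation 0.95,
# mimicking sticky MCMC output for a single scalar parameter.
chain <- as.numeric(arima.sim(list(ar = 0.95), n = 1e5))

minESS(p = 1)   # minimum effective samples needed (default alpha, eps)
ess(chain)      # effective sample size of this chain; run longer if it
                # falls short of the minESS target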
28,999
Is there any real statistics behind "the Pythagorean theorem of baseball"?
The mathematical/statistical foundations of the "Pythagorean rule" were examined in Miller (2007). This paper showed that if the number of runs scored by each team in each game follows a Weibull distribution with common shape parameter $\gamma$ but different scale parameters, then the generalised form of the Pythagorean rule (with generalised power $\gamma$) emerges as the predicted win probability. That paper also fits the posited Weibull model to baseball data from 14 teams playing in the 2004 American League. The results show a reasonable model fit, with $\hat{\gamma} \approx 1.74 \text{-} 1.82$ using various estimation techniques. This suggests that the generalised Pythagorean rule may be a reasonable technique for predicting win-loss records, but the power parameter should be a bit less than the exponent of two that appears in the book by Winston.

Miller, S. (2007) A derivation of the Pythagorean won-loss formula in baseball. Chance 20(1), pp. 40-48.
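For reference, the generalised rule predicts a winning percentage of $\mathrm{RS}^\gamma/(\mathrm{RS}^\gamma+\mathrm{RA}^\gamma)$ from runs scored $\mathrm{RS}$ and runs allowed $\mathrm{RA}$. A one-line R sketch, using a power in Miller's fitted range (the run totals are made up for illustration):

pythag <- function(RS, RA, g = 1.8) RS^g / (RS^g + RA^g)
pythag(800, 700)    # approx 0.56, i.e. about 91 wins in a 162-game season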
29,000
Expected value of iid random variables
First of all, $X_1, X_2,...,X_n$ are not samples; they are random variables, as pointed out by Tim. Suppose you are doing an experiment in which you estimate the amount of water in a food item, and for that you take, say, 100 measurements of water content for 100 different food items. Each time you get a value of water content. Here the water content is the random variable. Now suppose there are 1000 food items in total in the world; the 100 different food items are then called a sample of those 1000 food items. Notice that water content is the random variable, and the 100 values of water content obtained make up a sample.

Suppose you randomly draw $n$ values from a probability distribution, independently and identically, and it is given that $E(X)=\mu$. Now you need to find the expected value of $\bar{X}$. Since each $X_i$ is independently and identically sampled, the expected value of each $X_i$ is $\mu$. By linearity of expectation, $$E(\bar{X}) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{n\mu}{n} =\mu.$$

The third equation in your question is the condition for an estimator to be an unbiased estimator of the population parameter. The condition for an estimator to be unbiased is $$ E(\hat{\theta})=\theta $$ where $\theta$ is the population parameter and $\hat{\theta}$ is the estimate computed from the sample.

In your example the population is $\{1,2,3,4,5,6\}$ and you have been given a sample of $10$ i.i.d. values, $\{5,2,1,4,4,2,6,2,3,5\}$. The question is how you would estimate the population mean given this sample. By the formula above, the average of the sample is an unbiased estimator of the population mean. The unbiased estimate need not equal the actual mean (here the sample mean is $3.4$ while the population mean is $3.5$); it is correct on average over repeated samples, and as close to the mean as you can get given this information.
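A quick empirical check of $E(\bar{X})=\mu$ in R, using the die example (fair die, $\mu = 3.5$, samples of size $n = 10$):

set.seed(7)
# Draw many i.i.d. samples of size 10 from {1,...,6} and average each one.
xbar <- replicate(1e5, mean(sample(1:6, 10, replace = TRUE)))
mean(xbar)    # approx 3.5: the sample mean is unbiased for mu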