Regression for power law
If you want equal error-variance on every observation on the untransformed scale, you can use nonlinear least squares. (This will often not be suitable; errors spanning many orders of magnitude are rarely constant in size.) If we go ahead and use it nonetheless, we get a much closer fit to the later values. Examining the residuals shows that the warning above is entirely well-founded: the variability is not constant on the original scale, and this single power curve doesn't fit all that well at the high end either, since there's distinct curvature in the third quarter of the range of the log values (between about 0 and 5 on the log-x scale). The variability is nearer to constant on the log scale (though it's a little more variable in relative terms at low values than at high ones there). What it would be best to do here depends on what you're trying to achieve.
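The contrast described above can be sketched in code: fit $y = a x^b$ by nonlinear least squares on the original scale (constant additive errors assumed) and by OLS on the log-log scale (constant multiplicative errors assumed). The data here are simulated with multiplicative noise purely for illustration; the parameter values and variable names are my own assumptions, not from the original answer.

```python
# Sketch: power-law fit two ways, on simulated data with multiplicative noise.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
a_true, b_true = 2.0, 1.5
x = np.exp(rng.uniform(-2, 6, size=200))                        # several orders of magnitude
y = a_true * x**b_true * np.exp(rng.normal(0, 0.3, size=200))   # multiplicative errors

def power_law(x, a, b):
    return a * x**b

# NLS assumes equal error-variance on the original scale; the largest
# observations dominate the fit.
popt, _ = curve_fit(power_law, x, y, p0=(1.0, 1.0), maxfev=10000)
a_nls, b_nls = popt

# Log-log OLS assumes constant *relative* errors instead, which matches
# how these data were generated.
b_log, log_a = np.polyfit(np.log(x), np.log(y), 1)

print(a_nls, b_nls)
print(np.exp(log_a), b_log)   # close to (2.0, 1.5) for this simulation
```

Plotting the residuals of each fit (on the original and log scales respectively) reproduces the pattern described in the answer: the NLS residuals fan out with $x$, while the log-scale residuals are roughly homoskedastic.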
Regression for power law
A paper by Lin and Tegmark nicely summarizes the reasons why lognormal and/or Markov-process distributions fail to fit data displaying critical, power-law behavior: https://ai2-s2-pdfs.s3.amazonaws.com/5ba0/3a03d844f10d7b4861d3b116818afe2b75f2.pdf . As they note, "Markov processes...fail epically by predicting exponentially decaying mutual information..." Their recommendation is to employ deep learning neural networks such as long short-term memory (LSTM) models. Being old school and neither conversant nor comfortable with NNs or LSTMs, I will give a tip of the hat to @glen_b's nonlinear approach. However, I prefer more tractable and readily accessible workarounds such as value-based quantile regression. Having used this approach on heavy-tailed insurance claims, I know that it can provide a much better fit to the tails than more traditional methods, including multiplicative, log-log models. The modest challenge in using QR is finding the appropriate quantile around which to base one's model(s); typically, this is much greater than the median. That said, I don't want to oversell the method, as there remained significant lack of fit in the most extreme values of the tail. Hyndman et al. (http://robjhyndman.com/papers/sig-alternate.pdf) propose an alternative they term boosting additive quantile regression. Their approach builds models across a full grid of quantiles, producing probabilistic estimates or forecasts which can be evaluated against any of the extreme-value distributions, e.g., Cauchy, Lévy-stable, and so on. I have yet to employ their method, but it seems promising. Another approach to extreme-value modeling is known as POT, or peaks-over-threshold, modeling. This involves setting a threshold or cut-off for an empirical distribution of values and modeling only the exceedances above that cutoff, conventionally with a generalized Pareto distribution (the GEV distribution arises instead for block maxima).
The advantage of this approach is that any possible future extreme value can be calibrated or located from the parameters of the model. However, the method has the obvious disadvantage that one is not using the full PDF. Finally, in a 2013 paper, J.P. Bouchaud proposes the RFIM (random-field Ising model) for modeling complex information displaying criticality and heavy-tailed behaviors such as herding, trends, avalanches, and so on. Bouchaud falls into a class of polymaths that should include the likes of Mandelbrot, Shannon, Tukey, Turing, etc. I can claim to be highly intrigued by his discussion while, at the same time, being intimidated by the rigors involved in implementing his suggestions. https://www.researchgate.net/profile/Jean-Philippe_Bouchaud/publication/230788728_Crises_and_Collective_Socio-Economic_Phenomena_Simple_Models_and_Challenges/links/5682d40008ae051f9aee7ee9.pdf?inViewer=0&pdfJsDownload=0&origin=publication_detail
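To make the quantile-regression idea above concrete, here is a minimal sketch fitting the conditional median and an upper (0.9) quantile to simulated heavy-tailed data by minimizing the pinball (check) loss directly. In practice one would use a dedicated routine such as statsmodels' `QuantReg`; this dependency-light version and all of its data are my own illustration, not the insurance application from the answer.

```python
# Pinball-loss quantile regression on log-transformed heavy-tailed data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 10, n)
# lognormal multiplicative errors => heavy right tail on the original scale
logy = 0.5 + 0.3 * x + rng.normal(0, 1.0, n)

def pinball(params, q):
    """Mean check loss at quantile q for the line a + b*x."""
    a, b = params
    r = logy - (a + b * x)
    return np.mean(np.where(r >= 0, q * r, (q - 1) * r))

fit_50 = minimize(lambda p: pinball(p, 0.5), x0=[0.0, 0.0], method="Nelder-Mead").x
fit_90 = minimize(lambda p: pinball(p, 0.9), x0=[0.0, 0.0], method="Nelder-Mead").x
print(fit_50, fit_90)   # similar slopes; the 0.9 fit sits higher, tracking the tail
```

Fitting at a grid of quantiles, as in the Hyndman et al. approach mentioned above, amounts to repeating the second call for several values of `q`.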
Why is the variance of 2SLS bigger than that of OLS?
We say a matrix $A$ is at least as large as $B$ if their difference $A-B$ is positive semidefinite (psd). An equivalent statement that turns out to be handier to check here is that $B^{-1}-A^{-1}$ is psd (much like $a>b$ is equivalent to $1/b>1/a$). So we want to check that $$ X'X-X'Z(Z'Z)^{-1}Z'X $$ is psd. Write $$ X'X-X'Z(Z'Z)^{-1}Z'X=X'(I-Z(Z'Z)^{-1}Z')X=X'M_ZX $$ To check that $X'M_ZX$ is psd, we must show that, for any vector $d$, $$ d'X'M_ZXd\geq0 $$ Let $c=Xd$. Then, $$ c'M_Zc\geq0 $$ as $M_Z$ is a symmetric and idempotent projection matrix, which is known to be psd: write, using symmetry and idempotency, $$ c'M_Zc=c'M_ZM_Zc=c'M_Z'M_Zc $$ and let $e=M_Zc$, so that $c'M_Zc=e'e=\sum_ie_i^2$, which, being a sum of squares, must be nonnegative. P.S.: Two little quibbles: you refer to the estimated asymptotic variances $\widehat{Avar}(\hat\beta_j)$. Now, the OLS estimator and the 2SLS estimator of $\sigma^2$ are not the same. As Paul mentions in his answer, this will however not affect the ranking, as OLS is, by definition, the estimator which minimizes the sum of squared residuals. (The OLS estimate conventionally divides by $n-k$ and the IV estimate by $n$, but that seems unlikely to affect the ranking in realistic samples.) Also, the asymptotic variances are generally scaled by $n$ so as to obtain a nondegenerate quantity as $n\to\infty$. (Of course, scaling both by $n$ will not affect the ranking, so the issue is a little moot for this particular question.)
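The psd claim above is easy to verify numerically. The sketch below draws generic random $X$ and $Z$ (dimensions chosen arbitrarily for illustration), forms $X'X - X'Z(Z'Z)^{-1}Z'X = X'M_Z X$, and checks that all its eigenvalues are nonnegative up to floating-point error.

```python
# Numerical check that X'X - X'Z(Z'Z)^{-1}Z'X = X'M_Z X is psd.
import numpy as np

rng = np.random.default_rng(0)
n, k, l = 100, 3, 5                        # n obs, k regressors, l >= k instruments
X = rng.normal(size=(n, k))
Z = rng.normal(size=(n, l))

P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)    # projection onto col(Z)
M_Z = np.eye(n) - P_Z                      # annihilator: symmetric, idempotent
D = X.T @ X - X.T @ P_Z @ X                # equals X' M_Z X

eigvals = np.linalg.eigvalsh(D)            # all >= 0 (up to rounding) iff D is psd
print(eigvals.min())
```

Since $D$ is psd, $(X'Z(Z'Z)^{-1}Z'X)^{-1} - (X'X)^{-1}$ is psd as well, which is exactly the variance ranking in question.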
Why is the variance of 2SLS bigger than that of OLS?
I think this is one of those times where it is much easier to look at the simple one-equation, one-variable setting. Technically this is IV regression and not 2SLS (but the result is still general). So we assume a model (using Wooldridge's notation); for some $i$ we have: $$ y_i = \beta_0 + \beta_1 x_{i1} + u_i $$ Now, if we assume that this model satisfies the Gauss-Markov assumptions, then we know (see any decent textbook) that the asymptotic variance of $\hat\beta_1$ is given by: $$ Avar(\hat\beta_{OLS})=\frac{\hat\sigma^2}{SST_x} $$ where $SST_x$ is the total sum of squares for $x$. If instead we assume that $x$ is (possibly) endogenous and use IV regression with $z$ as an instrument, then the asymptotic variance of the IV estimator is: $$ Avar(\hat\beta_{iv}) = \frac{\hat\sigma^2}{SST_x \cdot R^2_{x,z}} $$ Since $R^2_{x,z}$ is always between $0$ and $1$, the denominator for the IV estimator is smaller than that for OLS, and hence its variance is larger (if OLS is actually valid).
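The variance inflation by the factor $1/R^2_{x,z}$ can be seen in a small Monte Carlo. In the simulation below $x$ is actually exogenous (so both estimators are consistent and OLS is valid); all parameter values are illustrative assumptions. With $R^2_{x,z} = 0.2$ here, the IV sampling variance should come out roughly five times the OLS one.

```python
# Monte Carlo: sampling variance of the simple IV slope vs the OLS slope.
import numpy as np

rng = np.random.default_rng(42)
n, reps = 500, 2000
b_ols, b_iv = np.empty(reps), np.empty(reps)
for r in range(reps):
    z = rng.normal(size=n)
    x = 0.5 * z + rng.normal(size=n)        # corr(x, z)^2 = 0.25 / 1.25 = 0.2
    y = 1.0 + 2.0 * x + rng.normal(size=n)  # x exogenous, so OLS is valid
    xd, zd, yd = x - x.mean(), z - z.mean(), y - y.mean()
    b_ols[r] = (xd @ yd) / (xd @ xd)        # OLS slope
    b_iv[r] = (zd @ yd) / (zd @ xd)         # simple IV slope

print(b_ols.var(), b_iv.var())   # IV variance larger, roughly by 1 / R^2_{x,z} = 5
```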
Why is the variance of 2SLS bigger than that of OLS?
Just a comment: it is pretty clear that the estimate of the variance of the errors is higher when using 2SLS. Recall that OLS minimizes the sum of squared residuals, and hence the sample estimate of this variance. So any other estimator must have a sample estimate of the error variance at least as high.
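The minimization property invoked here can be checked directly: perturb the OLS coefficients in any direction and the residual sum of squares can only increase. The data and perturbation below are arbitrary, for illustration only.

```python
# OLS minimizes the residual sum of squares over all coefficient vectors.
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
ssr_ols = np.sum((y - X @ beta_ols) ** 2)

# Any other coefficient vector (e.g. a 2SLS estimate) gives SSR >= SSR_OLS
beta_other = beta_ols + np.array([0.1, -0.2])
ssr_other = np.sum((y - X @ beta_other) ** 2)
print(ssr_ols <= ssr_other)   # True by construction of OLS
```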
Why are my VAR models working better with nonstationary data than stationary data?
Two facts:
1. When you regress one random walk on another random walk and incorrectly assume stationarity, your software will generally spit back statistically significant results, even if they are independent processes! For example, see these lecture notes. (Google for "spurious regression" and random walks and numerous links will come up.) What's going wrong? The usual OLS estimates and standard errors are based on assumptions that aren't true in the case of random walks. Pretending the usual OLS assumptions apply and regressing two independent random walks on each other will generally lead to regressions with huge $R^2$ and highly significant coefficients, and it's all entirely bogus! When there's a random walk and you run a regression in levels, the usual assumptions for OLS are violated, your estimate does not converge as $t \rightarrow \infty$, the usual central limit theorem does not apply, and the t-stats and p-values your regression spits out are all wrong.
2. If two variables are cointegrated, you can regress one on the other and your estimator will converge faster than in the usual regression, a result known as super-consistency. E.g., check out John Cochrane's time-series book online and search for "superconsistent."
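The first fact above is easy to reproduce. The simulation below regresses pairs of independent random walks on each other and counts how often the naive t-test rejects at the nominal 5% level; the sample size and replication count are arbitrary choices.

```python
# Spurious regression: independent random walks "look" significant under
# the usual (invalid) OLS t-test.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 500
reject = 0
for _ in range(reps):
    x = np.cumsum(rng.normal(size=n))       # two independent random walks
    y = np.cumsum(rng.normal(size=n))
    xd, yd = x - x.mean(), y - y.mean()
    beta = (xd @ yd) / (xd @ xd)
    resid = yd - beta * xd
    se = np.sqrt(resid @ resid / (n - 2) / (xd @ xd))   # usual OLS std. error
    if abs(beta / se) > 1.96:               # nominal 5% two-sided test
        reject += 1
print(reject / reps)   # far above the nominal 0.05
```

Repeating the experiment with stationary white-noise series instead of `np.cumsum` brings the rejection rate back to about 5%, which is the contrast behind the question.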
Why is the logistic regression hypothesis seen as a probability function?
No, it's not merely a heuristic. It's quite deliberately intended to be a model for the conditional distribution of the response. Logistic regression is a particular case of a generalized linear model (GLM), in this case for a process where the response variable is conditionally Bernoulli (or more generally, binomial). A GLM includes a specification of a model for the conditional mean of the response. In the case of a Bernoulli variable, its conditional mean is the parameter $p_i$, which is explicitly the probability that the response, $Y_i$, is $1$. It is modeled in terms of one or more predictors. Here's the model for the mean for a single predictor, $x_i$: $$P(Y_i=1|x_i)=\frac{\exp(\beta_0+\beta_1x_i)}{1+\exp(\beta_0+\beta_1x_i)}$$ So it is (intentionally) a model for the probability that the response is $1$, given the value of the predictors. The form of the link function $\eta=\log(p/(1-p))$ (and its inverse $p=\exp(\eta)/(1+\exp(\eta))$) is no accident either -- the logit link (which is what makes it logistic regression) is the natural (or canonical) link function for a binomial response. Other choices of link function are possible (and they, too, will be models for the probability of a 1). Other common choices for a binomial response are the probit and the complementary log-log, but the logistic is by far the most common.
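The mean model and link function written above can be expressed directly as a pair of mutually inverse functions; the coefficient values below are arbitrary, chosen only to show that applying the logit link to the fitted probabilities recovers the linear predictor.

```python
# The logistic mean model P(Y=1 | x) and its canonical (logit) link.
import numpy as np

def p_logistic(x, b0, b1):
    """Model for P(Y_i = 1 | x_i): inverse logit of the linear predictor."""
    eta = b0 + b1 * x
    return np.exp(eta) / (1 + np.exp(eta))

def logit(p):
    """The canonical link: eta = log(p / (1 - p))."""
    return np.log(p / (1 - p))

p = p_logistic(np.array([0.0, 1.0, 2.0]), b0=-1.0, b1=1.0)
print(p)          # probabilities strictly between 0 and 1
print(logit(p))   # recovers the linear predictor eta = [-1, 0, 1]
```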
Frequentist definition of probability; does there exist a formal definition?
TL;DR It doesn't seem possible to give a frequentist definition of probability consistent with the Kolmogorov framework which isn't completely circular (i.e. in the sense of circular logic). Not too long so I did read: I want to address what I see as some potential problems with the candidate frequentist definition of probability $$\underset{n \to \infty}{\lim} \frac{n_A}{n} $$ First, $n_A$ can only reasonably be interpreted as a random variable, so the above expression is not precisely defined in a rigorous sense. We need to specify the mode of convergence for this random variable, be it almost surely, in probability, in distribution, in mean, or in mean square. But all of these notions of convergence require a measure on the probability space to be defined to be meaningful. The intuitive choice, of course, would be to pick almost sure convergence. This has the feature that the limit needs to exist pointwise except on an event of measure zero. What constitutes a set of measure zero will coincide for any family of measures which are absolutely continuous with respect to each other -- this allows us to define a notion of almost sure convergence making the above limit rigorous while still being somewhat agnostic about what the underlying measure for the measurable space of events is (i.e. because it could be any measure absolutely continuous with respect to some chosen measure). This would prevent circularity in the definition which would arise from fixing a given measure in advance, since that measure could be (and in the Kolmogorov framework usually is) defined to be the "probability". However, if we are using almost sure convergence, then we are confining ourselves to the situation of the strong law of large numbers (henceforth SLLN). Let me state that theorem (as given on p. 133 of Chung) for the sake of reference here: Let $\{X_n\}$ be a sequence of independent, identically distributed random variables. 
Then we have $$ \mathbb{E}|X_1| < \infty \implies \frac{S_n}{n} \to \mathbb{E}(X_1)\quad a.s.$$ $$\mathbb{E}|X_1| = \infty \implies \underset{n \to \infty}{\lim\sup}\frac{|S_n|}{n} = + \infty \quad a.s. $$ where $S_n:= X_1 + X_2 + \dots + X_n$. So let's say we have a measurable space $(X, \mathscr{F})$ and we want to define the probability of some event $A \in \mathscr{F}$ with respect to some family of mutually absolutely continuous probability measures $\{\mu_i\}_{i \in I}$. Then by either the Kolmogorov Extension Theorem or Ionescu Tulcea Extension Theorem (I think both work), we can construct a family of product spaces $\{(\prod_{j=1}^{\infty} X_j)_i\}_{i \in I}$, one for each $\mu_i$. (Note that the existence of infinite product spaces which is a conclusion of Kolmogorov's theorem requires the measure of each space to be $1$, hence why I am now restricting to probability, instead of arbitrary, measures.) Then define $\mathbb{1}_{A_j}$ to be the indicator random variable, i.e. which equals $1$ if $A$ occurs in the $j$th copy and $0$ if it does not; in other words $$n_A = \mathbb{1}_{A_1} + \mathbb{1}_{A_2} + \dots + \mathbb{1}_{A_n}.$$ Then clearly $0 \le \mathbb{E}_i \mathbb{1}_{A_j} \le 1 $ (where $\mathbb{E}_i$ denotes expectation with respect to $\mu_i$), so the strong law of large numbers will in fact apply to $(\prod_{j=1}^{\infty} X_j)_i$ (because by construction the $\mathbb{1}_{A_j}$ are independent and identically distributed - note that being independently distributed means that the measure of the product space is multiplicative with respect to the coordinate measures) so we get that $$\frac{n_A}{n} \to \mathbb{E}_i \mathbb{1}_{A_1} \quad a.s. $$ and thus our definition for the probability of $A$ with respect to $\mu_i$ should naturally be $\mathbb{E}_i \mathbb{1}_{A}$. 
I just realized, however, that even though the sequence of random variables $\frac{n_A}{n}$ will converge almost surely with respect to $\mu_{i_1}$ if and only if it converges almost surely with respect to $\mu_{i_2}$ (where $i_1, i_2 \in I$), that doesn't necessarily mean that it will converge to the same value; in fact, the SLLN guarantees that it won't unless $\mathbb{E}_{i_1} \mathbb{1}_A = \mathbb{E}_{i_2} \mathbb{1}_A$, which is not true generically. If $\mu$ is somehow "canonical enough", say like the uniform distribution for a finite set, then maybe this works out nicely, but it doesn't really give any new insights. In particular, for the uniform distribution, $\mathbb{E}\mathbb{1}_A = \frac{|A|}{|X|}$, i.e. the probability of $A$ is just the proportion of points or elementary events in $X$ which belong to $A$, which again seems somewhat circular to me. For a continuous random variable I don't see how we could ever agree on a "canonical" choice of $\mu$. I.e. it seems like it makes sense to define the frequency of an event as the probability of the event, but it does not seem like it makes sense to define the probability of the event to be the frequency (at least without being circular). This is especially problematic, since in real life we don't actually know what the probability is; we have to estimate it. Also note that this definition of frequency for a subset of a measurable space depends on the chosen measure being a probability measure; for instance, there is no product measure for countably many copies of $\mathbb{R}$ endowed with the Lebesgue measure, since $\mu(\mathbb{R})=\infty$. Likewise, the measure of $\prod_{j=1}^n X$ using the canonical product measure is $(\mu(X))^n$, which either blows up to infinity if $\mu(X) >1$ or goes to zero if $\mu(X) <1$, i.e. Kolmogorov's and Tulcea's extension theorems are very special results peculiar to probability measures.
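The measure-dependence of the limit can be illustrated numerically: under two different probability measures on the same two-point space (here, two different values of $P(A)$, chosen arbitrarily), the relative frequency $n_A/n$ converges to two different limits $\mathbb{E}_i\mathbb{1}_A$. The SLLN pins down the limit only once a measure is fixed, which is exactly the circularity being argued.

```python
# n_A / n converges to E_i[1_A], which differs across mutually absolutely
# continuous measures on the same measurable space.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
freqs = {}
for p_A in (0.3, 0.7):                 # two measures: mu_1(A)=0.3, mu_2(A)=0.7
    draws = rng.random(n) < p_A        # 1_A over n independent copies
    freqs[p_A] = draws.mean()          # n_A / n
print(freqs)                           # each frequency tracks its own measure
```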
Frequentist definition of probability; does there exist a formal definition?
TL;DR It doesn't seem like it is possible to define a frequentist definition of probability consistent with the Kolmogorov framework which isn't completely circular (i.e. in the sense of circular logi
Frequentist definition of probability; does there exist a formal definition? TL;DR It doesn't seem like it is possible to define a frequentist definition of probability consistent with the Kolmogorov framework which isn't completely circular (i.e. in the sense of circular logic). Not too long so I did read: I want to address what I see as some potential problems with the candidate frequentist definition of probability $$\underset{n \to \infty}{\lim} \frac{n_A}{n} $$ First, $n_A$ can only be reasonably be interpreted as a random variable, so the above expression is not precisely defined in a rigorous sense. We need to specify the mode of convergence for this random variable, be it almost surely, in probability, in distribution, in mean, or in mean squared. But all of these notions of convergence require a measure on the probability space to be defined to be meaningful. The intuitive choice, of course, would be to pick convergence almost surely. This has the feature the limit needs to exist pointwise except on an event of measure zero. What constitutes a set of measure zero will coincide for any family of measures which are absolutely continuous with respect to each other -- this allows us to define a notion of almost sure convergence making the above limit rigorous while still being somewhat agnostic about what the underlying measure for the measurable space of events is (i.e. because it could be any measure absolutely continuous with respect to some chosen measure). This would prevent circularity in the definition which would arise from fixing a given measure in advance, since that measure could (and in the Kolmogorov framework usually is) defined to be the "probability". However, if we are using almost sure convergence, then that means we are confining ourselves to the situation of the strong law of large numbers (henceforth SLLN). Let me state that theorem (as given on p. 
133 of Chung) for the sake of reference here: Let $\{X_n\}$ be a sequence of independent, identically distributed random variables. Then we have $$ \mathbb{E}|X_1| < \infty \implies \frac{S_n}{n} \to \mathbb{E}(X_1)\quad a.s.$$ $$\mathbb{E}|X_1| = \infty \implies \underset{n \to \infty}{\lim\sup}\frac{|S_n|}{n} = + \infty \quad a.s. $$ where $S_n:= X_1 + X_2 + \dots + X_n$. So let's say we have a measurable space $(X, \mathscr{F})$ and we want to define the probability of some event $A \in \mathscr{F}$ with respect to some family of mutually absolutely continuous probability measures $\{\mu_i\}_{i \in I}$. Then by either the Kolmogorov Extension Theorem or Ionescu Tulcea Extension Theorem (I think both work), we can construct a family of product spaces $\{(\prod_{j=1}^{\infty} X_j)_i\}_{i \in I}$, one for each $\mu_i$. (Note that the existence of infinite product spaces which is a conclusion of Kolmogorov's theorem requires the measure of each space to be $1$, hence why I am now restricting to probability, instead of arbitrary, measures). Then define $\mathbb{1}_{A_j}$ to be the indicator random variable, i.e. which equals $1$ if $A$ occurs in the $j$th copy and $0$ if it does not, in other words $$n_A = \mathbb{1}_{A_1} + \mathbb{1}_{A_2} + \dots + \mathbb{1}_{A_n}.$$ Then clearly $0 \le \mathbb{E}_i \mathbb{1}_{A_j} \le 1 $ (where $\mathbb{E}_i$ denotes expectation with respect to $\mu_i$), so the strong law of large numbers will in fact apply to $(\prod_{j=1}^{\infty} X_j)_i$ (because by construction the $\mathbb{1}_{A_j}$ are identically and independently distributed - note that being independently distributed means that the measure of the product space is multiplicative with respect to the coordinate measures) so we get that $$\frac{n_A}{n} \to \mathbb{E}_i \mathbb{1}_{A_1} \quad a.s. $$ and thus our definition for the probability of $A$ with respect to $\mu_i$ should naturally be $\mathbb{E}_1 \mathbb{1}_{A}$. 
I just realized however that even though the sequence of random variables $\frac{n_A}{n}$ will converge almost surely with respect to $\mu_{i_1}$ if and only if it converges almost surely with respect to $\mu_{i_2}$ (where $i_1, i_2 \in I$), that doesn't necessarily mean that it will converge to the same value; in fact, the SLLN guarantees that it won't unless $\mathbb{E}_{i_1} \mathbb{1}_A = \mathbb{E}_{i_2} \mathbb{1}_A$, which is not true generically. If $\mu$ is somehow "canonical enough", say like the uniform distribution for a finite set, then maybe this works out nicely, but doesn't really give any new insights. In particular, for the uniform distribution, $\mathbb{E}\mathbb{1}_A = \frac{|A|}{|X|}$, i.e. the probability of $A$ is just the proportion of points or elementary events in $X$ which belong to $A$, which again seems somewhat circular to me. For a continuous random variable I don't see how we could ever agree on a "canonical" choice of $\mu$. I.e. it seems like it makes sense to define the frequency of an event as the probability of the event, but it does not seem like it makes sense to define the probability of the event to be the frequency (at least without being circular). This is especially problematic, since in real life we don't actually know what the probability is; we have to estimate it. Also note that this definition of frequency for a subset of a measurable space depends on the chosen measure being a probability measure; for instance, there is no product measure for countably many copies of $\mathbb{R}$ endowed with the Lebesgue measure, since $\mu(\mathbb{R})=\infty$. Likewise, the measure of $\prod_{j=1}^n X$ using the canonical product measure is $(\mu(X))^n$, which either blows up to infinity if $\mu(X) >1$ or goes to zero if $\mu(X) <1$, i.e. Kolmogorov's and Tulcea's extension theorems are very special results peculiar to probability measures.
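The almost-sure convergence of $n_A/n$ that the question turns on is easy to probe numerically. Below is a minimal simulation sketch (the function name and settings are mine, purely illustrative) using i.i.d. indicator variables with $P(A)=p$:

```python
import random

def running_frequency(p, n, seed=0):
    """Simulate n i.i.d. indicators 1_{A_j} with P(A) = p and return
    the empirical frequency n_A / n."""
    rng = random.Random(seed)
    n_a = sum(1 for _ in range(n) if rng.random() < p)
    return n_a / n

# By the SLLN, n_A / n -> E[1_A] = p almost surely as n grows.
print(running_frequency(0.3, 100))        # still noisy for small n
print(running_frequency(0.3, 1_000_000))  # close to 0.3
```

Of course, as the question points out, this only illustrates convergence under one fixed measure; a different (mutually absolutely continuous) measure would drive the same ratio to a different limit.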
28,709
Frequentist definition of probability; does there exist a formal definition?
I don't think there is a mathematical definition, no. The difference between the various interpretations of probability is not a difference in how probability is mathematically defined. Probability could be mathematically defined this way: if $(Ω, Σ, μ)$ is a measure space with $μ(Ω) = 1$, then the probability of any event $S ∈ Σ$ is just $μ(S)$. I hope you agree that this definition is neutral to questions like whether we should interpret probabilities in a frequentist or Bayesian fashion.
28,710
Is a Gaussian Process the same as a kernelized Generalized Linear Model?
Since an ounce of algebra is equal to a ton of words, let me write some formulas. Notation Denote $k( \cdot, \cdot )$ some covariance function, assume we have $m$ observations $(\mathbf x_i, y_i )_{i=1}^m$. Denote $$ \Sigma = \begin{bmatrix} k( \mathbf x_1 , \mathbf x_1 ) & \dots & k( \mathbf x_1 , \mathbf x_m ) \\ k( \mathbf x_2 , \mathbf x_1 ) & \dots & k( \mathbf x_2, \mathbf x_m ) \\ \vdots & & \vdots \\ k( \mathbf x_m , \mathbf x_1 ) & \dots & k( \mathbf x_m , \mathbf x_m ) \\ \end{bmatrix} \in \mathbb{R}^{m \times m }, \ k(\mathbf x) = \begin{bmatrix} k(\mathbf x, \mathbf x_1 ) \\ \vdots \\ k(\mathbf x, \mathbf x_m ) \end{bmatrix} \in \mathbb{R}^m,\ \mathbf y = \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix} \in \mathbb{R}^m $$ and $$ X = \begin{bmatrix} ------ & \mathbf x_1^t & ------ \\ & \vdots & \\ ------ & \mathbf x_m^t & ------ \end{bmatrix}. $$ Gaussian Process Regression Gaussian Process Regression (GPR) gives the posterior at a test point $\mathbf x$ as $$ y \sim \mathcal{N} (k^t(\mathbf x) \Sigma^{-1} \mathbf y, k(\mathbf x,\mathbf x) - k^t(\mathbf x) \Sigma^{-1} k(\mathbf x ) ). $$ This arises by assuming $(\mathbf y, y )$ are all jointly Gaussian with zero mean and a covariance structure specified by $k( \cdot, \cdot )$. That's the main idea and the rest is calculations using Schur complements. You would (probably) want to make a prediction based on either the posterior mean or the posterior mode. Luckily, in this case they are the same. You would predict, for a given $\mathbf x$: $$ y^{\star} = k^t(\mathbf x ) \Sigma^{-1} \mathbf y. $$ General Linear Model A General Linear Model (GLM) arises when you try to find the best linear model to describe observations with a given covariance structure (specified by $\Sigma$). You assume $$ \mathbf y = X \beta + \epsilon, \epsilon \sim \mathcal{N}(0,\Sigma).
$$ Then the log-likelihood is $$ \log p(\mathbf y|X,\beta) = -\frac{1}{2} (X \beta - \mathbf y )^t\Sigma^{-1} (X\beta - \mathbf y), $$ up to an additive constant. Then the following $\beta^{\star}$ is a Maximum Likelihood Estimator for $\beta$: $$ \beta^{\star} := \arg \min_{\beta} \| \Sigma^{-1/2} (X\beta - \mathbf y) \|_2^2 \\ = ( (\Sigma^{-1/2} X)^t (\Sigma^{-1/2} X) )^{-1} (\Sigma^{-1/2} X)^t \Sigma^{-1/2}\mathbf y \\ = (X^t \Sigma^{-1} X)^{-1} X^t \Sigma^{-1} \mathbf y. $$ Now, a prediction is made using this linear model as follows: $$ y^{\star} = \mathbf x^t \beta^{\star} = \mathbf x^t (X^t \Sigma^{-1} X)^{-1} X^t \Sigma^{-1} \mathbf y. $$ Conclusion The formulas for the posterior mean for GPR and the GLM predictor are clearly different, so this answers your question. A Few Comments One key difference is that a GLM does not take into account the covariance between $\mathbf x$ and $\mathbf x_i$, for any $i$. In the GPR model, this information on $\mathbf x$ enters via the vector $k(\mathbf x)$. Expanding on this point you can think of either one of these models as a weighting scheme used to get from $\mathbf y$ to $y$. In the GLM case, your weights are a linear function of $\mathbf x$ itself. In the GPR case, these weights are still linear, but now in $k(\mathbf x, \cdot )$! More on this in the book, chapter 2. http://www.gaussianprocess.org/gpml/ The Gaussian Process model is Bayesian. It gives you a posterior distribution (of which you take the mean for prediction). The GLM is frequentist - no posterior distribution, just point estimates (for $\beta^{\star}$ and for $y^{\star}$).
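To make the conclusion concrete, here is a small numerical sketch (all data, the 1-D inputs, and the RBF choice of $k$ are illustrative assumptions of mine) computing both predictors on the same toy data; they generally disagree:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (a tiny pure-Python stand-in for numpy.linalg.solve)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b):
    return math.exp(-0.5 * (a - b) ** 2)

xs = [0.0, 1.0, 2.0, 3.0]   # 1-D inputs, so X is a single column
ys = [0.0, 0.8, 0.9, 0.1]
Sigma = [[rbf(a, b) for b in xs] for a in xs]

def gpr_mean(x_star):
    """GPR posterior mean k(x)^t Sigma^{-1} y."""
    alpha = solve(Sigma, ys)
    return sum(rbf(x_star, xi) * a for xi, a in zip(xs, alpha))

def gls_predict(x_star):
    """GLM predictor x^t (X^t Sigma^{-1} X)^{-1} X^t Sigma^{-1} y,
    which with one covariate reduces to a scalar beta."""
    s_inv_y = solve(Sigma, ys)
    s_inv_x = solve(Sigma, xs)
    beta = sum(x * v for x, v in zip(xs, s_inv_y)) / sum(x * v for x, v in zip(xs, s_inv_x))
    return x_star * beta

print(gpr_mean(1.5), gls_predict(1.5))  # the two predictors disagree
```

Notice how only `gpr_mean` evaluates $k(\mathbf x, \mathbf x_i)$ at the test point, which is exactly the "key difference" noted above.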
28,711
Is a Gaussian Process the same as a kernelized Generalized Linear Model?
No, there are special cases where these two overlap but in general they are different. It is confusing because they are related and can both be used for nonparametric regression. Also the word "kernel" is ambiguous here, as there is a distinction between kernel machines and kernel-density-estimate-type nonparametric regression. An example of overlap is that relevance vector machines (RVMs), which can be seen as a type of Bayesian kernelised GLM with sparsity-inducing priors, can also be formulated as a Gaussian process. This is described in the Rasmussen & Williams book mentioned in the comment. Gaussian processes are, strictly speaking, a type of distribution where every finite sample has a joint Gaussian distribution. Nothing says that this distribution needs to be used for regression. Gaussian processes can be used for unsupervised learning, such as Gaussian process latent variable models. Gaussian processes can also be used for optimisation. Kernelised GLMs don't really make sense in either of these contexts. There are a couple of other differences: GPs require the kernel to be positive semi-definite, kernelised GLMs do not. Fitting kernelised GLMs requires parameter estimation; fitting GPs does not.
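The positive semi-definiteness requirement can be checked numerically: a Gram matrix built from a valid kernel admits a Cholesky factorisation, while one built from an arbitrary symmetric function generally does not. A rough sketch (the helper and the example "non-kernel" are my own; for distinct points the RBF Gram matrix is in fact strictly positive definite, which is what the check below tests):

```python
import math

def rbf(a, b):
    return math.exp(-0.5 * (a - b) ** 2)

def is_pos_def(K, tol=1e-12):
    """Attempt a Cholesky factorisation of the symmetric matrix K; this
    succeeds exactly when K is (numerically, strictly) positive definite."""
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = K[i][i] - s
                if d <= tol:
                    return False   # non-positive pivot: not positive definite
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (K[i][j] - s) / L[j][j]
    return True

pts = [0.0, 1.0, 2.0]
gram_rbf = [[rbf(a, b) for b in pts] for a in pts]            # a valid kernel
gram_bad = [[1.0 - (a - b) ** 2 for b in pts] for a in pts]   # symmetric, not a kernel

print(is_pos_def(gram_rbf), is_pos_def(gram_bad))  # True False
```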
28,712
Is a Gaussian Process the same as a kernelized Generalized Linear Model?
If you are using a Gaussian process for regression and only care about the predictive mean, then it is exactly equivalent to performing kernel ridge regression. I had the same question in mind while reading about these things, therefore decided to compile a short summary of the relationships between Gaussian process and kernel ridge regression.
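The claimed equivalence can be checked numerically: kernel ridge regression minimises $\|K\alpha - \mathbf y\|^2 + \lambda\,\alpha^t K \alpha$, whose minimiser is $\alpha = (K+\lambda I)^{-1}\mathbf y$, i.e. exactly the GP posterior-mean coefficients when $\lambda$ equals the noise variance $\sigma^2$. A sketch (toy data and RBF kernel are my own choices) that recovers the kernel-ridge solution by gradient descent and compares it with the GP closed form:

```python
import math

def rbf(a, b):
    return math.exp(-0.5 * (a - b) ** 2)

xs = [0.0, 1.0, 2.0]
ys = [0.0, 0.9, 0.2]
lam = 0.1                 # ridge penalty lambda = GP noise variance sigma^2
n = len(xs)
K = [[rbf(a, b) for b in xs] for a in xs]

# Kernel ridge: minimise ||K a - y||^2 + lam * a^t K a by gradient descent.
# The gradient is 2 K ((K + lam I) a - y).
a = [0.0] * n
for _ in range(50_000):
    resid = [sum(K[i][j] * a[j] for j in range(n)) + lam * a[i] - ys[i]
             for i in range(n)]
    grad = [2.0 * sum(K[i][j] * resid[j] for j in range(n)) for i in range(n)]
    a = [ai - 0.05 * gi for ai, gi in zip(a, grad)]

# GP posterior mean: alpha = (K + lam I)^{-1} y, via Gaussian elimination.
def solve(A, b):
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][k] * x[k] for k in range(r + 1, m))) / M[r][r]
    return x

alpha = solve([[K[i][j] + (lam if i == j else 0.0) for j in range(n)]
               for i in range(n)], ys)

def predict(coef, x_star):
    return sum(rbf(x_star, xi) * c for xi, c in zip(xs, coef))

print(predict(a, 1.5), predict(alpha, 1.5))  # the two predictions coincide
```

The GP predictive variance, of course, has no counterpart on the kernel-ridge side; the equivalence is for the mean only, as the answer says.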
28,713
Sources' seeming disagreement on linear, quadratic and Fisher's discriminant analysis
I'm addressing only one aspect of the question, and doing it intuitively without algebra. If the $g$ classes have the same variance-covariance matrices and differ only by the shift of their centroids in the $p$-dimensional space then they are completely linearly separable in the $q=min(g-1,p)$ "subspace". This is what LDA is doing. Imagine you have three identical ellipsoids in the space of variables $V_1, V_2, V_3$. You have to use the information from all the variables in order to predict the class membership without error. But due to the fact that these were identically sized and oriented clouds it is possible to rescale them by a common transform into balls of unit radius. Then $q=g-1=2$ independent dimensions will suffice to predict the class membership as precisely as before. These dimensions are called discriminant functions $D_1, D_2$. Having 3 same-size balls of points you need only 2 axial lines, and to know the balls' centre coordinates on them, in order to assign every point correctly. Discriminants are uncorrelated variables, their within-class covariance matrices are ideally identity ones (the balls). Discriminants form a subspace of the original variables space - they are their linear combinations. However, they are not rotation-like (PCA-like) axes: seen in the original variables space, discriminants as axes are not mutually orthogonal. So, under the assumption of homogeneity of within-class variance-covariances, LDA using all the existing discriminants for classification is no worse than classifying directly by the original variables. But you don't have to use all the discriminants. You might use only the $m<q$ strongest / most statistically significant of them. This way you lose minimal information for classifying and the misclassification will be minimal. Seen from this perspective, LDA is a data reduction similar to PCA, only supervised.
Note that assuming the homogeneity (+ multivariate normality) and provided that you plan to use all the discriminants in classification, it is possible to bypass the extraction of the discriminants themselves - which involves a generalized eigenproblem - and compute the so-called "Fisher's classification functions" from the variables directly, in order to classify with them, with the equivalent result. So, when the $g$ classes are identical in shape we could consider the $p$ input variables or the $g$ Fisher's functions or the $q$ discriminants as all equivalent sets of "classifiers". But discriminants are more convenient in many respects.$^1$ Since usually the classes are not "identical ellipses" in reality, the classification by the $q$ discriminants is somewhat poorer than if you do Bayes classification by all the $p$ original variables. For example, on this plot the two ellipsoids are not parallel to each other; and one can visually grasp that the single existing discriminant is not enough to classify points as accurately as the two variables allow to. QDA (quadratic discriminant analysis) would then be a better approximation than LDA. A practical approach half-way between LDA and QDA is to use LDA-discriminants but use their observed separate-class covariance matrices at classification (see, see) instead of their pooled matrix (which is the identity). (And yes, LDA can be seen as closely related to, even a specific case of, MANOVA and Canonical correlation analysis or Reduced rank multivariate regression - see, see, see.) $^1$ An important terminological note. In some texts the $g$ Fisher's classification functions may be called "Fisher's discriminant functions", which may be confused with the $q$ discriminants which are canonical discriminant functions (i.e. obtained in the eigendecomposition of $\bf W^{-1}B$). For clarity, I recommend saying "Fisher's classification functions" vs "canonical discriminant functions" (= discriminants, for short).
In modern understanding, LDA is the canonical linear discriminant analysis. "Fisher's discriminant analysis" is, at least to my awareness, either LDA with 2 classes (where the single canonical discriminant is inevitably the same thing as the Fisher's classification functions) or, broadly, the computation of Fisher's classification functions in multiclass settings.
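For the two-class case mentioned just above, the single canonical discriminant is the direction $\mathbf w \propto \mathbf W^{-1}(\bar{\mathbf x}_2 - \bar{\mathbf x}_1)$. A small sketch (data and names are illustrative) that builds two same-covariance clouds, extracts this discriminant, and checks that the projected class separation is close to the Mahalanobis distance between the centroids:

```python
import random

random.seed(42)

def sample_class(mu, n):
    """n points from a 2-D Gaussian with mean mu and a shared covariance
    (so the homogeneity assumption behind LDA holds)."""
    pts = []
    for _ in range(n):
        g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
        pts.append((mu[0] + g1, mu[1] + 0.8 * g1 + 0.6 * g2))
    return pts

c1 = sample_class((0.0, 0.0), 400)
c2 = sample_class((2.0, 2.0), 400)

def mean2(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

m1, m2 = mean2(c1), mean2(c2)

# Pooled within-class scatter matrix W.
W = [[0.0, 0.0], [0.0, 0.0]]
for pts, m in ((c1, m1), (c2, m2)):
    for p in pts:
        d = (p[0] - m[0], p[1] - m[1])
        for i in range(2):
            for j in range(2):
                W[i][j] += d[i] * d[j]

# The single canonical discriminant direction: w solves W w = (m2 - m1).
dm = (m2[0] - m1[0], m2[1] - m1[1])
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
w = ((W[1][1] * dm[0] - W[0][1] * dm[1]) / det,
     (W[0][0] * dm[1] - W[1][0] * dm[0]) / det)

def project(p):
    return w[0] * p[0] + w[1] * p[1]

z1 = [project(p) for p in c1]
z2 = [project(p) for p in c2]

def sd(z):
    m = sum(z) / len(z)
    return (sum((v - m) ** 2 for v in z) / (len(z) - 1)) ** 0.5

gap = abs(sum(z2) / len(z2) - sum(z1) / len(z1))
pooled = ((sd(z1) ** 2 + sd(z2) ** 2) / 2) ** 0.5
print(gap / pooled)  # ~ Mahalanobis distance between the centroids
```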
28,714
Find the joint distribution of $X_1$ and $\sum_{i=1}^n X_i$
The transformation argument works fine and is always useful but I will now suggest an alternative way to solve this problem that bears a certain resemblance to the method you would use if the variables were discrete. Recall that the main difference is that while for a discrete random variable $X$, $P(X=x)$ is well-defined, for a continuous rv $Y$, $P(Y=y)=0$, so we need to be a little careful. Let $S=\sum_{i=1}^n X_i$ and we are now looking for the joint distribution $$f_{X_1, S} \left(x_1, s \right)$$ which we can approximate with the probability \begin{align} f_{X_1, S} \left(x_1, s \right) \Delta x_1 \Delta s &\approx P\left[ x_1 <X_1< x_1 +\Delta x_1 , s<S<s+\Delta s \right] \\ &\approx P\left[ x_1 <X_1< x_1 +\Delta x_1 , s-x_1<\sum_{i=2}^n X_i<s-x_1+\Delta s \right] \\ &= P\left[ x_1 <X_1< x_1 +\Delta x_1 \right] P \left[ s-x_1<\sum_{i=2}^n X_i<s-x_1+\Delta s \right] \\ &\approx \frac{1}{\theta} \exp\left\{-\frac{x_1}{\theta}\right\} \Delta x_1 \, \frac{ \left(s-x_1\right)^{n-2} \exp\left\{-\frac{s-x_1}{\theta} \right\}}{\Gamma \left(n-1 \right) \theta^{n-1}} \, \Delta s \end{align} so that, dividing through by $\Delta x_1 \Delta s$, $$ f_{X_1, S} \left(x_1, s \right) = \frac{\left(s-x_1\right)^{n-2}}{\theta^n \left(n-2\right)!} \exp\left\{-\frac{s}{\theta} \right\} $$ for $ 0<x_1<s<\infty $. Note that in the third line we have used the independence of $X_1$ and $\sum_{i=2}^n X_i$, and in the fourth line the additivity property of the gamma distribution, of which the exponential is a special case. If you adjust the notation, we get the same thing here as above. This method allows you to avoid the multiple integration and that's why I prefer it. Again, be careful in how you define the densities, however. Hope this helps.
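As a sanity check on the derived density, one can verify numerically that it integrates to $1$ over the region $0<x_1<s<\infty$ (here for $n=3$, $\theta=1$, with the upper limit truncated; function names and grid sizes are my own choices):

```python
import math

def joint_density(x1, s, n=3, theta=1.0):
    """The joint density of (X_1, S) derived above, supported on 0 < x1 < s."""
    if not 0.0 < x1 < s:
        return 0.0
    return ((s - x1) ** (n - 2) / (theta ** n * math.factorial(n - 2))
            * math.exp(-s / theta))

def total_mass(n=3, theta=1.0, s_max=60.0, steps=400):
    """Riemann-sum integral of the density over 0 < x1 < s < s_max;
    should come out close to 1 if the formula is a genuine density."""
    ds = s_max / steps
    total = 0.0
    for i in range(1, steps + 1):
        s = i * ds
        dx = s / steps
        inner = sum(joint_density((j + 0.5) * dx, s, n, theta)
                    for j in range(steps)) * dx
        total += inner * ds
    return total

print(total_mass())  # ~ 1.0, as a density must be
```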
28,715
Find the joint distribution of $X_1$ and $\sum_{i=1}^n X_i$
Correct me if I am wrong, but I don't think one needs to find the conditional distribution to find the conditional expectation for the UMVUE. We can find the conditional mean using well-known relations between independent Beta and Gamma variables. Specifically, the fact that if $U$ and $V$ are independent Gamma variates, then $U+V$ is a Gamma variate, and it is independent of the Beta variate $\frac{U}{U+V}$. Here, note that $X_1\sim\text{Gamma}(1,\frac{1}{\theta})$ and $\sum_{i=2}^nX_i\sim\text{Gamma}(n-1,\frac{1}{\theta})$ are independently distributed. And $X_1+\sum_{i=2}^nX_i\sim\text{Gamma}(n,\frac{1}{\theta})$ is distributed independently of $\dfrac{X_1}{X_1+\sum_{i=2}^nX_i}\sim\text{Beta}(1,n-1)$. Define $h(X_1,\cdots,X_n)=\begin{cases}1&,\text{ if }X_1\le2\\0&,\text{ otherwise }\end{cases}$ $T=\sum_{i=1}^n X_i$ is complete sufficient for the family of distributions $\{1-\exp(-\frac{x}{\theta}):\theta>0\}$. So UMVUE of $P(X\le 2)$ is $E(h\mid T)$ by the Lehmann-Scheffe theorem. We have, \begin{align}E(h\mid T=t)&=P(X_1\le2\mid \sum_{i=1}^n X_i=t)\\&=P\left(\frac{X_1}{\sum_{i=1}^nX_i}\le\frac{2}{t}\mid\sum_{i=1}^n X_i=t\right)\\&=P\left(\frac{X_1}{X_1+\sum_{i=2}^nX_i}\le\frac{2}{t}\mid\sum_{i=1}^n X_i=t\right)\\&=P\left(\frac{X_1}{X_1+\sum_{i=2}^nX_i}\le\frac{2}{t}\right)\\&=\int_0^{2/t}\frac{(1-x)^{n-2}}{B(1,n-1)}\,\mathrm{d}x\\&=1-\left(1-\frac{2}{t}\right)^{n-1}\end{align} Hence the UMVUE of $P(X\le2)$ should be $1-\left(1-\dfrac{2}{\sum_{i=1}^nX_i}\right)^{n-1}$.
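A quick Monte Carlo check of the result (the settings below are arbitrary choices of mine): since the UMVUE is unbiased, averaging $1-\left(1-2/T\right)^{n-1}$ over many simulated samples should recover $P(X\le 2)=1-e^{-2/\theta}$:

```python
import math
import random

random.seed(7)

def umvue(sample):
    """The UMVUE of P(X <= 2) derived above: 1 - (1 - 2/T)^(n-1), T = sum.
    (If T <= 2 the conditional probability is 1; vanishingly rare here.)"""
    n, t = len(sample), sum(sample)
    if t <= 2.0:
        return 1.0
    return 1.0 - (1.0 - 2.0 / t) ** (n - 1)

theta, n, reps = 1.5, 20, 100_000
estimates = [umvue([random.expovariate(1.0 / theta) for _ in range(n)])
             for _ in range(reps)]

target = 1.0 - math.exp(-2.0 / theta)   # the true P(X <= 2)
print(sum(estimates) / reps, target)     # the two should agree closely
```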
28,716
Is the `weights=` option in lmer() doing what I want?
The log-likelihood is defined as: $$ \log(L(\boldsymbol{\theta})) = \sum_{i = 1}^{n} w_i \log(P(y_i | \boldsymbol{x}_i, \boldsymbol{\theta})) $$ where $\boldsymbol{\theta}$ are the model parameters, $w_i$ is the weight for observation $i$, $y_i$ is the response for observation $i$, and $\boldsymbol{x}_i$ is the vector of covariates for observation $i$. So, yes, I think the weights option is doing exactly what you want - the more recent observations have a greater contribution to the log-likelihood. I know you specifically didn't ask for any comments on this in your question, but Dixon and Coles considered using such weights in order to increase the predictive performance of their soccer model - so it might be worth looking at a similar weighting function (if you are not already familiar with this).
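As a sketch of how such weights enter the likelihood (the Gaussian density and the decay rate `xi` below are assumptions for illustration, loosely in the spirit of Dixon and Coles' exponential time decay):

```python
import numpy as np

# Hypothetical exponential time-decay weights: recent observations count more.
def decay_weights(ages_days, xi=0.005):
    return np.exp(-xi * np.asarray(ages_days, dtype=float))

# Weighted Gaussian log-likelihood: each observation's log-density is
# multiplied by its weight, exactly as in the formula above.
def weighted_loglik(y, mu, sigma, w):
    y, w = np.asarray(y, float), np.asarray(w, float)
    terms = -0.5 * np.log(2 * np.pi * sigma ** 2) - (y - mu) ** 2 / (2 * sigma ** 2)
    return float(np.sum(w * terms))

ages = [0, 30, 365]                  # today, a month ago, a year ago
w = decay_weights(ages)
print(np.round(w, 3))                # [1.    0.861 0.161]: older data downweighted
print(weighted_loglik([1.0, 1.2, 3.0], mu=1.0, sigma=1.0, w=w))
```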
28,717
How to calculate the absolute central moment of a Binomial distribution?
By induction on $m$ it is straightforward to show that $$\sum_{k=0}^m \binom{n}{k}p^k(1-p)^{n-k}(pn-k) = (m+1)\binom{n}{m+1}p^{m+1}(1-p)^{n-m}.$$ For a Binomial variable $X$ with parameters $n$ and $p$, which models the "sum" in the question, the mean absolute deviation from the mean $np$ is $$\eqalign{ \mathbb{E}\left(|np - X|\right) &= \sum_{k=0}^{\lfloor np \rfloor}\binom{n}{k}p^k(1-p)^{n-k}(np-k) - \sum_{k=\lfloor np \rfloor+1}^n\binom{n}{k}p^k(1-p)^{n-k}(np-k) \\ &= 2 (1-p)^{n-\lfloor n p\rfloor } p^{\lfloor n p\rfloor +1} (\lfloor n p\rfloor +1) \binom{n}{\lfloor n p\rfloor+1}, }$$ with the last step following from two applications of the first result (together with elementary binomial identities). The notation "$\lfloor n p \rfloor$" refers to the floor of $np$--the greatest integer less than or equal to $np$. For example, with $n=5$ and $p=1/2$ as in the question, this formula gives $$2(1-1/2)^{5-2}(1/2)^3(2+1)\binom{5}{3} = \frac{15}{16} = 0.9375.$$
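The closed form is easy to verify against a brute-force computation from the pmf (a quick sketch):

```python
import numpy as np
from math import comb, floor

def binom_mad(n, p):
    """Closed-form mean absolute deviation of Binomial(n, p) about its mean np."""
    m = floor(n * p)
    return 2 * (1 - p) ** (n - m) * p ** (m + 1) * (m + 1) * comb(n, m + 1)

def binom_mad_direct(n, p):
    """Brute-force E|X - np| summed over the pmf, for checking."""
    ks = np.arange(n + 1)
    pmf = np.array([comb(n, k) * p ** k * (1 - p) ** (n - k) for k in ks])
    return float(np.sum(pmf * np.abs(n * p - ks)))

print(binom_mad(5, 0.5))                                    # 0.9375, as in the worked example
print(abs(binom_mad(10, 0.3) - binom_mad_direct(10, 0.3)))  # ~0: the two agree
```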
28,718
Covariance among latent variables
The idea behind SEM and confirmatory factor analysis is to explain the covariances between the observed variables in terms of a smaller number of underlying factors -- unseen but presumed to exist. You could fit a model with uncorrelated factors, but allowing the factors to have correlations gives you extra parameters to improve the fit. Unless you have strong, theoretical reasons for positing independence, it makes sense to include the covariances. Otherwise, the model won't fit.

If factors X and Y are correlated (double arrow) and each impacts Z, there is no way to distinguish between the direct effect of X on Z and the effect of Y on Z through X. The effect exists, but it will be confounded with the direct effect of X on Z. However if X depends on Y (single arrow) and Z depends on X (single arrow), the correlation of Z and Y will increase if the correlation of X and Y increases.

The difference between covariances between latent factors and covariances between indicator variables (Observed) should matter to you. The whole point of SEM is to explain covariances between indicator variables in terms of factors. Conditional on the factors, the indicators should be independent. You can tweak the model to add correlations between the indicators if you feel you need them, but these are often a counsel of desperation employed when in truth there are no latent factors.

Exceptions exist. Suppose the indicators are something measured at successive time points. I can believe they might all depend on some underlying factors, and yet influence each other through a learning effect in the subject, say. In that case, it would make sense to have covariances in the indicators and covariances in the factors. But in general, I would want to see a solid, subject matter reason for introducing covariances at the indicator level.

Recommended texts

Sadly, I have yet to find a text on SEM that I really like.
I think that Bollen's book is the classic in this field, but it's pricey and I don't own a copy. I recently ordered Beaujean's book, which is focused on using lavaan in R. My copy hasn't arrived yet, so I don't know if it goes into the detail you need about path analysis. If you have access to a good library, or a lot of cash, I would check out Bollen.
28,719
Covariance among latent variables
If you don't include the covariances between latent variables then cross loadings will be biased. That's because cross loadings fit the correlations between measurement items that measure the two latent variables, and so does the factor covariance. If you mis-specify one of them you will affect the other. From there direct and indirect effects can be biased as well. If you don't have cross loadings you are fine - most everything will be fine due to some conditional model expressions.
28,720
Difference between marginal and conditional treatment effect? Relating to regression vs. propensity score methods
What do marginal and conditional relate to?

Assuming the treatment effects are accurately estimated, the conditional treatment effect relates to the estimated effect on an individual whereas the marginal treatment effect relates to the effect on the entire population.

When do the estimates differ?

It sounds odd that the two estimates can differ, but they can in certain situations. The most commonly encountered situations are when the treatment effect is an odds ratio or hazard ratio (HR). Note that the marginal and conditional estimates are equal with risk ratios or with linear regressions. The scenarios where marginal and conditional (odds ratio or HR) estimates differ most tend to coincide with scenarios when the difference between HRs and risk ratios is greatest. This is when the outcome is "common" and the covariates included in the multivariate regression model are highly predictive of the outcome.

How does this affect a clinician's interpretation?

If the conditional HR is 0.7 then you can say that giving drug A rather than drug B will lower the hazard in the patient sitting in front of you by 30%. Whereas for a marginal HR of 0.7 you could say that if you gave the entire population drug A rather than drug B you would lower the entire population's hazard by 30% (this can be useful for healthcare planners or decision makers). Note that the conditional HR is usually less precise and tends to give larger treatment effects (further from the null).

Why the difference between PS and regression?

When you use multivariate regression the interpretation of the coefficients is "the estimated change in outcome whilst holding all other variables constant". This may help you understand why this treatment effect is conditional: it is estimated whilst conditioning on the other covariates in the model (i.e. comparing two patients with the same set of characteristics). In contrast, say you use inverse probability weighting with the propensity score. You just compare outcome rates in two (weighted) populations, without reference to each individual's characteristics. This gives you a marginal HR. Coincidentally, this is the type of HR you get from a typical randomized clinical trial (with a primary analysis which does not adjust for covariates).
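The odds-ratio noncollapsibility behind this can be seen numerically (a sketch with made-up parameters: a logistic model whose conditional OR is exactly 2 for every patient, yet whose marginal OR is smaller once a strong prognostic covariate is present):

```python
import numpy as np

def expit(t):                        # inverse logit
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(1)
x = rng.normal(size=500_000)         # a prognostic covariate
beta_trt = np.log(2.0)               # conditional odds ratio of 2, same for everyone

def marginal_or(beta_x):
    # Population risk if everyone vs no one were treated, then the OR of those risks.
    p1 = expit(beta_trt + beta_x * x).mean()
    p0 = expit(beta_x * x).mean()
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

print(round(marginal_or(0.0), 3))    # 2.0: covariate irrelevant, marginal == conditional
print(round(marginal_or(2.0), 3))    # < 2: strong covariate pulls the marginal OR toward 1
```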
28,721
Random Forest Probabilistic Prediction vs majority vote
Such questions are always best answered by looking at the code, if you're fluent in Python. RandomForestClassifier.predict, at least in the current version 0.16.1, predicts the class with the highest probability estimate, as given by predict_proba (this line). The documentation for predict_proba says:

The predicted class probabilities of an input sample is computed as the mean predicted class probabilities of the trees in the forest. The class probability of a single tree is the fraction of samples of the same class in a leaf.

The difference from the original method is probably just so that predict gives predictions consistent with predict_proba. The result is sometimes called "soft voting", rather than the "hard" majority vote used in the original Breiman paper. I couldn't in quick searching find an appropriate comparison of the performance of the two methods, but they both seem fairly reasonable in this situation. The predict documentation is at best quite misleading; I've submitted a pull request to fix it.

If you want to do majority vote prediction instead, here's a function to do it. Call it like predict_majvote(clf, X) rather than clf.predict(X). (Based on predict_proba; only lightly tested, but I think it should work.)

import numpy as np
from scipy.stats import mode
from sklearn.ensemble.forest import _partition_estimators, _parallel_helper
from sklearn.tree._tree import DTYPE
from sklearn.externals.joblib import Parallel, delayed
from sklearn.utils import check_array
from sklearn.utils.validation import check_is_fitted

def predict_majvote(forest, X):
    """Predict class for X.

    Uses majority voting, rather than the soft voting scheme
    used by RandomForestClassifier.predict.

    Parameters
    ----------
    X : array-like or sparse matrix of shape = [n_samples, n_features]
        The input samples. Internally, it will be converted to
        ``dtype=np.float32`` and if a sparse matrix is provided
        to a sparse ``csr_matrix``.

    Returns
    -------
    y : array of shape = [n_samples] or [n_samples, n_outputs]
        The predicted classes.
    """
    check_is_fitted(forest, 'n_outputs_')

    # Check data
    X = check_array(X, dtype=DTYPE, accept_sparse="csr")

    # Assign chunk of trees to jobs
    n_jobs, n_trees, starts = _partition_estimators(forest.n_estimators,
                                                    forest.n_jobs)

    # Parallel loop: collect each tree's hard class predictions
    all_preds = Parallel(n_jobs=n_jobs, verbose=forest.verbose,
                         backend="threading")(
        delayed(_parallel_helper)(e, 'predict', X, check_input=False)
        for e in forest.estimators_)

    # Reduce: take the per-sample mode (majority vote) across trees
    modes, counts = mode(all_preds, axis=0)

    if forest.n_outputs_ == 1:
        return forest.classes_.take(modes[0], axis=0)
    else:
        n_samples = all_preds[0].shape[0]
        preds = np.zeros((n_samples, forest.n_outputs_),
                         dtype=forest.classes_.dtype)
        for k in range(forest.n_outputs_):
            preds[:, k] = forest.classes_[k].take(modes[:, k], axis=0)
        return preds

On the dumb synthetic case I tried, predictions agreed with the predict method every time.
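To see how the two schemes can disagree, here is a toy sketch with made-up per-tree probabilities at a single test point:

```python
import numpy as np

# Rows = trees; columns = P(class 0), P(class 1) at one test point.
tree_probs = np.array([
    [0.9, 0.1],    # one confident tree favours class 0
    [0.4, 0.6],    # two lukewarm trees favour class 1
    [0.4, 0.6],
])

soft_vote = tree_probs.mean(axis=0).argmax()                  # average probs, then argmax
hard_vote = np.bincount(tree_probs.argmax(axis=1)).argmax()   # per-tree argmax, then majority

print(soft_vote, hard_vote)   # 0 1 -- the confident tree wins under soft voting
```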
28,722
Does failure to reject the null in Neyman-Pearson approach mean that one should "accept" it?
If "the Neyman–Pearson approach" is understood narrowly as extending Fisher's by introducing an alternative hypothesis in addition to the null hypothesis, then there's no motive to change the terminology. The alternative can influence only the choice of test statistic (by consideration of the test's power); once that choice has been made the distribution of the test statistic is calculated under the null. "Fail to reject" reflects this asymmetry between the null & alternative hypotheses†; you're provisionally assuming the null true till amassing enough evidence to the contrary. ("Retain" is an alternative way of putting it.) Pedagogues labour this nice semantic distinction in an attempt to avert the misconception that an "insignificant" result necessarily reflects a preponderance of evidence against the alternative. For example, consider a single observation from a Gaussian distribution with unit variance & unknown mean, $X\sim\mathcal{N}(\mu,1)$. With the point null & alternative $H_0: \mu=0$ vs $H_\mathrm{A}: \mu=1$, a test of size $0.05$ rejects the null only when $x > 1.64$, even though the alternative is better supported whenever $x > 0.5$. If on the other hand you were to take the decision-theoretic framework of the NP approach seriously (see the excellent answer here), you just wouldn't bother to perform a test that was underpowered for your purposes; then talk of "accepting" the null hypothesis would seem a lot more sensible. Some notable expositors of testing theory from this viewpoint have apparently thought so. Lehmann & Romano (2005), Testing Statistical Hypotheses, unabashedly use "accept" & "reject" throughout.
Casella & Berger (2002), Statistical Inference, use "accept" & "reject" too, even saying "We view a hypothesis testing problem as a problem in which one of two actions is going to be taken—the actions being the assertion of $H_0$ or $H_1$".‡

† An asymmetry exacerbated when the null's composite—either through specifying a range of parameter values, or because of a nuisance parameter that hasn't been removed through conditioning on an ancillary statistic—in which case it isn't rejected unless the test statistic is sufficiently extreme under any of its constituent simple nulls.

‡ While Cox & Hinkley (1974), Theoretical Statistics, manage four chapters on testing without using "accept" or "reject", except once in scare quotes.
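The Gaussian example above is easy to check numerically (a quick sketch using scipy):

```python
from scipy.stats import norm

# X ~ N(mu, 1); H0: mu = 0 vs HA: mu = 1; one-sided test of size 0.05.
crit = norm.ppf(0.95)                                 # rejection threshold, about 1.645
x = 1.0                                               # an observation between 0.5 and the threshold

reject = x > crit                                     # False: we "fail to reject" H0...
lik_ratio = norm.pdf(x, loc=1) / norm.pdf(x, loc=0)   # ...yet HA is better supported

print(round(crit, 3), reject, round(lik_ratio, 3))    # 1.645 False 1.649
```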
28,723
What's the proper y-axis label for an empirical cumulative distribution plot in a publication?
For the y axis of an ECDF plot, you can use "Fraction of Data", which is easier to interpret than F(x).
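The quantity being labelled -- the fraction of observations at or below each x -- is simple to compute (a minimal sketch):

```python
import numpy as np

data = np.array([3.1, 1.2, 5.0, 2.7])     # made-up sample
x = np.sort(data)
frac = np.arange(1, len(x) + 1) / len(x)  # ECDF heights: fraction of data <= x

for xi, yi in zip(x, frac):
    print(f"fraction of data <= {xi}: {yi:.2f}")
# steps through 0.25, 0.50, 0.75, 1.00
```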
28,724
What's the proper y-axis label for an empirical cumulative distribution plot in a publication?
Since it is called the (empirical) cumulative distribution function, I tend to label the y axis "Cumulative distribution of x". This makes it very explicit, and even if you have never heard of an (E)CDF, you know what the y axis shows. For example, if we have an (E)CDF of human heights, I would label the y axis "Cumulative distribution of human heights".
28,725
Getting to predicted values using cv.glmnet
Note that you are using the predict.cv.glmnet method when called as you did. The help for this function is a bit counterintuitive, but you can pass arguments to the predict.glmnet method, which does the predictions, via the ... argument. Hence you probably want

response <- predict(cvFit, as.matrix(imputedTestData[,2:33]), s = "lambda.min",
                    type = "class")

where type = "class" has meaning:

Type ‘"class"’ applies only to ‘"binomial"’ or ‘"multinomial"’ models, and produces the class label corresponding to the maximum probability.

(from ?predict.glmnet)

What you were seeing was the predicted values on the scale of the linear predictor (link function), i.e. before the inverse of the logit function has been applied to yield the probability of class == 1. This is fairly typical in R, and just as typically this behaviour can be controlled via a type argument.
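The three scales can be illustrated with made-up coefficients (a sketch of the logistic link arithmetic, not glmnet itself):

```python
import numpy as np

beta0, beta1 = -1.0, 2.0        # assumed fitted intercept and slope
x = np.array([0.0, 0.5, 2.0])

link = beta0 + beta1 * x        # linear predictor: what you were seeing
prob = 1 / (1 + np.exp(-link))  # inverse logit, i.e. type = "response"
label = (prob > 0.5).astype(int)  # thresholded, i.e. type = "class"

print(link)                # [-1.  0.  3.]
print(np.round(prob, 3))   # [0.269 0.5   0.953]
print(label)               # [0 0 1]
```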
28,726
2-Gaussian mixture model inference with MCMC and PyMC
The problem is caused by the way that PyMC draws samples for this model. As explained in section 5.8.1 of the PyMC documentation, all elements of an array variable are updated together. For small arrays like center this is not a problem, but for a large array like category it leads to a low acceptance rate. You can see the acceptance rate via print mcmc.step_method_dict[category][0].ratio The solution suggested in the documentation is to use an array of scalar-valued variables. In addition, you need to configure some of the proposal distributions since the default choices are bad. Here is the code that works for me: import pymc as pm sigmas = pm.Normal('sigmas', mu=0.1, tau=1000, size=2) centers = pm.Normal('centers', [0.3, 0.7], [1/(0.1)**2, 1/(0.1)**2], size=2) alpha = pm.Beta('alpha', alpha=2, beta=3) category = pm.Container([pm.Categorical("category%i" % i, [alpha, 1 - alpha]) for i in range(nsamples)]) observations = pm.Container([pm.Normal('samples_model%i' % i, mu=centers[category[i]], tau=1/(sigmas[category[i]]**2), value=samples[i], observed=True) for i in range(nsamples)]) model = pm.Model([observations, category, alpha, sigmas, centers]) mcmc = pm.MCMC(model) # initialize in a good place to reduce the number of steps required centers.value = [mu1_true, mu2_true] # set a custom proposal for centers, since the default is bad mcmc.use_step_method(pm.Metropolis, centers, proposal_sd=sig1_true/np.sqrt(nsamples)) # set a custom proposal for category, since the default is bad for i in range(nsamples): mcmc.use_step_method(pm.DiscreteMetropolis, category[i], proposal_distribution='Prior') mcmc.sample(100) # beware sampling takes much longer now # check the acceptance rates print mcmc.step_method_dict[category[0]][0].ratio print mcmc.step_method_dict[centers][0].ratio print mcmc.step_method_dict[alpha][0].ratio The proposal_sd and proposal_distribution options are explained in section 5.7.1. 
For the centers, I set the proposal to roughly match the standard deviation of the posterior, which is much smaller than the default due to the amount of data. PyMC does attempt to tune the width of the proposal, but this only works if your acceptance rate is sufficiently high to begin with. For category, the default proposal_distribution = 'Poisson' which gives poor results (I don't know why this is, but it certainly doesn't sound like a sensible proposal for a binary variable).
28,727
2-Gaussian mixture model inference with MCMC and PyMC
You should not model $\sigma$ with a Normal; that way you are allowing negative values for the standard deviation. Use instead something like: sigmas = pm.Exponential('sigmas', 0.1, size=2) update: I got near the initial parameters of the data by changing these parts of your model: sigmas = pm.Exponential('sigmas', 0.1, size=2) alpha = pm.Beta('alpha', alpha=1, beta=1) and by invoking the mcmc with some thinning: mcmc.sample(200000, 3000, 10) results: The posteriors are not very nice though... In section 11.6 of the BUGS Book they discuss this type of model and state that there are convergence problems with no obvious solution. Check also here.
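To see why the prior's support matters, here is a quick numpy sketch (my own illustration; the rate 0.1 and tau = 1000 are taken from the models above) comparing draws from the two candidate priors for $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draws from the two candidate priors for sigma
normal_draws = rng.normal(loc=0.1, scale=np.sqrt(1 / 1000), size=100_000)
expon_draws = rng.exponential(scale=1 / 0.1, size=100_000)  # rate 0.1

frac_negative_normal = np.mean(normal_draws < 0)
frac_negative_expon = np.mean(expon_draws < 0)
# The Normal prior puts some mass below zero (impossible for a standard
# deviation); the Exponential prior puts none.
```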
28,728
2-Gaussian mixture model inference with MCMC and PyMC
Also, non-identifiability is a big problem for using MCMC for mixture models. Basically, if you switch labels on your cluster means and cluster assignments, the likelihood doesn't change, and this can confuse the sampler (between chains or within chains). One thing you might try to mitigate this is using Potentials in PyMC3. A good implementation of a GMM with Potentials is here. The posterior for these kinds of problems is also generally highly multimodal, which also presents a big problem. You might instead want to use EM (or Variational inference).
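A tiny numpy check (illustrative toy numbers of my own) that swapping the component labels, weights included, leaves the mixture likelihood unchanged:

```python
import numpy as np

def npdf(x, mu, sigma):
    """Normal density, written out to avoid a scipy dependency."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def mixture_loglik(x, w, mus, sigmas):
    dens = w[0] * npdf(x, mus[0], sigmas[0]) + w[1] * npdf(x, mus[1], sigmas[1])
    return float(np.sum(np.log(dens)))

x = np.array([0.1, 0.5, 0.9])  # toy data
ll = mixture_loglik(x, [0.4, 0.6], [0.3, 0.7], [0.1, 0.1])
ll_swapped = mixture_loglik(x, [0.6, 0.4], [0.7, 0.3], [0.1, 0.1])
# Same likelihood under both labelings: the sampler cannot tell them apart.
```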
28,729
What is the correct notation for stating that random variables X and Y are independent?
$\require{txfonts}$As you say, the use of $\perp$ (\perp) for independence is not good, since it often means orthogonal, which in probability theory translates to correlation zero. Independence is a (much) stronger concept, so it needs a stronger symbol, and sometimes I have seen $\perp\!\!\!\perp$ (\perp\!\!\!\perp) used. That seems like a good idea! OK, it seems the math markup here does not like \Perp, but it is defined in the $\LaTeX$ packages pxfonts/txfonts. It is like \perp, but with double vertical lines. Above I used a hack to replicate it.
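If you'd rather not load pxfonts/txfonts, a common workaround is to wrap the same hack in a macro (the name \indep is my own choice):

```latex
% Independence symbol without loading pxfonts/txfonts:
\newcommand{\indep}{\perp\!\!\!\perp}
% usage in math mode:
% $X \indep Y$          % X and Y are independent
% $X \indep Y \mid Z$   % conditional independence
```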
28,730
What is the correct notation for stating that random variables X and Y are independent?
Apart from multivariate normal distributions of the kind $(X,Y)$, where one can write $Cov(X,Y)=0$, one writes "$X$ and $Y$ are independent". Why bother with symbols if normal language is already clear and short?
28,731
What is the correct notation for stating that random variables X and Y are independent?
$X|Y = X$ does not reflect the symmetry of the (non-)relation but shouldn't it signify independence?
28,732
What is the correct notation for stating that random variables X and Y are independent?
Old question, old answers. However, I'm using the Unicode character U+2AEB.
28,733
Permutation tests: criteria to choose a test statistic
The t-statistic makes a lot of sense as a test statistic; many people find it intuitive. If I quote a t-statistic of 0.5 or 5.5, it tells you something - how many standard errors apart the means are. The difficulty - at least with moderate non-normality - is not so much with using the statistic as using the t-distribution for its distribution under the null. The statistic is quite sensible. Of course, if you expect substantially heavier tails than the normal, a more robust statistic would do better, but the t-statistic is not highly sensitive to mild deviations from normality (for example, it's less sensitive than the variance-ratio statistic). If you want to use just the numerator of the statistic, that's great; it makes perfect sense as a permutation statistic if you're interested in a difference in means. If you're interested in a more general sense of location shift, it opens up a plethora of other possibilities. You're right to think there's a lot of freedom to choose a statistic and to tailor it to the particular circumstances - what alternatives you want power against, or what possible problems you'd like to be robust to (contamination, for example, can impact power). There are really almost no restrictions - you're free to choose almost anything, including useless test statistics. There are some considerations that you really should think about when choosing tests, of course, but you're free not to. -- That said, there are some criteria that can be applied in various circumstances. For example, if you're particularly interested in a specific kind of hypothesis, you can make use of a statistic that reflects it - for example, if you want to test a difference in population means, it often makes sense to make your test statistic related to a difference in sample means. If you know something about the kind of distribution you might have - heavy tails, or skew, or notionally light tailed but with some degree of contamination, or bimodal, ... 
you can devise a test statistic that might do well in such circumstances, for example, choosing a statistic that should perform well in the anticipated situation but has some robustness to contamination. -- Simulation is one way to investigate power under various situations.
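As a concrete illustration of using the numerator alone, here is a short Python sketch (my own, with made-up normal samples) of a two-sided permutation test with the difference in sample means as the statistic:

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_test_mean_diff(x, y, n_perm=5000):
    """Two-sided permutation test using the difference in means as the statistic."""
    observed = np.mean(x) - np.mean(y)
    pooled = np.concatenate([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = np.mean(perm[:n]) - np.mean(perm[n:])
        if abs(stat) >= abs(observed):
            count += 1
    # add-one correction so the observed labeling counts as one permutation
    return (count + 1) / (n_perm + 1)

x = rng.normal(0.0, 1, 30)   # made-up group 1
y = rng.normal(1.5, 1, 30)   # made-up group 2, shifted in location
p = perm_test_mean_diff(x, y)
```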
28,734
Optimal bin width for two dimensional histogram
My advice would generally be that it's even more critical than in 1-D to smooth where possible i.e. to do something like kernel density estimation (or some other such method, like log-spline estimation), which tends to be substantially more efficient than using histograms. As whuber points out, it's quite possible to be fooled by the appearance of a histogram, especially with few bins and small to moderate sample sizes. If you're trying to optimize mean integrated squared error (MISE), say, there are rules that apply in higher dimensions (the number of bins depends on the number of observations, the variance, the dimension, and the "shape"), for both kernel density estimation and histograms. [Indeed many of the issues for one are also issues for the other, so some of the information in this wikipedia article will be relevant.] This dependence on shape seems to imply that to choose optimally, you already need to know what you're plotting. However, if you're prepared to make some reasonable assumptions, you can use those (so for example, some people might say "approximately Gaussian"), or alternatively, you can use some form of "plug-in" estimator of the appropriate functional. Wand, 1997$^{[1]}$ covers the 1-D case. If you're able to get that article, take a look, as much of what's there is also relevant to the situation in higher dimensions (in so far as the kinds of analysis that are done). (It exists in working paper form on the internet if you don't have access to the journal.) Analysis in higher dimensions is somewhat more complicated (in pretty much the same way it proceeds from 1-D to r-dimensions for kernel density estimation), but there's a term in the dimension that comes into the power of n. 
Sec 3.4 Eqn 3.61 (p83) of Scott, 1992$^{[2]}$ gives the asymptotically optimal binwidth: $h_k^* = R(f_k)^{-1/2}\,\left(6\prod_{i=1}^d R(f_i)^{1/2}\right)^{1/(2+d)} n^{-1/(2+d)}$ where $R(f)=\int_{\mathfrak{R}^d} f(x)^2 dx$ is a roughness term (not the only one possible), and I believe $f_i$ is the derivative of $f$ with respect to the $i^\text{th}$ term in $x$. So for 2D that suggests binwidths that shrink as $n^{-1/4}$. In the case of independent normal variables, the approximate rule is $h_k^*\approx 3.5\sigma_k n^{-1/(2+d)}$, where $h_k$ is the binwidth in dimension $k$, the $*$ indicates the asymptotically optimal value, and $\sigma_k$ is the population standard deviation in dimension $k$. For a bivariate normal with correlation $\rho$, the binwidth is $h_i^* = 3.504\, \sigma_i(1-\rho^2)^{3/8}n^{-1/4}$. When the distribution is skewed, or heavy tailed, or multimodal, generally much smaller binwidths result; consequently the normal results would often be at best upper bounds on binwidth. Of course, it's entirely possible you're not interested in mean integrated squared error, but in some other criterion. [1]: Wand, M.P. (1997), "Data-based choice of histogram bin width", American Statistician 51, 59-64. [2]: Scott, D.W. (1992), Multivariate Density Estimation: Theory, Practice, and Visualization, John Wiley & Sons, Inc., Hoboken, NJ, USA.
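The normal-reference rules quoted above are easy to compute; here is a small Python sketch (function names and the illustrative values are my own):

```python
import numpy as np

def scott_binwidth_indep(sigma, n, d=2):
    """Normal-reference rule h_k* ~ 3.5 * sigma_k * n^(-1/(2+d))."""
    return 3.5 * sigma * n ** (-1.0 / (2 + d))

def scott_binwidth_bvn(sigma, rho, n):
    """Bivariate-normal rule h_i* = 3.504 * sigma_i * (1-rho^2)^(3/8) * n^(-1/4)."""
    return 3.504 * sigma * (1 - rho ** 2) ** (3 / 8) * n ** (-1 / 4)

# For 2D the binwidth shrinks like n^(-1/4):
h_indep = scott_binwidth_indep(sigma=1.0, n=10_000)       # 3.5 / 10 = 0.35
h_corr = scott_binwidth_bvn(sigma=1.0, rho=0.5, n=10_000)  # correlation narrows the bins
```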
28,735
Optimal bin width for two dimensional histogram
Given you have a fixed number $N$ of data points (i.e. you have an equal number of readings in both dimensions) you could immediately use: the square-root rule rounded down ($\sqrt{N}$) (i.e. the Excel way :) ), Sturges' rule ($\log_2N +1$), or some other rule that is based only on the number of available data points (e.g. Rice's rule), to find a common number of bins $M$ across each dimension. On the other hand, you might want to try something more robust like the Freedman–Diaconis rule, which essentially defines the bandwidth $h$ as equal to: $h= 2\, IQR(x)\, N^{-1/3}$, where IQR is the interquartile range of your data $x$. You then calculate the number of bins $M$ along each dimension as being equal to: $M = \lceil (\max(x)- \min(x))/h \rceil$. You do this across both dimensions of your data $x$; this gives you two, possibly different, numbers of bins that "should" be used across each dimension. You naively take the larger one so you do not "lose" information. Yet a fourth option would be to try to treat your sample as natively two-dimensional, calculate the norm for each of the sample points, and then perform the Freedman–Diaconis rule on the sample's norms, 
i.e.: $x_{new} = \sqrt{x_1^2 + x_2^2}$. OK, here is some code and a plot for the procedures I describe: rng(123,'twister'); % Fix random seed for reproducibility N = 250; % Number of points in our sample A = random('normal',0,1,[N,2]); % Generate a N-by-2 matrix with N(0,1) A(:,2) = A(:,2) * 5; % Make the second dimension more variable % The sqrt(N) rule: nbins_sqrtN = floor(sqrt(N)); % The Sturges formula: nbins_str = ceil(log2(N) +1); % The Freedman–Diaconis-like choice: IQRs = iqr(A); % Get the IQ ranges across each dimension Hs = 2* IQRs* N^(-1/3); % Get the bandwidths across each dimension Ranges = range(A); % Get the range of values across each dimension % Get the suggested number of bins along each dimension nbins_dim1 = ceil(Ranges(1)/Hs(1)); % 12 here nbins_dim2 = ceil(Ranges(2)/Hs(2)); % 15 here % Get the maximum of the two nbins_fd_1 = max( [nbins_dim1, nbins_dim2]); % The Freedman–Diaconis choice on the norms Norms = sqrt(sum(A.^2,2)); % Get the norm of each point in the 2-D sample H_norms = 2* iqr(Norms)* N^(-1/3);% Get the "norm" bandwidth nbins_fd_2 = ceil(range(Norms)/ H_norms); % Get number of bins [nbins_sqrtN nbins_str nbins_fd_1 nbins_fd_2] % Plot the results / Make bivariate histograms % I use the hist3 function from MATLAB figure(1); subplot(2,2,1); hist3(A,[ nbins_sqrtN nbins_sqrtN] ); title('Square Root rule'); subplot(2,2,2); hist3(A,[ nbins_str nbins_str] ); title('Sturges formula rule'); subplot(2,2,3); hist3(A,[ nbins_fd_1 nbins_fd_1]); title('Freedman–Diaconis-like rule'); subplot(2,2,4); hist3(A,[ nbins_fd_2 nbins_fd_2]); title('Freedman–Diaconis rule on the norms'); As others have noted, smoothing is almost certainly more appropriate for this case (i.e. getting a KDE). I hope, though, that this gives you an idea about what I described in my comment regarding the direct generalization (with all the problems it might entail) of 1-D sample rules to 2-D sample rules. Notably, most procedures assume some degree of "normality" in the sample. 
If you have a sample that clearly is not normally distributed (e.g. it is leptokurtic) these procedures (even in 1-D) would fail quite badly.
Optimal bin width for two dimensional histogram
Given you have a fixed number $N$ of data (ie. you have equal number of reading on both dimensions) you could immediately use: The square-root rule rounded-down ($\sqrt{N}$), (ie. the Excel-way :) )
Optimal bin width for two dimensional histogram Given you have a fixed number $N$ of data (ie. you have equal number of reading on both dimensions) you could immediately use: The square-root rule rounded-down ($\sqrt{N}$), (ie. the Excel-way :) ) Sturges' rule ($\log_2N +1$), Some other rule that is based only on the number of available data-points (eg. Rick's rule). To find the common number of bins $M$ across each dimension. On the hand, you might want to try something more robust like the Freedman–Diaconis rule which essentially defines the bandwidth $h$ as equal to: $h= 2 IQR(x) N^{-1/3}$, where IQR is the interquartile range of your data $x$. You then calculate the number of bins $M$ along each dimension as being equal to: $M = \lceil (max(x)- min(x))/h \rceil$. You do this across both dimensions of your data $x$; this gives you two, possibly different, numbers of bins that "should" be used across each dimension. You naively take the larger one so you do not "lose" information. Yet, a fourth option would be to try to treat your sample as natively two-dimensional, calculate the norm for each of the sample points and then perform the Freedman–Diaconis rule on the sample's norms. 
ie.: $x_{new} = \sqrt{x_1^2 + x_2^2}$ OK, here is some code and a plot for the procedures I describe: rng(123,'twister'); % Fix random seed for reproducibility N = 250; % Number of points in our sample A = random('normal',0,1,[N,2]); % Generate a N-by-2 matrix with N(0,1) A(:,2) = A(:,2) * 5; % Make the second dimension more variable % The sqrt(N) rule: nbins_sqrtN = floor(sqrt(N)); % The Sturges formula: nbins_str = ceil(log2(N) +1); % The Freedman–Diaconis-like choice: IQRs = iqr(A); % Get the IQ ranges across each dimension Hs = 2* IQRs* N^(-1/3); % Get the bandwidths across each dimension Ranges = range(A); % Get the range of values across each dimension % Get the suggested number of bins along each dimension nbins_dim1 = ceil(Ranges(1)/Hs(1)); % 12 here nbins_dim2 = ceil(Ranges(2)/Hs(2)); % 15 here % Get the maximum of the two nbins_fd_1 = max( [nbins_dim1, nbins_dim2]); % The Freedman–Diaconis choice on the norms Norms = sqrt(sum(A.^2,2)); % Get the norm of each point in th 2-D sample H_norms = 2* iqr(Norms)* N^(-1/3);% Get the "norm" bandwidth nbins_fd_2 = ceil(range(Norms)/ H_norms); % Get number of bins [nbins_sqrtN nbins_str nbins_fd_1 nbins_fd_2] % Plot the results / Make bivariate histograms % I use the hist3 function from MATLAB figure(1); subplot(2,2,1); hist3(A,[ nbins_sqrtN nbins_sqrtN] ); title('Square Root rule'); subplot(2,2,2); hist3(A,[ nbins_str nbins_str] ); title('Sturges formula rule'); subplot(2,2,3); hist3(A,[ nbins_fd_1 nbins_fd_1]); title('Freedman–Diaconis-like rule'); subplot(2,2,4); hist3(A,[ nbins_fd_2 nbins_fd_2]); title('Freedman–Diaconis rule on the norms'); As others have noted smoothing is almost certainly more appropriate for this case (ie. getting a KDE). I hope thought this gives you an idea about what I described in my comment regarding the direct generalization (with all the problems it might entail) of 1-D sample rules to 2-D sample rules. Notably, most procedure do assume some degree of "normality" in the sample. 
If you have a sample that clearly is not normally distributed (eg. it is leptokurtic), these procedures (even in 1-D) would fail quite badly.
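For readers without MATLAB, here is a rough pure-Python sketch of the same Freedman–Diaconis bookkeeping. The linear-interpolation quantile used here is an assumption of this sketch (it is not a transcription of MATLAB's iqr), and it assumes a non-degenerate IQR:

```python
import math

def fd_bins(values):
    """Freedman-Diaconis bin count for one dimension:
    h = 2 * IQR * N^(-1/3), bins = ceil(range / h)."""
    n = len(values)
    xs = sorted(values)
    def quantile(q):  # simple linear interpolation between order statistics
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])
    iqr = quantile(0.75) - quantile(0.25)
    h = 2 * iqr * n ** (-1 / 3)  # assumes iqr > 0
    return max(1, math.ceil((xs[-1] - xs[0]) / h))

def fd_bins_2d(points):
    """Larger of the two per-dimension counts (as in the answer above),
    plus the variant computed on the points' Euclidean norms."""
    per_dim = max(fd_bins([p[0] for p in points]),
                  fd_bins([p[1] for p in points]))
    on_norms = fd_bins([math.hypot(p[0], p[1]) for p in points])
    return per_dim, on_norms
```

For example, `fd_bins(list(range(101)))` gives 5 bins for 101 evenly spaced points.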
Optimal bin width for two dimensional histogram Given you have a fixed number $N$ of data (ie. you have equal number of reading on both dimensions) you could immediately use: The square-root rule rounded-down ($\sqrt{N}$), (ie. the Excel-way :) )
28,736
Is it reasonable for a classifier to obtain a high AUC and a low MCC? Or the opposite?
A binary classifier might produce at prediction time either the class labels directly for each classified instance, or probability values for each class. In the latter case, for each instance it will produce a probability $p$ for one class and a probability $q=1-p$ for the other. If the classifier produces probabilities, one has to use a threshold value in order to obtain classification labels. Usually this threshold is $0.5$, but that is often not the best possible value. Now, MCC is computed directly from classification labels, which means that a single threshold value was used to transform probabilities into classification labels, whatever that threshold was. AUC, on the other hand, uses the whole range of threshold values. The idea is that these two values, AUC and MCC, measure different things. While MCC measures a kind of statistical accuracy (related to the chi-squared test, which gives some hints on the significance of the differences), AUC is more related to the robustness of the classifier. AUC and ROC curves give more hints on how well the classifier separates the binary classes, over all possible threshold values. Even in the degenerate case where AUC is computed directly on labels (not advisable, since it loses a lot of information), the purpose of AUC remains the same. Model selection is a hard problem. My advice would be to try to answer for yourself the question: what does "a better classifier" mean? Find an answer that includes considerations like the cost matrix, robustness to unbalanced data, whether an optimistic or a conservative classifier is wanted, etc. In any case, pin down one or a few such criteria to select a metric, rather than measuring several different accuracy-related things and asking later what to do with them. 
[Later edit - related to the usage of the word "robust"] I used the term "robust" because I could not find a single proper word for "how well a classifier separates the two classes". I know that the term "robust" has a special meaning in statistics. Generally, a classifier with AUC close to $1.0$ separates the binary classes well for many values of the threshold. In this sense, an AUC close to $1.0$ means the classifier is less sensitive to which threshold value is used, i.e., it is robust to this choice. However, a value that is not close to $1.0$ does not imply the contrary; it does not necessarily mean there is no adequate range of good threshold values. In most cases, a graphical inspection of the ROC curve is necessary. This is one of the main reasons why AUC is often considered misleading. AUC is a measure over all possible classifiers (one for each possible threshold value), not a measure of a specific classifier, yet in practice one can't use more than one threshold value. While AUC can give hints about separation (my use of the term "robustness"), it should not be used alone as a single authoritative measure of accuracy.
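To make the threshold-dependence concrete, here is a small illustrative Python sketch (not from the original answer) that computes AUC via its rank interpretation and MCC at one threshold; the toy labels and scores are invented to show a classifier that ranks perfectly but is badly calibrated:

```python
import math

def auc_from_scores(labels, scores):
    """AUC via its rank interpretation: the probability that a random
    positive gets a higher score than a random negative (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def mcc_at_threshold(labels, scores, threshold=0.5):
    """MCC after collapsing the scores to hard labels at one threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Invented toy data: the classifier ranks perfectly (AUC = 1), but all its
# scores sit below 0.5, so MCC at the default threshold is 0.
labels = [0, 0, 0, 1, 1, 1]
scores = [0.10, 0.15, 0.20, 0.30, 0.35, 0.40]
```

Moving the threshold to 0.25 recovers an MCC of 1 on the same scores, which is exactly the high-AUC / low-MCC situation the answer describes.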
Is it reasonable for a classifier to obtain a high AUC and a low MCC? Or the opposite?
A binary classifier might produce at prediction time either directly the class labels for each classified instance, or some probability values for each class. In the later case, for each instance it w
Is it reasonable for a classifier to obtain a high AUC and a low MCC? Or the opposite? A binary classifier might produce at prediction time either the class labels directly for each classified instance, or probability values for each class. In the latter case, for each instance it will produce a probability $p$ for one class and a probability $q=1-p$ for the other. If the classifier produces probabilities, one has to use a threshold value in order to obtain classification labels. Usually this threshold is $0.5$, but that is often not the best possible value. Now, MCC is computed directly from classification labels, which means that a single threshold value was used to transform probabilities into classification labels, whatever that threshold was. AUC, on the other hand, uses the whole range of threshold values. The idea is that these two values, AUC and MCC, measure different things. While MCC measures a kind of statistical accuracy (related to the chi-squared test, which gives some hints on the significance of the differences), AUC is more related to the robustness of the classifier. AUC and ROC curves give more hints on how well the classifier separates the binary classes, over all possible threshold values. Even in the degenerate case where AUC is computed directly on labels (not advisable, since it loses a lot of information), the purpose of AUC remains the same. Model selection is a hard problem. My advice would be to try to answer for yourself the question: what does "a better classifier" mean? Find an answer that includes considerations like the cost matrix, robustness to unbalanced data, whether an optimistic or a conservative classifier is wanted, etc. In any case, pin down one or a few such criteria to select a metric, rather than measuring several different accuracy-related things and asking later what to do with them. 
[Later edit - related to the usage of the word "robust"] I used the term "robust" because I could not find a single proper word for "how well a classifier separates the two classes". I know that the term "robust" has a special meaning in statistics. Generally, a classifier with AUC close to $1.0$ separates the binary classes well for many values of the threshold. In this sense, an AUC close to $1.0$ means the classifier is less sensitive to which threshold value is used, i.e., it is robust to this choice. However, a value that is not close to $1.0$ does not imply the contrary; it does not necessarily mean there is no adequate range of good threshold values. In most cases, a graphical inspection of the ROC curve is necessary. This is one of the main reasons why AUC is often considered misleading. AUC is a measure over all possible classifiers (one for each possible threshold value), not a measure of a specific classifier, yet in practice one can't use more than one threshold value. While AUC can give hints about separation (my use of the term "robustness"), it should not be used alone as a single authoritative measure of accuracy.
Is it reasonable for a classifier to obtain a high AUC and a low MCC? Or the opposite? A binary classifier might produce at prediction time either directly the class labels for each classified instance, or some probability values for each class. In the later case, for each instance it w
28,737
How to generate survival data with time dependent covariates using R
OK from your R code you are assuming an exponential distribution (constant hazard) for your baseline hazard. Your hazard functions are therefore: $$ h\left(t \mid X_i\right) = \begin{cases} \exp{\left(\alpha \beta_0\right)} & \text{if $X_i = 0$,} \\ \exp{\left(\gamma + \alpha\left(\beta_0+\beta_1+\beta_2 t\right)\right)} & \text{if $X_i = 1$.} \end{cases} $$ We then integrate these with respect to $t$ to get the cumulative hazard function: $$ \begin{align} \Lambda\left(t\mid X_i\right) &= \begin{cases} t \exp{\left(\alpha \beta_0\right)} & \text{if $X_i=0$,} \\ \int_0^t{\exp{\left(\gamma + \alpha\left(\beta_0+\beta_1+\beta_2 \tau\right)\right)} \,d\tau} & \text{if $X_i=1$.} \end{cases} \\ &= \begin{cases} t \exp{\left(\alpha \beta_0\right)} & \text{if $X_i=0$,} \\ \exp{\left(\gamma + \alpha\left(\beta_0+\beta_1\right)\right)} \frac{1}{\alpha\beta_2} \left(\exp\left(\alpha\beta_2 t\right)-1\right) & \text{if $X_i=1$.} \end{cases} \end{align} $$ These then give us the survival functions: $$ \begin{align} S\left(t\right) &= \exp{\left(-\Lambda\left(t\right)\right)} \\ &= \begin{cases} \exp{\left(-t \exp{\left(\alpha \beta_0\right)}\right)} & \text{if $X_i=0$,} \\ \exp{\left(-\exp{\left(\gamma + \alpha\left(\beta_0+\beta_1\right)\right)} \frac{1}{\alpha\beta_2} \left(\exp\left(\alpha\beta_2 t\right)-1\right)\right)} & \text{if $X_i=1$.} \end{cases} \end{align} $$ You then generate by sampling $X_i$ and $U\sim\mathrm{Uniform\left(0,1\right)}$, substituting $U$ for $S\left(t\right)$ and rearranging the appropriate formula (based on $X_i$) to simulate $t$. This should be straightforward algebra you can then code up in R but please let me know by comment if you need any further help.
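As a sketch of the final step the answer leaves to the reader, here is an illustrative Python version of the inverse-transform sampling (an R version would be directly analogous). It assumes $\alpha\beta_2 > 0$ so the inversion for $X_i = 1$ is always defined; the parameter values in any usage are placeholders:

```python
import math
import random

def simulate_time(x, alpha, beta0, beta1, beta2, gamma, rng=random):
    """Inverse-transform draw from the survival functions derived above.
    Assumes alpha * beta2 > 0 so the X = 1 inversion is always defined."""
    u = rng.random()              # U ~ Uniform(0, 1), substituted for S(t)
    if x == 0:                    # S(t) = exp(-t * exp(alpha*beta0))
        return -math.log(u) / math.exp(alpha * beta0)
    c = math.exp(gamma + alpha * (beta0 + beta1))
    ab2 = alpha * beta2
    # S(t) = exp(-c * (exp(ab2 * t) - 1) / ab2); solve S(t) = u for t:
    return math.log(1.0 - ab2 * math.log(u) / c) / ab2
```

For a sample, fix a generator once and reuse it: `rng = random.Random(1)` then `times = [simulate_time(1, 0.5, 0.1, 0.2, 0.3, 0.4, rng) for _ in range(1000)]`.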
How to generate survival data with time dependent covariates using R
OK from your R code you are assuming an exponential distribution (constant hazard) for your baseline hazard. Your hazard functions are therefore: $$ h\left(t \mid X_i\right) = \begin{cases} \exp{\le
How to generate survival data with time dependent covariates using R OK from your R code you are assuming an exponential distribution (constant hazard) for your baseline hazard. Your hazard functions are therefore: $$ h\left(t \mid X_i\right) = \begin{cases} \exp{\left(\alpha \beta_0\right)} & \text{if $X_i = 0$,} \\ \exp{\left(\gamma + \alpha\left(\beta_0+\beta_1+\beta_2 t\right)\right)} & \text{if $X_i = 1$.} \end{cases} $$ We then integrate these with respect to $t$ to get the cumulative hazard function: $$ \begin{align} \Lambda\left(t\mid X_i\right) &= \begin{cases} t \exp{\left(\alpha \beta_0\right)} & \text{if $X_i=0$,} \\ \int_0^t{\exp{\left(\gamma + \alpha\left(\beta_0+\beta_1+\beta_2 \tau\right)\right)} \,d\tau} & \text{if $X_i=1$.} \end{cases} \\ &= \begin{cases} t \exp{\left(\alpha \beta_0\right)} & \text{if $X_i=0$,} \\ \exp{\left(\gamma + \alpha\left(\beta_0+\beta_1\right)\right)} \frac{1}{\alpha\beta_2} \left(\exp\left(\alpha\beta_2 t\right)-1\right) & \text{if $X_i=1$.} \end{cases} \end{align} $$ These then give us the survival functions: $$ \begin{align} S\left(t\right) &= \exp{\left(-\Lambda\left(t\right)\right)} \\ &= \begin{cases} \exp{\left(-t \exp{\left(\alpha \beta_0\right)}\right)} & \text{if $X_i=0$,} \\ \exp{\left(-\exp{\left(\gamma + \alpha\left(\beta_0+\beta_1\right)\right)} \frac{1}{\alpha\beta_2} \left(\exp\left(\alpha\beta_2 t\right)-1\right)\right)} & \text{if $X_i=1$.} \end{cases} \end{align} $$ You then generate by sampling $X_i$ and $U\sim\mathrm{Uniform\left(0,1\right)}$, substituting $U$ for $S\left(t\right)$ and rearranging the appropriate formula (based on $X_i$) to simulate $t$. This should be straightforward algebra you can then code up in R but please let me know by comment if you need any further help.
How to generate survival data with time dependent covariates using R OK from your R code you are assuming an exponential distribution (constant hazard) for your baseline hazard. Your hazard functions are therefore: $$ h\left(t \mid X_i\right) = \begin{cases} \exp{\le
28,738
Is the beta distribution really better than the normal distribution for testing the difference of two proportions?
From your code (and my knowledge of AB testing), I gather your proportions come in discrete increments. That is, for every person who visits a site, they end up categorized as a "success" or a "failure". In other words, your proportions come from a finite number of Bernoulli trials; they are not continuous proportions. As a result, the beta distribution (which is for continuous proportions) is not really appropriate here. Instead, you should use the binomial distribution. Provided your $n$'s are large enough relative to the proportion of successes, the normal approximation is quite acceptable (the standard rule of thumb is that the lesser of $np$ and $n(1-p)$ should be $>5$, in your case those values are $46$ and $33$). I would go with the chi-squared test in your situation, and not use the beta distribution. If you didn't have enough successes to trust the normal approximation, you could use a permutation test, as @jbowman discusses here: The $z$-test vs. the $\chi^2$-test for comparing the odds of catching a cold in two groups. On the other hand, if your proportions were continuous (e.g., the mass of a tumor as a proportion of the mass of an organ), the beta distribution would be preferable. You could use beta regression in an ANOVA-ish way (i.e., only having categorical predictor variables). I have a simple example of beta regression in R that could be adapted to such a situation here: Remove effect of a factor on continuous proportion data using regression in R.
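As an illustration of the recommended normal-approximation route, here is a hedged Python sketch of the pooled two-proportion z-test (equivalent to the 1-df chi-squared test for the 2x2 table, since the chi-squared statistic equals $z^2$). The counts in the comment are invented for illustration, not taken from the question:

```python
import math

def two_proportion_ztest(k1, n1, k2, n2):
    """Pooled two-sided z-test of H0: p1 = p2 under the normal
    approximation to the binomial."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Invented counts for illustration (not the question's data):
# z, p = two_proportion_ztest(46, 500, 33, 450)
```

With equal observed proportions the statistic is exactly zero and the p-value is 1, as expected.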
Is the beta distribution really better than the normal distribution for testing the difference of tw
From your code (and my knowledge of AB testing), I gather your proportions come in discrete increments. That is, for every person who visits a site, they end up categorized as a "success" or a "failu
Is the beta distribution really better than the normal distribution for testing the difference of two proportions? From your code (and my knowledge of AB testing), I gather your proportions come in discrete increments. That is, for every person who visits a site, they end up categorized as a "success" or a "failure". In other words, your proportions come from a finite number of Bernoulli trials; they are not continuous proportions. As a result, the beta distribution (which is for continuous proportions) is not really appropriate here. Instead, you should use the binomial distribution. Provided your $n$'s are large enough relative to the proportion of successes, the normal approximation is quite acceptable (the standard rule of thumb is that the lesser of $np$ and $n(1-p)$ should be $>5$, in your case those values are $46$ and $33$). I would go with the chi-squared test in your situation, and not use the beta distribution. If you didn't have enough successes to trust the normal approximation, you could use a permutation test, as @jbowman discusses here: The $z$-test vs. the $\chi^2$-test for comparing the odds of catching a cold in two groups. On the other hand, if your proportions were continuous (e.g., the mass of a tumor as a proportion of the mass of an organ), the beta distribution would be preferable. You could use beta regression in an ANOVA-ish way (i.e., only having categorical predictor variables). I have a simple example of beta regression in R that could be adapted to such a situation here: Remove effect of a factor on continuous proportion data using regression in R.
Is the beta distribution really better than the normal distribution for testing the difference of tw From your code (and my knowledge of AB testing), I gather your proportions come in discrete increments. That is, for every person who visits a site, they end up categorized as a "success" or a "failu
28,739
Is the beta distribution really better than the normal distribution for testing the difference of two proportions?
As other commenters said, the number of successes is binomially distributed. Therefore, if you want to sample/simulate, use rbinom(). That said, the beta distribution is a conjugate prior for the binomial distribution. Therefore, if you want to obtain the distribution of the parameter of your binomial distribution given your observations, use dbeta().
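For readers working outside R, here is an illustrative Python sketch of the same two steps: a stand-in for rbinom() built from Bernoulli trials, and the conjugate Beta update whose density you would evaluate with dbeta(). The uniform Beta(1, 1) default prior is an assumption of this sketch:

```python
import random

def rbinom1(n, p, rng=random):
    """Stand-in for R's rbinom(1, n, p): count successes in n Bernoulli trials."""
    return sum(1 for _ in range(n) if rng.random() < p)

def beta_posterior(successes, failures, a0=1.0, b0=1.0):
    """Conjugacy: a Beta(a0, b0) prior plus binomial data gives a
    Beta(a0 + k, b0 + n - k) posterior (the density you'd get from dbeta).
    Returns the posterior parameters and the posterior mean."""
    a, b = a0 + successes, b0 + failures
    return a, b, a / (a + b)
```

Usage: `k = rbinom1(1000, 0.3, random.Random(0))` simulates the data, and `beta_posterior(k, 1000 - k)` gives the posterior over the success probability.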
Is the beta distribution really better than the normal distribution for testing the difference of tw
As other commenters said, number of successes is distributed binomially. Therefore, if you want to sample/simulate, use rbinom(). That said, beta distribution is a conjugate prior for binomial distrib
Is the beta distribution really better than the normal distribution for testing the difference of two proportions? As other commenters said, the number of successes is binomially distributed. Therefore, if you want to sample/simulate, use rbinom(). That said, the beta distribution is a conjugate prior for the binomial distribution. Therefore, if you want to obtain the distribution of the parameter of your binomial distribution given your observations, use dbeta().
Is the beta distribution really better than the normal distribution for testing the difference of tw As other commenters said, number of successes is distributed binomially. Therefore, if you want to sample/simulate, use rbinom(). That said, beta distribution is a conjugate prior for binomial distrib
28,740
How to interpret log-log regression coefficients for other than 1 or 10 percent change?
The question concerns models of the form $$\log(y) = \cdots + \beta \log(x) + \cdots$$ (where none of the omitted terms involves anything that changes with $x$). When we change $x$ by $100\delta\%$ we multiply it by $1+\delta$. According to the laws of logarithms, when $1 + \delta \gt 0$, $$\log(x(1+\delta)) = \log(x) + \log(1 + \delta).$$ Therefore if such a change in $x$ changes $y$ to $y^\prime$, $$\eqalign{ \log(y^\prime) &= \cdots + \beta \log(x(1+\delta)) + \cdots \\ &= \cdots + \beta\left(\log(x) + \log(1 + \delta)\right) + \cdots \\ &= \cdots + \beta\log(x) + \beta\log(1 + \delta) + \cdots . }$$ The change in $\log(y)$ is $$\log(y^\prime) - \log(y) = \beta\log(1 + \delta).$$ When $\delta$ is nearly zero (say, $10\%$ or smaller in size), $\log(1 + \delta) \approx \delta$ is a good approximation. When in turn $\beta\delta$ is also close to zero, this is the basis for the approximate interpretation "a $\delta$ percent change in $x$ corresponds to a $\beta\delta$ percent change in $y$." For larger $\delta$ or $\beta$, however, this approximation fails. The fully general relationship is obtained by noting that adding $\beta\log(1+\delta)$ to $\log{y}$ is tantamount to multiplying $y$ by $$\exp(\beta\log(1+\delta)) = (1+\delta)^\beta.$$ Therefore, when working with logarithms, think multiplicatively. We may memorialize the result of this analysis with a simple rule: When $x$ is multiplied by any positive amount $c$, $y$ is multiplied by $c^\beta$. In other words, log-log relationships are power relationships. Let's look at some examples: When $\beta=2$, multiplying $x$ by $c$ multiplies $y$ by $c^2$. For instance, tripling $x$ will multiply $y$ by $9$. When $\beta=1/3$, multiplying $x$ by $c$ multiplies $y$ by $c^{1/3} = \sqrt[3]{c}$. For instance, doubling $x$ will only multiply $y$ by $\sqrt[3]{2}\approx 1.26$. When $\beta = -1$, multiplying $x$ by $c$ multiplies $y$ by $c^{-1} = 1/c$; that is, $y$ is divided by $c$.
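The exact rule and the small-change approximation can be compared numerically with a few lines of Python (an illustrative aid, not part of the original answer):

```python
def exact_factor(beta, pct):
    """Exact multiplier on y when x changes by pct percent: (1 + pct/100)**beta."""
    return (1 + pct / 100) ** beta

def approx_factor(beta, pct):
    """The small-change rule of thumb: a pct% change in x gives about a
    (beta * pct)% change in y."""
    return 1 + beta * pct / 100
```

Tripling $x$ (a 200% change) with $\beta = 2$ multiplies $y$ by `exact_factor(2, 200) == 9`, while the rule of thumb would wrongly suggest a factor of 5; for a 1% change the two agree to about four decimal places.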
How to interpret log-log regression coefficients for other than 1 or 10 percent change?
The question concerns models of the form $$\log(y) = \cdots + \beta \log(x) + \cdots$$ (where none of the omitted terms involves anything that changes with $x$). When we change $x$ by $100\delta\%$ w
How to interpret log-log regression coefficients for other than 1 or 10 percent change? The question concerns models of the form $$\log(y) = \cdots + \beta \log(x) + \cdots$$ (where none of the omitted terms involves anything that changes with $x$). When we change $x$ by $100\delta\%$ we multiply it by $1+\delta$. According to the laws of logarithms, when $1 + \delta \gt 0$, $$\log(x(1+\delta)) = \log(x) + \log(1 + \delta).$$ Therefore if such a change in $x$ changes $y$ to $y^\prime$, $$\eqalign{ \log(y^\prime) &= \cdots + \beta \log(x(1+\delta)) + \cdots \\ &= \cdots + \beta\left(\log(x) + \log(1 + \delta)\right) + \cdots \\ &= \cdots + \beta\log(x) + \beta\log(1 + \delta) + \cdots . }$$ The change in $\log(y)$ is $$\log(y^\prime) - \log(y) = \beta\log(1 + \delta).$$ When $\delta$ is nearly zero (say, $10\%$ or smaller in size), $\log(1 + \delta) \approx \delta$ is a good approximation. When in turn $\beta\delta$ is also close to zero, this is the basis for the approximate interpretation "a $\delta$ percent change in $x$ corresponds to a $\beta\delta$ percent change in $y$." For larger $\delta$ or $\beta$, however, this approximation fails. The fully general relationship is obtained by noting that adding $\beta\log(1+\delta)$ to $\log{y}$ is tantamount to multiplying $y$ by $$\exp(\beta\log(1+\delta)) = (1+\delta)^\beta.$$ Therefore, when working with logarithms, think multiplicatively. We may memorialize the result of this analysis with a simple rule: When $x$ is multiplied by any positive amount $c$, $y$ is multiplied by $c^\beta$. In other words, log-log relationships are power relationships. Let's look at some examples: When $\beta=2$, multiplying $x$ by $c$ multiplies $y$ by $c^2$. For instance, tripling $x$ will multiply $y$ by $9$. When $\beta=1/3$, multiplying $x$ by $c$ multiplies $y$ by $c^{1/3} = \sqrt[3]{c}$. For instance, doubling $x$ will only multiply $y$ by $\sqrt[3]{2}\approx 1.26$. 
When $\beta = -1$, multiplying $x$ by $c$ multiplies $y$ by $c^{-1} = 1/c$; that is, $y$ is divided by $c$.
How to interpret log-log regression coefficients for other than 1 or 10 percent change? The question concerns models of the form $$\log(y) = \cdots + \beta \log(x) + \cdots$$ (where none of the omitted terms involves anything that changes with $x$). When we change $x$ by $100\delta\%$ w
28,741
When is a second hidden layer needed in feed forward neural networks?
Stanford Professor Andrew Ng gave some guidelines for selecting a neural network architecture in his Machine Learning class on Coursera. I don't see the specific lecture videos on YouTube, but the course is free, so there's no cost to access them on Coursera's site. Here's a summary of the relevant material.
In lecture 9-7, Putting it all together, general guidelines are given on picking default values for your neural network architecture.
Number of input units: dimension of the features x(i)
Number of output units: number of classes
A reasonable default is one hidden layer, or if > 1 hidden layer, have the same number of hidden units in every layer (usually the more the better, anywhere from about 1X to 4X the number of input units).
In lecture 10-7, Deciding what to do next revisited, Professor Ng goes into more detail.
Small neural networks: fewer parameters, more prone to underfitting, computationally cheaper.
Large neural networks: more parameters, more prone to overfitting, computationally more expensive; use regularization to address overfitting.
Number of hidden layers: split your data into training, cross-validation, and test sets, train neural networks with 1, 2, and 3 hidden layers, and choose the architecture with the lowest cross-validation error.
When is a second hidden layer needed in feed forward neural networks?
Stanford Professor Andrew Ng gave some guidelines for selecting a neural network architecture in his Machine Learning class on Coursera. I don't see the specific lecture videos on YouTube, but the cou
When is a second hidden layer needed in feed forward neural networks? Stanford Professor Andrew Ng gave some guidelines for selecting a neural network architecture in his Machine Learning class on Coursera. I don't see the specific lecture videos on YouTube, but the course is free, so there's no cost to access them on Coursera's site. Here's a summary of the relevant material.
In lecture 9-7, Putting it all together, general guidelines are given on picking default values for your neural network architecture.
Number of input units: dimension of the features x(i)
Number of output units: number of classes
A reasonable default is one hidden layer, or if > 1 hidden layer, have the same number of hidden units in every layer (usually the more the better, anywhere from about 1X to 4X the number of input units).
In lecture 10-7, Deciding what to do next revisited, Professor Ng goes into more detail.
Small neural networks: fewer parameters, more prone to underfitting, computationally cheaper.
Large neural networks: more parameters, more prone to overfitting, computationally more expensive; use regularization to address overfitting.
Number of hidden layers: split your data into training, cross-validation, and test sets, train neural networks with 1, 2, and 3 hidden layers, and choose the architecture with the lowest cross-validation error.
When is a second hidden layer needed in feed forward neural networks? Stanford Professor Andrew Ng gave some guidelines for selecting a neural network architecture in his Machine Learning class on Coursera. I don't see the specific lecture videos on YouTube, but the cou
28,742
When is a second hidden layer needed in feed forward neural networks?
From a theoretical point of view, you can approximate almost any function with a one-layer neural network. There are examples where a two-layer neural network can, with a finite number of nodes, approximate functions that a one-layer neural network can approximate only with an infinite number of neurons. Try increasing the number of nodes in the one-layer neural network, or try training your one-layer neural network with another algorithm such as PSO. It is often easy to fall into a local minimum.
When is a second hidden layer needed in feed forward neural networks?
From a theoretical point of view you can approximate almost any function with one layer neural network. There are some examples where a two layer neural network can approximate with a finite number o
When is a second hidden layer needed in feed forward neural networks? From a theoretical point of view, you can approximate almost any function with a one-layer neural network. There are examples where a two-layer neural network can, with a finite number of nodes, approximate functions that a one-layer neural network can approximate only with an infinite number of neurons. Try increasing the number of nodes in the one-layer neural network, or try training your one-layer neural network with another algorithm such as PSO. It is often easy to fall into a local minimum.
When is a second hidden layer needed in feed forward neural networks? From a theoretical point of view you can approximate almost any function with one layer neural network. There are some examples where a two layer neural network can approximate with a finite number o
28,743
Effects in panel models "individual", "time" or "twoways"
The canonical two-way model is $$ y_{it}=x_{it}'\beta+\alpha_i+\theta_t+\epsilon_{it} $$ Here, the individual effect is $\alpha_i$, and $\theta_t$ is the time effect. It is a two-way model if both are present. Thus, $\alpha_i$ captures effects that are specific to some panel unit but constant over time, whereas $\theta_t$ captures effects that are specific to some time period but constant over panel units. So, whether you need both will, as @Ben pointed out, depend on your research question. For example, if you have a panel of firms, $\theta_t$ might represent business cycle effects, whereas $\alpha_i$ would contain firm-specific effects that can be argued to be constant over time, such as the "culture" of the firm.
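As an illustrative (not authoritative) Python sketch of what the two-way specification does, the within transformation below demeans a balanced panel over both units and time periods, which sweeps out both $\alpha_i$ and $\theta_t$; the toy effect values are invented:

```python
def two_way_demean(z):
    """Within transformation for a balanced panel stored as z[i][t]:
    subtract unit means and time means, then add back the grand mean.
    This removes both alpha_i and theta_t from the two-way model."""
    n, T = len(z), len(z[0])
    unit = [sum(row) / T for row in z]
    time = [sum(z[i][t] for i in range(n)) / n for t in range(T)]
    grand = sum(unit) / n
    return [[z[i][t] - unit[i] - time[t] + grand for t in range(T)]
            for i in range(n)]

# Toy balanced panel: y_it = 2*x_it + alpha_i + theta_t (no noise).
alpha = [1.0, -2.0, 0.5]
theta = [0.3, -0.1, 0.7, 0.2]
x = [[(i + 1.0) * (t + 1.0) for t in range(4)] for i in range(3)]
y = [[2.0 * x[i][t] + alpha[i] + theta[t] for t in range(4)] for i in range(3)]
```

After the transformation, the demeaned y equals exactly 2 times the demeaned x: both fixed effects have been eliminated, leaving only the slope of interest.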
Effects in panel models "individual", "time" or "twoways"
The canonical two-way model is $$ y_{it}=x_{it}'\beta+\alpha_i+\theta_t+\epsilon_{it} $$ Here, the individual effect is $\alpha_i$, and $\theta_t$ is the time effect. It is a two-way model if both are
Effects in panel models "individual", "time" or "twoways" The canonical two-way model is $$ y_{it}=x_{it}'\beta+\alpha_i+\theta_t+\epsilon_{it} $$ Here, the individual effect is $\alpha_i$, and $\theta_t$ is the time effect. It is a two-way model if both are present. Thus, $\alpha_i$ captures effects that are specific to some panel unit but constant over time, whereas $\theta_t$ captures effects that are specific to some time period but constant over panel units. So, whether you need both will, as @Ben pointed out, depend on your research question. For example, if you have a panel of firms, $\theta_t$ might represent business cycle effects, whereas $\alpha_i$ would contain firm-specific effects that can be argued to be constant over time, such as the "culture" of the firm.
Effects in panel models "individual", "time" or "twoways" The canonical two-way model is $$ y_{it}=x_{it}'\beta+\alpha_i+\theta_t+\epsilon_{it} $$ Here, the individual effect is $\alpha_i$, and $\theta_t$ is the time effect. It is a two-way model if both are
28,744
Effects in panel models "individual", "time" or "twoways"
It depends on your research; in some cases, time effects could solve the cross-sectional dependence problem. A very useful article is "Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches" by Mitchell A. Petersen, 2009. In fact, "twoways" here means both individual and time effects together, so it is just the two specifications combined. Hope this helps.
Effects in panel models "individual", "time" or "twoways"
It depends on your research, in some cases time effects could solve the cross-sectional problem. An article that is very useful is "Estimating Standard Errors in Finance Panel Data Sets: Comparing App
Effects in panel models "individual", "time" or "twoways" It depends on your research; in some cases, time effects could solve the cross-sectional dependence problem. A very useful article is "Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches" by Mitchell A. Petersen, 2009. In fact, "twoways" here means both individual and time effects together, so it is just the two specifications combined. Hope this helps.
Effects in panel models "individual", "time" or "twoways" It depends on your research, in some cases time effects could solve the cross-sectional problem. An article that is very useful is "Estimating Standard Errors in Finance Panel Data Sets: Comparing App
28,745
What is a "strictly positive distribution"?
A strictly positive distribution $D_{sp}$ has values $D_{sp}(x)>0$ for all $x$. This is different from a non-negative distribution $D_{nn}$ where $D_{nn}(x) \geq 0$.
What is a "strictly positive distribution"?
A strictly positive distribution $D_{sp}$ has values $D_{sp}(x)>0$ for all $x$. This is different from a non-negative distribution $D_{nn}$ where $D_{nn}(x) \geq 0$.
What is a "strictly positive distribution"? A strictly positive distribution $D_{sp}$ has values $D_{sp}(x)>0$ for all $x$. This is different from a non-negative distribution $D_{nn}$ where $D_{nn}(x) \geq 0$.
What is a "strictly positive distribution"? A strictly positive distribution $D_{sp}$ has values $D_{sp}(x)>0$ for all $x$. This is different from a non-negative distribution $D_{nn}$ where $D_{nn}(x) \geq 0$.
28,746
What is a "strictly positive distribution"?
The mass of each ball bearing in a population of ball bearings would be strictly positive because something with zero mass cannot be a ball bearing.
What is a "strictly positive distribution"?
The mass of each ball bearing in a population of ball bearings would be strictly positive because something with zero mass cannot be a ball bearing.
What is a "strictly positive distribution"? The mass of each ball bearing in a population of ball bearings would be strictly positive because something with zero mass cannot be a ball bearing.
What is a "strictly positive distribution"? The mass of each ball bearing in a population of ball bearings would be strictly positive because something with zero mass cannot be a ball bearing.
28,747
What is a "strictly positive distribution"?
A strictly positive probability distribution over a state space simply means that all states are possible, ie no state has a probability of zero: every state has a probability greater than zero. "Strictly positive" means greater than zero; the qualifier is there to rule out zero, not to suggest that a probability could otherwise be negative. There is no such thing as negative probability.
What is a "strictly positive distribution"?
A strictly positive probability distribution over a state space simply means that all states are possible, ie no state has a probability of zero. All states have a probability greater than zero. "Stri
What is a "strictly positive distribution"? A strictly positive probability distribution over a state space simply means that all states are possible, ie no state has a probability of zero: every state has a probability greater than zero. "Strictly positive" means greater than zero; the qualifier is there to rule out zero, not to suggest that a probability could otherwise be negative. There is no such thing as negative probability.
What is a "strictly positive distribution"? A strictly positive probability distribution over a state space simply means that all states are possible, ie no state has a probability of zero. All states have a probability greater than zero. "Stri
28,748
What is a "strictly positive distribution"?
As an example illustrating the definition of a strictly positive probability distribution in action (courtesy of an old paper by Richard Holley on FKG inequalities), imagine that we have $\Lambda$, a fixed finite set, and $\Gamma$, a sublattice of the lattice of subsets of $\Lambda$. Let $\mu$ be a strictly positive probability distribution on the finite distributive lattice $\Gamma$. For $\mu$ to be strictly positive, $\mu(A)>0$ for all $A\in\Gamma$ and $\sum_{A\in\Gamma}\mu(A)=1$.
What is a "strictly positive distribution"?
As an example illustrating the definition of a strictly positive probability distribution in action (Courtesy of an old paper by Richard Holley on FKG Inequalities), imagine that we have $\Lambda$ whi
What is a "strictly positive distribution"? As an example illustrating the definition of a strictly positive probability distribution in action (Courtesy of an old paper by Richard Holley on FKG Inequalities), imagine that we have $\Lambda$ which is a finite fixed set. Imagine also that we have $\Gamma$, which is a sublattice of the lattice of subsets of $\Lambda$. Let us then let $\mu$ be a strictly positive probability distribution on some finite distributed lattice $\Gamma$. For $\mu$ to be strictly positive, $\mu(A)>0$ for all $A\in\Gamma$ and $\sum_{A\in\Gamma}\mu(A)=1$
What is a "strictly positive distribution"? As an example illustrating the definition of a strictly positive probability distribution in action (Courtesy of an old paper by Richard Holley on FKG Inequalities), imagine that we have $\Lambda$ whi
28,749
What is the variance of the mean of correlated binomial variables?
As a very general rule, whenever $X = (X_1, \ldots, X_B)$ are random variables with given covariances $\sigma_{ij}=\text{Cov}(X_i,X_j),$ then the covariance of any linear combination $\lambda \cdot X = \lambda_1 X_1 + \cdots + \lambda_B X_B$ is given by the matrix $\Sigma = (\sigma_{ij})$ via $$\text{Cov}(\lambda X, \lambda X) = \lambda^\prime \Sigma \lambda.$$ The rest is just arithmetic. In the present case $\sigma_{ij} = \rho\sigma^2$ when $i\ne j$ and otherwise $\sigma_{ii} = \sigma^2 = \left[\rho + (1-\rho)\right]\sigma^2$. That is to say, we may view $\Sigma$ as the sum of two simple matrices: one has $\rho$ in every entry and the other has values of $1-\rho$ on the diagonal and zeros elsewhere. This leads to an efficient calculation, because evidently $$\Sigma = \sigma^2\left(\rho 1_B 1_B^\prime + (1-\rho)\mathbb{Id}_B \right)$$ where I have written "$1_B$" for the column vector with $B$ $1$'s in it and "$\mathbb{Id}_B$" for the $B$ by $B$ identity matrix. Whence, factoring out the scalars $\sigma^2$, $\rho$, and $1-\rho$ as appropriate, we obtain $$\eqalign{ \text{Cov}(\lambda X, \lambda X) &= \lambda^\prime \sigma^2\left(\rho 1_B 1_B^\prime + (1-\rho)\mathbb{Id}_B \right)\lambda \\ &= \left(\lambda^\prime 1_B 1_B^\prime \lambda\right) \rho\sigma^2 + \left(\lambda^\prime \mathbb{Id}_B \lambda \right)(1-\rho)\sigma^2. }$$ For the arithmetic mean, $\lambda = (1/B, 1/B, \ldots, 1/B)$ entailing $$\lambda^\prime 1_B 1_B^\prime \lambda = (\lambda^\prime 1_B)^2 = 1^2 = 1$$ and $$\lambda^\prime \mathbb{Id}_B \lambda = 1/B^2 + 1/B^2 + \cdots + 1/B^2 = 1/B,$$ QED.
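The identity $\text{Var}(\bar X) = \rho\sigma^2 + (1-\rho)\sigma^2/B$ can be checked numerically against $\lambda^\prime \Sigma \lambda$. A minimal sketch (not part of the original answer; the values of $B$, $\rho$, and $\sigma^2$ are made up for illustration):

```python
import numpy as np

# Illustrative values, not from the original post
B, rho, sigma2 = 5, 0.3, 4.0

# Sigma = sigma^2 * (rho * 1_B 1_B' + (1 - rho) * I_B), the equicorrelation covariance
ones = np.ones((B, 1))
Sigma = sigma2 * (rho * ones @ ones.T + (1 - rho) * np.eye(B))

# lambda = (1/B, ..., 1/B) gives the arithmetic mean
lam = np.full((B, 1), 1.0 / B)

var_mean = float(lam.T @ Sigma @ lam)              # lambda' Sigma lambda
formula = rho * sigma2 + (1 - rho) * sigma2 / B    # the closed form

assert np.isclose(var_mean, formula)
print(var_mean, formula)
```

Both expressions agree, and as $B$ grows the variance approaches the floor $\rho\sigma^2$ rather than zero.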
28,750
What is the variance of the mean of correlated binomial variables?
To reformulate the answer of @whuber, if anyone finds it helpful. Suppose a column vector $$X = (x_1, x_2, ..., x_n)^\intercal \in \mathbb{R}^n$$ where the random variables $x_i$ are identically distributed (each with variance $\sigma^2$) with positive pairwise correlation $\rho$. By definition, $\forall \; i \neq j$, $$\rho = \frac{\mathbb{Cov}[x_i, x_j]}{\sigma_{x_i} \sigma_{x_j}} = \frac{\mathbb{Cov}[x_i, x_j]}{\sigma^2}$$ In consequence, $$ \begin{equation} \begin{split} \mathbb{Cov}[X] &= \begin{cases} \sigma^2 \;\,\,\text{ if } i = j\\ \rho\sigma^2 \,\text{ if } i \neq j \end{cases}\\ &= \rho \sigma^2 \mathbb{1}\mathbb{1}^\intercal + (1 - \rho) \sigma^2 I \end{split} \end{equation} $$ where $\mathbb{1}$ is the column vector of ones, $I$ is the identity matrix. Using this result, let's denote $\lambda = \frac{1}{n} \mathbb{1}$, $$ \begin{equation} \begin{split} \mathbb{Var}[\frac{1}{n}\sum_i^n x_i] &= \mathbb{Var}[\lambda^\intercal X]\\ &= \lambda^\intercal \mathbb{Cov}[X] \lambda \\ &= \rho\sigma^2 \lambda^\intercal \mathbb{1}\mathbb{1}^\intercal \lambda + (1-\rho) \sigma^2 \lambda^\intercal \lambda \\ &= \rho\sigma^2 ||\lambda^\intercal \mathbb{1}||^2 + (1-\rho) \sigma^2 \lambda^\intercal \lambda \\ &= \rho\sigma^2 + \frac{1-\rho}{n} \sigma^2 \\ \end{split} \end{equation} $$ where $$\lambda^\intercal \mathbb{1} = 1$$ $$\lambda^\intercal \lambda = \frac{1}{n}$$
28,751
What is the variance of the mean of correlated binomial variables?
I think whuber's post gives a nice proof; however, there is a possible caveat in the formula in the original post. Suppose the correlation in the original post is negative, say minus one. Then just looking at the formula it seems that if we make $B$ sufficiently large, our mean will have a negative variance, which is of course nonsense. I think the problem here is that we must be sure that our original assumption is sound. If we say the variables forming the mean are identically distributed (say with mean zero and variance one), then we cannot also impose that they have pairwise correlation of minus one. If this were true, $X_2 = - X_1$ and $ X_3 = -X_2 = X_1 $, hence the correlation between $X_1$ and $X_3$ would be one, contrary to assumption. Perhaps there is a way to formulate a condition for when the formula actually holds? Edit: OK, the original post assumed positive correlation; is it obvious that the formula always works then? Perhaps the implicit condition should simply always be that the problem is consistently formulated... my guess is that it would be enough that the covariance matrix is positive-semidefinite and symmetric; then such variables exist. And the matrix in my example above would have a diagonal consisting of ones and all other entries equal to minus one, hence have $-1$ as an eigenvalue and thus not be positive-semidefinite.
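The caveat can be made concrete by checking the eigenvalues of the equicorrelation matrix. A small sketch (not from the original answer): for $n$ identically distributed variables with common pairwise correlation $\rho$, the correlation matrix is positive semidefinite exactly when $\rho \ge -1/(n-1)$, so $\rho = -1$ is impossible for $n \ge 3$:

```python
import numpy as np

def equicorr(n: int, rho: float) -> np.ndarray:
    """Equicorrelation matrix: 1 on the diagonal, rho elsewhere."""
    return rho * np.ones((n, n)) + (1 - rho) * np.eye(n)

# rho = -1 with n = 3 gives a negative eigenvalue: no such identically
# distributed variables can exist (matches the example in the answer).
bad = equicorr(3, -1.0)
print(np.linalg.eigvalsh(bad).min())   # negative (about -1)

# The borderline valid case is rho = -1/(n - 1): smallest eigenvalue is ~0.
n = 3
ok = equicorr(n, -1.0 / (n - 1))
print(np.linalg.eigvalsh(ok).min())    # approximately 0
```

For $\rho \ge -1/(B-1)$ the formula $\rho\sigma^2 + (1-\rho)\sigma^2/B$ is always nonnegative, so the apparent paradox disappears once consistency is imposed.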
28,752
Comparing nested binary logistic regression models when $n$ is large
(1) There is an extensive literature on why one should prefer full models to restricted/parsimonious models. My understanding is that there are few reasons to prefer the parsimonious model. However, larger models may not be feasible for many clinical applications. (2) As far as I know, discrimination indexes aren’t (?should not be) used as a model/variable selection parameter. They aren’t intended for this use, and as a result there may not be much of a literature on why they shouldn’t be used for model building. (3) Parsimonious models may have limitations that aren’t readily apparent. They may be less well calibrated than larger models, and external/internal validity may be reduced. (4) The c statistic may not be optimal in assessing models that predict future risk or stratify individuals into risk categories. In this setting, calibration is as important to the accurate assessment of risk. For example, a biomarker with an odds ratio of 3 may have little effect on the c statistic, yet an increased level could shift estimated 10-year cardiovascular risk for an individual patient from 8% to 24%. Cook N.R.; Use and misuse of the ROC curve in the medical literature. Circulation. 115 2007:928-935. (5) AUC/c-statistic/discrimination is known to be insensitive to significant predictor variables. This is discussed in the Cook reference above, and was the motivating force behind the development of the net reclassification index. (6) Large datasets can still lead to larger models than desired if standard variable selection methods are used. In stepwise selection procedures a p-value cut-off of 0.05 is often used. But there is nothing intrinsic about this value that means you should choose it. With smaller datasets a larger p-value (0.2) may be more appropriate; in larger datasets a smaller p-value may be appropriate (0.01 was used for the GUSTO I dataset for this reason).
(7) While AIC is often used for model selection, and is better supported by the literature, BIC may be a valid alternative in larger datasets. For BIC model selection the chi-squared must exceed log(n), so it will result in smaller models in larger datasets. (Mallows' Cp may have similar characteristics.) (8) But if you just want a maximum of 10 or 12 variables, the easier solution is something like the bestglm or leaps packages, where you just set the maximum number of variables you want to consider. (9) If you just want a test that will make the two models look the same, and aren't too worried about the details, you could likely compare the AUC of the two models. Some packages will even give you a p-value for the comparison. This doesn't seem advisable. Ambler G (2002) Simplifying a prognostic model: a simulation study based on clinical data. Cook N.R.; Use and misuse of the ROC curve in the medical literature. Circulation. 115 2007:928-935. Gail M.H., Pfeiffer R.M.; On criteria for evaluating models of absolute risk. Biostat. 6 2005:227-239. (10) Once the model has been built, c-statistics/discrimination indexes may not be the best approach to comparing models and have well-documented limitations. Comparisons should likely, at the minimum, also include calibration and a reclassification index. Steyerberg (2010) Assessing the performance of prediction models: a framework for some traditional and novel measures. (11) It may be a good idea to go beyond the above and use decision-analytic measures. Vickers AJ, Elkin EB. Decision curve analysis: a novel method for evaluating prediction models. Med Decis Making. 2006;26:565-74. Baker SG, Cook NR, Vickers A, Kramer BS. Using relative utility curves to evaluate risk prediction. J R Stat Soc A. 2009;172:729-48. Van Calster B, Vickers AJ, Pencina MJ, Baker SG, Timmerman D, Steyerberg EW. Evaluation of Markers and Risk Prediction Models: Overview of Relationships between NRI and Decision-Analytic Measures. Med Decis Making.
2013;33:490-501. ---Update--- I find the Vickers article the most interesting. But this still hasn't been widely accepted despite many editorials, so it may not be of much practical use. The Cook and Steyerberg articles are much more practical. No one likes stepwise selection; I'm certainly not going to advocate for it. I might emphasize that most of the criticisms of stepwise assume EPV < 50 and a choice between a full or pre-specified model and a reduced model. If EPV > 50 and there is a commitment to a reduced model, the cost-benefit analysis may be different. The weak thought behind comparing c-statistics is that they may not be different, and I seem to remember this test being significantly underpowered. But now I can't find the reference, so I might be way off base on that.
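Point (7) above can be illustrated with a bit of arithmetic. A sketch with made-up log-likelihoods for two hypothetical nested logistic models (all numbers are illustrative, not from any real fit): because BIC penalizes each parameter by log(n) rather than 2, the same likelihood-ratio statistic can favor the full model under AIC but the reduced model under BIC when n is large:

```python
import math

# Hypothetical log-likelihoods for two nested logistic models (illustrative only)
n = 50_000          # observations
ll_full = -14_000.0   # larger model, k_full parameters
ll_small = -14_004.0  # model with 2 predictors dropped
k_full, k_small = 20, 18

chisq = 2 * (ll_full - ll_small)   # likelihood-ratio statistic on 2 df: 8.0

aic_full = 2 * k_full - 2 * ll_full
aic_small = 2 * k_small - 2 * ll_small
bic_full = math.log(n) * k_full - 2 * ll_full
bic_small = math.log(n) * k_small - 2 * ll_small

# AIC keeps the terms because chisq (8) exceeds 2 * df = 4;
# BIC drops them because chisq (8) is below df * log(n) (about 21.6).
print(aic_full < aic_small)   # True
print(bic_small < bic_full)   # True
```

This is the sense in which BIC "requires the chi-squared to exceed log(n)" per dropped term, and why it yields smaller models in larger datasets.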
28,753
Comparing nested binary logistic regression models when $n$ is large
One option is to use pseudo-R-square measures for both models. A strong difference in pseudo-R-square would suggest that the model fit strongly decreases by omitting V17. There are different kinds of pseudo-R-squares available. An overview can be found here, for example: http://www.ats.ucla.edu/stat/mult_pkg/faq/general/Psuedo_RSquareds.htm A popular measure is Nagelkerke's R-square. It varies between 0 and 1 and, with care, can be interpreted like R-squared from a simple linear regression model. It is based on a transformed ratio of the estimated likelihoods of the full model and the intercept-only model. You could estimate it for fit and fit2, respectively, and compare the relative sizes to get an indication on your problem. A substantially higher Nagelkerke R-square for fit would suggest that fit2 loses a lot of predictive power by the omission of V17. In lrm the stats value provides Nagelkerke's R-squared, so fit$stats should provide you with an estimate. See also ?lrm.
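If only the log-likelihoods are at hand, Nagelkerke's R-square can be computed directly: it is the Cox & Snell R-square rescaled by its maximum attainable value. A sketch with hypothetical log-likelihoods (the numbers are made up for illustration; in practice lrm reports this for you):

```python
import math

def nagelkerke_r2(ll_model: float, ll_null: float, n: int) -> float:
    """Nagelkerke R^2 from the model and intercept-only log-likelihoods."""
    r2_cs = 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)   # Cox & Snell R^2
    max_cs = 1.0 - math.exp(2.0 * ll_null / n)               # its upper bound
    return r2_cs / max_cs

# Hypothetical log-likelihoods (illustrative only)
print(nagelkerke_r2(ll_model=-420.0, ll_null=-510.0, n=1000))
```

Computing this for both fit and fit2 and comparing the two values is exactly the comparison described above.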
28,754
Comparing nested binary logistic regression models when $n$ is large
I just read about this. The proper way to do this is to use the output of R's glm for the final model, look for "Residual deviance:", derive the delta between the two models, and use this value in a chi-squared test with df equal to the number of predictor terms dropped. That gives your p-value. Applied Regression Modeling, Iain Pardoe, 2nd edition, 2012, pg 270.
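As a sketch of the arithmetic with made-up deviances (not from any real fit): when a single predictor term is dropped (1 df), the chi-squared survival function has the closed form erfc(sqrt(delta/2)), so the p-value can be computed without a stats library:

```python
import math

# Hypothetical residual deviances from two nested glm fits (illustrative only)
dev_reduced = 1268.4   # model without the predictor
dev_full = 1262.1      # model with it
delta = dev_reduced - dev_full   # 6.3, compared to chi-squared on 1 df here

# For df = 1, P(chi2_1 > delta) = erfc(sqrt(delta / 2))
p_value = math.erfc(math.sqrt(delta / 2))
print(p_value)
```

For more than one dropped term, the same delta is referred to a chi-squared distribution with df equal to the number of dropped terms (e.g. scipy.stats.chi2.sf, or anova(fit2, fit, test="Chisq") in R).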
28,755
Find k of n items with least pairwise correlations
[Forewarning: this answer appeared before the OP decided to reformulate the question, so it may have lost relevance. Originally the question was about how to rank items according to their pairwise correlations.] Because a matrix of pairwise correlations isn't a unidimensional array, it is not quite clear what "ranking" may look like, especially as long as you haven't worked out your idea in detail, as it seems. But you mentioned PCA as suitable for you, and that immediately made me think of the Cholesky root as a potentially even more suitable alternative. The Cholesky root is like a matrix of loadings left by PCA, only it is triangular. I'll explain both with an example.

R, correlation matrix
      V1      V2      V3      V4
V1  1.0000  -.5255  -.1487  -.2790
V2  -.5255  1.0000   .2134   .2624
V3  -.1487   .2134  1.0000   .1254
V4  -.2790   .2624   .1254  1.0000

A, PCA full loading matrix
      I       II      III     IV
V1  -.7933   .2385   .2944   .4767
V2   .8071  -.0971  -.3198   .4867
V3   .4413   .8918   .0721  -.0683
V4   .5916  -.2130   .7771   .0261

B, Cholesky root matrix
      I       II      III     IV
V1  1.0000   .0000   .0000   .0000
V2  -.5255   .8508   .0000   .0000
V3  -.1487   .1589   .9760   .0000
V4  -.2790   .1361   .0638   .9485

A*A' or B*B': both restore R
      V1      V2      V3      V4
V1  1.0000  -.5255  -.1487  -.2790
V2  -.5255  1.0000   .2134   .2624
V3  -.1487   .2134  1.0000   .1254
V4  -.2790   .2624   .1254  1.0000

PCA's loading matrix A is the matrix of correlations between the variables and the principal components. We may say so because the row sums of squares are all 1 (the diagonal of R) while the matrix sum of squares is the overall variance (trace of R). The elements of the Cholesky root B are correlations too, because that matrix also has these two properties. The columns of B are not the principal components of A, although they are "components" in a sense. Both A and B can restore R and thus both can replace R as its representation. B is triangular, which clearly shows that it captures the pairwise correlations of R sequentially, or hierarchically.
Cholesky's component I correlates with all the variables and is the linear image of the first of them, V1. Component II no longer shares with V1 but correlates with the last three... Finally, IV is correlated only with the last, V4. I thought such a sort of "ranking" is perhaps what you seek? The problem with the Cholesky decomposition, though, is that - unlike PCA - it depends on the order of items in the matrix R. Well, you might sort the items in descending or ascending order of the sum of squared elements (or, if you like, the sum of absolute elements, or in order of the multiple correlation coefficient - see about it below). This order reflects how much an item is grossly correlated.

R, rearranged
      V2      V1      V4      V3
V2  1.0000  -.5255   .2624   .2134
V1  -.5255  1.0000  -.2790  -.1487
V4   .2624  -.2790  1.0000   .1254
V3   .2134  -.1487   .1254  1.0000

Column sum of squares (descending)
1.3906  1.3761  1.1624  1.0833

B
      I       II      III     IV
V2  1.0000   .0000   .0000   .0000
V1  -.5255   .8508   .0000   .0000
V4   .2624  -.1658   .9506   .0000
V3   .2134  -.0430   .0655   .9738

From the last B matrix we see that V2, the most grossly correlated item, pawns all its correlations in I. The next most grossly correlated item, V1, pawns all its correlatedness, except that with V2, in II; and so on. Another decision could be to compute the multiple correlation coefficient for every item and rank based on its magnitude. The multiple correlation between an item and all the other items grows as the item correlates more with all of them while they correlate less with each other. The squared multiple correlation coefficients form the diagonal of the so-called image covariance matrix, which is $\bf S R^{-1} S - 2S + R$, where $\bf S$ is the diagonal matrix of the reciprocals of the diagonals of $\bf R^{-1}$.
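The decomposition above can be reproduced directly. A sketch (numpy, not part of the original answer) that rebuilds the Cholesky root B from the quoted correlation matrix R, verifies B B' = R, and shows the order dependence by refactoring the rearranged matrix:

```python
import numpy as np

# The correlation matrix R quoted in the answer
R = np.array([
    [ 1.0000, -0.5255, -0.1487, -0.2790],
    [-0.5255,  1.0000,  0.2134,  0.2624],
    [-0.1487,  0.2134,  1.0000,  0.1254],
    [-0.2790,  0.2624,  0.1254,  1.0000],
])

# Lower-triangular Cholesky root: B @ B.T restores R
B = np.linalg.cholesky(R)
print(np.round(B, 4))
assert np.allclose(B @ B.T, R)

# Unlike PCA, the decomposition depends on the order of the variables:
# reordering R to V2, V1, V4, V3 yields the second B matrix in the answer.
order = [1, 0, 3, 2]
B2 = np.linalg.cholesky(R[np.ix_(order, order)])
print(np.round(B2, 4))
```

The first column of B matches the first column of R (component I is the linear image of the first variable), which is the "pawning" behavior described in the answer.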
Find k of n items with least pairwise correlations
[Forewarning: this answer appeared before the OP decided to reformulate the question, so it may have lost relevance. Originally the question was about How to rank items according to their pairwise cor
Find k of n items with least pairwise correlations

[Forewarning: this answer appeared before the OP decided to reformulate the question, so it may have lost relevance. Originally the question was about how to rank items according to their pairwise correlations.]

Because a matrix of pairwise correlations isn't a unidimensional array, it is not quite clear what "ranking" may look like, especially as long as you haven't worked out your idea in detail, as it seems. But you mentioned PCA as suitable for you, and that immediately made me think of the Cholesky root as a potentially even more suitable alternative. The Cholesky root is like a matrix of loadings left by PCA, only it is triangular. I'll explain both with an example.

R, correlation matrix
        V1      V2      V3      V4
V1   1.0000  -.5255  -.1487  -.2790
V2   -.5255  1.0000   .2134   .2624
V3   -.1487   .2134  1.0000   .1254
V4   -.2790   .2624   .1254  1.0000

A, PCA full loading matrix
         I      II     III      IV
V1   -.7933   .2385   .2944   .4767
V2    .8071  -.0971  -.3198   .4867
V3    .4413   .8918   .0721  -.0683
V4    .5916  -.2130   .7771   .0261

B, Cholesky root matrix
         I      II     III      IV
V1   1.0000   .0000   .0000   .0000
V2   -.5255   .8508   .0000   .0000
V3   -.1487   .1589   .9760   .0000
V4   -.2790   .1361   .0638   .9485

A*A' or B*B': both restore R.

PCA's loading matrix A is the matrix of correlations between the variables and the principal components. We may say so because the row sums of squares are all 1 (the diagonal of R), while the matrix sum of squares is the overall variance (the trace of R). The elements of the Cholesky root B are correlations too, because that matrix also has these two properties. Columns of B are not the principal components of A, although they are "components", in a sense. Both A and B can restore R, and thus both can replace R as its representation. B is triangular, which clearly shows that it captures the pairwise correlations of R sequentially, or hierarchically.

Cholesky's component I correlates with all the variables and is the linear image of the first of them, V1. Component II no longer shares anything with V1 but correlates with the last three... Finally, IV is correlated only with the last, V4. I thought such a sort of "ranking" is perhaps what you seek?

The problem with the Cholesky decomposition, though, is that - unlike PCA - it depends on the order of the items in the matrix R. Well, you might sort the items in descending or ascending order of the sum of squared elements (or, if you like, the sum of absolute elements, or in order of the multiple correlation coefficient - see about it below). This order reflects how much an item is grossly correlated.

R, rearranged
        V2      V1      V4      V3
V2   1.0000  -.5255   .2624   .2134
V1   -.5255  1.0000  -.2790  -.1487
V4    .2624  -.2790  1.0000   .1254
V3    .2134  -.1487   .1254  1.0000

Column sums of squares (descending): 1.3906  1.3761  1.1624  1.0833

B
         I      II     III      IV
V2   1.0000   .0000   .0000   .0000
V1   -.5255   .8508   .0000   .0000
V4    .2624  -.1658   .9506   .0000
V3    .2134  -.0430   .0655   .9738

From the last B matrix we see that V2, the most grossly correlated item, pawns all its correlations in I. The next most grossly correlated item, V1, pawns all its correlatedness, except that with V2, in II; and so on.

Another decision could be computing the multiple correlation coefficient for every item and ranking based on its magnitude. The multiple correlation between an item and all the other items grows as the item correlates more with all of them while they correlate less with each other. The squared multiple correlation coefficients form the diagonal of the so-called image covariance matrix $\bf S R^{-1} S - 2S + R$, where $\bf S$ is the diagonal matrix of the reciprocals of the diagonals of $\bf R^{-1}$.
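As a minimal sketch of the idea, the Cholesky root B of the example matrix R above can be computed in Python with numpy (the variable names are mine, not from the answer):

```python
import numpy as np

# The correlation matrix R from the example above
R = np.array([
    [ 1.0000, -0.5255, -0.1487, -0.2790],
    [-0.5255,  1.0000,  0.2134,  0.2624],
    [-0.1487,  0.2134,  1.0000,  0.1254],
    [-0.2790,  0.2624,  0.1254,  1.0000],
])

# Lower-triangular Cholesky root B, so that B @ B.T == R
B = np.linalg.cholesky(R)
print(np.round(B, 4))

# B restores R, just like the full PCA loading matrix A would
assert np.allclose(B @ B.T, R)
```

Note that reordering the rows/columns of R before calling `cholesky` changes B, which is exactly the order-dependence the answer discusses.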
28,756
Find k of n items with least pairwise correlations
Here's my solution to the problem. I calculate all possible combinations of k of n items and compute their mutual dependencies by transforming the problem into a graph-theoretical one: which complete subgraph on k nodes has the smallest edge sum (dependencies)? Here's a Python script using the networkx library and one possible output. Please excuse any ambiguity in my question! Code:

import networkx as nx
import itertools

# Create new graph; each node represents a dimension
G = nx.Graph()
G.add_nodes_from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])

# For each dimension add edges with the pairwise correlations as weights
G.add_weighted_edges_from([(3,1,0.563),(3,2,0.25)])
G.add_weighted_edges_from([(4,1,0.688),(4,3,0.438)])
G.add_weighted_edges_from([(5,1,0.25),(5,2,0.063),(5,3,0.063),(5,4,0.063)])
G.add_weighted_edges_from([(6,1,0.063),(6,2,0.25),(6,3,0.063),(6,4,0.063),(6,5,0.063)])
G.add_weighted_edges_from([(7,2,0.25),(7,3,0.063),(7,5,0.125),(7,6,0.063)])
G.add_weighted_edges_from([(8,1,0.125),(8,2,0.125),(8,3,0.5625),(8,5,0.25),(8,6,0.188),(8,7,0.125)])
G.add_weighted_edges_from([(9,1,0.063),(9,2,0.063),(9,3,0.25),(9,6,0.438),(9,7,0.063),(9,8,0.063)])
G.add_weighted_edges_from([(10,1,0.25),(10,2,0.25),(10,3,0.563),(10,4,0.125),(10,5,0.125),(10,6,0.125),(10,7,0.125),(10,8,0.375),(10,9,0.125)])
# Note: (11,9) appears twice below; in an nx.Graph the second weight (0.188) overwrites the first
G.add_weighted_edges_from([(11,1,0.125),(11,2,0.063),(11,3,0.438),(11,5,0.063),(11,6,0.1875),(11,7,0.125),(11,8,0.563),(11,9,0.125),(11,9,0.188)])

nodes = set(G.nodes())
combs = set(itertools.combinations(nodes, 6))
sumList = []
for comb in combs:
    S = G.subgraph(list(comb))
    total = 0
    for edge in S.edges(data=True):
        total += edge[2]['weight']
    sumList.append((total, comb))

ranking = sorted(sumList, key=lambda tup: tup[0])

fo = open("dependency_ranking.txt", "wb")
for i in range(len(ranking)):
    totalWeight = ranking[i][0]
    members = sorted(ranking[i][1])
    out = str(i) + ": " + str(totalWeight) + "," + str(members)
    fo.write(out.encode())
    fo.write("\n".encode())
fo.close()

# Sanity check on one fixed subset
S = G.subgraph([1, 2, 3, 4, 6, 7])
total = 0
for edge in S.edges(data=True):
    total += edge[2]['weight']
print(total)

Sample output:

0: 1.0659999999999998,[2, 4, 5, 7, 9, 11]
1: 1.127,[4, 5, 7, 9, 10, 11]
2: 1.128,[2, 4, 5, 9, 10, 11]
3: 1.19,[2, 4, 5, 7, 8, 9]
4: 1.2525,[4, 5, 6, 7, 10, 11]
5: 1.377,[2, 4, 5, 7, 9, 10]
6: 1.377,[2, 4, 7, 9, 10, 11]
7: 1.377,[2, 4, 5, 7, 10, 11]

Input graph and solution graph (figures not shown). For a toy example with k=4, n=6: input graph and solution graph (figures not shown).

Best, Christian
28,757
Find k of n items with least pairwise correlations
Find $k$ of $n$ items with the least pairwise correlation: since a correlation of, say, $0.6$ explains $0.36$ of the relation between two series, it makes more sense to minimize the sum of the squared correlations for your target $k$ items. Here is my simple solution. Rewrite your $n \times n$ matrix of correlations as a matrix of squared correlations. Sum the squares of each column. Eliminate the column and corresponding row with the greatest sum. You now have an $(n-1) \times (n-1)$ matrix. Repeat until you have a $k \times k$ matrix. You could also just keep the columns and corresponding rows with the $k$ smallest sums. Comparing the two methods on a matrix with $n=43$ and $k=20$, I found that only two items with close sums were kept/eliminated differently.
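A minimal sketch of this greedy elimination in Python/numpy (the function name, variable names and toy matrix are mine, not from the answer):

```python
import numpy as np

def greedy_least_correlated(R, k):
    """Greedily drop the item whose column of squared correlations
    sums highest, until only k items remain. Returns kept indices."""
    R2 = np.asarray(R) ** 2              # squared correlations
    keep = list(range(R2.shape[0]))
    while len(keep) > k:
        sub = R2[np.ix_(keep, keep)]     # current submatrix
        worst = np.argmax(sub.sum(axis=0))   # column with the greatest sum
        del keep[worst]
    return keep

# Toy example: item 0 is the most strongly correlated with everything
R = np.array([
    [1.0, 0.9, 0.3, 0.2],
    [0.9, 1.0, 0.2, 0.1],
    [0.3, 0.2, 1.0, 0.0],
    [0.2, 0.1, 0.0, 1.0],
])
print(greedy_least_correlated(R, 3))   # drops item 0 first
```

The "keep the $k$ smallest sums in one pass" variant the answer mentions would just be `np.argsort((R**2).sum(axis=0))[:k]`.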
28,758
Confusion related to which transformation to use
It's not clear from your question why you need to transform at all. (What are you trying to achieve and why?) As for why logs might make the appearance more symmetric in some cases and not others, not all distributions are the same - while log transformations may sometimes make skewed data nearly symmetric, there's no guarantee that it always does. Often other transformations do much better. For example logs work very nicely on lognormal distributions, while cube roots do better on gamma. Below, $a$ is simulated from a lognormal distribution, and $b$ from a gamma distribution. They look vaguely similar, but the log-transform makes $a$ symmetric (in fact, normal), while making $b$ left-skewed. On the other hand a cube root transformation leaves $a$ still somewhat right skew, but makes $b$ very nearly symmetric (and pretty close to normal): Other times there's simply no monotonic transformation to achieve approximate symmetry (e.g. if your distribution is discrete and sufficiently skew, like a geometric(0.5), or say a Poisson(0.5), no monotonic transformation can make it reasonably normal - wherever you put them, the leftmost spike will always be taller than the next one). Incidentally, you might want to use more bars on your histograms, and maybe consider using other displays as well, to get a handle on the distributional shape. See my cautionary tale.
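As a hedged illustration of the point (my own simulation, not the original figures; the distribution parameters are arbitrary choices), the skewness of each transform can be checked with numpy/scipy:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n = 200_000

a = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # lognormal: log(a) is exactly normal
b = rng.gamma(shape=2.0, size=n)                # gamma: cube root is near-normal

print(skew(np.log(a)))    # ~0: log symmetrizes the lognormal
print(skew(np.log(b)))    # negative: log over-corrects the gamma (left skew)
print(skew(np.cbrt(a)))   # positive: cube root leaves the lognormal right-skewed
print(skew(np.cbrt(b)))   # ~0: cube root nearly symmetrizes the gamma
```

The same check on a geometric(0.5) sample shows the discrete case: any monotonic transform just relabels the spikes, so the skewness cannot be removed.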
28,759
Alternatives to the multinomial logit model
There is a variety of models available for multinomial outcomes. I recommend Cameron & Trivedi, Microeconometrics Using Stata, for an easy and excellent introduction, or take a look at the Imbens & Wooldridge lecture slides, which are available online. Widely used models include:

- multinomial logistic regression (mlogit in Stata)
- multinomial conditional logit, which allows you to easily include not only individual-specific but also choice-specific predictors (asclogit in Stata)
- nested logit, which relaxes the independence of irrelevant alternatives (IIA) assumption by grouping/ranking choices hierarchically (nlogit in Stata)
- mixed logit, which relaxes the IIA assumption by assuming e.g. normally distributed parameters (mixlogit in Stata)
- multinomial probit, which can further relax the IIA assumption, but you should have choice-specific predictors available (asmprobit in Stata; mprobit does not allow choice-specific predictors, but you need them to relax the IIA assumption)
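Those are Stata commands; as a language-neutral sketch of the baseline model itself (the plain multinomial/softmax logit, with no claim about the Stata implementations), here is a from-scratch fit by gradient descent on synthetic data. Every name and the toy setup are mine:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 3 alternatives, intercept + 2 individual-specific predictors
n, J = 1500, 3
X = np.c_[np.ones(n), rng.normal(size=(n, 2))]
true_B = np.array([[0.0,  0.0,  0.0],   # intercepts  (first column = base category)
                   [0.0,  1.5, -1.0],   # effect of x1
                   [0.0, -0.5,  1.0]])  # effect of x2

def softmax(U):
    U = U - U.max(axis=1, keepdims=True)   # numerical stability
    E = np.exp(U)
    return E / E.sum(axis=1, keepdims=True)

P = softmax(X @ true_B)
# Draw one choice per row via inverse-cdf; clip guards against float round-off
y = np.minimum((rng.random((n, 1)) > P.cumsum(axis=1)).sum(axis=1), J - 1)

# Fit by plain gradient descent on the log-likelihood (base category fixed at 0)
Y = np.eye(J)[y]                      # one-hot outcomes
B = np.zeros((3, J))
for _ in range(500):
    grad = X.T @ (softmax(X @ B) - Y) / n
    B[:, 1:] -= 0.5 * grad[:, 1:]

print(np.round(B, 2))                 # should roughly recover true_B
```

This is the mlogit baseline only; the IIA-relaxing variants in the list above (nested logit, mixed logit, multinomial probit) need a different likelihood.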
28,760
Alternatives to the multinomial logit model
If you want options quite different from a logistic regression, you could use a neural net. For example, R's nnet package has a multinom function. Or you could use a random forest (R's randomForest package, among others). And there are several other machine-learning alternatives, though options like an SVM tend not to be well calibrated, which makes their outputs inferior -- in my opinion -- to a logistic regression. [Actually, a logit is probably being used under the hood by the neurons in the neural net. So it's quite different, but not quite different, at the same time.]
28,761
Alternatives to the multinomial logit model
Also, I think neural nets (with softmax activation) and decision trees (or random forests) do not require the IIA assumption to be met, which matters given the unreliability of the tests for checking the IIA assumption. So this might be an advantage over the multinomial logistic model if all we are concerned with is prediction. Alternatively, separate binary logistic models can be built for the K-1 categories with the Kth category as the reference. This also allows different predictors to be used for each of the equations, in contrast to the multinomial model.
28,762
Properties of bivariate standard normal and implied conditional probability in the Roy model
First, in the Roy model, $\sigma_{\varepsilon}^{2}$ is normalized to be $1$ for identification reasons (cf. Cameron and Trivedi, Microeconometrics: Methods and Applications). I will maintain this normalization hereafter. To answer your question, let's first show that $$ \mathrm{{E}}\left(U_{1}\mid\varepsilon<Z\right)=-\sigma_{1\varepsilon}\frac{\phi\left(Z\right)}{\Phi\left(Z\right)}. $$ Here $\phi$ and $\Phi$ are the pdf and cdf of a standard normal distribution, respectively. Note that $$ \mathrm{E}\left(U_{1}\mid\varepsilon<Z\right)=\mathrm{E}\left(\mathrm{E}\left(U_{1}\mid\varepsilon\right)\mid\varepsilon<Z\right) $$ by the law of iterated expectations. The vector $\left(U_{1},\varepsilon\right)'$ is bivariate normal with mean $\left(0,0\right)'$ and covariance matrix $$ \left[\begin{array}{cc} \sigma_{1}^{2} & \sigma_{1\varepsilon}\\ \sigma_{1\varepsilon} & 1 \end{array}\right]. $$ The conditional mean is $\mathrm{{E}}\left(U_{1}\mid\varepsilon\right)=\sigma_{1\varepsilon}\varepsilon$ (note that the covariance, not the correlation, appears here because $\sigma_{\varepsilon}^{2}=1$). Thus, $$ \mathrm{E}\left(U_{1}\mid\varepsilon<Z\right)=\sigma_{1\varepsilon}\mathrm{E}\left(\varepsilon\mid\varepsilon<Z\right). $$ The density function of $\varepsilon\mid\varepsilon<Z$ is $$ f\left(\varepsilon\mid\varepsilon<Z\right)=\begin{cases} \frac{\phi\left(\varepsilon\right)}{\Phi\left(Z\right)}, & -\infty<\varepsilon<Z;\\ 0, & \varepsilon\geq Z. \end{cases} $$ The conditional mean $\mathrm{E}\left(\varepsilon\mid\varepsilon<Z\right)$ is \begin{eqnarray*} \mathrm{E}\left(\varepsilon\mid\varepsilon<Z\right) & = & \int_{-\infty}^{Z}t\frac{\phi\left(t\right)}{\Phi\left(Z\right)}\,\mathrm{{d}}t\\ & = & \frac{1}{\Phi\left(Z\right)}\int_{-\infty}^{Z}t\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}t^{2}\right)\,\mathrm{{d}}t\\ & = & -\frac{1}{\Phi\left(Z\right)}\int_{-\infty}^{Z}\frac{\partial}{\partial t}\left\{ \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}t^{2}\right)\right\} \,\mathrm{{d}}t\\ & = & -\frac{1}{\Phi\left(Z\right)}\left(\phi\left(Z\right)-\phi\left(-\infty\right)\right). \end{eqnarray*} Note how the negative sign comes out. Since $\phi\left(-\infty\right)=0$, we get $\mathrm{E}\left(\varepsilon\mid\varepsilon<Z\right)=-\phi\left(Z\right)/\Phi\left(Z\right)$, and the conclusion follows.
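A quick Monte Carlo sanity check of the truncated-mean formula (the parameter values $\sigma_1=1$, $\sigma_{1\varepsilon}=0.5$, $Z=0.5$ are my own choices):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2_000_000
sigma1, sig1eps, Z = 1.0, 0.5, 0.5

# Draw (U1, eps) bivariate normal with Var(eps) = 1 and Cov(U1, eps) = sig1eps
eps = rng.standard_normal(n)
U1 = sig1eps * eps + np.sqrt(sigma1**2 - sig1eps**2) * rng.standard_normal(n)

mc = U1[eps < Z].mean()                        # E(U1 | eps < Z), simulated
theory = -sig1eps * norm.pdf(Z) / norm.cdf(Z)  # the formula derived above
print(mc, theory)
```

The two numbers agree to about two decimal places, which is what the Monte Carlo error at this sample size allows.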
28,763
Why are $x$ and $x^2$ correlated?
The Pearson correlation measures the amount of linear relationship -- it doesn't ignore variables that have a relationship that's not perfectly linear. If things increase and decrease together, some portion of their relationship is explainable as linear relationship (and some of it isn't). For example, if $X$ is positive, then both $X$ and $X^2$ will increase or decrease together, and so be somewhat positively correlated. On the other hand if $X$ is negative, then $X^2$ will increase as $X$ decreases (becomes more negative). Here's a case where the population mean of $X$ is large compared to its spread, and so $X$ and $X^2$ have a high Pearson correlation: In this case the population correlation is about 0.99867 and the sample correlation was about 0.99868. If $X$ is both positive and negative then there are parts where $X^2$ increases as $X$ increases and parts where $X^2$ decreases as $X$ increases. This may result in an overall positive, negative or zero correlation (depending on the extent to which they cancel out).
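This is easy to check numerically (a small simulation of my own, not the figure from the answer; the means and spreads are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X positive with mean large relative to its spread: X and X^2 move together
x_pos = rng.normal(10, 1, n)
c_pos = np.corrcoef(x_pos, x_pos**2)[0, 1]

# X symmetric around 0: the increasing and decreasing parts cancel
x_sym = rng.normal(0, 1, n)
c_sym = np.corrcoef(x_sym, x_sym**2)[0, 1]

print(c_pos, c_sym)   # first close to 1, second near 0
```

For the N(10, 1) case the population correlation works out to $2\mu\sigma^2/\sqrt{4\mu^2\sigma^2+2\sigma^4}\approx 0.9975$, close to the figure's example.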
28,764
ARIMA forecast with seasonality and trend, strange result
From the appearance of your data, after seasonal differencing there may well be no substantive remaining seasonality. The peak at the start of each year, and the subsequent pattern through the rest of the year, is quite well picked up by an $I_{[12]}$ model; the model has incorporated the "obvious seasonality". Yes, indeed, the suggested model is "this June = last June + constant + error", and similarly for the other months. What's wrong with that, exactly? It seems to be an excellent description of your data. You might find a time-series decomposition more intuitive and easier to explain, perhaps even something based on a Basic Structural Model (one with seasonality), but that doesn't necessarily imply a model that performs better than the one you have. Still, one or more of the standard decomposition techniques might be worth trying; there's a lot to be said for a model that you comprehend well.
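The "this June = last June + constant + error" model can be sketched directly in numpy, without an ARIMA library (a toy monthly series of my own, not the OP's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy monthly series: linear trend + seasonal pattern + noise, 6 years long
t = np.arange(72)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 72)

# Seasonal random walk with drift, i.e. ARIMA(0,0,0)(0,1,0)[12] plus constant:
# this month = same month last year + constant
d12 = y[12:] - y[:-12]          # seasonal differences
drift = d12.mean()              # fitted constant (here about 12 * 0.5 = 6)

forecast = y[-12:] + drift      # one-year-ahead forecast, month by month
print(np.round(drift, 2))
```

The forecast simply repeats last year's shape shifted up by the estimated annual drift, which is exactly why the fitted ARIMA in the question behaves the way it does.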
28,765
ARIMA forecast with seasonality and trend, strange result
I believe the problem is that we jump directly to an ARIMA model without trying the traditional models, and for this reason the model may not give the needed results. In your case, I tested your data and found that there is seasonality every 12 months, which is clear to you, but also that a simple moving average of 3 terms with multiplicative seasonal adjustment is the best model. In my opinion, we have to try the traditional forecasting algorithms before jumping to any advanced technique.
28,766
Counterexample for the sufficient condition required for consistency
Glad to see that my (incorrect) answer generated two more, and turned a dead question into a lively Q&A thread. So it's time to try to offer something worthwhile, I guess. Consider a serially correlated, covariance-stationary stochastic process $\{y_t\},\;\; t=1,...,n$, with mean $\mu$ and autocovariances $\{\gamma_j\},\;\; \gamma_j\equiv \operatorname{Cov}(y_t,y_{t-j})$. Assume that $\lim_{j\rightarrow \infty}\gamma_j= 0$ (this bounds the "strength" of the autocorrelation as two realizations of the process lie further and further apart in time). Then we have that $$\bar y_n = \frac 1n\sum_{t=1}^ny_t\rightarrow_{m.s.} \mu,\;\; \text{as}\; n\rightarrow \infty,$$ i.e. the sample mean converges in mean square to the true mean of the process, and therefore it also converges in probability: it is a consistent estimator of $\mu$. The variance of $\bar y_n$ can be found to be $$\operatorname{Var}(\bar y_n) = \frac 1n \gamma_0+\frac 2n \sum_{j=1}^{n-1}\left(1-\frac {j}{n}\right)\gamma_j,$$ which is easily shown to go to zero as $n$ goes to infinity. Now, making use of Cardinal's comment, let's randomize our estimator of the mean further by considering the estimator $$\tilde \mu_n = \bar y_n + z_n,$$ where $\{z_t\}$ is a stochastic process of independent random variables, also independent of the $y_t$'s, taking the value $at$ (parameter $a>0$ to be specified by us) with probability $1/t^2$, the value $-at$ with probability $1/t^2$, and zero otherwise.
So $\{z_t\}$ has expected value and variance $$E(z_t) = at\frac 1{t^2} -at\frac 1{t^2} + 0\cdot \left (1-\frac 2{t^2}\right)= 0,\;\;\operatorname{Var}(z_t) = 2a^2$$ The expected value and the variance of the estimator are therefore $$E(\tilde \mu) = \mu,\;\;\operatorname{Var}(\tilde \mu) = \operatorname{Var}(\bar y_n) + 2a^2$$ Consider the probability distribution of $|z_n|$: it takes the value $an$ with probability $2/n^2$ and the value $0$ with probability $1-2/n^2$. So, for any $\epsilon>0$, $$P\left(|z_n| <\epsilon\right) \ge 1-\frac{2}{n^2} \;\Longrightarrow\; \lim_{n\rightarrow \infty}P\left(|z_n| < \epsilon\right) = 1$$ which means that $z_n$ converges in probability to $0$ (while its variance remains finite). Therefore $$\operatorname{plim}\tilde \mu_n = \operatorname{plim}\bar y_n+\operatorname{plim} z_n = \mu$$ so this randomized estimator of the mean value of the $y$-stochastic process remains consistent. But its variance does not go to zero as $n$ goes to infinity, nor does it go to infinity. Closing, why all the apparently useless elaboration with an autocorrelated stochastic process? Because Cardinal qualified his example by calling it "absurd", as in "just to show that mathematically, we can have a consistent estimator with non-zero and finite variance". I wanted to give a hint that it isn't necessarily a curiosity, at least in spirit: there are times in real life when new processes begin, man-made processes that have to do with how we organize our lives and activities. While we usually have designed them, and can say a lot about them, they may still be so complex that they are reasonably treated as stochastic (the illusion of complete control over such processes, or of complete a priori knowledge of their evolution, processes that may represent new ways to trade or produce, or to arrange the rights-and-obligations structure between humans, is just that, an illusion). 
Being also new, we do not have enough accumulated realizations of them to do reliable statistical inference on how they will evolve. Then ad hoc, and perhaps "suboptimal", corrections are nevertheless an actual phenomenon: when, for example, we have a process where we strongly believe that its present depends on its past (hence the autocorrelated stochastic process), but we really don't know how as yet (hence the ad hoc randomization, while we wait for data to accumulate in order to estimate the covariances). And maybe a statistician would find a better way to deal with such severe uncertainty, but many entities have to function in an uncertain environment without the benefit of such scientific services. What follows is the initial (wrong) answer (see especially Cardinal's comment). Estimators that converge in probability to a random variable do exist: the case of "spurious regression" comes to mind, where if we attempt to regress two independent random walks (i.e. non-stationary stochastic processes) on each other using ordinary least squares estimation, the OLS estimator will converge to a random variable. But a consistent estimator with non-zero variance does not exist, because consistency is defined as the convergence in probability of an estimator to a constant, which, by definition, has zero variance.
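As a sanity check on the argument above, here is a small simulation sketch of $\tilde \mu_n = \bar y_n + z_n$. For simplicity I use iid normal draws as a stand-in for the autocorrelated $y$-process (only the behaviour of $z_n$ is at issue here); the function name, seed, and constants are my own:

```python
import numpy as np

rng = np.random.default_rng(0)

def z_draw(n, a=1.0, size=1):
    """Draw z_n: +/- a*n, each with probability 1/n**2, else 0."""
    u = rng.random(size)
    out = np.zeros(size)
    out[u < 1 / n**2] = a * n
    out[(u >= 1 / n**2) & (u < 2 / n**2)] = -a * n
    return out

mu, a = 5.0, 1.0
for n in (10, 100, 1000):
    # 2000 replications of the estimator at sample size n
    y_bar = rng.normal(mu, 1.0, size=(2000, n)).mean(axis=1)
    tilde = y_bar + z_draw(n, a, size=2000)
    print(n, round(np.median(tilde), 2), round(tilde.var(), 2))
```

The replications concentrate around $\mu$ (consistency), while the theoretical variance of the estimator stays pinned near $2a^2$ rather than shrinking to zero; the rare huge $\pm an$ draws are exactly what keep the variance alive.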
28,767
Counterexample for the sufficient condition required for consistency
Take any sample from a distribution with finite expectation and infinite variance (a Pareto with $\alpha\in(1,2]$, for example). Then the sample mean will converge to the expectation due to the law of large numbers (which requires only the existence of the mean), while the variance of the sample mean will be infinite for every $n$.
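A quick simulation sketch of this (note that numpy's `pareto` draws are shifted by 1 relative to a standard Pareto with $x_m = 1$; seed and sizes are my own choices):

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = 1.5                       # Pareto tail index in (1, 2]: finite mean, infinite variance
true_mean = alpha / (alpha - 1)   # mean of a standard Pareto with x_m = 1, here 3.0

x = 1 + rng.pareto(alpha, size=1_000_000)  # shift so that support starts at x_m = 1
for n in (100, 10_000, 1_000_000):
    print(n, round(x[:n].mean(), 3))       # sample means drifting toward 3 (slowly!)
print("sample variance of the draws:", round(x.var(), 1))
```

The sample mean crawls toward 3 while the sample variance keeps growing with the occasional enormous draw, which is the infinite-variance behaviour the answer refers to.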
28,768
Counterexample for the sufficient condition required for consistency
Let me give an example of a sequence of random variables converging to zero in probability but with infinite variance. In essence, an estimator is just a random variable, so with a little abstraction you can see that convergence in probability to a constant does not imply that the variance approaches zero. Consider the random variable $\xi_n(x):=\chi_{[0,1/n]}(x)x^{-1/2}$ on $[0,1]$, where the probability measure considered is the Lebesgue measure. Clearly, $P(\xi_n(x)>0)=1/n\to0$, but $$\int\xi_n^2\,dP=\int_{0}^{1/n}x^{-1}dx=\left.\log(x)\right|_{0}^{1/n}=\infty,$$ for all $n$, so its variance does not go to zero. Now, just make up an estimator where, as your sample grows, you estimate the true value $\mu=0$ by a draw of $\xi_n$. Note that this estimator is not unbiased for $0$, but to make it unbiased you can just set $\eta_n:=\pm\xi_n$ with equal probability $1/2$ and use that as your estimator. The same argument for convergence and variance clearly holds. Edit: If you want an example in which the variance is finite, take $$\xi_n(x):=\chi_{[0,1/n]}(x)\sqrt{n},$$ and again consider $\eta_n:=\pm\xi_n$ w.p. $1/2$.
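The finite-variance version is easy to simulate, since $\eta_n$ takes the values $\pm\sqrt{n}$, each with probability $1/(2n)$, and is zero otherwise (a rough sketch with my own function name and seed):

```python
import numpy as np

rng = np.random.default_rng(7)

def eta(n, size):
    """Draw eta_n: +/- sqrt(n), each with probability 1/(2n), else 0."""
    u = rng.random(size)
    s = rng.choice([-1.0, 1.0], size)
    return np.where(u < 1.0 / n, s * np.sqrt(n), 0.0)

for n in (10, 100, 10_000):
    draws = eta(n, 200_000)
    print(n, "P(eta != 0) ~", (draws != 0).mean(), " Var ~", round(draws.var(), 2))
```

The probability of a nonzero draw vanishes like $1/n$ (convergence in probability to 0), yet $E[\eta_n^2] = n \cdot (1/n) = 1$, so the variance sits at 1 for every $n$.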
28,769
Equivalence tests for non-normal data?
The logic of TOST employed for Wald-type t and z test statistics (i.e. $\theta / s_{\theta}$ and $\theta / \sigma_{\theta}$, respectively) can be applied to the z approximations for nonparametric tests like the sign, signed rank, and rank sum tests. For simplicity I assume that equivalence is expressed symmetrically with a single term, but extending my answer to asymmetric equivalence terms is straightforward. One issue that arises when doing this is that if one is accustomed to expressing the equivalence term (say, $\Delta$) in the same units as $\theta$, then the equivalence term must be expressed in units of the particular sign, signed rank, or rank sum statistic, which is both abstruse and dependent on $N$. However, one can also express TOST equivalence terms in units of the test statistic itself. Consider that in TOST, if $z = \theta/\sigma_{\theta}$, then $z_{1} = (\Delta - \theta)/\sigma_{\theta}$, and $z_{2} = (\theta + \Delta)/\sigma_{\theta}$. If we let $\varepsilon = \Delta / \sigma_{\theta}$, then $z_{1} = \varepsilon - z$, and $z_{2} = z + \varepsilon$. (The statistics expressed here are both evaluated in the right tail: $p_{1} = \text{P}(Z > z_{1})$ and $p_{2} = \text{P}(Z > z_{2})$.) Using units of the z distribution to define the equivalence/relevance threshold may be preferable for nonparametric tests, since the alternative defines the threshold in units of signed ranks or rank sums, which may be substantively meaningless to researchers and difficult to interpret. If we recognize that (for symmetric equivalence intervals) it is not possible to reject any TOST null hypothesis when $\varepsilon \le z_{1-\alpha}$, then we might proceed to make decisions on the appropriate size of the equivalence term accordingly, for example $\varepsilon = z_{1-\alpha} + 0.5$. This approach has been implemented with options for continuity correction, etc. 
in the package tost for Stata (which now includes specific TOST implementations for the Shapiro-Wilk and Shapiro-Francia tests), which you can access from within Stata. Edit: While the logic of TOST is sound, and equivalence test formulations have been applied to omnibus tests, I have been persuaded that my solution was based on a deep misunderstanding of the approximate statistics for the Shapiro-Wilk and Shapiro-Francia tests.
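For readers without Stata, the core TOST z calculation described above is only a few lines. Here is a hedged Python sketch (the function name is my own; with `se=1` the `delta` argument plays the role of the $\varepsilon = z_{1-\alpha} + 0.5$ margin suggested above):

```python
from statistics import NormalDist

def tost_z(theta, se, delta, alpha=0.05):
    """Sketch of a TOST z-test with a symmetric equivalence margin delta.
    Rejecting BOTH one-sided nulls supports |theta| < delta (equivalence)."""
    nd = NormalDist()
    z1 = (delta - theta) / se       # tests H0: theta >= +delta
    z2 = (theta + delta) / se       # tests H0: theta <= -delta
    p1 = 1 - nd.cdf(z1)             # right-tail p-values, as in the answer
    p2 = 1 - nd.cdf(z2)
    return p1, p2, max(p1, p2) < alpha

# estimate near zero, margin = z_{0.95} + 0.5 on the z scale (se = 1)
p1, p2, equivalent = tost_z(theta=0.1, se=1.0,
                            delta=NormalDist().inv_cdf(0.95) + 0.5)
print(round(p1, 3), round(p2, 3), equivalent)
```

With an estimate well inside the margin both one-sided p-values fall below $\alpha$ and equivalence is declared; a large estimate fails the first one-sided test and the procedure correctly withholds the equivalence claim.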
28,770
Equivalence tests for non-normal data?
It's not a TOST per se, but the Kolmogorov-Smirnov test allows one to test for the significance of the difference between a sample distribution and a second, reference distribution you can specify. You can use this test to rule out a specific kind of different distribution, but not different distributions in general (at least, not without controlling for error inflation across tests of all possible alternatives... if that's even possible). The alternative hypothesis for any one test will remain the less specific "catch-all" hypothesis, as usual. If you can settle for a test of distributional differences between two groups where the null hypothesis is that the two groups are equivalently distributed, you can use the two-sample Kolmogorov-Smirnov test to compare one group's distribution to the other's. That's probably the conventional approach: ignore the differences if they're not statistically significant, and justify this decision with a test statistic. In any case, you may want to consider some deeper issues arising from the "all-or-nothing" approach to rejecting a null hypothesis. One such issue is very popular here on Cross Validated: "Is normality testing 'essentially useless'?" People like to answer normality-testing questions with a question: "Why do you want to test this?" The intention, I assume, is generally to invalidate the reason for testing, which may ultimately lead in the right direction. The gist of useful responses to the question I've linked here seems to be as follows: If you're concerned about violations of parametric test assumptions, you should just find a nonparametric test that doesn't make distributional assumptions instead. Don't test whether you need to use the nonparametric test; just use it! You should replace the question, "Is my distribution significantly non-normal?" with, "How non-normal is my distribution, and how is this likely to affect my analyses of interest?" 
For instance, tests regarding central tendency (especially involving means) may be more sensitive to skewness than to kurtosis, and vice versa for tests regarding (co)variance. Nonetheless, there are robust alternatives for most analytic purposes that aren't very sensitive to either kind of non-normality. If you still wish to pursue a test of equivalence, here's another popular discussion on Cross Validated that involves equivalence testing.
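To make the mechanics concrete: the one-sample statistic is just the largest gap between the empirical CDF of the sample and the fully specified reference CDF. A minimal hand-rolled sketch (my own naming, with a uniform reference for simplicity):

```python
import numpy as np

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest gap between the
    empirical CDF of the sample and a fully specified reference CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    F = cdf(x)
    ecdf_hi = np.arange(1, n + 1) / n   # ECDF evaluated just after each point
    ecdf_lo = np.arange(0, n) / n       # ECDF evaluated just before each point
    return max(np.max(ecdf_hi - F), np.max(F - ecdf_lo))

rng = np.random.default_rng(1)
uniform_cdf = lambda x: np.clip(x, 0.0, 1.0)
d_null = ks_statistic(rng.random(1000), uniform_cdf)      # sample matches reference
d_alt = ks_statistic(rng.random(1000) ** 2, uniform_cdf)  # squared draws are not uniform
print(round(d_null, 3), round(d_alt, 3))
```

Under the matching reference the statistic is small (shrinking like $1/\sqrt{n}$), while the mismatched sample sits near the true sup-gap of 0.25; in practice you would compare the statistic to the Kolmogorov distribution (or use a library routine) for a p-value.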
28,771
Equivalence tests for non-normal data?
Equivalence is never something we can test. Think about the hypothesis: $\mathcal{H}_0: f_x \ne f_y$ vs $\mathcal{H}_1: f_x = f_y$. NHST theory tells us that, under the null, we can choose anything under $\mathcal{H}_0$ that best fits the data. That means we can almost always get arbitrarily close to the distribution. For instance, if I want to test $f_x \sim \mathcal{N}(0, 1)$, the probability model that allows for separate distributions of $\hat{f}_x$ and $\hat{f}_y$ will always be more likely under the null, a violation of critical testing assumptions. Even if the sample $X=Y$ identically, I can get a likelihood ratio that is arbitrarily close to 1 with $f_y \approx f_x$. If you know a suitable probability model for the data, you can use a penalized information criterion to rank the alternate models. One way is to use the BICs of the two probability models (the one estimated under $\mathcal{H}_0$ and the one estimated under $\mathcal{H}_1$). I've used a normal probability model, but you can easily get a BIC from any type of maximum likelihood procedure, either by hand or using a GLM. This Stackoverflow post gets into the nitty-gritty of fitting distributions. An example of doing this is here:

set.seed(123)
p <- replicate(1000, {
  ## generate data under the null
  x <- rnorm(100)
  g <- sample(0:1, 100, replace=T)
  BIC(lm(x~1)) > BIC(lm(x~g))
})
mean(p)

gives

> mean(p)
[1] 0.034

$p$ here is the proportion of times that the BIC of the null model (separate models) is better (lower) than that of the alternative model (equivalent model). This is remarkably close to the nominal 0.05 level of statistical tests. On the other hand, if we take:

set.seed(123)
p <- replicate(1000, {
  ## generate data under the alternative: a true group difference of 0.4
  x <- rnorm(100)
  g <- sample(0:1, 100, replace=T)
  x <- x + 0.4*g
  BIC(lm(x~1)) > BIC(lm(x~g))
})
mean(p)

gives:

> mean(p)
[1] 0.437

As with NHST, there are subtle issues of power and false positive error rates that should be explored with simulation before making definitive conclusions. 
I think a similar (and perhaps more general) method is to use Bayesian statistics to compare the posteriors estimated under either probability model.
28,772
Suggested books on spatial statistics
Ok. These books seem to be general books on spatial statistics, not restricted to a particular area: Bivand et al - Applied Spatial Data Analysis with R - This book was recommended in a presentation at an ecological conference. Banerjee et al. - Hierarchical Modeling and Analysis for Spatial Data - This one I just found randomly, I think; I don't know anything about it... From my perspective of Population Ecology and Species Distribution Modelling, I've come across these books: Janet Franklin - Mapping Species Distributions: Spatial Inference and Prediction - I like this book; it seems to be quite nice for beginners. Peterson et al - Ecological Niches and Geographic Distributions - I haven't read this book, but my advisor recommends it as a good piece of work on SDM. Rhodes et al - Population Dynamics in Ecological Space and Time - This one I just found randomly, I think; I don't know anything about it. Tilman et al - Spatial Ecology: The Role of Space in Population Dynamics and Interspecific Interactions - This one I just found randomly, I think; I don't know anything about it.
28,773
SVM optimization problem
It is difficult to optimize the norm $||w||$ directly because it involves a square root. Since squaring is monotone for non-negative values, we can minimize $||w||^2$ instead without changing the solution. The factor $\frac{1}{2}$ in $\frac{1}{2}||w||^2$ is added purely for mathematical convenience: when we differentiate the objective to optimize it using Lagrange multipliers, the $2$ coming down from the square cancels the $\frac{1}{2}$, leaving a clean gradient.
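A tiny numeric illustration of why the squared form is nicer to differentiate (toy vector and naming are my own):

```python
import numpy as np

# Minimizing ||w|| and (1/2)||w||^2 give the same argmin, because t -> t^2/2
# is strictly increasing for t >= 0; the squared form just has nicer calculus:
# the gradient of (1/2)||w||^2 is w itself, while the gradient of ||w|| is
# w/||w||, which is undefined at w = 0 and non-smooth there.
w = np.array([3.0, 4.0])
norm = np.linalg.norm(w)           # ||w|| = 5
half_sq = 0.5 * norm**2            # (1/2)||w||^2 = 12.5

grad_half_sq = w                   # d/dw (1/2)||w||^2 = w  (the 2 cancels the 1/2)
grad_norm = w / norm               # d/dw ||w|| = w/||w||
print(norm, half_sq, grad_half_sq, grad_norm)
```

Both objectives rank every candidate $w$ identically, so the SVM's maximum-margin solution is unchanged; only the algebra of the Lagrangian gets simpler.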
28,774
What is the link between methods such as matching and statistically controlling for variables?
As with AdamO, I think the key to answering this question is the notion of causal inference, and how to get "toward" a causal model using observational setups. In a perfect world, we would have something called a counterfactual population - the study population, identical in all respects except for the single thing we are interested in. The difference in outcomes between those two populations would then be a true causal effect. Obviously, we can't have this. There are ways, however, to try to get close to it: Randomization: This theoretically (if randomization is done correctly) should give you two populations that are identical, except for treatment post-randomization. Stratification: You can look at a population within levels of covariates, where you are making "like with like" comparisons. This works splendidly for small numbers of levels, but quickly becomes cumbersome. Matching: Matching is an attempt to assemble a study population such that Group A resembles Group B, and thus is amenable to comparison. Statistical adjustment: Including covariates in a regression model allows for the estimation of an effect within levels of the covariates - again, comparing like with like, or at least attempting to. All are an attempt to get closer to that counterfactual population. How to best get at it depends on what you want to get out, and what your study looks like.
28,775
What is the link between methods such as matching and statistically controlling for variables?
I think causal modeling is the key to answering this question. One must, at the outset, identify the correct adjusted/stratified/controlled effect of interest before even looking at data. If I were to estimate the height / lung capacity relationship in adults, I would adjust for smoking status, since smoking stunts growth and influences lung capacity. Confounders are variables which are causally related to the predictor of interest and are associated with the outcome of interest. See Causality by Judea Pearl, 2nd ed. One should specify and power the analysis for the correct confounding variables before the data collection process even begins, using rational logic and prior knowledge from previous exploratory studies. This doesn't mean, however, that some researchers don't rely on data-driven methods to select adjustment variables. I don't agree with doing this in practice when conducting confirmatory analyses. Some common techniques in model selection for multiple adjusted models are forward/backward model selection, where you can restrict attention to classes of models which you believe to be at least plausible. The black-box AIC selection criterion is related to the likelihood and, hence, to the degree of reduction in the $R^2$ of linear models for these adjustment variables. Another process common in epidemiology is to add variables to the model only if they change the estimate of the main effect (such as an odds ratio or hazard ratio) by at least 10%. While this is "more" correct than AIC-based model selection, I still think there are major caveats in this approach. My recommendation is to prespecify the desired analysis as part of a hypothesis. The age-adjusted smoking / cancer risk is a different parameter, and leads to different inference in a controlled study, than the crude smoking / cancer risk.
Using subject matter knowledge is the best way to select predictors for adjustment in regression analyses, or as stratification, matching, or weighting variables in various other types of "controlled" analyses of experimental and quasiexperimental design.
28,776
What is the link between methods such as matching and statistically controlling for variables?
The story about the relationship between matching and regression is briefly summarised in a blog post here. In short: "Regress on D [a treatment indicator] and a full set of dummies (i.e., saturated) model for X [covariates]. The resulting estimate of the effect of D is equal to matching on X, and weighting across covariate cells by the variance of treatment conditional on X." See also section 3.3 of Mostly Harmless Econometrics or section 5.3 of Counterfactuals and Causal Inference for a thorough discussion, including the pros and cons of the D-given-X weighting that regression implicitly provides. @EpiGrad gives a good start on your first question. The books linked above treat it almost exclusively. If you do not have a computer science / math background you may find Pearl hard going (although worth it in the end!)
28,777
Difference between canonical correspondence analysis and canonical correlation analysis
Canonical correspondence analysis is a technique developed, I believe, by the community ecology people. A founding paper is Canonical correspondence analysis: a new eigenvector technique for multivariate direct gradient analysis by Cajo J.F. Ter Braak (1986). The method involves a canonical correlation analysis and a direct gradient analysis. The idea is to relate the prevalences of a set of species to a collection of environmental variables. Traditionally CCA (correlation) seeks to find that linear combination of the X variables and that linear combination of the Y variables that have the greatest correlation with each other. It relies on the eigen decomposition of $\Sigma_{11}^{-1}\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$, where the Sigma matrices are correlation matrices of the variables. See Mardia, Kent and Bibby (Multivariate Analysis). CCA thus assumes a linear relationship between the two sets of variables. The correspondence analysis assumes a different relationship: The species have a Gaussian distribution along a direction determined by the environmental factors. Note that CCA is symmetric in the X variables and the Y variables. Correspondence analysis presumes no symmetry, since we want to explain the species in terms of their environment - not the other way around.
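As a small numeric sketch of that eigen decomposition (my own illustration with simulated data, not from the cited paper), the canonical correlations are the square roots of the eigenvalues of $\Sigma_{11}^{-1}\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 3))
# y shares signal with the first two x columns, so a strong canonical correlation exists
y = 0.8 * x[:, :2] + 0.6 * rng.normal(size=(n, 2))

R = np.corrcoef(np.hstack([x, y]), rowvar=False)
p = x.shape[1]
R11, R12 = R[:p, :p], R[:p, p:]
R21, R22 = R[p:, :p], R[p:, p:]

# Canonical correlations: square roots of the nonzero eigenvalues of
# R11^{-1} R12 R22^{-1} R21 (Mardia, Kent & Bibby)
M = np.linalg.solve(R11, R12) @ np.linalg.solve(R22, R21)
eigs = np.sort(np.linalg.eigvals(M).real)[::-1]
can_corr = np.sqrt(np.clip(eigs[:2], 0, None))
print(can_corr)
```

With this construction the population canonical correlations are close to 0.8, and the sample values come out near that.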
28,778
Is multicollinearity implicit in categorical variables?
I cannot reproduce exactly this phenomenon, but I can demonstrate that VIF does not necessarily increase as the number of categories increases. The intuition is simple: categorical variables can be made orthogonal by suitable experimental designs. Therefore, there should in general be no relationship between numbers of categories and multicollinearity. Here is an R function to create categorical datasets with specifiable numbers of categories (for two independent variables) and a specifiable amount of replication for each category. It represents a balanced study in which every combination of categories is observed an equal number of times, $n$ (vif() comes from the car package):

    trial <- function(n, k1=2, k2=2) {
        df <- expand.grid(1:k1, 1:k2)
        df <- do.call(rbind, lapply(1:n, function(i) df))
        df$y <- rnorm(k1*k2*n)
        fit <- lm(y ~ Var1+Var2, data=df)
        vif(fit)
    }

Applying it, I find the VIFs are always at their lowest possible values, $1$, reflecting the balancing (which translates to orthogonal columns in the design matrix). Some examples:

    sapply(1:5, trial)                        # Two binary categories, 1-5 replicates per combination
    sapply(1:5, function(i) trial(i, 10, 3))  # 30 categories, 1-5 replicates

This suggests the multicollinearity may be growing due to a growing imbalance in the design. To test this, insert the line

    df <- subset(df, subset=(y < 0))

before the fit line in trial. This removes half the data at random. Re-running

    sapply(1:5, function(i) trial(i, 10, 3))

shows that the VIFs are no longer equal to $1$ (but they remain close to it, randomly). They still do not increase with more categories:

    sapply(1:5, function(i) trial(i, 10, 10))

produces comparable values.
28,779
Is multicollinearity implicit in categorical variables?
You have the constraint that is inherent in multinomial distributions, namely that one and only one of the $x_i$s will be 1 and all the rest will be 0. So you have the linear constraint $\sum x_i = 1$. That means, say, $x_1 = 1 - \sum_{i \neq 1} x_i$. This is the collinearity effect that you are noticing. There is nothing unusual or disturbing about it.
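A quick numeric illustration of that constraint (a sketch with hypothetical one-hot data, not from the question): the full set of indicator columns sums to 1 in every row, so adding an intercept makes the design matrix rank-deficient.

```python
import numpy as np

# One-hot encode a categorical variable with 4 levels (hypothetical data)
categories = np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1])
dummies = np.eye(4)[categories]

# Exactly one indicator is 1 in each row, so sum_i x_i = 1 for every observation
assert np.all(dummies.sum(axis=1) == 1)

# Adding an intercept makes the design matrix rank-deficient: rank 4, not 5
X = np.hstack([np.ones((len(categories), 1)), dummies])
print(np.linalg.matrix_rank(X))
```

This is exactly why one reference level is usually dropped when dummy-coding.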
28,780
How to make Random Forests more interpretable? [duplicate]
The results from CART can change easily (with realistic sample sizes) with small perturbations to the data. If this is the case, the interpretation is not as straightforward as it seems. I've often heard some of my colleagues avoiding random forests because of difficulties in interpretation. They are built more for prediction. Even the variable importance measures that come out are based on predictive performance, but they do help with interpretation.
28,781
How to make Random Forests more interpretable? [duplicate]
For each tree in the forest you have an interpretation for the terminal nodes. So the forest can be viewed as a series of explanations why vector x might belong to class y. Then the class with the largest number of reasonable explanations is the class that is picked. Isn't that fairly easy to understand?
28,782
p-value vs. confidence interval obtained in Bootstrapping
You have many choices for bootstrap confidence intervals. All bootstrap confidence intervals are approximate and do not always do well in small samples (usually 80 is not considered small). Also, if you read Hall and Wilson's paper you will find that testing hypotheses using the bootstrap distribution under the null hypothesis works better than inverting confidence intervals. It is an issue of how to center the pivotal quantity in the test statistic. Schenker in 1985 showed that bootstrap methods such as Efron's percentile method and even the BC method severely undercover the true parameter for certain chi-square populations when the sample size is not very large. Chernick and LaBudde, in the 2010 American Journal of Mathematical and Management Science, showed that in small samples there can even be problems with BCa and bootstrap t for highly skewed distributions such as the lognormal. So based on the literature, including my own research, I suggest doing the hypothesis test with the centering approach recommended by Hall and Wilson and basing your conclusions on that p-value. You can find detailed coverage of this in my recent book "An Introduction to the Bootstrap with Applications to R" published by Wiley in 2011.
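A minimal sketch of the Hall–Wilson centering idea in Python (my own illustration for a one-sample test of a mean; not code from the book): resample from the data, studentize, and center the bootstrap statistic at the sample mean so the bootstrap distribution mimics the null.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=80)   # skewed sample, n = 80
mu0 = np.exp(0.5)                                 # hypothesized mean under H0

n = x.size
t_obs = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))

# Hall & Wilson centering: the bootstrap statistic is centered at the
# sample mean (not mu0), so it approximates the null distribution of t
B = 2000
t_star = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    t_star[b] = (xb.mean() - x.mean()) / (xb.std(ddof=1) / np.sqrt(n))

p_value = np.mean(np.abs(t_star) >= np.abs(t_obs))
print(p_value)
```

The two-sided p-value is the fraction of centered bootstrap statistics at least as extreme as the observed one.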
28,783
p-value vs. confidence interval obtained in Bootstrapping
I am not a real bootstrap expert, but I can tell you about the two main things: Bootstrap confidence intervals are usually more robust and accurate than the ones estimated without the bootstrap. If you estimate the parameter with the bootstrap, your confidence interval (CI) is usually evaluated in a different way than in a regular t-test. For example, in a regular case the CI is $[ \hat{\theta} - \hat{q}_{1-\alpha/2}, \hat{\theta} + \hat{q}_{\alpha/2} ]$ (here $\hat{\theta}$ is an estimate of the parameter and $\hat{q}_{\alpha}$ is an $\alpha$-quantile). But for the bootstrap it is $[ \hat{\theta} - \hat{q}_{1-\alpha/2}, \hat{\theta} - \hat{q}_{\alpha/2} ]$ (minus sign in both cases). Given all that, I would suggest you recheck whether, with these formulas, the bootstrap CI agrees with the p-value. If you find that it is consistent now, report the bootstrapped results. If not, it is better to ask SPSS experts how the bootstrap works there.
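To make the "minus sign in both cases" concrete, here is a sketch of that interval (often called the basic bootstrap interval), where $\hat{q}_\alpha$ are quantiles of the bootstrap deviations $\hat{\theta}^* - \hat{\theta}$ (hypothetical data, my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=80)   # hypothetical sample
theta_hat = x.mean()

# Bootstrap distribution of the deviations theta* - theta_hat
B = 5000
dev = np.array([rng.choice(x, size=x.size, replace=True).mean() - theta_hat
                for _ in range(B)])

alpha = 0.05
q_lo, q_hi = np.quantile(dev, [alpha / 2, 1 - alpha / 2])

# Minus sign in both cases: the upper deviation quantile sets the lower bound
ci = (theta_hat - q_hi, theta_hat - q_lo)
print(ci)
```

Note the reflection: the upper quantile of the deviations gives the lower endpoint, which is what distinguishes this from the naive percentile interval.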
28,784
Mahalanobis distance via PCA when $n<p$
If you keep all the components from a PCA, then the Euclidean distances between patients in the new PCA space will equal their Mahalanobis distances in the observed-variable space. If you skip some components, that will change a little, but anyway. Here I refer to unit-variance PCA components, not the kind whose variance is equal to the eigenvalue (I am not sure about your PCA implementation). I just mean that if you want to evaluate the Mahalanobis distance between the patients, you can apply PCA and evaluate the Euclidean distance. Evaluating the Mahalanobis distance after applying PCA seems meaningless to me.
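That equivalence is easy to verify numerically (a sketch with simulated data, using n > p here so that the covariance matrix is invertible, unlike the n < p case in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated toy data: 50 samples, 4 features
X = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 4))
Xc = X - X.mean(axis=0)

cov = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(cov)

# Unit-variance ("whitened") PCA scores: project, then scale by 1/sqrt(eigenvalue)
scores = (Xc @ evecs) / np.sqrt(evals)

# Mahalanobis distance between the first two samples in the original space
d = Xc[0] - Xc[1]
mahalanobis = np.sqrt(d @ np.linalg.solve(cov, d))

# Euclidean distance between the same samples in whitened PCA space
euclidean_pca = np.linalg.norm(scores[0] - scores[1])
print(mahalanobis, euclidean_pca)
```

The two numbers agree to machine precision, because $\|\Lambda^{-1/2}V^\top d\|^2 = d^\top V \Lambda^{-1} V^\top d = d^\top \Sigma^{-1} d$.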
28,785
Mahalanobis distance via PCA when $n<p$
Take a look at the following paper: Zuber, V., Silva, A. P. D., & Strimmer, K. (2012). A novel algorithm for simultaneous SNP selection in high-dimensional genome-wide association studies. BMC Bioinformatics, 13(1), 284. It deals exactly with your problem. The authors propose the use of new variable-importance measurements; besides that, they earlier introduced a penalized estimation method for the correlation matrix of explanatory variables, which fits your problem. They also use the Mahalanobis distance for decorrelation! The methods are included in the R package 'care', available on CRAN.
28,786
Mahalanobis distance via PCA when $n<p$
PCA scores (or PCA results) are used in the literature to calculate the Mahalanobis distance between a sample and a distribution of samples. For an example, see this article. Under the "Analysis methods" section, the authors state: Data sets of fluorescence spectra (681) are reduced into a lower dimension (11) by evaluating the principal components (PCs) of the correlation matrix (681 × 681). PC scores are estimated by projecting the original data along the PCs. Classification among the data sets has been done using Mahalanobis distance model by computing Mahalanobis distances for the PC scores. I have seen other examples of PCA/Mahalanobis distance based discriminant analysis in the literature and in the help menu of the GRAMS IQ chemometrics software. This combination makes sense since Mahalanobis distance does not work well when the number of variables is greater than the number of available samples, and PCA reduces the number of variables. One-class classification machine learning algorithms (e.g. Isolation Forest, One-Class SVM, etc.) are possible alternatives to PCA/Mahalanobis distance based discriminant analysis. In our lab, Isolation Forest combined with data pre-processing has produced good results in the classification of near-infrared spectra. On a slightly related note, outlier or novelty detection with PCA/Mahalanobis distance, for high-dimensional data, often requires calculation of the Mahalanobis distance cutoff. This article suggests that the cutoff can be calculated as the square root of the chi-squared distribution's critical value, assuming that the data are normally distributed. This critical value requires the number of degrees of freedom and the probability value associated with the data. The article appears to suggest that the number of principal components retained equals the number of degrees of freedom needed to calculate the critical value, because the authors used the number of features in the data set for their calculation.
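That cutoff calculation can be sketched in a couple of lines (the degrees of freedom and probability level below are illustrative assumptions, not values from the cited article):

```python
import numpy as np
from scipy.stats import chi2

# Mahalanobis-distance cutoff for outlier detection, assuming the retained
# PC scores are approximately multivariate normal: the square root of the
# chi-squared critical value, with df = number of retained components
n_components = 11     # e.g. 11 retained PCs, as in the fluorescence example
prob = 0.975          # probability level (an assumption for illustration)

cutoff = np.sqrt(chi2.ppf(prob, df=n_components))
print(round(cutoff, 3))
```

Samples whose Mahalanobis distance from the training distribution exceeds this cutoff would then be flagged as outliers.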
28,787
Variance of the reciprocal II
If you can't get a predictive accuracy out of the package, this may help.

1) A better approximation to $Var(x/y)$, which to some extent takes covariation into account, is:

$Var(x/y) \approx \left(\frac{E(x)}{E(y)}\right)^2 \left(\frac{Var(x)}{E(x)^2} + \frac{Var(y)}{E(y)^2} - 2 \frac{Cov(x,y)}{E(x)E(y)}\right)$

2) For approximating the variance of a transform of a random variate, the delta method (see Wikipedia) sometimes, but not always, gives good results. In this case it gives, corresponding to your formula (1):

$Var(1/(1-z)) \approx \frac{Var(z)}{(1-E(z))^4}$

So now you know where that comes from! Using more terms from the underlying Taylor expansion etc. gives a higher-order, although not necessarily better, approximation:

$Var(1/(1-z)) \approx \frac{Var(z)}{(1-E(z))^4} + 2\frac{E[(z-E(z))^3]}{(1-E(z))^5} + \frac{E[(z-E(z))^4]}{(1-E(z))^6}$

I tried this out via simulation using 10,000 $U(0.1,0.6)$ variates, mimicking the example range you provided in your question, and obtained the following results. The observed variance of $1/(1-z)$ was 0.149. The first-order delta approximation yielded a value of 0.117. The next delta approximation yielded a value of 0.128. 10,000 draws from a Beta(10,20) distribution gave results of similar relative accuracy; the observed variance of $1/(1-z)$ was 0.044 and the higher-order delta approximation gave a value of 0.039.

How you would get the third and fourth moments of your estimates I'm not sure. You could, if your sample sizes give you some confidence in being close to asymptotic normality for your estimates, just use those of the Normal distribution. A bootstrap is a possibility as well, if you can do it. Either way, with small samples you're probably better off with the one-term approximation.

Of course, I could simplify all this notation by just defining $z' = 1-z$ and using that, but I chose to stick with the original notation in the question.
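The one-term and higher-order delta approximations above are easy to check by simulation. The following Python sketch repeats the $U(0.1,0.6)$ experiment described in the answer (the sample size of 100,000 and the seed are arbitrary choices of mine, used only to make the Monte Carlo noise small):

```python
import numpy as np

rng = np.random.default_rng(1)

# Draws from U(0.1, 0.6), mimicking the example range in the answer
z = rng.uniform(0.1, 0.6, size=100_000)

# Empirical variance of 1/(1 - z)
v_obs = np.var(1.0 / (1.0 - z))

# First-order delta approximation: Var(z) / (1 - E(z))^4
Ez, Vz = z.mean(), z.var()
v_delta1 = Vz / (1.0 - Ez) ** 4

# Higher-order approximation adding third- and fourth-central-moment terms
m3 = np.mean((z - Ez) ** 3)
m4 = np.mean((z - Ez) ** 4)
v_delta2 = v_delta1 + 2 * m3 / (1 - Ez) ** 5 + m4 / (1 - Ez) ** 6

print(v_obs, v_delta1, v_delta2)
```

The two approximations land near the 0.117 and 0.128 values quoted in the answer, while the observed variance sits near 0.147, so the higher-order version closes part, but not all, of the gap.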
28,788
LaTeX output for R's summary.lm object - while displaying the information outside the table [closed]
Look at the package apsrtable. You can then tweak the output the way you want, and summarise several models instead of one.
28,789
LaTeX output for R's summary.lm object - while displaying the information outside the table [closed]
I gave up and played with the code to produce something similar. Not the prettiest thing though. If anyone feels like improving it - I'd be happy to use your code.

    print.summary.lm.xtable <- function (x, digits = max(3, getOption("digits") - 3),
                                         symbolic.cor = x$symbolic.cor,
                                         signif.stars = getOption("show.signif.stars"), ...)
    {
        if (!require(xtable))
            stop("This function requires the package 'xtable' - please make sure you get it")
        cat("\\begin{verbatim}")
        cat("\nCall:\n", paste(deparse(x$call), sep = "\n", collapse = "\n"),
            "\n\n", sep = "")
        resid <- x$residuals
        df <- x$df
        rdf <- df[2L]
        cat(if (!is.null(x$w) && diff(range(x$w))) "Weighted ",
            "Residuals:\n", sep = "")
        if (rdf > 5L) {
            nam <- c("Min", "1Q", "Median", "3Q", "Max")
            rq <- if (length(dim(resid)) == 2L)
                structure(apply(t(resid), 1L, quantile),
                          dimnames = list(nam, dimnames(resid)[[2L]]))
            else {
                zz <- zapsmall(quantile(resid), digits + 1)
                structure(zz, names = nam)
            }
            print(rq, digits = digits, ...)
        }
        else if (rdf > 0L) {
            print(resid, digits = digits, ...)
        }
        else {
            cat("ALL", df[1L], "residuals are 0: no residual degrees of freedom!\n")
        }
        # if (length(x$aliased) == 0L) {
        #     cat("\nNo Coefficients\n")
        # }
        # else {
        #     if (nsingular <- df[3L] - df[1L])
        #         cat("\nCoefficients: (", nsingular, " not defined because of singularities)\n",
        #             sep = "")
        #     else cat("\nCoefficients:\n")
        #     coefs <- x$coefficients
        #     if (!is.null(aliased <- x$aliased) && any(aliased)) {
        #         cn <- names(aliased)
        #         coefs <- matrix(NA, length(aliased), 4,
        #                         dimnames = list(cn, colnames(coefs)))
        #         coefs[!aliased, ] <- x$coefficients
        #     }
        #     printCoefmat(coefs, digits = digits, signif.stars = signif.stars,
        #                  na.print = "NA", ...)
        # }
        cat("\\end{verbatim}")
        print(xtable(x), latex.environments = "left")  # x is a summary of some lm object
        cat("\\begin{verbatim}")
        cat("Residual standard error:", format(signif(x$sigma, digits)),
            "on", rdf, "degrees of freedom\n")
        if (nzchar(mess <- naprint(x$na.action)))
            cat(" (", mess, ")\n", sep = "")
        if (!is.null(x$fstatistic)) {
            cat("Multiple R-squared:", formatC(x$r.squared, digits = digits))
            cat(",\tAdjusted R-squared:", formatC(x$adj.r.squared, digits = digits),
                "\nF-statistic:", formatC(x$fstatistic[1L], digits = digits),
                "on", x$fstatistic[2L], "and", x$fstatistic[3L],
                "DF, p-value:", format.pval(pf(x$fstatistic[1L], x$fstatistic[2L],
                    x$fstatistic[3L], lower.tail = FALSE), digits = digits), "\n")
        }
        correl <- x$correlation
        if (!is.null(correl)) {
            p <- NCOL(correl)
            if (p > 1L) {
                cat("\nCorrelation of Coefficients:\n")
                if (is.logical(symbolic.cor) && symbolic.cor) {
                    print(symnum(correl, abbr.colnames = NULL))
                }
                else {
                    correl <- format(round(correl, 2), nsmall = 2, digits = digits)
                    correl[!lower.tri(correl)] <- ""
                    print(correl[-1, -p, drop = FALSE], quote = FALSE)
                }
            }
        }
        cat("\n")
        cat("\\end{verbatim}")
        invisible(x)
    }
28,790
LaTeX output for R's summary.lm object - while displaying the information outside the table [closed]
One possible solution is swst: Print statistical results in Sweave, a package by Sacha Epskamp.

Examples:

    library(swst)
    x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1)
    y <- c( 2.6,  3.1,  2.5,  5.0,  3.6,  4.0,  5.2,  2.8,  3.8)
    corTest <- cor.test(x, y, method = "kendall", alternative = "greater")
    swst(corTest)
    ($T=26$, $p=0.06$)

    # Chi-square test:
    M <- as.table(rbind(c(762, 327, 468), c(484, 239, 477)))
    dimnames(M) <- list(gender = c("M", "F"),
                        party = c("Democrat", "Independent", "Republican"))
    chisqTest <- chisq.test(M)
    swst(chisqTest)
    ($\chi^2(2)=30.07$, $p<0.001$)

    # Linear model:
    ## Annette Dobson (1990) "An Introduction to Generalized Linear Models".
    ## Page 9: Plant Weight Data.
    ctl <- c(4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14)
    trt <- c(4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69)
    group <- gl(2, 10, 20, labels = c("Ctl", "Trt"))
    weight <- c(ctl, trt)
    lm.D9 <- lm(weight ~ group)
    lm.D90 <- lm(weight ~ group - 1)  # omitting intercept
    swst(lm.D9)
    ($F(1,18)=1.419$, $p=0.249$)
    swst(lm.D90)
    ($F(2,18)=485.051$, $p<0.001$)
28,791
LaTeX output for R's summary.lm object - while displaying the information outside the table [closed]
Personally I enjoy texreg, which plays nice with booktabs and is also highly customizable. Not exactly what you're looking for, but I think this is also good reading for this sort of work. *Note, I am no relation to the Philip who wrote that package. Lol.
28,792
Estimating random effects and applying user defined correlation/covariance structure with R lme4 or nlme package
Try the kinship package, which is based on nlme. See this thread on r-sig-mixed-models for details. I'd forgotten about this as I was trying to do it for a logistic model. See https://stackoverflow.com/questions/8245132 for a worked-out example.

For non-normal responses, you'd need to modify the pedigreemm package, which is based on lme4. It gets you close, but the relationship matrix has to be created from a pedigree. The function below is a modification of the pedigreemm function which takes an arbitrary relationship matrix instead.

    library(pedigreemm)
    relmatmm <- function (formula, data, family = NULL, REML = TRUE,
                          relmat = list(), control = list(), start = NULL,
                          verbose = FALSE, subset, weights, na.action, offset,
                          contrasts = NULL, model = TRUE, x = TRUE, ...)
    {
        mc <- match.call()
        lmerc <- mc
        lmerc[[1]] <- as.name("lmer")
        lmerc$relmat <- NULL
        if (!length(relmat))
            return(eval.parent(lmerc))
        stopifnot(is.list(relmat), length(names(relmat)) == length(relmat))
        lmerc$doFit <- FALSE
        lmf <- eval(lmerc, parent.frame())
        relfac <- relmat
        relnms <- names(relmat)
        stopifnot(all(relnms %in% names(lmf$FL$fl)))
        asgn <- attr(lmf$FL$fl, "assign")
        for (i in seq_along(relmat)) {
            tn <- which(match(relnms[i], names(lmf$FL$fl)) == asgn)
            if (length(tn) > 1)
                stop("a relationship matrix must be associated with only one random effects term")
            Zt <- lmf$FL$trms[[tn]]$Zt
            relmat[[i]] <- Matrix(relmat[[i]][rownames(Zt), rownames(Zt)],
                                  sparse = TRUE)
            relfac[[i]] <- chol(relmat[[i]])
            lmf$FL$trms[[tn]]$Zt <- lmf$FL$trms[[tn]]$A <- relfac[[i]] %*% Zt
        }
        ans <- do.call(if (!is.null(lmf$glmFit)) lme4:::glmer_finalize
                       else lme4:::lmer_finalize, lmf)
        ans <- new("pedigreemm", relfac = relfac, ans)
        ans@call <- match.call()
        ans
    }

Usage is similar to pedigreemm except you give it the relationship matrix as the relmat argument instead of the pedigree as the pedigree argument.

    m <- relmatmm(yld ~ (1|gen) + (1|repl), relmat = list(gen = covmat), data = mydata)

This doesn't apply here as you have ten observations/individual, but for one observation/individual you need one more line in this function and a minor patch to lme4 to allow for only one observation per random effect.
28,793
Estimating random effects and applying user defined correlation/covariance structure with R lme4 or nlme package
This answer is a potential expansion of the suggestion made by Aaron, who suggested using pedigreemm. The package can compute the relationship matrix from a pedigree with the following syntax; I am unaware of how such a relationship output can be supplied in a different way.

    # just an example from the manual to create a pedigree structure and relationship matrix
    # (although you already have the matrix in place)
    p1 <- new("pedigree",
              sire = as.integer(c(NA, NA, 1, 1, 4, 5)),
              dam = as.integer(c(NA, NA, 2, NA, 3, 2)),
              label = as.character(1:6))
    p1
    (dtc <- as(p1, "sparseMatrix"))  # T-inverse in Mrode's notation
    solve(dtc)
    inbreeding(p1)

The mixed-model fit of the package is based on lme4, so the syntax of the main function is similar to lme4's lmer function, except that you can put the pedigree object in it:

    pedigreemm(formula, data, family = NULL, REML = TRUE, pedigree = list(),
               control = list(), start = NULL, verbose = FALSE, subset,
               weights, na.action, offset, contrasts = NULL, model = TRUE,
               x = TRUE, ...)

I know this is not a perfect answer to your question, but it may help a little bit. I am glad you asked this question; it is interesting to me!
28,794
Estimating random effects and applying user defined correlation/covariance structure with R lme4 or nlme package
lmer() in the lme4 package permits crossed random effects. Here, you'd use something like

    y ~ (1|gen) + (1|repl)

For a full reference: http://www.stat.wisc.edu/~bates/PotsdamGLMM/LMMD.pdf
28,795
Estimating random effects and applying user defined correlation/covariance structure with R lme4 or nlme package
Your title says "with lme4 or nlme package", but your text says:

    How can I achieve this using R packages, perhaps with nlme or lme4? I know that ASREML can do it but I do not have hold and I love R for being robust as well as free.

This approach is not based on these two packages, but it is open source and very flexible. GBLUP with an arbitrary covariance structure is a special case of RKHS regression, aka kernel ridge regression. The package BGLR estimates the variance components in a Bayesian framework. An alternative is the package KRMM, which seems to solve the same model but using expectation maximization instead of a Bayesian approach (Gibbs sampling). But I didn't test that.

This excerpt from the BGLR extended documentation computes y ~ a + g + e, where a is a random effect with a pedigree-derived covariance structure, g is a random effect using a marker-derived covariance structure (you can use another genetic distance instead of the definition shown here) and e is the residual. For your problem, you can of course just omit a (= list(K=A, ...)). The genomic relationship matrices (G and A in this example) must relate 1-to-1 to the genotype order in y, so if a genotype occurs multiple times in y, it must do so in the matrices as well.

Box 4a: Fitting a Pedigree + Markers regression using Gaussian Processes

    #1# Loading and preparing the input data
    library(BGLR)
    data(wheat); Y <- wheat.Y; X <- wheat.X; A <- wheat.A; y <- Y[,1]
    #2# Computing the genomic relationship matrix
    X <- scale(X, center = TRUE, scale = TRUE)
    G <- tcrossprod(X)/ncol(X)
    #3# Computing the eigen-value decomposition of G
    EVD <- eigen(G)
    #4# Setting the linear predictor
    ETA <- list(list(K = A, model = 'RKHS'),
                list(V = EVD$vectors, d = EVD$values, model = 'RKHS'))
    #5# Fitting the model
    fm <- BGLR(y = y, ETA = ETA, nIter = 12000, burnIn = 2000, saveAt = 'PGBLUP_')
    save(fm, file = 'fmPG_BLUP.rda')

See also these examples of different ways to compute GBLUP.

This documentation page shows an example including fixed effects (and other methods, such as BayesB; just use the models you need):

    pheno <- mice.pheno
    fm <- BGLR(y = pheno$Obesity.BMI,
               ETA = list(
                   fixed = list(~factor(GENDER) + factor(Litter), data = pheno, model = 'FIXED'),
                   cage = list(~factor(cage), data = pheno, model = 'BRR'),
                   ped = list(K = A, model = 'RKHS'),
                   mrk = list(X = X, model = 'BayesB')
               ))
Estimating random effects and applying user defined correlation/covariance structure with R lme4 or
Your title says "with lme4 or nlme package", but your text says How can I achieve this using R packages, perhaps with nlme or lme4? I know that ASREML can do it but I do not have hold and I love R fo
Estimating random effects and applying user defined correlation/covariance structure with R lme4 or nlme package Your title says "with lme4 or nlme package", but your text says How can I achieve this using R packages, perhaps with nlme or lme4? I know that ASREML can do it but I do not have hold and I love R for being robust as well as free. This approach is not based on these two packages, but it is open source and very flexible. GBLUP with arbitrary covariance structure is a special case of RKHS regression aka Kernel Ridge Regression. The package BGLR estimates the variance components in a Bayesian Framework. An alternative is the package KRMM that seems to solve the same model but using Expectation Maximization instead of a Bayesian approach (Gibbs sampling). But I didn't test that. This excerpt from the BGLR extended documentation computes y ~ a + g + e where a is a random effect with a pedigree-derived covariance structure, g is a random effect using a marker-derived covariance structure (you can use another genetic distance instead of the definition shown here) and e is the residual. For your problem, you can of course just omit a (=list(K=A, ...). The genomic relationship matrices (G and A in this example) must relate 1-to-1 to the genotype order in y, so if a genotype occurs multiple times in y, it must do so in the matrices as well. 
Box 4a: Fitting a Pedigree + Markers regression using Gaussian Processes #1# Loading and preparing the input data library(BGLR); data(wheat);Y<-wheat.Y; X<-wheat.X; A<-wheat.A; y<-Y[,1] #2# Computing the genomic relationship matrix X<-scale(X,center=TRUE,scale=TRUE) G<-tcrossprod(X)/ncol(X) #3# Computing the eigen-value decomposition of G EVD <-eigen(G) #3# Setting the linear predictor ETA<-list(list(K=A, model='RKHS'), list(V=EVD$vectors,d=EVD$values, model='RKHS')) #4# Fitting the model fm<-BGLR(y=y,ETA=ETA, nIter=12000, burnIn=2000,saveAt='PGBLUP_') save(fm,file='fmPG_BLUP.rda') See also these examples of different ways to compute GBLUP. This documentation page shows an example including fixed effects (and other methods, such as BayesB, just use those models you need): pheno=mice.pheno fm=BGLR(y=pheno$Obesity.BMI, ETA=list( fixed=list(~factor(GENDER)+factor(Litter),data=pheno,model='FIXED'), cage=list(~factor(cage),data=pheno,model='BRR'), ped=list(K=A,model='RKHS'), mrk=list(X=X,model='BayesB') ) )
Estimate confidence interval of mean by bootstrap t method or simply by bootstrap?
Bootstrap-$t$ still relies on assumptions about parametric distributions: if the bootstrap distribution of a statistic is normal, you can use the bootstrap-$t$ method, which leads to a symmetric CI. If, however, the sampling distribution is skewed or biased, it is better to use the percentile bootstrap (which allows for asymmetric CIs).

Now, which method should you use? Concerning the bootstrapped mean: according to simulations by Wilcox (2010), the percentile bootstrap should not be used for untrimmed means (in this case bootstrap-$t$ works better); starting from 20% trimming, the percentile bootstrap outperforms bootstrap-$t$ (the situation is unclear for 10% trimming).

Another hint comes from Hesterberg et al. (2005, p. 14-35):

The conditions for safe use of bootstrap t and bootstrap percentile intervals are a bit vague. We recommend that you check whether these intervals are reasonable by comparing them with each other. If the bias of the bootstrap distribution is small and the distribution is close to normal, the bootstrap t and percentile confidence intervals will agree closely. Percentile intervals, unlike t intervals, do not ignore skewness. Percentile intervals are therefore usually more accurate, as long as the bias is small. Because we will soon meet much more accurate bootstrap intervals, our recommendation is that when bootstrap t and bootstrap percentile intervals do not agree closely, neither type of interval should be used.

--> In case of disagreement, better use the BCa-corrected bootstrap CI!

Hesterberg, T., Monaghan, S., Moore, D., Clipson, A., & Epstein, R. (2005). Bootstrap methods and permutation tests. Introduction to the Practice of Statistics, 14.1–14.70.
Wilcox, R. R. (2010). Fundamentals of modern statistical methods: Substantially improving power and accuracy. Springer Verlag.
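The agreement check that Hesterberg et al. recommend is easy to carry out. A minimal sketch in Python/NumPy (the skewed sample and seed are arbitrary; B and the 95% level are just conventional choices):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=1.0, size=60)   # deliberately skewed sample
n, B = len(x), 5000
xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)

boot_means = np.empty(B)
boot_t = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    mb, seb = xb.mean(), xb.std(ddof=1) / np.sqrt(n)
    boot_means[b] = mb
    boot_t[b] = (mb - xbar) / seb          # studentized pivot

# Percentile interval: quantiles of the bootstrap means directly
perc_ci = np.quantile(boot_means, [0.025, 0.975])

# Bootstrap-t interval: invert the quantiles of the studentized pivot
t_lo, t_hi = np.quantile(boot_t, [0.025, 0.975])
t_ci = (xbar - t_hi * se, xbar - t_lo * se)
```

If `perc_ci` and `t_ci` disagree noticeably, that is exactly the situation in which neither should be trusted and a BCa interval is preferable.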
Is there a concept of "enough" data for training statistical models?
You can slice your dataset into nested subsets containing 10%, 20%, 30%, ..., 100% of your data, and for each subset estimate the variance of your estimator's accuracy using k-fold cross-validation or bootstrapping. If you have "enough" data, plotting the variances should display a monotonically decreasing curve that reaches a plateau before 100%: adding more data does not decrease the variance of the estimator's accuracy in any significant way.
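This procedure can be sketched end to end. A toy version in Python/NumPy (the nearest-centroid classifier, two-Gaussian data, and the bootstrap of accuracy are all illustrative stand-ins for whatever model and resampling scheme you actually use):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian clouds, then shuffled
n = 2000
X = np.vstack([rng.normal(0, 1, (n // 2, 2)), rng.normal(2, 1, (n // 2, 2))])
y = np.repeat([0, 1], n // 2)
idx = rng.permutation(n)
X, y = X[idx], y[idx]

def accuracy_variance(Xs, ys, B=200):
    """Bootstrap variance of a nearest-centroid classifier's accuracy."""
    accs = np.empty(B)
    m = len(ys)
    for b in range(B):
        tr = rng.integers(0, m, m)            # bootstrap training indices
        oob = np.setdiff1d(np.arange(m), tr)  # out-of-bag test points
        c0 = Xs[tr][ys[tr] == 0].mean(axis=0)
        c1 = Xs[tr][ys[tr] == 1].mean(axis=0)
        pred = (np.linalg.norm(Xs[oob] - c1, axis=1)
                < np.linalg.norm(Xs[oob] - c0, axis=1)).astype(int)
        accs[b] = (pred == ys[oob]).mean()
    return accs.var()

# Variance of accuracy on growing subsets of the data
fracs = [0.1, 0.25, 0.5, 1.0]
variances = [accuracy_variance(X[: int(f * n)], y[: int(f * n)]) for f in fracs]
```

When the last few variances are essentially flat, adding data buys you little and you arguably have "enough".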
How can I compute a posterior density estimate from a prior and likelihood?
You have several things mixed up. The theory talks about multiplying the prior distribution and the likelihood, not samples from the prior distribution. Also, it is not clear what you have the prior of: is this a prior on the mean of something, or something else? You also have things reversed in the likelihood: your observations should be x, with either prior draws or known fixed constants as the mean and standard deviation. And even then it would really be the product of 4 calls to dnorm, with each of your observations as x and the same mean and standard deviation.

What is really not clear is what you are trying to do. What is your question? Which parameters are you interested in? What prior(s) do you have on those parameters? Are there other parameters? Do you have priors or fixed values for those? Trying to go about things the way you currently are will only confuse you more, until you work out exactly what your question is and work from there.

The text below was added after the editing of the original question.

You are still missing some pieces, and probably not understanding everything, but we can start from where you are at. I think you are confusing a few concepts. There is the likelihood, which shows the relationship between the data and the parameters; you are using the normal, which has 2 parameters, the mean and the standard deviation (or variance, or precision). Then there are the prior distributions on the parameters: you have specified a normal prior with mean 0 and sd 1, but that mean and standard deviation are completely different from the mean and standard deviation of the likelihood. To be complete you need to either know the likelihood SD or place a prior on it; for simplicity (but less realism) I will assume we know the likelihood SD is $\frac12$ (no good reason other than it works and is different from 1).
So we can start similar to what you did and generate from the prior:

> obs <- c(0.4, 0.5, 0.8, 0.1)
> pri <- rnorm(10000, 0, 1)

Now we need to compute the likelihoods; this is based on the prior draws of the mean, the likelihood with the data, and the known value of the SD. The dnorm function will give us the likelihood of a single point, but we need to multiply together the values for each of the observations; here is a function to do that:

> likfun <- function(theta) {
+   sapply( theta, function(t) prod( dnorm(obs, t, 0.5) ) )
+ }

Now we can compute the likelihood for each draw from the prior for the mean:

> tmp <- likfun(pri)

Now to get the posterior we need to do a new type of draw. One approach, similar to rejection sampling, is to sample from the prior mean draws with probability proportional to the likelihood of each prior draw (this is the closest to the multiplication step you were asking about):

> post <- sample( pri, 100000, replace=TRUE, prob=tmp )

Now we can look at the results of the posterior draws:

> mean(post)
[1] 0.4205842
> sd(post)
[1] 0.2421079
> hist(post)
> abline(v=mean(post), col='green')

and compare the above results to the closed-form values from the theory:

> (1/1^2*mean(pri) + length(obs)/0.5^2 * mean(obs)) / ( 1/1^2 + length(obs)/0.5^2 )
[1] 0.4233263
> sqrt(1/(1 + 4*4))
[1] 0.2425356

Not a bad approximation, but it would probably work better to use a built-in MCMC tool to draw from the posterior; most of these tools sample one point at a time, not in batches as above.

More realistically we would not know the SD of the likelihood and would need a prior for that as well (often the prior on the variance is a $\chi^2$ or gamma), but then it is more complicated to compute (MCMC comes in handy) and there is no closed form to compare with. The general solution is to use existing tools for the MCMC calculations, such as WinBUGS or OpenBUGS (BRugs gives an interface between R and BUGS), or packages such as LearnBayes in R.
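The same sampling-importance-resampling idea can be written in Python, which also lets us check the draws against the conjugate closed form. A sketch with the same constants as the R snippet (prior N(0,1), likelihood SD 0.5, 10,000 prior draws):

```python
import numpy as np

rng = np.random.default_rng(1)
obs = np.array([0.4, 0.5, 0.8, 0.1])

# Draws from the prior on the mean: N(0, 1)
pri = rng.normal(0.0, 1.0, 10_000)

def likfun(theta):
    """Product over observations of the N(theta, 0.5) density at each obs."""
    return np.prod(
        np.exp(-((obs[None, :] - theta[:, None]) ** 2) / (2 * 0.5**2))
        / (0.5 * np.sqrt(2 * np.pi)),
        axis=1,
    )

w = likfun(pri)

# Resample the prior draws proportional to their likelihood (SIR)
post = rng.choice(pri, size=100_000, replace=True, p=w / w.sum())

# Closed-form conjugate posterior, for comparison (prior mean is exactly 0)
prec = 1 / 1**2 + len(obs) / 0.5**2                 # posterior precision = 17
post_mean = (len(obs) / 0.5**2 * obs.mean()) / prec
post_sd = np.sqrt(1 / prec)
```

With these numbers `post.mean()` and `post.std()` land close to the closed-form 0.4235 and 0.2425, mirroring the R output above.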
Is a logistic regression biased when the outcome variable is split 5% - 95%?
I disagreed with the other answers in the comments, so it's only fair I give my own. Let $Y$ be the response (good/bad accounts), and $X$ be the covariates. For logistic regression, the model is the following:

$\log\left(\frac{p(Y=1|X=x)}{p(Y=0|X=x)}\right)= \alpha + \sum_{i=1}^k x_i \beta_i$

Think about how the data might be collected:

- You could select the observations randomly from some hypothetical "population".
- You could select the data based on $X$, and see what values of $Y$ occur.

Both of these are okay for the above model, as you are only modelling the distribution of $Y|X$. These would be called a prospective study. Alternatively:

- You could select the observations based on $Y$ (say 100 of each), and see the relative prevalence of $X$ (i.e. you are stratifying on $Y$). This is called a retrospective or case-control study.

(You could also select the data based on $Y$ and certain variables of $X$: this would be a stratified case-control study, which is much more complicated to work with, so I won't go into it here.)

There is a nice result from epidemiology (see Prentice and Pyke (1979)) that for a case-control study, the maximum likelihood estimates of $\beta$ can be found by logistic regression, that is, by using the prospective model for retrospective data.

So how is this relevant to your problem? Well, it means that if you are able to collect more data, you could just look at the bad accounts and still use logistic regression to estimate the $\beta_i$'s (but you would need to adjust $\alpha$ to account for the over-representation). Say it costs \$1 for each extra account; then this might be more cost-effective than simply looking at all accounts. But on the other hand, if you already have ALL possible data, there is no point in stratifying: you would simply be throwing away data (giving worse estimates), and then be left with the problem of trying to estimate $\alpha$.
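The Prentice-Pyke result and the intercept adjustment can both be demonstrated by simulation. A sketch in Python/NumPy (the Newton-Raphson fitter, the single covariate, and the population parameters are all illustrative; the intercept offset is the standard log-ratio of the case and control sampling fractions):

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_logistic(X, y, iters=50):
    """Newton-Raphson MLE for logistic regression; X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Rare-event population: alpha=-3, beta=1 gives roughly 5-10% "bad" accounts
n = 50_000
x = rng.normal(size=n)
alpha_true, beta_true = -3.0, 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-(alpha_true + beta_true * x))))

X = np.column_stack([np.ones(n), x])
full = fit_logistic(X, y)

# Case-control sample: all bad accounts plus an equal number of good ones
bad = np.flatnonzero(y == 1)
good = rng.choice(np.flatnonzero(y == 0), size=len(bad), replace=False)
cc = np.concatenate([bad, good])
cc_fit = fit_logistic(X[cc], y[cc])

# The slope estimate is consistent under case-control sampling (Prentice & Pyke);
# the intercept needs an offset for the unequal sampling fractions:
offset = np.log((len(bad) / y.sum()) / (len(good) / (n - y.sum())))
alpha_adj = cc_fit[0] - offset
```

`cc_fit[1]` lands near the full-data slope despite using a small, balanced subsample, and `alpha_adj` recovers the full-data intercept.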
Is a logistic regression biased when the outcome variable is split 5% - 95%?
Asymptotically, the ratio of positive to negative patterns is essentially irrelevant. The problem arises principally when you have too few samples of the minority class to adequately describe its statistical distribution. Making the dataset larger generally solves the problem (where that is possible). If this is not possible, the best thing to do is to re-sample the data to get a balanced dataset, and then apply a multiplicative adjustment to the output of the classifier to compensate for the difference between training set and operational relative class frequencies. While you can calculate the (asymptotically) optimal adjustment factor, in practice it is best to tune the adjustment using cross-validation (as we are dealing with a finite practical case rather than an asymptotic one). In this sort of situation, I often use a committee of models, where each is trained on all of the minority patterns and a different random sample of the majority patterns of the same size as the minority patterns. This guards against bad luck in the selection of a single subset of the majority patterns.
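The multiplicative adjustment mentioned above can be written down directly in odds form. A sketch of the asymptotically optimal correction (as the answer notes, in practice you would tune this by cross-validation rather than trust it as-is):

```python
def adjust_probability(p_bal, train_prior, true_prior):
    """Correct a classifier's probability output for the mismatch between the
    (re-balanced) training prevalence and the operational prevalence."""
    # Work in odds: divide out the training prior odds, multiply in the true ones
    odds = (p_bal / (1 - p_bal)) * (true_prior / (1 - true_prior)) \
           / (train_prior / (1 - train_prior))
    return odds / (1 + odds)

# A score of 0.5 from a 50/50-trained model maps back to the 5% base rate
p = adjust_probability(0.5, train_prior=0.5, true_prior=0.05)
```

When training and operational priors coincide, the adjustment is the identity; otherwise it shifts every probability by the constant odds factor, leaving the ranking of patterns unchanged.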