Is analysis of existing data always exploratory, or can it be used for hypothesis testing?
I disagree that all analyses of pre-existing data are exploratory. The scenario you described seems like a textbook-perfect example of a hypothesis test, assuming the investigators generated their hypothesis without looking at the data first. If it was truly an a priori hypothesis, then what would have changed if they went out and made measurements instead of just downloading the data?

Issues with exploratory analysis (data dredging, multiple comparisons, etc.) arise when the hypothesis is formed from the same data it is subsequently tested on. If your hypothetical researchers had thumbed through the data and noticed a potentially interesting relationship between two factors, a subsequent test of that relationship provides somewhat weaker evidence for it than if it were tested on an entirely new set of observations. In some cases, it might be possible to collect additional confirmatory data; you could also use one subset of the data for developing your model and then test it on the rest of the data, as sketched below (there are also things like cross-validation if your 'exploration' is automated).

I would be interested to hear how (for example) macroeconomists deal with this, as they often work with data that is collected over long timescales, can't be re-observed, and the researchers are often aware of many trends in the data.

As a practical matter, I think you more or less have to take the authors at their word. Ideally, the authors would explain how they arrived at their hypothesis; it is, of course, possible to come up with some tortured post hoc rationalization too, but those often stick out from the text. Pre-registration would definitely help--it's been going on for a while for clinical trials, and some psychologists are advocating for it for basic science-type experiments--but it raises some big logistical hurdles too.

Finally, my inner Bayesian wants to point out that individual studies are rarely worth much in isolation; there's nothing wrong with updating your beliefs somewhat less if the study was either overtly exploratory or you think the authors may have peeked.
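Here is a minimal R sketch of that split-sample idea (my own illustration, on simulated data with hypothetical variable names; it is not part of the original answer):

    # Explore one random half of the data; confirm on the untouched half
    set.seed(42)
    dat <- data.frame(x = rnorm(200))
    dat$y <- 0.3 * dat$x + rnorm(200)

    explore.idx <- sample(nrow(dat), nrow(dat) / 2)
    explore <- dat[explore.idx, ]    # browse here for interesting relationships
    confirm <- dat[-explore.idx, ]   # held out for the confirmatory test

    cor.test(explore$x, explore$y)   # 'noticing' a relationship
    cor.test(confirm$x, confirm$y)   # testing it on fresh observations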
Is analysis of existing data always exploratory, or can it be used for hypothesis testing?
Astronomers and astrophysicists predominantly use data that other people have collected, and what one of these scientists collects as evidence will be used by a lot of other people who are doing good science by testing good hypotheses.

The example describes taking n out of a collected sample of N and performing a test. That surely is not a "fine example of hypothesis testing" -- why not use the rest of the data? On the other hand, it might be a particular example of testing a narrow hypothesis for the purpose of rejecting it (or modifying it), when something that is supposed to be universally true does not show up in a sample which is sufficiently large. A "fine example" is going to have to belong to a sufficiently well-developed narrative that there is consequence to the acceptance or rejection: there is "known science" and the hypothesis has a chance of changing expectations. One data set is probably not going to answer all the questions, but it can be enough to raise questions that are new.

What this reminds me of is the various data sets analyzed for the Flynn Effect (see Wikipedia). The Flynn Effect is the observation that IQ scores have increased by a couple of points per decade. An early indicator of it was the observation that the manufacturers of IQ and achievement tests have found it necessary to re-standardize their tests every few years in order to keep the IQ mean at 100, etc. Early on, the presence of the Effect debunked claims about the extent to which IQ falls with ageing: older testees who answer exactly the same as they did 30 or 40 years earlier will be assigned lower scores on a test that is not cohort age-adjusted. Dozens or hundreds of data sets, mostly collected for other purposes, have been investigated in the subsequent tests of alternative explanations. What makes some of these "fine examples" is how well they address specific cogent arguments.
How to calculate the specific Standard Error relevant for a specific point estimate within a linear regression?
The first part of my question is: how do you calculate this specific standard error at a specific point estimate?

You don't specify whether you mean simple linear or multiple regression, so I'll assume the general case. Let's do it at a point $x^* = (1,x_1^*,x_2^*,...,x_p^*)$:

$$\text{Var}(\hat y^*) = \text{Var}(x^*\hat\beta)= \text{Var}(x^*(X^TX)^{-1}X^T y)$$
$$= x^*(X^TX)^{-1}X^T \text{Var}(y) X(X^TX)^{-1}x^{*T}$$
$$= \sigma^2 x^*(X^TX)^{-1}X^T I X(X^TX)^{-1}x^{*T}$$
$$= \sigma^2 x^*(X^TX)^{-1}x^{*T}$$

If we write $h^* = x^*(X^TX)^{-1}x^{*T}$ (a scalar, since $x^*$ is a single row), then $\text{Var}(\hat y^*) = \sigma^2 h^*$. Of course, $\sigma^2$ is unknown and must be estimated. The standard error is the square root of that estimated variance.

Could one provide a link to a numerical example to facilitate my interpretation of the formula?

I'll try to dig one up.

My second part to this overall question is: how come the hourglass shape of the resulting Confidence Interval as depicted does not break the linear regression assumption that the variance of residuals remains constant across observations (the heteroskedasticity thing)?

1) It's a confidence interval for where the mean is, not for the variance of the data; it reflects our uncertainty in the parameters as they feed through (via the design, $X$) to the estimate of the mean. Something assumed true for one thing not being true for a different thing doesn't violate the assumption for the first thing.

2) Your statement "the linear regression assumption that the variance of residuals remains constant across observations" is factually incorrect (though I know what you're getting at). That is not an assumption of regression - in fact, outside specific cases, it's untrue for regression. What is assumed constant is the variance of the unobserved errors. The variance of the residuals is not constant. In fact it 'bows in' in opposite fashion to the way the variance above 'bows out', both due to the phenomenon of leverage.

Edits in response to followup questions:

Why would the variance bow in?

I'll do it algebraically and then expand on the explanation in the text above:
\begin{eqnarray} \text{Var}(y-\hat y) &=& \text{Var}(y) + \text{Var}(\hat y) - 2 \text{Cov}(y,\hat y)\\ &=&\sigma^2 I + \text{Var}(X \hat \beta) - 2 \text{Cov}(y,X \hat \beta)\\ &=&\sigma^2 I + \text{Var}(X (X^TX)^{-1}X^T y) - 2 \text{Cov}(y,X (X^TX)^{-1}X^T y)\\ &=&\sigma^2 I + X (X^TX)^{-1}X^T\text{Var}(y) X (X^TX)^{-1}X^T - 2 \text{Cov}(y, y)X (X^TX)^{-1}X^T\\ &=&\sigma^2 I + X (X^TX)^{-1}X^T(\sigma^2 I) X (X^TX)^{-1}X^T - 2 \sigma^2 I X (X^TX)^{-1}X^T\\ &=&\sigma^2 I + \sigma^2 X (X^TX)^{-1}X^T X (X^TX)^{-1}X^T - 2 \sigma^2 I X (X^TX)^{-1}X^T\\ &=&\sigma^2 I + \sigma^2 X (X^TX)^{-1}X^T - 2 \sigma^2 X (X^TX)^{-1}X^T\\ &=&\sigma^2 [I + X (X^TX)^{-1}X^T - 2 X (X^TX)^{-1}X^T]\\ &=&\sigma^2 [I - X (X^TX)^{-1}X^T]\\ &=& \sigma^2(I-H) \end{eqnarray}
where $H = X(X^TX)^{-1}X^T$. Therefore the variance of the $i^\text{th}$ residual is $\sigma^2(1-h_{ii})$, where $h_{ii}$ is $H(i,i)$ (some texts write that as $h_i$ instead). As you see, it's smaller when $h$ is larger, which is when the pattern of $x$-values is further from the center of the $x$'s. In simple regression, $h$ is larger when $(x-\bar x)$ is larger.

Now as to why: note that $\hat y = Hy$ ($H$ is called the hat matrix for this reason). That is, the fit at the $i^\text{th}$ observation responds to movements in $y_i$ in proportion to $h_{ii}$, or $\frac{\partial \hat{y}_i}{\partial y_i} = h_{ii}$. So when $h$ is larger, $y$ pulls the line more toward itself, making its residual smaller. There's a more intuitive discussion in the context of simple linear regression here that may help motivate it for you.

I interpret that as large errors near the Mean with smaller errors away from the Mean.

No, we're not discussing errors; they have constant variance. We're discussing residuals. They are not the errors and don't have constant variance; they're related but different.

The bit of material I have read on the subject suggests just the opposite...

Can you point me to something that does this? Recall that we're discussing the residual variability here.

Additionally, how would you define heteroskedasticity?

Having non-constant variance. That is, when the regression assumption about the variance being constant doesn't hold, you have heteroskedasticity. See Wikipedia: http://en.wikipedia.org/wiki/Heteroscedasticity

And, what do you mean by the variance of unobserved errors?

You don't observe the errors, but the model assumes they have constant variance, $\sigma^2$. The "variance of unobserved errors" is thus simply "$\sigma^2$".

How can you measure those since they are unobserved?

Individually, you can't, at least not very well. You can roughly estimate them by the residuals, but the residuals don't even have the same variance, as we saw. However, you can estimate their variance reasonably well from the residuals, if you appropriately adjust for the fact that the residuals are on average smaller than the errors.
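To make the formulas concrete, here is a short R sketch (my own, on simulated data; it is not from the original exchange). It checks the standard error $\sqrt{s^2\, x^*(X^TX)^{-1}x^{*T}}$ against what predict() reports, and shows the residual spread shrinking as leverage grows:

    # Standard error of the fitted mean at a point, by hand vs predict()
    set.seed(1)
    n  <- 50
    x1 <- rnorm(n); x2 <- rnorm(n)
    y  <- 1 + 2 * x1 - x2 + rnorm(n)
    fit <- lm(y ~ x1 + x2)

    X     <- model.matrix(fit)        # design matrix, with the column of 1s
    xstar <- c(1, 0.5, -1)            # the point (1, x1*, x2*)
    s2    <- summary(fit)$sigma^2     # estimate of sigma^2
    sqrt(s2 * t(xstar) %*% solve(crossprod(X)) %*% xstar)   # by hand

    predict(fit, newdata = data.frame(x1 = 0.5, x2 = -1),
            se.fit = TRUE)$se.fit                           # same value

    # Residual variance is sigma^2 (1 - h_ii): higher leverage, smaller spread
    plot(hatvalues(fit), abs(residuals(fit)))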
Relationship between regressing Y on X, and X on Y in logistic regression
Yes, there is a similar relationship: for circumstances where it makes sense and where both variables are coded by $0$ and $1$ (the analog of standardization), the "slope" in the logistic regression of $Y$ against $X$ equals the slope in the logistic regression of $X$ against $Y$.

Recall that (univariate) logistic regression models a binary response $Y$ in terms of a variable $x$ and a constant, using two parameters $\beta_0$ and $\beta_1$, by stipulating that the chance of $Y$ equaling one of its values (generically termed "success") can be modeled by $$\mathbb O(Y=\text{success}) = \beta_0 + \beta_1 x$$ where "$\mathbb O$" refers to the log odds, equal to the logarithm of the odds $\Pr(\text{success}) / \Pr(\text{not success})$.

The only circumstance under which it makes sense to switch the roles of $Y$ and $x$, then, is when $x$ also is binary. That compels us to view its outcomes now as draws from a random variable $X$. The values of $Y$ must be encoded as fixed (nonrandom) values $1$ for "success" and $0$ otherwise. We might as well assume, then, that the encoding $1$="success" and $0$="not success" has been used all along for both variables.

Notice that the data in this situation can be considered a two-by-two contingency table in which the counts of all four possible combinations of $x$ and $y$ are displayed. Let the counts for $x=i$ and $y=j$ be written $n_{ij}$, for $i=0,1$ and $j=0,1$. The conventional estimator of the parameters is obtained by maximum likelihood, by finding values for which the gradient of the log likelihood equals zero. In the first case, viewing $Y$ as the dependent variable, the likelihood equations are $$\cases { 0 = n_{01} + n_{11} - \frac{n_{00}+n_{01}}{1+\exp(\beta_0)} - \frac{n_{10}+n_{11}}{1+\exp(\beta_0+\beta_1)} \\ 0 = n_{11} - \frac{n_{10} + n_{11}}{1+\exp(\beta_0+\beta_1)} }$$ When all the $n_{ij}\ne 0$ the solution is $$\cases{ \beta_0 = \log(n_{00}) - \log(n_{01}),\\ \beta_1 = \log(n_{01}) + \log(n_{10}) - \log(n_{00}) - \log(n_{11}).}$$

Switching the roles of the variables merely permutes the subscripts of the $n$'s (although now $\beta_0$ and $\beta_1$ have different meanings, for they multiply the $y$ values instead of the $x$ values). But the symmetry of the solution for $\beta_1$ shows that it remains unchanged. This is the "slope" term, and it is the perfect analog of the regression coefficient in ordinary least squares regression.

Example

Software will confirm this result. Here, for instance, are the results of the two logistic regressions in R using the following two-way table:

         Y=0  Y=1
    X=0:   1    3
    X=1:   2    4

Regressing $Y$ against $X$ gives $(\hat\beta_0, \hat\beta_1)$ = $(\log(1/3), \log(3/2))$ = $(-1.0986, 0.4055)$, while regressing $X$ against $Y$ gives $(\hat\beta_0, \hat\beta_1)$ = $(\log(1/2), \log(3/2))$ = $(-0.6931, 0.4055)$.

    y <- matrix(c(1,2,3,4), nrow=2)
    (fit <- glm(y ~ as.factor(0:1), family=binomial))
    (fit.t <- glm(t(y) ~ as.factor(0:1), family=binomial))

The output suggests that both the slopes and the null deviances remain the same upon switching $X$ and $Y$:

    Coefficients:
        (Intercept)  as.factor(0:1)1
            -1.0986           0.4055

    Degrees of Freedom: 1 Total (i.e. Null);  0 Residual
    Null Deviance:      0.08043
    Residual Deviance: 2.22e-16    AIC: 7.948

    Coefficients:
        (Intercept)  as.factor(0:1)1
            -0.6931           0.4055

    Degrees of Freedom: 1 Total (i.e. Null);  0 Residual
    Null Deviance:      0.08043
    Residual Deviance: 4.441e-16    AIC: 8.072
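A quick numeric check of the symmetric slope formula, using the counts from the table above (my addition, not part of the original answer):

    # beta1 = log(n01) + log(n10) - log(n00) - log(n11) is symmetric in X and Y
    n00 <- 1; n01 <- 3; n10 <- 2; n11 <- 4
    log((n01 * n10) / (n00 * n11))   # 0.4055, the slope from both fits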
Relationship between regressing Y on X, and X on Y in logistic regression
An important distinction here is that Pearson's product-moment correlation, the linear regression of $Y$ on $X$, and the linear regression of $X$ on $Y$ (assuming $X$ is continuous) are all linear models. On the other hand, logistic regression is a nonlinear model / an instance of the generalized linear model. If you were to regress a continuous $X$ variable onto a binary $Y$ variable, that would be a t-test [1]. (The t-test, in turn, is a special case of regression / the general linear model [2].) Using logistic regression to model a binary $Y$ is a different animal, because there is a nonlinear transformation between the left-hand side and the right-hand side of the equation, namely the link function (specifically, the logit [3]).

(I had wanted to address the case @whuber discusses, where $X$ consists of only two categories coded as $0$ and $1$, but didn't have time earlier, so I had to leave off. @whuber has done a good job with that topic, but I'll go ahead and explain it again, because I'll come at it from a slightly different direction, which may help some to understand it more easily, and I'll add one more detail.)

In this situation, your data consist of four counts: $n_{00}$ (the number of observations where $X=0,~Y=0$), $n_{01}$ (where $X=0,~Y=1$), $n_{10}$ (where $X=1,~Y=0$), and $n_{11}$ (where $X=1,~Y=1$). The thing to remember at this point is that logistic regression is linear in the log odds of the response, and when exponentiated, the intercept is the odds of success when $X=0$, and the slope is the odds ratio [4] associated with a one-unit change in $X$. Hence, $$ \exp(\hat\beta_{0;\text{ Y on X}})=\frac{n_{01}}{n_{00}}\quad\text{ and }\quad\exp(\hat\beta_{0;\text{ X on Y}})=\frac{n_{10}}{n_{00}} $$ Thus, the two intercepts will be equal if and only if $n_{01}=n_{10}$. In addition, $$ \exp(\hat\beta_{1;\text{ Y on X}})=\frac{\frac{n_{11}}{n_{10}}}{\frac{n_{01}}{n_{00}}}\quad\text{ and }\quad\exp(\hat\beta_{1;\text{ X on Y}})=\frac{\frac{n_{11}}{n_{01}}}{\frac{n_{10}}{n_{00}}} $$ But in both cases these equal $\frac{n_{11}}{n_{10}}\cdot\frac{n_{00}}{n_{01}}$, so the slopes must always be equal (as @whuber explained).

The subscripts that @whuber and I are using to index the $n$'s are switched around. Also, in @whuber's R example, he seems to be using Y=0 as success, whereas I would call Y=1 success. For example, note that for $\hat\beta_{0;\text{ Y on X}}$, he has $\log(1/3)$, whereas using my convention, $\exp(\hat\beta_{0;\text{ Y on X}})=3/1$. I duplicate @whuber's R example below; both work.

    y = c(0,0,0,1,1,1,1,1,1,1)
    x = c(0,1,1,0,0,0,1,1,1,1)
    t(table(y,x))  # these data are the same as @whuber's
       y
    x   0 1
      0 1 3    # using my conventions, exp(b0[YonX]) would be 3/1 = 3
      1 2 4    # using my conventions, exp(b0[XonY]) would be 2/1 = 2

    fit.YonX = glm(y~x, family=binomial(link="logit"))
    fit.XonY = glm(x~y, family=binomial(link="logit"))

    coef(fit.YonX)
    (Intercept)           x
      1.0986123  -0.4054651
    exp(1.0986123)
    [1] 3

    coef(fit.XonY)
    (Intercept)           y
      0.6931472  -0.4054651
    exp(0.6931472)
    [1] 2

Footnotes:

[1] Strictly speaking, running a t-test 'in the other direction' wouldn't quite be a logistic regression. A stronger analogy would be Fisher's linear discriminant analysis. That's because in logistic regression there is no assumption about the distribution of $X$, but LDA does assume $X$ is normally distributed, and the t-test likewise assumes the residuals are. Nonetheless, given that we're starting from logistic regression, for a quick way to think about what you're doing if you were to switch (a continuous) $X$ and $Y$, calling it a t-test is close enough.

[2] For help with understanding how the t-test is a special case of regression, see here: How are regression, the t-test, and the ANOVA all versions of the general linear model?

[3] For help with understanding link functions and the logit transformation, it may help to read my answer here: Difference between logit and probit models, although it was written in a different context.

[4] For more about logistic regression and how it's related to odds and odds ratios, see my answer here: interpretation of simple predictions to odds ratios in logistic regression.
How to interpret the model fit indices generated by lavaan (in R)? Something wrong with the model specifications?
It appears that this is a model where (almost) everything is regressed on everything else. You have 5 variables in your model, which means you have 10 covariances, and you have 10 parameters. The df of the model is equal to (number of covariances) - (number of parameters), which here is zero. The model is described as saturated, and it's not testing anything. Because it's not testing anything, the fit indices are all perfect. (This will make sense if you look at the formulas for the fit indices - a zero chi-square should give you these fit indices.)

What do you mean by simulate a model?

If you don't want the fit to be perfect, add some constraints. Typically, one constrains parameters to zero. So yes, it has to do with the model specification. It's an unusual model to test with an SEM, but if that's the model you want to test, that's your model.

If you want to make it more testable, you need to add a variable which is a possible cause of one variable, but not of the others. For example, social support might influence stress, but should not (directly) influence illness, and perhaps not the others. If you add social support, and put an arrow from social support ONLY to stress, you will add 6 covariances to the model but only 1 parameter. Hence your model will have 5 df, and the fit will no longer be perfect. A sketch of the saturated situation is below.
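To illustrate, here is a hedged lavaan sketch (my own; the variable names are hypothetical and the data set is assumed to exist). A model that regresses everything on everything else leaves no sample moment constrained, so df = 0 and the fit indices come out perfect:

    library(lavaan)

    # Saturated specification for 5 observed variables: every directed path
    # among them is free, so nothing is constrained and df = 0
    sat.model <- '
      illness  ~ stress + coping + exercise + diet
      stress   ~ coping + exercise + diet
      coping   ~ exercise + diet
      exercise ~ diet
    '
    # fit <- sem(sat.model, data = mydata)                 # mydata assumed to exist
    # fitMeasures(fit, c("df", "chisq", "cfi", "rmsea"))   # df = 0, perfect fit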
Power for experimental design
First, turn your ANOVA into a regression. Then $$ N = \frac{2.84^2}{p(1-p)}\frac{\sigma^2}{MDE^2} $$ where $N$ is your sample size, $p$ is the proportion getting the treatment, $\sigma$ is the standard deviation of the residuals, and $MDE$ is the minimum detectable effect that you are powered for. The 2.84 comes from a 95% confidence level and 80% power when you've got a lot of degrees of freedom; with fewer degrees of freedom, you'll need to use the t distribution (look at the reference for details).

The more explanatory factors you've got in your experiment, the more $\sigma$ will shrink. Knowing how much it will shrink is tricky. At some point, analytical formulas collapse, and you're better off simply simulating your entire dataset multiple times, doing some sort of Monte Carlo, and fitting the desired model to each of the plausible datasets. Your power is the proportion of times you get the result you want. As with the analytical methods, you're only as good as your assumptions.

Source is from memory, roughly following this.
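For reference, the formula is easy to evaluate in R (a sketch of mine, not from the cited source; it takes the 2.84 multiplier at face value):

    # Required N, treating 2.84 as the z-multiplier quoted above
    n.required <- function(mde, sigma, p = 0.5, mult = 2.84) {
      (mult^2 / (p * (1 - p))) * (sigma^2 / mde^2)
    }
    n.required(mde = 0.2, sigma = 1)   # 806.56: about 807 with half the sample treated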
Power for experimental design
Once beyond the simple cases like t-tests, I prefer to use simulations. When you do a simulation, you control all the assumptions being made, and you can simulate situations that may not be covered by the nice canned routines. Here is an answer with an example of a simulation: Simulation 1
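As a minimal sketch of the idea (mine, for a simple two-group design rather than the linked example): simulate data under the assumed effect many times, fit the model each time, and take the proportion of significant results as the power estimate.

    # Estimated power for a two-group comparison via simulation
    power.sim <- function(n.per.group, effect, sd = 1, nsim = 2000, alpha = 0.05) {
      p.values <- replicate(nsim, {
        g1 <- rnorm(n.per.group, mean = 0, sd = sd)
        g2 <- rnorm(n.per.group, mean = effect, sd = sd)
        t.test(g1, g2)$p.value
      })
      mean(p.values < alpha)   # proportion of rejections = estimated power
    }
    power.sim(n.per.group = 100, effect = 0.4)   # roughly 0.80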
What happens to the constant in Least Squares
Often in textbooks and the literature, the authors implicitly add a constant column of 1s to the data matrix $X$. If your original vector $x$ looks like $$ x = \begin{bmatrix} \frac{5}{6} \\ \frac{1}{6} \\ 3 \end{bmatrix} $$ then your augmented $X$ is $$ X = \begin{bmatrix} \frac{5}{6} & 1 \\ \frac{1}{6} & 1 \\ 3 & 1 \end{bmatrix} $$ Thus your $w$ vector is really a 2d vector: the first component represents the coefficient of $x$, and the second component represents the coefficient of the constant 1 (in your example, this is denoted $c$). If we denote $w = [w_1, w_0]$, then your optimal least squares line is given by $$ y = w_1 x + w_0$$
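A short R sketch of this (my own; the response vector y is invented purely for illustration):

    # Augmenting x with a column of ones reproduces lm()'s intercept + slope
    x <- c(5/6, 1/6, 3)
    y <- c(1, 2, 4)                   # hypothetical response values
    X <- cbind(x, 1)                  # augmented design matrix
    solve(t(X) %*% X, t(X) %*% y)     # w = (w1, w0): slope, then intercept
    coef(lm(y ~ x))                   # same numbers, intercept first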
Summary statistics of the precision-recall curve
The "Mean Average Precision" (sometimes abbreviated mAP or MAP) might be what you want. It's pretty commonly used for evaluating information retrieval systems and is fairly straightforward to compute. First, calculate the average precision for a given query. To do this, rank the documents and compute the precision after retrieving each relevant document. For example, suppose that four documents are relevant to this query, and our system returned the following: Relevant document Irrelevant document Relevant document Relevant document Irrelevant document Irrelevant document. Relevant document The first relevant document is at position one, and the precision there is 1/1 = 1.0 The next relevant document is at position 3; two of the three documents seen so far are relevant, so our precision here is 2/3. Document 4 is relevant too and the precision score here is 3/4. The final relevant item is at position seven, giving us a precision of 4/7. Find the mean of these precision scores (1/4*(1 + 2/3 + 3/4 + 4/7) = ~0.747) to get the average precision for this query. The mean average precision is just the mean of these averages across all the queries in your evaluation set. As for choosing a precision-recall trade off, that's largely up to you. The $F_1$ score gives them equal weight; you can interpret the $\beta$ in $F_\beta$ as giving $\beta$ times more weight to recall than precision. I believe that some studies indicate that users prefer precision to recall, but I would bet that it depends a lot on the application and use-case. I certainly don't need google to show me every webpage about cats, but do want all the sites on the first page to be relevant. On the flip side, it might be more important to return every possibly-relevant document if you're doing discovery for a court case.
Summary statistics of the precision-recall curve
Actually, there is an AUC measure for the PR curve; it is used in biology (especially around the DREAM challenge series) because it is consistent with AUROC (i.e., the ranking of methods is usually the same when performance differs significantly) while giving better numerical resolution, since its values are lower than AUROC's. The problem is that AUPR requires careful integration, so it is pretty hard to find a correct implementation. This is a canonical paper about the topic.
Summary statistics of the precision-recall curve
You can calculate the AUC of the ROC for just a single (precision, recall) data point. This paper, Robust classification for imprecise environments, describes how to calculate the convex hull AUC (which is pretty standard now). When you have only one point, you extend a straight line down to the always-say-no (0,0) point and a straight line up to the always-say-yes (1,1) point, and you have the convex hull. Now the neat result: in this case, with only one point, the calculation simplifies to $AUC = (t - f + 1)/2$, where $t$ is the true positive rate and $f$ is the false positive rate at that point. This emphasises the connection between AUC and the Gini coefficient, remarked on elsewhere.
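A quick check of that simplification (my sketch, with made-up rates): the area under the two-segment hull, computed by the trapezoid rule, matches the closed form.

    # Convex hull through (0,0), (f,t), (1,1): trapezoid area vs (t - f + 1)/2
    t.rate <- 0.8; f.rate <- 0.3                      # hypothetical rates
    x <- c(0, f.rate, 1); y <- c(0, t.rate, 1)
    sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)    # trapezoid rule: 0.75
    (t.rate - f.rate + 1) / 2                         # closed form:    0.75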
Thinning chains in BUGS/JAGS
Short answer: the number of iterations incorporates the burn-in and does not incorporate thinning.

Less short answer: if you were to run a BUGS model through R2WinBUGS or R2OpenBUGS (or view a summary of WinBUGS output) with the arguments you stated (n.iter=5000, n.burnin=5000, n.thin=2), you would get an error message/no output. n.iter refers to the total number of iterations including the burn-in, hence all your iterations are burn-in and are thrown away (not included in the CODA output or any ACF plot in WinBUGS).

Thinning is treated differently (in relation to n.iter). For example, if you set your MCMC up with any of the following arguments:

    n.iter=6000, n.burnin=5000, n.thin=1
    n.iter=6000, n.burnin=5000, n.thin=5
    n.iter=6000, n.burnin=5000, n.thin=10

only 1000 iterations will be saved, i.e. all non-thinned simulations are discarded (in the CODA output or any ACF plot in WinBUGS). Not sure if this is the same for JAGS?
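For reference, this is roughly what that argument pattern looks like in an R2WinBUGS call (a sketch only; my.data, my.inits, "theta", and the model file are placeholders of mine, not from the original answer):

    library(R2WinBUGS)
    # n.iter counts total iterations *including* burn-in, so this run has
    # 1000 post-burn-in iterations before thinning is applied
    fit <- bugs(data = my.data, inits = my.inits,
                parameters.to.save = c("theta"),
                model.file = "model.txt", n.chains = 3,
                n.iter = 6000, n.burnin = 5000, n.thin = 5)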
Thinning chains in BUGS/JAGS
The other answer is correct about BUGS, but it does not apply to JAGS (at least, not to rjags; R2jags might be different). I haven't used JAGS directly, but the writer of rjags is the creator of JAGS, so I would guess they use the same convention.

In rjags, the jags.model object keeps track of the number of iterations that the chain(s) have been run. Here is a small model in a file "tmpJags":

    model {
      X ~ dnorm(Y, 1)
      Y ~ dt(0, 1, 1)
    }

Then I run

    X <- 1
    jm <- jags.model(file = "tmpJags", data = list(X = X),
                     n.chains = 4, n.adapt = 1000)

jm consists of 4 chains, each of which has been run for 1000 total iterations. Then I do

    samps <- coda.samples(jm, "Y", n.iter = 1000, thin = 10)

Now all 4 of my chains have been run for a total of 2000 iterations each, and I have collected 100 samples from each chain, so 400 samples are saved in total. To me, it makes sense to do it this way, because for the purposes of monitoring chain convergence you would rather think in terms of total iterations than iterations after thinning.
Thinning chains in BUGS/JAGS
The other answer is correct about BUGS, but it does not apply to JAGS (at least, not to rjags; R2jags might be different). I haven't used JAGS directly, but the writer of rjags is the creator of
Thinning chains in BUGS/JAGS The other answer is correct about BUGS, but it does not apply to JAGS (at least, not to rjags; R2jags might be different). I haven't used JAGS directly, but the writer of rjags is the creator of JAGS, so I would guess they use the same convention. In rjags, the jags.model object keeps track of the number of iterations for which the chain(s) have been run. Here is a small model in a file "tmpJags": model { X ~ dnorm(Y, 1) Y ~ dt(0, 1, 1) } Then I run X <- 1 jm <- jags.model(file = "tmpJags", data = list(X = X), n.chains = 4, n.adapt = 1000) jm consists of 4 chains, each of which has been run for 1000 total iterations. Then I do samps <- coda.samples(jm, "Y", n.iter = 1000, thin = 10) Now all 4 of my chains have been run for a total of 2000 iterations each, and I have collected 100 samples from each chain, so that 400 samples are saved in total. To me, it makes sense to do it this way because for the purposes of monitoring chain convergence you would rather think in terms of total iterations than iterations after thinning.
Thinning chains in BUGS/JAGS The other answer is correct about BUGS, but it does not apply to JAGS (at least, not to rjags; R2jags might be different). I haven't used JAGS directly, but the writer of rjags is the creator of
45,115
Computing c-index for an external validation of a Cox PH model with R
I just got an explanation from a colleague about how to use this feature. In the help page for rcorr.cens(), it states that x is a "numeric predictor variable". I thought that this meant it had to be a model variable like Age, Stage, Metastasis, etc. What I found out is that x can just be a vector of your model's survival estimates for an external data set. Therefore the only two things rcorr.cens() needs are that vector of survival estimates and a Surv() object. Using my code from above, this is how you use it: library(rms) surv.obj=with(veteran,Surv(time,status)) ####This will be used for rcorr.cens cox.mod=cph(surv.obj~celltype+karno,data=veteran,x=T,y=T,surv=TRUE,time.inc=5*365) ##Here is the test data set that is the external "independent" data. test_dat=data.frame(trt=replicate(500,NA), celltype=replicate(500,NA), time=replicate(500,NA), status=replicate(500,NA), karno=replicate(500,NA), diagtime=replicate(500,NA), age=replicate(500,NA), prior=replicate(500,NA)) for(i in seq(8)){ test_dat[,i]=sample(veteran[,i],500,replace=T) } ###Create your survival estimates estimates=survest(cox.mod,newdata=test_dat,times=5*365)$surv ###Determine concordance rcorr.cens(x=estimates,S=surv.obj) I hope this helps anyone in the future who has the same question!
Computing c-index for an external validation of a Cox PH model with R
I just got an explanation from a colleague about how to use this feature. In the help page for rcorr.cens(), it states that x is a "numeric predictor variable". I thought that this meant it had to b
Computing c-index for an external validation of a Cox PH model with R I just got an explanation from a colleague about how to use this feature. In the help page for rcorr.cens(), it states that x is a "numeric predictor variable". I thought that this meant it had to be a model variable like Age, Stage, Metastasis, etc. What I found out is that x can just be a vector of your model's survival estimates for an external data set. Therefore the only two things rcorr.cens() needs are that vector of survival estimates and a Surv() object. Using my code from above, this is how you use it: library(rms) surv.obj=with(veteran,Surv(time,status)) ####This will be used for rcorr.cens cox.mod=cph(surv.obj~celltype+karno,data=veteran,x=T,y=T,surv=TRUE,time.inc=5*365) ##Here is the test data set that is the external "independent" data. test_dat=data.frame(trt=replicate(500,NA), celltype=replicate(500,NA), time=replicate(500,NA), status=replicate(500,NA), karno=replicate(500,NA), diagtime=replicate(500,NA), age=replicate(500,NA), prior=replicate(500,NA)) for(i in seq(8)){ test_dat[,i]=sample(veteran[,i],500,replace=T) } ###Create your survival estimates estimates=survest(cox.mod,newdata=test_dat,times=5*365)$surv ###Determine concordance rcorr.cens(x=estimates,S=surv.obj) I hope this helps anyone in the future who has the same question!
Computing c-index for an external validation of a Cox PH model with R I just got an explanation from a colleague about how to use this feature. In the help page for rcorr.cens(), it states that x is a "numeric predictor variable". I thought that this meant it had to b
45,116
Computing c-index for an external validation of a Cox PH model with R
There is a package that is a component of Bioconductor which can help you calculate the c-index: survcomp. If you don't include survival data, then cindex in survcomp is basically the same as the AUC you get from the ROC curve.
Computing c-index for an external validation of a Cox PH model with R
There is a package that is a component of Bioconductor which can help you calculate the c-index: survcomp. If you don't include survival data, then cindex in survcomp is basically the same as the AUC
Computing c-index for an external validation of a Cox PH model with R There is a package that is a component of Bioconductor which can help you calculate the c-index: survcomp. If you don't include survival data, then cindex in survcomp is basically the same as the AUC you get from the ROC curve.
Computing c-index for an external validation of a Cox PH model with R There is a package that is a component of Bioconductor which can help you calculate the c-index: survcomp. If you don't include survival data, then cindex in survcomp is basically the same as the AUC
45,117
Computing c-index for an external validation of a Cox PH model with R
I think the code provided by @JJM could work with some changes. The point raised by @Seanosapien could be addressed by the following edited code. library(rms) surv.obj=with(veteran,Surv(time,status)) ####This will NOT be used for rcorr.cens cox.mod=cph(surv.obj~celltype+karno,data=veteran,x=T,y=T,surv=TRUE,time.inc=5*365) ##Here is the test data set that is the external "independent" data. test_dat=data.frame(trt=replicate(500,NA), celltype=replicate(500,NA), time=replicate(500,NA), status=replicate(500,NA), karno=replicate(500,NA), diagtime=replicate(500,NA), age=replicate(500,NA), prior=replicate(500,NA)) for(i in seq(8)){ test_dat[,i]=sample(veteran[,i],500,replace=T) } Surv.obj_test=with(test_dat,Surv(time,status)) #This will be used for rcorr.cens ###Create your survival estimates estimates=survest(cox.mod,newdata=test_dat,times=5*365)$surv ###Determine concordance rcorr.cens(x=estimates,S=Surv.obj_test) The change I have made is that a Surv() object is created for the test data, Surv.obj_test. This allows rcorr.cens to compare the estimates with the test data test_dat. The survest call produces the survival estimates from the model cox.mod, and these estimates are compared against the "time" and "status" variables of test_dat using rcorr.cens.
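If it helps, rcorr.cens() returns a named vector, so the concordance can be pulled out directly; a short sketch continuing the code above (the element names follow Hmisc's conventions):

res <- rcorr.cens(x = estimates, S = Surv.obj_test)
res["C Index"]   # the concordance statistic
res["Dxy"]       # Somers' Dxy, equal to 2*(C - 0.5)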
Computing c-index for an external validation of a Cox PH model with R
I think the code provided by @JJM could work with some changes. The point raised by @Seanosapien could be addressed by the following edited code. library(rms) surv.obj=with(veteran,Surv(time,status)) ####T
Computing c-index for an external validation of a Cox PH model with R I think the code provided by @JJM could work with some changes. The point raised by @Seanosapien could be addressed by the following edited code. library(rms) surv.obj=with(veteran,Surv(time,status)) ####This will NOT be used for rcorr.cens cox.mod=cph(surv.obj~celltype+karno,data=veteran,x=T,y=T,surv=TRUE,time.inc=5*365) ##Here is the test data set that is the external "independent" data. test_dat=data.frame(trt=replicate(500,NA), celltype=replicate(500,NA), time=replicate(500,NA), status=replicate(500,NA), karno=replicate(500,NA), diagtime=replicate(500,NA), age=replicate(500,NA), prior=replicate(500,NA)) for(i in seq(8)){ test_dat[,i]=sample(veteran[,i],500,replace=T) } Surv.obj_test=with(test_dat,Surv(time,status)) #This will be used for rcorr.cens ###Create your survival estimates estimates=survest(cox.mod,newdata=test_dat,times=5*365)$surv ###Determine concordance rcorr.cens(x=estimates,S=Surv.obj_test) The change I have made is that a Surv() object is created for the test data, Surv.obj_test. This allows rcorr.cens to compare the estimates with the test data test_dat. The survest call produces the survival estimates from the model cox.mod, and these estimates are compared against the "time" and "status" variables of test_dat using rcorr.cens.
Computing c-index for an external validation of a Cox PH model with R I think the code provided by @JJM could work with some changes. The point raised by @Seanosapien could be addressed by the following edited code. library(rms) surv.obj=with(veteran,Surv(time,status)) ####T
45,118
Mean survival time of a Weibull distribution
According to the mean you give, you use the following parametrisation for the Weibull distribution: $$ \textrm{if }X\sim \textrm{Weibull}(\lambda, \alpha) \textrm{ then } f_X(x) = \lambda \alpha x^{\alpha - 1} \exp(-\lambda x^\alpha), $$ with $\lambda > 0$ a scale parameter, and $\alpha > 0$ a shape parameter. dweibull() from R, as well as Wikipedia, uses another parametrisation. The conversion is as follows: $$ \textrm{shape} = \alpha \quad \textrm{and} \quad \textrm{scale} = \left(\frac{1}{\lambda} \right)^{\tfrac{1}{\alpha}}, $$ where $\textrm{shape}$ and $\textrm{scale}$ are those given in dweibull() and Wikipedia. Let $\mathbf{x}'\mathbf{\beta} = x_1\beta_1 + x_2\beta_2 + \dotsb$ be the linear predictor. Assuming a proportional hazards structure and a $\textrm{Weibull}(\lambda, \alpha)$ distribution at baseline, the hazard rate is written \begin{align*} h(t) & = h_0(t) \exp(\mathbf{x}'\mathbf{\beta}) \\ & = \lambda \alpha t^{\alpha - 1} \exp(\mathbf{x}'\mathbf{\beta}). \end{align*} The corresponding pdf is $$ f(t) = \lambda \alpha t^{\alpha - 1} \exp(\mathbf{x}'\mathbf{\beta}) \exp \left( - \lambda t^\alpha \exp(\mathbf{x}'\mathbf{\beta}) \right). $$ That is, $T$ has a Weibull distribution with the same shape $\alpha$ but the scale parameter is changed from $\lambda$ to $\lambda \exp(\mathbf{x}'\mathbf{\beta})$: $$ T \sim \textrm{Weibull}(\lambda \exp(\mathbf{x}'\mathbf{\beta}), \alpha) $$ and we have $$ E[T] = \frac{\Gamma(1 + \tfrac{1}{\alpha})}{\left(\lambda\exp(\mathbf{x}'\mathbf{\beta})\right)^{\tfrac{1}{\alpha}}}. $$ An example without covariates: > #------ scale and shape parameters in your parametrisation ------ > lambda <- 3 > alpha <- 0.88 > #---------------------------------------------------------------- > > #------ conversion ------ > shape <- alpha > scale <- (1 / lambda)^(1 / alpha) > #------------------------ > > #------ some data ------ > T <- rweibull(n=10000, shape=shape, scale=scale) > #----------------------- > > #------ theoretical and empirical means ------ > gamma(1 + 1 / alpha) / (lambda^(1 / alpha)) [1] 0.305765 > mean(T) [1] 0.3026293 > #---------------------------------------------
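Extending the example to include a covariate (a sketch; the coefficient beta and covariate value x are made up purely for illustration, and the scale conversion is the one derived above):

lambda <- 3; alpha <- 0.88
beta <- 0.5; x <- 1                           # hypothetical coefficient and covariate
lambda.x <- lambda * exp(x * beta)            # scale under the proportional hazards model
gamma(1 + 1 / alpha) / lambda.x^(1 / alpha)   # theoretical mean E[T]
T <- rweibull(n = 100000, shape = alpha, scale = (1 / lambda.x)^(1 / alpha))
mean(T)                                       # empirical check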
Mean survival time of a Weibull distribution
According to the mean you give, you use the following parametrisation for the Weibull distribution: $$ \textrm{if }X\sim \textrm{Weibull}(\lambda, \alpha) \textrm{ then } f_X(x) = \lambda \alpha x^{\a
Mean survival time of a Weibull distribution According to the mean you give, you use the following parametrisation for the Weibull distribution: $$ \textrm{if }X\sim \textrm{Weibull}(\lambda, \alpha) \textrm{ then } f_X(x) = \lambda \alpha x^{\alpha - 1} \exp(-\lambda x^\alpha), $$ with $\lambda > 0$ a scale parameter, and $\alpha > 0$ a shape parameter. dweibull() from R, as well as Wikipedia, uses another parametrisation. The conversion is as follows: $$ \textrm{shape} = \alpha \quad \textrm{and} \quad \textrm{scale} = \left(\frac{1}{\lambda} \right)^{\tfrac{1}{\alpha}}, $$ where $\textrm{shape}$ and $\textrm{scale}$ are those given in dweibull() and Wikipedia. Let $\mathbf{x}'\mathbf{\beta} = x_1\beta_1 + x_2\beta_2 + \dotsb$ be the linear predictor. Assuming a proportional hazards structure and a $\textrm{Weibull}(\lambda, \alpha)$ distribution at baseline, the hazard rate is written \begin{align*} h(t) & = h_0(t) \exp(\mathbf{x}'\mathbf{\beta}) \\ & = \lambda \alpha t^{\alpha - 1} \exp(\mathbf{x}'\mathbf{\beta}). \end{align*} The corresponding pdf is $$ f(t) = \lambda \alpha t^{\alpha - 1} \exp(\mathbf{x}'\mathbf{\beta}) \exp \left( - \lambda t^\alpha \exp(\mathbf{x}'\mathbf{\beta}) \right). $$ That is, $T$ has a Weibull distribution with the same shape $\alpha$ but the scale parameter is changed from $\lambda$ to $\lambda \exp(\mathbf{x}'\mathbf{\beta})$: $$ T \sim \textrm{Weibull}(\lambda \exp(\mathbf{x}'\mathbf{\beta}), \alpha) $$ and we have $$ E[T] = \frac{\Gamma(1 + \tfrac{1}{\alpha})}{\left(\lambda\exp(\mathbf{x}'\mathbf{\beta})\right)^{\tfrac{1}{\alpha}}}. $$ An example without covariates: > #------ scale and shape parameters in your parametrisation ------ > lambda <- 3 > alpha <- 0.88 > #---------------------------------------------------------------- > > #------ conversion ------ > shape <- alpha > scale <- (1 / lambda)^(1 / alpha) > #------------------------ > > #------ some data ------ > T <- rweibull(n=10000, shape=shape, scale=scale) > #----------------------- > > #------ theoretical and empirical means ------ > gamma(1 + 1 / alpha) / (lambda^(1 / alpha)) [1] 0.305765 > mean(T) [1] 0.3026293 > #---------------------------------------------
Mean survival time of a Weibull distribution According to the mean you give, you use the following parametrisation for the Weibull distribution: $$ \textrm{if }X\sim \textrm{Weibull}(\lambda, \alpha) \textrm{ then } f_X(x) = \lambda \alpha x^{\a
45,119
Eigenvectors of a covariance matrix with only positive elements
With your new information, that all the components of the positive-definite matrix are positive, it becomes easy. While it follows directly from the Perron-Frobenius theorem (which is valid for square matrices with non-negative elements, symmetric or not), in the symmetric case it is much easier. Let the positive-definite matrix be $S$. The eigenvector corresponding to the largest eigenvalue is the vector $x$ obtaining the maximum in the following problem: $$ \lambda_{\mathrm{max}} = \mathrm{max}_{\{x \colon \| x\|=1\}} x^T S x $$(that is, the "argmax") where $\lambda_{\text{max}}$ is the largest eigenvalue. Suppose, to get a contradiction, that $x_1$ is negative, while the other components of $x$ are non-negative. We can write $$ x^T S x = x_1 s_{11} x_1+2x_1 \sum_{j=2}^m s_{1j} x_j + \sum_{i=2}^m \sum_{j=2}^m x_i s_{ij} x_j $$ Note that the first and third terms are positive while the second term is negative, and we can get a strictly larger value by switching the sign of $x_1$, which respects the restriction on norm. That gives the contradiction you need. A similar argument can be written for any other pattern of negative/positive signs.
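A quick numerical illustration of the sign-flipping step (a sketch; S is just an arbitrary symmetric matrix with all-positive entries, built for the purpose):

set.seed(1)
A <- matrix(runif(9, 0.1, 1), 3, 3)
S <- A %*% t(A)                                  # symmetric, positive definite, all entries positive
x <- c(-0.5, 0.6, 0.6); x <- x / sqrt(sum(x^2))  # unit vector with one negative component
f <- abs(x)                                      # flip the sign of the negative component
t(x) %*% S %*% x                                 # strictly smaller than ...
t(f) %*% S %*% f                                 # ... the quadratic form after the flip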
Eigenvectors of a covariance matrix with only positive elements
With your new information, that all the components of the positive-definite matrix are positive, it becomes easy. While it follows directly from the Perron-Frobenius theorem (which is valid for square
Eigenvectors of a covariance matrix with only positive elements With your new information, that all the components of the positive-definite matrix are positive, it becomes easy. While it follows directly from the Perron-Frobenius theorem (which is valid for square matrices with non-negative elements, symmetric or not), in the symmetric case it is much easier. Let the positive-definite matrix be $S$. The eigenvector corresponding to the largest eigenvalue is the vector $x$ obtaining the maximum in the following problem: $$ \lambda_{\mathrm{max}} = \mathrm{max}_{\{x \colon \| x\|=1\}} x^T S x $$(that is, the "argmax") where $\lambda_{\text{max}}$ is the largest eigenvalue. Suppose, to get a contradiction, that $x_1$ is negative, while the other components of $x$ are non-negative. We can write $$ x^T S x = x_1 s_{11} x_1+2x_1 \sum_{j=2}^m s_{1j} x_j + \sum_{i=2}^m \sum_{j=2}^m x_i s_{ij} x_j $$ Note that the first and third terms are positive while the second term is negative, and we can get a strictly larger value by switching the sign of $x_1$, which respects the restriction on norm. That gives the contradiction you need. A similar argument can be written for any other pattern of negative/positive signs.
Eigenvectors of a covariance matrix with only positive elements With your new information, that all the components of the positive-definite matrix are positive, it becomes easy. While it follows directly from the Perron-Frobenius theorem (which is valid for square
45,120
Eigenvectors of a covariance matrix with only positive elements
If the signs of the coefficients are all of the same sign it indicates that they are all measuring the first component in the same direction. If you reverse one of the variables (e.g. by multiplying it by -1) you ought to get something different. The signs of the first component do not all need to be positive; indeed, when I run this example from R prcomp prcomp(USArrests, scale = TRUE) all the signs are negative (but that could be different when you run it) If I then run prcomp(~ Murder + Assault + Rape, data = USArrests, scale = TRUE) all the signs are again negative on PC1 but if I modify it: attach(USArrests) rapeinv <- Rape*-1 prcomp(~ Murder + Assault + rapeinv, scale = TRUE) then two signs are negative but the one for rapeinv is positive. Often, in practice (as here) the first component is capturing something general about the data (here, crime rate) so it is frequently the case that the first PC will have all its signs the same, but it is not necessarily the case.
Eigenvectors of a covariance matrix with only positive elements
If the signs of the coefficients are all of the same sign it indicates that they are all measuring the first component in the same direction. If you reverse one of the variables (e.g. by multiplying i
Eigenvectors of a covariance matrix with only positive elements If the signs of the coefficients are all of the same sign it indicates that they are all measuring the first component in the same direction. If you reverse one of the variables (e.g. by multiplying it by -1) you ought to get something different. The signs of the first component do not all need to be positive; indeed, when I run this example from R prcomp prcomp(USArrests, scale = TRUE) all the signs are negative (but that could be different when you run it) If I then run prcomp(~ Murder + Assault + Rape, data = USArrests, scale = TRUE) all the signs are again negative on PC1 but if I modify it: attach(USArrests) rapeinv <- Rape*-1 prcomp(~ Murder + Assault + rapeinv, scale = TRUE) then two signs are negative but the one for rapeinv is positive. Often, in practice (as here) the first component is capturing something general about the data (here, crime rate) so it is frequently the case that the first PC will have all its signs the same, but it is not necessarily the case.
Eigenvectors of a covariance matrix with only positive elements If the signs of the coefficients are all of the same sign it indicates that they are all measuring the first component in the same direction. If you reverse one of the variables (e.g. by multiplying i
45,121
Eigenvectors of a covariance matrix with only positive elements
The solutions to the two questions follow in a straightforward manner from the definitions, but some care is needed in the analysis. I offer this post to fill in some gaps in the previous ones, to make the solution self-contained (without relying on any advanced or specialized theorems), and to provide a solution to the second question, which so far has not been offered. Let $\mathbb A$ be the covariance matrix, of dimensions $n$ by $n$. It is symmetric and positive-definite by assumption. Therefore (these are standard results in the study of such matrices) there exists a basis of $n$ nonzero eigenvectors $E=(e_1, e_2, \ldots, e_n)$ for which The $e_i$ all have real (not merely complex) coefficients; $\mathbb A e_i = \lambda_i e_i$ for non-negative real (not merely complex) numbers $\lambda_i$, the eigenvalues; We may therefore order the eigenvectors so that $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \gt 0$; The eigenvectors are mutually orthogonal: $e_i^\prime e_j = 0$ whenever $i\ne j$; and We may normalize the eigenvectors (by dividing each one by $\sqrt{e_i^\prime e_i}$ if necessary) to make them all of unit length. These are the basic facts worth remembering, because they (greatly) simplify our understanding and analysis of such matrices, which are ubiquitous in statistical theory and practice. The remainder of this post exploits these properties to address the two questions. Because $E$ is a basis, any arbitrary vector $x$ has a unique expansion as a linear combination of eigenvectors, $$x = x_1 e_1 + x_2 e_2 + \cdots + x_n e_n$$ for real numbers $x_1, x_2, \ldots, x_n$ determined by $x$. Facts (4) and (5) let us calculate that $$|x|^2 = x_1^2 + x_2^2 + \cdots + x_n^2$$ and property (2) implies $$|\mathbb A x|^2 = \lambda_1^2 x_1^2 + \lambda_2^2 x_2^2 + \cdots + \lambda_n^2 x_n^2.$$ It is clear--and is an easily proven elementary inequality--that when $|x|^2=1$, the latter is maximized when $x_j=0$ for all $j$ where $\lambda_j \lt \lambda_1$. (Provided $\lambda_1$ is unique-- that is, $\lambda_i \lt \lambda_1$ for $i=2, 3, \ldots, n$--there are exactly two solutions: $x_1=\pm 1$ (and $x_i=0$ for $i=2, 3, \ldots, n$), whence $x = \pm e_1$.) The first question concerns the original coordinates in which the matrix and the eigenvectors were originally written. Writing $e_1 = (e_{11}, e_{12}, \ldots, e_{1n})$ in those coordinates, suppose there exist indexes $j$ for which $e_{1j}\lt 0$. Let $f$ be the vector obtained by negating all such $e_{1j}$. Because $e_{1j}^2 = (-e_{1j})^2$, this does not change the norm, whence $|f|=1$ (by fact (5)). However, this process increases $|\mathbb A f|$ relative to $|\mathbb A e_1|$ because--by assumption--multiplication by $\mathbb A$ consists of taking linear combinations with positive coefficients and the change from $e_1$ to $f$ has actually turned what were subtractions of positive values into additions of positive values. Since $|\mathbb A e_1|$ was maximal, we conclude that $|\mathbb A f|$ is maximal and $f$ is an eigenvector with eigenvalue $\lambda_1$. We may therefore take $e_1$ to be $\pm f$, but either way all its components will have the same sign. As an (important) aside, note that all the components of $e_1$ must be positive: none can be zero. This is because (a) $e_1$ is nonzero, whence it has at least one nonzero component and (b) in computing the product $\mathbb A e_1 = \lambda_1 e_1$ (fact (2)) all the products being added up are sums of nonnegative numbers and at least one (obtained from a nonzero component of $e_1$) is nonzero.
That shows all the components of $\lambda_1 e_1$ are nonzero, but since $\lambda_1 \gt 0$ (fact (3)), all components of $e_1$ must be nonzero, too. The second question asserts that the remaining eigenvectors, $e_2, e_3, \ldots, e_n$, must have some negative components when written in the original basis. Consider one of them, say $e_j$ and write it as $e_j = (e_{j1}, e_{j2}, \ldots, e_{jn})$ in the original basis. Then from fact (4) $e_j$ is orthogonal to $e_1$: $$0 = e_1^\prime e_j = e_{11}e_{j1} + e_{12}e_{j2} + \cdots + e_{1n}e_{jn}.$$ Since--as we showed in the aside--all the $e_{1i} \gt 0$, the only way this linear combination can equal zero is for at least one $e_{ji} \lt 0$. A more delicate version of these results can be obtained when the components of $\mathbb A$ are merely assumed to be nonnegative. What changes is that the first principal component may have zeros (as originally expressed) and some of the other principal components may also have entirely nonnegative entries, too. As an example, take $\mathbb A$ to be the $2\times 2$ identity matrix and let the first two principal components (which are not unique!) be $e_1=(1,0)$ and $e_2=(0,1)$.
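A numerical check of both conclusions, as a sketch (the matrix below is arbitrary apart from being symmetric, positive definite, and having all-positive entries):

set.seed(2)
A <- matrix(runif(25, 0.1, 1), 5, 5)
S <- A %*% t(A)                        # symmetric PD with all-positive entries
E <- eigen(S)$vectors
length(unique(sign(E[, 1]))) == 1      # first eigenvector: components share one sign
apply(E[, -1], 2, function(e) any(e < 0) && any(e > 0))  # the rest: mixed signs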
Eigenvectors of a covariance matrix with only positive elements
The solutions to the two questions follow in a straightforward manner from the definitions, but some care is needed in the analysis. I offer this post to fill in some gaps in the previous ones, to ma
Eigenvectors of a covariance matrix with only positive elements The solutions to the two questions follow in a straightforward manner from the definitions, but some care is needed in the analysis. I offer this post to fill in some gaps in the previous ones, to make the solution self-contained (without relying on any advanced or specialized theorems), and to provide a solution to the second question, which so far has not been offered. Let $\mathbb A$ be the covariance matrix, of dimensions $n$ by $n$. It is symmetric and positive-definite by assumption. Therefore (these are standard results in the study of such matrices) there exists a basis of $n$ nonzero eigenvectors $E=(e_1, e_2, \ldots, e_n)$ for which The $e_i$ all have real (not merely complex) coefficients; $\mathbb A e_i = \lambda_i e_i$ for non-negative real (not merely complex) numbers $\lambda_i$, the eigenvalues; We may therefore order the eigenvectors so that $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \gt 0$; The eigenvectors are mutually orthogonal: $e_i^\prime e_j = 0$ whenever $i\ne j$; and We may normalize the eigenvectors (by dividing each one by $\sqrt{e_i^\prime e_i}$ if necessary) to make them all of unit length. These are the basic facts worth remembering, because they (greatly) simplify our understanding and analysis of such matrices, which are ubiquitous in statistical theory and practice. The remainder of this post exploits these properties to address the two questions. Because $E$ is a basis, any arbitrary vector $x$ has a unique expansion as a linear combination of eigenvectors, $$x = x_1 e_1 + x_2 e_2 + \cdots + x_n e_n$$ for real numbers $x_1, x_2, \ldots, x_n$ determined by $x$. Facts (4) and (5) let us calculate that $$|x|^2 = x_1^2 + x_2^2 + \cdots + x_n^2$$ and property (2) implies $$|\mathbb A x|^2 = \lambda_1^2 x_1^2 + \lambda_2^2 x_2^2 + \cdots + \lambda_n^2 x_n^2.$$ It is clear--and is an easily proven elementary inequality--that when $|x|^2=1$, the latter is maximized when $x_j=0$ for all $j$ where $\lambda_j \lt \lambda_1$. (Provided $\lambda_1$ is unique-- that is, $\lambda_i \lt \lambda_1$ for $i=2, 3, \ldots, n$--there are exactly two solutions: $x_1=\pm 1$ (and $x_i=0$ for $i=2, 3, \ldots, n$), whence $x = \pm e_1$.) The first question concerns the original coordinates in which the matrix and the eigenvectors were originally written. Writing $e_1 = (e_{11}, e_{12}, \ldots, e_{1n})$ in those coordinates, suppose there exist indexes $j$ for which $e_{1j}\lt 0$. Let $f$ be the vector obtained by negating all such $e_{1j}$. Because $e_{1j}^2 = (-e_{1j})^2$, this does not change the norm, whence $|f|=1$ (by fact (5)). However, this process increases $|\mathbb A f|$ relative to $|\mathbb A e_1|$ because--by assumption--multiplication by $\mathbb A$ consists of taking linear combinations with positive coefficients and the change from $e_1$ to $f$ has actually turned what were subtractions of positive values into additions of positive values. Since $|\mathbb A e_1|$ was maximal, we conclude that $|\mathbb A f|$ is maximal and $f$ is an eigenvector with eigenvalue $\lambda_1$. We may therefore take $e_1$ to be $\pm f$, but either way all its components will have the same sign. As an (important) aside, note that all the components of $e_1$ must be positive: none can be zero. 
This is because (a) $e_1$ is nonzero, whence it has at least one nonzero component and (b) in computing the product $\mathbb A e_1 = \lambda_1 e_1$ (fact (2)) all the products being added up are sums of nonnegative numbers and at least one (obtained from a nonzero component of $e_1$) is nonzero. That shows all the components of $\lambda_1 e_1$ are nonzero, but since $\lambda_1 \gt 0$ (fact (3)), all components of $e_1$ must be nonzero, too. The second question asserts that the remaining eigenvectors, $e_2, e_3, \ldots, e_n$, must have some negative components when written in the original basis. Consider one of them, say $e_j$ and write it as $e_j = (e_{j1}, e_{j2}, \ldots, e_{jn})$ in the original basis. Then from fact (4) $e_j$ is orthogonal to $e_1$: $$0 = e_1^\prime e_j = e_{11}e_{j1} + e_{12}e_{j2} + \cdots + e_{1n}e_{jn}.$$ Since--as we showed in the aside--all the $e_{1i} \gt 0$, the only way this linear combination can equal zero is for at least one $e_{ji} \lt 0$. A more delicate version of these results can be obtained when the components of $\mathbb A$ are merely assumed to be nonnegative. What changes is that the first principal component may have zeros (as originally expressed) and some of the other principal components may also have entirely nonnegative entries, too. As an example, take $\mathbb A$ to be the $2\times 2$ identity matrix and let the first two principal components (which are not unique!) be $e_1=(1,0)$ and $e_2=(0,1)$.
Eigenvectors of a covariance matrix with only positive elements The solutions to the two questions follow in a straightforward manner from the definitions, but some care is needed in the analysis. I offer this post to fill in some gaps in the previous ones, to ma
45,122
Calculating necessary sample size using bootstrap
Ok, so this answer might not be exactly what you were after based on the detail of your question, but I stumbled across your question based on just the title and so this might help other people who also come across it in a similar fashion. The only way I know of determining sample size using a bootstrap is via a power analysis approach. That is you: State the null hypothesis and alternative hypothesis State the alpha level (typically 5%) If necessary shift the pilot study data so that you know the null hypothesis is false Re-sample with replacement from the pilot study Perform the test on this sample and record the result Repeat 1000 or so times to build up a probability distribution Count how many times the null hypothesis is rejected With many possible "variations on a theme of..." And that gives you the statistical power (for that sample size and that particular test), because the definition of statistical power is "probability that the test will reject the null hypothesis when the alternative hypothesis is true". So you can then vary the sample size until you achieve the desired power. Here's an approach in R that I did based on this paper, Sample Size / Power Considerations, by Elizabeth Colantuoni. I had two groups of non-normal, non-parametric data. A pilot study of each showed them to have differing medians and a Mann-Whitney-Wilcoxon test rejected the null hypothesis that they were the same, but I wanted to determine the sample size required so I could say this for "sure". Since the test already rejected the null hypothesis on the pilot data I did not see any need to shift or manipulate the data to ensure the alternative hypothesis was true. power = function(group1.pilot, group2.pilot, reps=1000, size=10) { results <- sapply(1:reps, function(r) { group1.resample <- sample(group1.pilot, size=size, replace=TRUE) group2.resample <- sample(group2.pilot, size=size, replace=TRUE) test <- wilcox.test(group1.resample, group2.resample, paired=FALSE) test$p.value }) sum(results<0.05)/reps } #Find power for a sample size of 100 power(data1, data2, reps=1000, size=100) Necessary disclaimer: I'm not a statistician and I'm still learning about bootstrapping so feedback, corrections and pointing and laughing are welcome.
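For example, one could scan several candidate sample sizes and pick the smallest one whose estimated power exceeds the target (a sketch; data1 and data2 are the pilot samples from the code above, and 0.8 is a conventional target):

sizes <- c(25, 50, 100, 200, 400)
pow <- sapply(sizes, function(n) power(data1, data2, reps = 1000, size = n))
sizes[which(pow >= 0.8)[1]]   # smallest candidate size reaching 80% power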
Calculating necessary sample size using bootstrap
Ok, so this answer might not be exactly what you were after based on the detail of your question, but I stumbled across your question based on just the title and so this might help other people who al
Calculating necessary sample size using bootstrap Ok, so this answer might not be exactly what you were after based on the detail of your question, but I stumbled across your question based on just the title and so this might help other people who also come across it in a similar fashion. The only way I know of determining sample size using a bootstrap is via a power analysis approach. That is you: State the null hypothesis and alternative hypothesis State the alpha level (typically 5%) If necessary shift the pilot study data so that you know the null hypothesis is false Re-sample with replacement from the pilot study Perform the test on this sample and record the result Repeat 1000 or so times to build up a probability distribution Count how many times the null hypothesis is rejected With many possible "variations on a theme of..." And that gives you the statistical power (for that sample size and that particular test), because the definition of statistical power is "probability that the test will reject the null hypothesis when the alternative hypothesis is true". So you can then vary the sample size until you achieve the desired power. Here's an approach in R that I did based on this paper, Sample Size / Power Considerations, by Elizabeth Colantuoni. I had two groups of non-normal, non-parametric data. A pilot study of each showed them to have differing medians and a Mann-Whitney-Wilcoxon test rejected the null hypothesis that they were the same, but I wanted to determine the sample size required so I could say this for "sure". Since the test already rejected the null hypothesis on the pilot data I did not see any need to shift or manipulate the data to ensure the alternative hypothesis was true. power = function(group1.pilot, group2.pilot, reps=1000, size=10) { results <- sapply(1:reps, function(r) { group1.resample <- sample(group1.pilot, size=size, replace=TRUE) group2.resample <- sample(group2.pilot, size=size, replace=TRUE) test <- wilcox.test(group1.resample, group2.resample, paired=FALSE) test$p.value }) sum(results<0.05)/reps } #Find power for a sample size of 100 power(data1, data2, reps=1000, size=100) Necessary disclaimer: I'm not a statistician and I'm still learning about bootstrapping so feedback, corrections and pointing and laughing are welcome.
Calculating necessary sample size using bootstrap Ok, so this answer might not be exactly what you were after based on the detail of your question, but I stumbled across your question based on just the title and so this might help other people who al
45,123
Calculating necessary sample size using bootstrap
Assuming you want to calculate the power of a test that is not based on normality, like for example the Wilcoxon test, one general approach would be to simulate. The basic approach to the bootstrap for power calculation is to assume that the effect is real, and to count how many times the chosen statistical test gives a statistically significant result, at the chosen significance level, over the total number of times you ran the simulation. This ratio is the power. For the Wilcoxon test, the R code below shows the principle of the approach. power = function(group1, group2, alpha=0.05, reps=1000) { results <- sapply(1:reps, function(r) { group1.resample <- sample(group1, size=length(group1), replace=TRUE) group2.resample <- sample(group2, size=length(group2), replace=TRUE) test <- wilcox.test(group1.resample, group2.resample, paired=FALSE) test$p.value }) sum(results<alpha)/reps } where data1 and data2 are assumed to be vectors for simplicity. power(data1, data2, reps=1000) Based on this, it should also be clear how to extend the approach to more general experimental setups, such as paired data or more groups. A short overview of many topics in statistics, including bootstrapping, can be found in Larry Wasserman's excellent book "All of Statistics". For more on robust statistics, Rand Wilcox's book "Introduction to Robust Estimation & Hypothesis Testing" is warmly recommended; it can also be quite useful to look at the source code to understand how it works (given that his WRS package contains about 1000+ functions or so). As a side point, it would appear to me that, for this to be useful, you might potentially already have done the experiment, and that leads to the issues with post hoc power analysis.
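A hedged usage example with simulated pilot data (the lognormal samples and the shift in meanlog are invented purely for illustration):

set.seed(3)
data1 <- rlnorm(30)                   # hypothetical pilot group 1
data2 <- rlnorm(30, meanlog = 0.7)    # hypothetical pilot group 2, shifted upwards
power(data1, data2, alpha = 0.05, reps = 1000)   # estimated power at n = 30 per group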
Calculating necessary sample size using bootstrap
Assuming you want to calculate the power of a test that is not based on normality, like for example the Wilcoxon test, one general approach would be to simulate. The basic approach to the bootstrap for power calculation is
Calculating necessary sample size using bootstrap Assuming you want to calculate the power of a test that is not based on normality, like for example the Wilcoxon test, one general approach would be to simulate. The basic approach to the bootstrap for power calculation is to assume that the effect is real, and to count how many times the chosen statistical test gives a statistically significant result, at the chosen significance level, over the total number of times you ran the simulation. This ratio is the power. For the Wilcoxon test, the R code below shows the principle of the approach. power = function(group1, group2, alpha=0.05, reps=1000) { results <- sapply(1:reps, function(r) { group1.resample <- sample(group1, size=length(group1), replace=TRUE) group2.resample <- sample(group2, size=length(group2), replace=TRUE) test <- wilcox.test(group1.resample, group2.resample, paired=FALSE) test$p.value }) sum(results<alpha)/reps } where data1 and data2 are assumed to be vectors for simplicity. power(data1, data2, reps=1000) Based on this, it should also be clear how to extend the approach to more general experimental setups, such as paired data or more groups. A short overview of many topics in statistics, including bootstrapping, can be found in Larry Wasserman's excellent book "All of Statistics". For more on robust statistics, Rand Wilcox's book "Introduction to Robust Estimation & Hypothesis Testing" is warmly recommended; it can also be quite useful to look at the source code to understand how it works (given that his WRS package contains about 1000+ functions or so). As a side point, it would appear to me that, for this to be useful, you might potentially already have done the experiment, and that leads to the issues with post hoc power analysis.
Calculating necessary sample size using bootstrap Assuming you want to calculate the power of a test that is not based on normality, like for example the Wilcoxon test, one general approach would be to simulate. The basic approach to the bootstrap for power calculation is
45,124
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regression Models?
It would be difficult to be clearer than what has been said in the other posts. Nevertheless, I will try to say something to the point that addresses the different assumptions that are needed for OLS and various other estimation techniques to be appropriate to use. OLS estimation: This is applied in both simple linear and multiple regression, where the common assumptions are (1) the model is linear in the coefficients of the predictor with an additive random error term (2) the random error terms (a) are normally distributed with 0 mean and (b) have a variance that doesn't change as the values of the predictor covariates (i.e. IVs) change. Note also that in this framework, which applies in both simple and multiple regression, the covariates are assumed to be known without any uncertainty in their given values. OLS can be used when either A) only (1) holds with 2(b) or B) both (1) and (2) hold. If B) can be assumed, OLS has some nice properties that make it attractive to use. (I) MINIMUM VARIANCE AMONG UNBIASED ESTIMATORS (II) MAXIMUM LIKELIHOOD (III) CONSISTENCY, ASYMPTOTIC NORMALITY AND EFFICIENCY UNDER CERTAIN REGULARITY CONDITIONS Under B) OLS can be used for both estimation and predictions and both confidence and prediction intervals can be generated for the fitted values and predictions. If only A) holds we still have property (I) but not (II) or (III). If your objective is to fit the model and you don't need confidence or prediction intervals for the response given the covariate and you don't need confidence intervals for the regression parameters then OLS can be used under A). But you cannot test for the significance of the coefficients in the model using the t-tests that are often used nor can you apply the F test for overall model fit or the one for equality of variances. But the Gauss-Markov theorem tells you that property (I) still holds, at least among linear unbiased estimators. However, in case A), since (II) and (III) no longer hold, other more robust estimation procedures may be better than least squares even though they are not unbiased. This is particularly true when the error distribution is heavy-tailed and you see outliers in the data. The least squares estimates are very sensitive to outliers. What else can go wrong with using OLS? If the error variances are not homogeneous, a weighted least squares method may be preferable to OLS. A high degree of collinearity among predictors means that either some predictors should be removed or another estimation procedure such as ridge regression should be used. The OLS estimated coefficients can be highly unstable when there is a high degree of multicollinearity. If the covariates are observed with error (e.g. measurement error) then the model assumption that the covariates are given without error is violated. This is bad for OLS because the OLS criterion minimizes the residuals in the direction of the response variable assuming no error to worry about in the direction of the covariates. This is called the error in variables problem and a solution that takes account of these errors in the covariate directions will do better. Error in variables (aka Deming) regression minimizes the sum of squared deviations in a direction that takes account of the ratios of these variances. This is a little complicated because many assumptions are involved in these models and objectives play a role in deciding which assumptions are crucial for a given analysis. But if you focus on the properties one at a time to see the consequences of the violation of an assumption it might be less confusing.
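To make the remedies mentioned above concrete, here is a minimal R sketch (the data are simulated, and the weights and ridge penalty are purely illustrative choices, not recommendations):

library(MASS)
set.seed(4)
x1 <- rnorm(100); x2 <- x1 + rnorm(100, sd = 0.1)        # nearly collinear predictors
y  <- 1 + 2 * x1 + rnorm(100, sd = 0.5 + abs(x1))        # heteroskedastic errors
fit.ols   <- lm(y ~ x1 + x2)                             # ordinary least squares
fit.wls   <- lm(y ~ x1 + x2, weights = 1 / (0.5 + abs(x1))^2)  # weighted least squares
fit.ridge <- lm.ridge(y ~ x1 + x2, lambda = 1)           # ridge regression for collinearity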
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regressi
It would be difficult to be clearer than what has been said in the other posts. Nevertheless, I will try to say something to the point that addresses the different assumptions that are needed for OLS
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regression Models? It would be difficult to be clearer than what has been said in the other posts. Nevertheless, I will try to say something to the point that addresses the different assumptions that are needed for OLS and various other estimation techniques to be appropriate to use. OLS estimation: This is applied in both simple linear and multiple regression, where the common assumptions are (1) the model is linear in the coefficients of the predictor with an additive random error term (2) the random error terms (a) are normally distributed with 0 mean and (b) have a variance that doesn't change as the values of the predictor covariates (i.e. IVs) change. Note also that in this framework, which applies in both simple and multiple regression, the covariates are assumed to be known without any uncertainty in their given values. OLS can be used when either A) only (1) holds with 2(b) or B) both (1) and (2) hold. If B) can be assumed, OLS has some nice properties that make it attractive to use. (I) MINIMUM VARIANCE AMONG UNBIASED ESTIMATORS (II) MAXIMUM LIKELIHOOD (III) CONSISTENCY, ASYMPTOTIC NORMALITY AND EFFICIENCY UNDER CERTAIN REGULARITY CONDITIONS Under B) OLS can be used for both estimation and predictions and both confidence and prediction intervals can be generated for the fitted values and predictions. If only A) holds we still have property (I) but not (II) or (III). If your objective is to fit the model and you don't need confidence or prediction intervals for the response given the covariate and you don't need confidence intervals for the regression parameters then OLS can be used under A). But you cannot test for the significance of the coefficients in the model using the t-tests that are often used nor can you apply the F test for overall model fit or the one for equality of variances. But the Gauss-Markov theorem tells you that property (I) still holds, at least among linear unbiased estimators. However, in case A), since (II) and (III) no longer hold, other more robust estimation procedures may be better than least squares even though they are not unbiased. This is particularly true when the error distribution is heavy-tailed and you see outliers in the data. The least squares estimates are very sensitive to outliers. What else can go wrong with using OLS? If the error variances are not homogeneous, a weighted least squares method may be preferable to OLS. A high degree of collinearity among predictors means that either some predictors should be removed or another estimation procedure such as ridge regression should be used. The OLS estimated coefficients can be highly unstable when there is a high degree of multicollinearity. If the covariates are observed with error (e.g. measurement error) then the model assumption that the covariates are given without error is violated. This is bad for OLS because the OLS criterion minimizes the residuals in the direction of the response variable assuming no error to worry about in the direction of the covariates. This is called the error in variables problem and a solution that takes account of these errors in the covariate directions will do better. Error in variables (aka Deming) regression minimizes the sum of squared deviations in a direction that takes account of the ratios of these variances. This is a little complicated because many assumptions are involved in these models and objectives play a role in deciding which assumptions are crucial for a given analysis. 
But if you focus on the properties one at a time to see the consequences of the violation of an assumption it might be less confusing.
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regressi It would be difficult to be clearer than what has been said in the other posts. Nevertheless, I will try to say something to the point that addresses the different assumptions that are needed for OLS
45,125
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regression Models?
Let me clarify your question: First, linear regression models comprise all linear models in general. Generally, linear regression models are all about describing the relationship of one variable (dependent) with other variables (independent). Second, simple and multiple regression models simply refer to the number of independent variables that one uses in a model. We have a simple regression model when one uses only one independent variable. When one uses more than one independent variable to describe a dependent variable, we call it multiple regression. Finally, one can estimate linear regression models in several ways. The most common technique is ordinary least squares (OLS). The OLS method minimizes the sum of squared residuals to estimate the model. It is conceptually simple and computationally straightforward. Other techniques include ML estimation or Bayesian regression. That means we can start talking about the necessary assumptions only once we know what estimation technique we are using to estimate a linear regression model. The only technique you mention in your question is ordinary least squares. You can find a basic understanding of OLS on the following website: https://economictheoryblog.com/ordinary-least-squares-ols This site also provides a nice and intuitive description of the assumptions of the OLS estimator: https://economictheoryblog.com/2015/04/01/ols_assumptions
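In R, the distinction is just the number of terms on the right-hand side of the formula (a sketch with made-up variable names):

fit.simple   <- lm(y ~ x1)        # simple regression: one independent variable
fit.multiple <- lm(y ~ x1 + x2)   # multiple regression: several independent variables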
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regressi
Let me clarify your question: First, linear regression models comprise all linear models in general. Generally, linear regression models are all about describing the relationship of one variable (depe
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regression Models? Let me clarify your question: First, linear regression models comprise all linear models in general. Generally, linear regression models are all about describing the relationship of one variable (dependent) with other variables (independent). Second, simple and multiple regression models simply refer to the number of independent variables that one uses in a model. We have a simple regression model when one uses only one independent variable. When one uses more than one independent variable to describe a dependent variable, we call it multiple regression. Finally, one can estimate linear regression models in several ways. The most common technique is ordinary least squares (OLS). The OLS method minimizes the sum of squared residuals to estimate the model. It is conceptually simple and computationally straightforward. Other techniques include ML estimation or Bayesian regression. That means we can start talking about the necessary assumptions only once we know what estimation technique we are using to estimate a linear regression model. The only technique you mention in your question is ordinary least squares. You can find a basic understanding of OLS on the following website: https://economictheoryblog.com/ordinary-least-squares-ols This site also provides a nice and intuitive description of the assumptions of the OLS estimator: https://economictheoryblog.com/2015/04/01/ols_assumptions
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regressi Let me clarify your question: First, linear regression models comprise all linear models in general. Generally, linear regression models are all about describing the relationship of one variable (depe
45,126
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regression Models?
There's no difference in assumptions for statistical models 1-4. Each one of those models is a form of OLS regression. The assumptions are the same. The assumptions generally relate to the error term rather than to the distributions of the variables themselves: if the residuals are far from normally distributed, you most likely have a problem. Common problems: Heteroskedasticity, Multicollinearity, Autocorrelation (time series) Did this help answer your question?
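A quick way to eyeball those common problems in R is the default diagnostic plots of a fitted model (a sketch; fit stands for any lm() object with hypothetical variables):

fit <- lm(y ~ x1 + x2)     # hypothetical fitted model
par(mfrow = c(2, 2))
plot(fit)                  # residuals vs fitted, Q-Q plot, scale-location, leverage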
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regressi
There's no difference in assumptions for statistical models 1-4. Each one of those models is a form of OLS regression. The assumptions are the same. The assumptions generally relate to the error te
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regression Models? There's no difference in assumptions for statistical models 1-4. Each one of those models is a form of OLS regression. The assumptions are the same. The assumptions generally relate to the error term rather than to the distributions of the variables themselves: if the residuals are far from normally distributed, you most likely have a problem. Common problems: Heteroskedasticity, Multicollinearity, Autocorrelation (time series) Did this help answer your question?
What are the Assumptions required in Regression Models, Ordinary Least Square, and Multiple Regressi There's no difference in assumptions for statistical models 1-4. Each one of those models is a form of OLS regression. The assumptions are the same. The assumptions generally relate to the error te
45,127
How to bootstrap the best fit distribution to a sample?
Since you know that $X$ is either lognormal or gamma, you can use a parametric bootstrap instead of the nonparametric version that you proposed. You would then resample from the fitted distribution instead, and compute the probability that find.bestfit gives the right answer. This probability will depend on whether $X$ is lognormal or gamma, so you have to make two separate computations. Here is a way to do this in R: library(MASS) n<-500 # Sample size B<-100 # Number of bootstrap samples set.seed(0) x <- rlnorm(n) ## Create an empty vector fit.samps <- rep(NA, B) #### # LOGNORMAL DISTRIBUTION # Lognormal parameters: lnpar<-fitdistr(x, "lognormal")$estimate # Determine fit to parametric bootstrap samples from original distribution for(i in 1:B){ fit.samps[i] <- find.bestfit(rlnorm(n,lnpar[1],lnpar[2])) } # Probability of correct classification if lognormal: sum(fit.samps=="logN")/B #### # GAMMA DISTRIBUTION # Gamma parameters: gammapar<-fitdistr(x, "gamma")$estimate ## Determine fit to parametric bootstrap samples from original distribution for(i in 1:B){ fit.samps[i] <- find.bestfit(rgamma(n,gammapar[1],gammapar[2])) } # Probability of correct classification if gamma: sum(fit.samps=="gam")/B For $n=500$ these probabilities are both virtually 1. For $n\approx 50$ (or less), you get different probabilities though.
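The question's find.bestfit() is not shown; a minimal stand-in comparing maximized log-likelihoods (equivalent to comparing AICs here, since both candidate models have two parameters) might look like this -- the function name and the return strings "logN"/"gam" are simply chosen to match the code above:

find.bestfit <- function(x) {
  ll.ln  <- fitdistr(x, "lognormal")$loglik   # maximized log-likelihood, lognormal fit
  ll.gam <- fitdistr(x, "gamma")$loglik       # maximized log-likelihood, gamma fit
  if (ll.ln > ll.gam) "logN" else "gam"
}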
How to bootstrap the best fit distribution to a sample?
Since you know that $X$ is either lognormal or gamma, you can use a parametric bootstrap instead of the nonparametric version that you proposed. You would then resample from the fitted distribution in
How to bootstrap the best fit distribution to a sample? Since you know that $X$ is either lognormal or gamma, you can use a parametric bootstrap instead of the nonparametric version that you proposed. You would then resample from the fitted distribution instead, and compute the probability that find.bestfit gives the right answer. This probability will depend on whether $X$ is lognormal or gamma, so you have to make two separate computations. Here is a way to do this in R: library(MASS) n<-500 # Sample size B<-100 # Number of bootstrap samples set.seed(0) x <- rlnorm(n) ## Create an empty vector fit.samps <- rep(NA, B) #### # LOGNORMAL DISTRIBUTION # Lognormal parameters: lnpar<-fitdistr(x, "lognormal")$estimate # Determine fit to parametric bootstrap samples from original distribution for(i in 1:B){ fit.samps[i] <- find.bestfit(rlnorm(n,lnpar[1],lnpar[2])) } # Probability of correct classification if lognormal: sum(fit.samps=="logN")/B #### # GAMMA DISTRIBUTION # Gamma parameters: gammapar<-fitdistr(x, "gamma")$estimate ## Determine fit to parametric bootstrap samples from original distribution for(i in 1:B){ fit.samps[i] <- find.bestfit(rgamma(n,gammapar[1],gammapar[2])) } # Probability of correct classification if gamma: sum(fit.samps=="gam")/B For $n=500$ these probabilities are both virtually 1. For $n\approx 50$ (or less), you get different probabilities though.
How to bootstrap the best fit distribution to a sample? Since you know that $X$ is either lognormal or gamma, you can use a parametric bootstrap instead of the nonparametric version that you proposed. You would then resample from the fitted distribution in
45,128
How to bootstrap the best fit distribution to a sample?
The bootstrap can be used for this although it is not commonly done. The approach would be to sample with replacement n times from your sample of size n. Each time you sample with replacement you compute the goodness-of-fit statistics for the competing distributions and pick the distribution that fits best. Take the number of times distribution A is selected divided by the total number of bootstrap samples to get an estimate for the probability that distribution A will be selected.
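A sketch of that procedure in R, reusing the question's selection function (find.bestfit and the label "logN" are assumptions carried over from the thread):

B <- 1000
picks <- replicate(B, find.bestfit(sample(x, size = length(x), replace = TRUE)))
mean(picks == "logN")   # estimated probability that the lognormal model is selected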
How to bootstrap the best fit distribution to a sample?
The bootstrap can be used for this although it is not commonly done. The approach would be to sample with replacement n times from your sample of size n. Each time you sample with replacement you com
How to bootstrap the best fit distribution to a sample? The bootstrap can be used for this although it is not commonly done. The approach would be to sample with replacement n times from your sample of size n. Each time you sample with replacement you compute the goodness-of-fit statistics for the competing distributions and pick the distribution that fits best. Take the number of times distribution A is selected divided by the total number of bootstrap samples to get an estimate for the probability that distribution A will be selected.
How to bootstrap the best fit distribution to a sample? The bootstrap can be used for this although it is not commonly done. The approach would be to sample with replacement n times from your sample of size n. Each time you sample with replacement you com
45,129
Simple Multivariate Regression with Octave
It is not clear to me exactly what type of model you hope to fit. Some people say 'multivariate regression' to mean there are multiple dependent variables. For example, you might want to predict both humidity and temp from day. Other people use it to just mean one outcome but multiple predictors. For example, you could predict temperature from both day and humidity. Here is an instructional solution showing how you could get the parameter estimates, standard errors, and new predicted values from a multiple regression model predicting the outcome, temperature, from day and humidity (as well as the constant term).

# your original data
day = [4, 5, 6, 8]
temp = [97, 100, 98, 80]
humidity = [62, 46, 50, 55]

# create the design matrix:
# intercept (1s), day, and humidity as predictors
X = [1, 1, 1, 1; day; humidity]'

# linear parameter estimates
b = inv(X'*X)*X'*temp'

# residuals
R = temp' - (X * b)

# residual variance (4 observations, 3 estimated parameters)
v = (R'*R)/(4 - 3)

# variance-covariance matrix of the parameters
Sigma = v * inv(X'*X)

# standard errors of the parameters (b vector)
se = sqrt(diag(Sigma))

# new data for prediction, with constant:
# day is 7 and 9, humidity is 80
newdata = [1, 7, 80; 1, 9, 80]

# predicted values for days 7 and 9
pred = newdata * b

Which gives:

pred =
   70.712
   60.549

In practice, that is reinventing the wheel, but since you are new to Octave (and maybe regression?) I thought it might be helpful. Here is the simple way, using the built-in function to get the coefficients directly:

ols(temp', X)

ans =
   156.18467
    -5.08122
    -0.62381

This is b from above, and you could postmultiply by new data (in your case days 7 and 9) to get predicted ("forecast") values.
45,130
What is the mathematical model formula corresponding to this gam model fit in R?
Leaving off the other calls to gam (e.g. data, method), your model formula is: Temp ~ Loc + s(Doy) + s(Doy,by = Loc) + s(Tod) + s(Tod,by = Loc) The help file for gam formulas is here and is where I'm getting my information. The s() terms indicate a smooth function in that term; when a by tag is included within s(), that indicates the smooth function is multiplied by the corresponding term (for a factor such as Loc, this amounts to a separate smooth for each level). The terms without s() around them are ordinary linear terms. So, your model can be written as: $$ {\rm Temp} = {\rm Loc} + f_{1}({\rm Doy}) + f_{2}({\rm Doy})\cdot {\rm Loc} + f_{3}({\rm Tod}) + f_{4}({\rm Tod})\cdot {\rm Loc} + \varepsilon $$ where $f_1,f_2,f_3,f_4$ are smooth functions estimated from the data by penalized likelihood. You may consider replacing the variable names with symbols (e.g. $T$ instead of ${\rm Temp}$) but this is how you would write it, using the same variable names as in your R code.
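A minimal mgcv sketch matching this formula — the data below are simulated purely for illustration, with variable names as in the question:

library(mgcv)

set.seed(1)
d <- data.frame(Doy = rep(1:100, 2),                 # day of year
                Tod = runif(200, 0, 24),             # time of day
                Loc = factor(rep(c("A", "B"), each = 100)))  # location factor
d$Temp <- 10 + sin(d$Doy/16) + cos(d$Tod/4) + rnorm(200, sd = 0.5)

m <- gam(Temp ~ Loc + s(Doy) + s(Doy, by = Loc) + s(Tod) + s(Tod, by = Loc),
         data = d)
summary(m)   # one estimated smooth per s() term (and per level of Loc for 'by' terms)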
45,131
Time series analysis with neural networks
A feed-forward neural network (typically multi-layer) is a type of supervised learner that adjusts the network weights on its input and internal nodes, in an iterative manner, in order to minimize errors between predicted and actual target variables. It commonly uses stochastic gradient descent (sometimes called error back-propagation) over many iterations in order to find a local minimum of the error response and optimize the network weights accordingly. The basic idea behind stochastic gradient descent is to start by randomizing the weights, then adjust them over several passes, updating the weights in a direction that moves the total error between target and predicted values towards a local minimum of the error surface. In practice, a tradeoff is found between optimizing against a training set and a validation set, in order to reduce the problem of over-fitting. Lastly, the input (time series or otherwise) often needs to be transformed in order to create a stationary series that is also bounded (amplitude-wise) within the input range of the NN layer transfer function(s) (typically 0 to 1 or -1 to 1). Once the weights have been trained, the model can be stored and used to process additional new time series data, much like a typical linear-regression-based model. An example illustration of using a NN to predict financial time series data, using Weka, is posted here: http://intelligenttradingtech.blogspot.com/2010/01/systems.html A good text comparing financial AR-based models against NN models is "Applied Quantitative Methods for Trading and Investment," Christian Dunis et al.
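A rough R sketch of that transformation step, under the assumptions of first differencing for stationarity and min-max scaling into the (0, 1) range of a sigmoid transfer function:

# Hypothetical preprocessing for NN input (sketch, not a full model).
x  <- cumsum(rnorm(200))                          # example nonstationary series
dx <- diff(x)                                     # difference to (approximate) stationarity
scaled <- (dx - min(dx)) / (max(dx) - min(dx))    # rescale into [0, 1] for a sigmoid layer
# 'scaled' would then be windowed into input/output pairs for training;
# invert the scaling and cumulate the differences to map predictions back.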
45,132
Time series analysis with neural networks
Let me start by answering your question, then I will add some comments and suggestions.

First, I believe that when you say "weight" you actually mean "input"/"output". This is because you asked how to transform the time series to weights and how to transform output weights into a prediction. Neural network terminology uses "weight" to mean something else (pat's answer uses the term "weight" correctly). This is what people usually suggest: if your time series looks like X_1, X_2, ..., X_n, ... then you do the following:

Step 1: Decide how many observations you want to use to make a prediction.
Step 2: Decide how many steps forward you want to predict.

Both these choices are fixed for the NN. For this example, let's say you want to use the last 5 readings to make 2 predictions. Then you will:

Step 3: Create a neural network with 5 input nodes and 2 output nodes.
Step 4: Create your training set, with each element consisting of 5 sequential readings as input and the next two readings as output.

Here are the first two elements of the training set:

Input = X_1, X_2, X_3, X_4, X_5; Output = X_6, X_7
Input = X_2, X_3, X_4, X_5, X_6; Output = X_7, X_8

etc. Hopefully that answers your question. Now for some general advice.

If your data is noisy, e.g. stock ticks, then my feeling is that this will be hard to train. I know I have had bad luck trying to train neural networks on noisy data. So here is another strategy: first model your time series using the ARIMA framework. This views a time series as

Polynomial base + Cyclic component + Bounded randomness

(Take a look at the Weka example in pat's answer from this point of view.) Now my feeling (and I am still experimenting) is that the random component(s) are interfering with the training of the NN, so I want to avoid trying to predict them directly. Picture your series data coming in. On every reading you feed it into your ARIMA black box, which figures out the underlying model and then spits out the ARIMA model parameters. So at time 0 you have a set of parameters, then at time 1 you have an updated set of parameters, etc. Note: the ARIMA black box is slow.

Question 1: Would it be possible for a neural network to learn how these parameters change? My feeling is that they will change slowly, so this may be doable.

Question 2: Could you train a different neural network to discern any patterns in the ARIMA error? I.e., if ARIMA predicts 5.4 and the actual next reading is 5.5, could you train a neural network to figure out that 0.1?
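A minimal R sketch of the windowing step above (assumptions: x is the numeric series, 5 lagged inputs, 2 outputs):

make_windows <- function(x, n_in = 5, n_out = 2) {
    # Build a matrix whose rows are (n_in inputs, n_out targets) windows.
    n <- length(x) - n_in - n_out + 1
    t(sapply(1:n, function(i) x[i:(i + n_in + n_out - 1)]))
}

x <- sin(seq(0, 10, by = 0.1))   # example series
w <- make_windows(x)
inputs  <- w[, 1:5]              # columns 1-5: X_i ... X_{i+4}
targets <- w[, 6:7]              # columns 6-7: X_{i+5}, X_{i+6}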
45,133
Time series analysis with neural networks
There are several ways in which you can 'train' a neural network. Personally, I prefer the genetic algorithm approach - each individual represents a set of weights, with the fitness function being the performance of the neural network. The performance of the neural network in terms of time series analysis could be the mean squared error of the predictions against the targets. One common method in time series prediction with neural networks is to use the % change from specific intervals over a 'lookback' period. You may find this useful - http://ijcai.org/Past%20Proceedings/IJCAI-89-VOL1/PDF/122.pdf
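A small sketch of the percent-change encoding in R (the price series and the lookback of 5 are hypothetical):

pct_change <- function(x) diff(x) / head(x, -1)   # simple returns

x <- c(100, 102, 101, 105, 107, 106, 110)         # hypothetical price series
r <- pct_change(x)                                # % change between readings
lookback <- 5
features <- embed(r, lookback)                    # each row: last 5 % changes, most recent first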
45,134
Time series analysis with neural networks
If you have access to a MATLAB installation, try the neural network toolbox first (have a look at the screenshots). It is very, very good for what you're trying to accomplish, and with great documentation. It is a great starting point.
45,135
Justification for use of $\chi^2(1)$ in Wald and score test
One thought is that you should've mentioned the regularity conditions for asymptotic normality of the estimates and the $\chi^2$ behavior of the likelihood ratio test statistic. These conditions include, informally speaking: the true parameter being in the interior of the parameter space; the log-likelihood really affording a Taylor series expansion; i.i.d. data; conditions allowing the interchange of some of the derivatives and the integrals/expectations (some sort of uniform boundedness); and such. See http://www.stat.unc.edu/postscript/rs/ISI89.pdf and http://www.jstor.org/stable/2346086 concerning violations of these conditions. (The simplest example is estimation when the support depends on the parameter value, e.g., $U[0,\theta]$. The MLE $\hat\theta_n=x_{(n)}$ is not asymptotically normal, and an estimator that has a greater asymptotic efficiency in terms of MSE can be constructed.) These are worthy papers to read if you are serious about statistical theory, and many courses on asymptotics do not really wander far enough into this elephant's graveyard of ML elegance. Another thought is that maybe they really did want you to mention both Wald and score tests. Buse (1982) provides a wonderful review of the relation between the three tests.
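To see the $U[0,\theta]$ example concretely, here is a small R simulation (a sketch with $\theta = 1$): the scaled error $n(\theta - \hat\theta_n)$ converges to an Exponential, not a Normal, distribution.

set.seed(1)
n <- 100; theta <- 1; nsim <- 10000
mle <- replicate(nsim, max(runif(n, 0, theta)))   # MLE is the sample maximum
z <- n * (theta - mle)                            # scaled estimation error
hist(z, freq = FALSE, breaks = 50)                # clearly skewed, not bell-shaped
curve(dexp(x, rate = 1/theta), add = TRUE)        # limiting Exponential density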
45,136
Justification for use of $\chi^2(1)$ in Wald and score test
Difficult to know what's in the head of a teacher ;-) The only things that come to my mind are:

The source of the result which says that MLEs are asymptotically normally distributed (along with checking that its hypotheses are satisfied in this case).
The dependency of the information $I$ on $n$, which is not clear in your notation (for example, writing $\frac{1}{I_n(\theta_0)}$ to emphasize that the estimator is consistent).
45,137
Justification for use of $\chi^2(1)$ in Wald and score test
You took for granted the result that if $X$ is distributed $N(m,s^2)$ then $(X-m)/s$ is distributed $N(0,1)$. Clearly that is needed for the last step. Also, you did not mention the result that the sum of squares of $k$ independent $N(0,1)$ variables is chi-square with $k$ degrees of freedom; in your case $k=1$. It is a judgment call whether to accept that you obviously knew that or to consider that formally the proof is incomplete. Do they give partial credit? I think you showed that you knew how to get the hard part (the asymptotic normality of the estimator of $\theta$ with the correct mean and variance). This may appear nit-picky, but the question may have been designed for you to demonstrate that you know those facts.
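A quick empirical check of that chi-square fact in R (for $k=1$):

set.seed(7)
z2 <- rnorm(1e5)^2                          # squares of standard N(0,1) draws
quantile(z2, 0.95); qchisq(0.95, df = 1)    # both approximately 3.84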
45,138
Difference between ANOVA power simulation and power calculation
It seems like you simply hit a specific R peculiarity: when you analyze linear models and you have a predictor with numerical values, you have to tell R whether it really represents data from a numerical variable (the default, leading to a regression model) or whether it actually is a factor (leading to an ANOVA). In your case you just have to change

result <- summary(aov(test_matrix[i,] ~ group))

to

result <- summary(aov(test_matrix[i,] ~ factor(group)))

to get close to correct results. In addition, I don't understand your correction to the standard deviations. The sds are the true standard deviations that are required to simulate data with rnorm(). Leave out your correction, and you get even closer to the correct result.

When you have time to explore R some more, you might want to look at some features / strategies that make simulations like yours somewhat easier. E.g.:

rnorm() is vectorized: supply a vector of $\mu$s and $\sigma$s, each with length = number of simulated values, and you can eliminate the double loop in create_sim_data().
anova(lm()) returns a data frame that is a lot easier to index than the result of summary(aov()).
rep() accepts a vector for its times argument, which simplifies its use for your purpose.

Here's your simulation stripped to the bare bones, just for the group size of 40, giving us the p-values.

Nj    <- c(40, 40, 40)                # group sizes for 3 groups
mu    <- c(0.2, 0, -0.2)              # expected values in groups
sigma <- c(1, 1, 1)                   # true standard deviations in groups

mus    <- rep(mu, times=Nj)           # for use in rnorm(): vector of mus
sigmas <- rep(sigma, times=Nj)        # for use in rnorm(): vector of true sds
IV     <- factor(rep(1:3, times=Nj))  # factor for ANOVA
nsims  <- 1000                        # number of simulations

# reference: correct power
power.anova.test(groups=3, n=Nj[1], between.var=var(mu), within.var=sigma[1]^2)$power

doSim <- function() {                      # run one ANOVA on simulated data
    DV <- rnorm(sum(Nj), mus, sigmas)      # data from all three groups
    anova(lm(DV ~ IV))["IV", "Pr(>F)"]     # p-value from ANOVA
}

pVals <- replicate(nsims, doSim())       # run the simulation nsims times
(power <- sum(pVals < 0.05) / nsims)     # fraction of significant ANOVAs
45,139
How to test whether average of ten independent correlations is different from zero?
Treat r (or Z) as any effect size and calculate its mean using the standard inverse-variance formula
$$\bar{r} = \frac{\sum_i r_i w_i}{\sum_i w_i}$$
where $r_i$ is the $i$th value of r and $w_i = 1/s^2(r_i)$ is a weighting factor, the reciprocal of the squared standard error of $r_i$. The standard error of $\bar{r}$ is given by the square root of $1/\sum_i w_i$, and you can use this to construct confidence intervals around $\bar{r}$ and check whether they include 0 or not.
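A sketch of this in R, assuming the common approximation that the Fisher z of a correlation based on m pairs has variance 1/(m - 3); the vectors rs (the ten correlations) and ns (per-subject sample sizes) are hypothetical:

rs <- c(0.10, 0.25, 0.05, 0.30, 0.15, 0.20, 0.12, 0.08, 0.22, 0.18)  # hypothetical r's
ns <- rep(30, 10)                                                    # hypothetical n per subject

z    <- atanh(rs)                    # Fisher's z transform
w    <- ns - 3                       # weights = 1/Var(z) = n - 3
zbar <- sum(w * z) / sum(w)          # weighted mean effect size
se   <- sqrt(1 / sum(w))             # standard error of the weighted mean
ci   <- zbar + c(-1.96, 1.96) * se   # 95% CI on the z scale
tanh(c(zbar, ci))                    # back-transform to the r scale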
45,140
How to test whether average of ten independent correlations is different from zero?
I think the first approach is preferable (reporting Z values only). Converting the Z values back to r before the test wouldn't really do anything (it would be as if you had never transformed in the first place). But you may have noticed that in your data it makes very little difference whether you use Z or r, since the nonlinearities of the Fisher transform only really enter for r > 0.5, and your Z values imply smaller correlations than that. I would emphasise this point when reporting the analysis.
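The near-linearity of the Fisher transform for small r is easy to verify, e.g. in R:

r <- c(0.1, 0.3, 0.5, 0.7, 0.9)
round(atanh(r) - r, 3)    # essentially 0 below r = 0.5, grows quickly above it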
45,141
How to test whether average of ten independent correlations is different from zero?
I can relate to the way you're thinking about it, and of your 2 ideas, I'd say the first is clearly preferable: Fisher designed Z such that any computations such as averaging are better done with Z than with the (-1, 1)-bounded r. But in this situation a sig. test seems less helpful, and its results would be less clear, than in the usual case, if there is one. For one thing, we wouldn't have taken into account how many values went into each subject's r, so it'd take some thinking to figure out the best df for the test. Have you considered simply showing the results and letting people make up their minds that way, e.g. in a dot plot? Edit: perhaps better would be a bubble plot in which bubble size represented the N that went into each correlation.
45,142
How to test whether average of ten independent correlations is different from zero?
The r-values should be Fisher's z-transformed, then compared to zero using a one-sample t-test, not a paired-sample t-test.
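In R this amounts to a one-liner (a sketch; rs would hold the ten within-subject correlations — the values below are made up):

rs <- c(0.12, 0.05, 0.21, 0.18, 0.09, 0.25, 0.14, 0.07, 0.19, 0.11)  # hypothetical
t.test(atanh(rs), mu = 0)   # one-sample t-test on the Fisher-z-transformed r's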
45,143
Are all attributes/data points inherently nominal?
The question (and your answer) invoke Stevens' theory of levels of measurement. This thread perhaps is not the place for a critical evaluation of that (old) theory, which has subsequently been found to be limited and counterproductive in many (but not all) applications. The question, though, implicitly suggests that this theory would provide a basis for software design and development, most likely as a scheme for class inheritance. Maybe it would work, but I think not.

It makes sense to have an abstract universal or base data class from which all others will inherit, and to supply some default methods for testing equality, printing values, etc. Feel free to call this the "nominal" class. In this sense, the answer to the question as stated is "yes," you can think of every data type as (at least) nominal. But what next? Let's consider where the following types of data might properly lie within Stevens' taxonomy and the implications that might have for software design:

Relative orientations: these are ordered but not totally ordered, and they enjoy a large continuous group of meaningful transformations (rotations). The lack of total order prevents them from being ordinal, yet they enjoy all the other properties of interval data.

Geographic locations: distances between them make sense, but most transformations do not. Thus, by Stevens, it seems they would be nominal and interval but not ordinal.

Image data: once again, many kinds of distances make sense. There is no intrinsic ordering. Various forms of transformations and comparisons arguably make these a complex and richer form of data than ratio data, though. (What about an image that represents an entire field of ratio data?)

Complex numbers, representing locations on the plane or Euclidean transformations thereof. Richer than ratio data but without an ordering.

Interval-valued data, representing both censored data and data known only to lie between definite bounds. Should these inherit from the type of data whose ranges they represent (ordinal or ratio) or, given that even inequality tests often are indefinite, should they be considered purely nominal (which would strip them of almost all the information they contain)?

Percentages and counted ratios. Such numbers have some characteristics of all of Stevens' types as well as peculiarities of their own. They clearly do not qualify as either interval or ratio data, but treating them as such is commonplace (and can be effective in analysis).

These and other examples suggest it would be limiting to force a hierarchy on the Stevens taxonomy or to use it to design a class for representing data. Another problem--a big one--with viewing Stevens' taxonomy as a hierarchy is that it just ain't so. For example, data that appear purely ordinal can often be analyzed as interval or ratio (e.g., using ordered logistic regression). Thus, if you want to support effective data analysis, you must not compel the user to view data at one level, nor should you arbitrarily or unnecessarily limit what can be done with data based on the level at which it has been assigned. One thing you do want to do is to make sure that internal representations used to encode nominal values ("factors," often) never get confused with actual numbers. This is a basic mistake made by many users of systems that use numerical codes to represent factors but then allow those codes to be calculated with (as in regression) as if their actual values were meaningful.
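As a concrete illustration of that pitfall, in R the internal codes of a factor are just 1, 2, 3, ... and bear no relation to the labels:

x <- factor(c(10, 20, 20, 50))   # numeric-looking labels stored as a factor
as.numeric(x)                    # internal codes: 1 2 2 3 -- not the original numbers!
as.numeric(as.character(x))      # correct recovery: 10 20 20 50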
I don't think this is a question of class design or of placing data in a hierarchy. A better way to view it may be that of attaching semantic tags to the data to give clues about their proper use, display, or interpretation. Because the question expresses no design goals, it is not possible to suggest alternatives, but it does appear wise just to ignore the Stevens approach unless the software is intended to limit the usability of the data it works with. Instead, do the software design using good engineering practice: start from a clear statement of the software's purpose and how it will be used. What kinds of data must it store and manage? What kinds of operations does it need to perform on those data? To what extent must it be extensible? What performance requirements might constrain the methods of internal data representation? And so on... This line of inquiry will be more relevant and more productive than trying to adhere to Stevens' system (or anybody else's).
45,144
Are all attributes/data points inherently nominal?
Stanley Smith Stevens proposed a theory of levels of measurement in 1946 which is used by many statisticians today. In it, four different scale types are proposed:

Nominal Scale
Ordinal Scale
Interval Scale
Ratio Scale

with each scale type building on the foundations of the previous one. For example, ratio measurements can have all of the operations of all of the other scales applied to values of that scale type. That said, it stands to reason that the hierarchy tree for a type system would be:

Nominal Attribute
  Ordinal Attribute
    Interval Attribute
      Ratio Attribute
45,145
Is statistics.com class worth the money?
Some resources I gathered for myself:

Khan Academy Probability and Statistics
Online Statistics Education: An Interactive Multimedia Course of Study — http://onlinestatbook.com/
CMU Open Learning Initiative Statistics
Introduction to Statistical Thought (book)

Also, if you take ml-class you might be interested in http://www.pgm-class.org/

Edit (just a few more links):

mathematicalmonk's probability & ML
All of Statistics (book)
Kardi Teknomo's tutorials (epic stuff): http://people.revoledu.com/kardi/tutorial/index.html
45,146
Is statistics.com class worth the money?
Great that you have signed up for the ML class. Actually, there are plenty of resources for learning statistics online. To begin with, the CRAN website is excellent. Another great resource is SAS software's documentation. Thirdly, there is MIT OpenCourseWare. Also, if you want free statistical software, apart from R, many packages are listed on this website: www.statpages.org
45,147
Is statistics.com class worth the money?
You can try searching MIT's OCW - http://ocw.mit.edu/index.htm or Academic Earth - http://academicearth.org/ for relevant statistical lectures.
45,148
Confidence set for parameter vector in linear regression
To make things clearer, recall that
$$\hat{\beta}\sim N(\beta,\sigma^2(X^TX)^{-1}).$$
When you isolate $\beta_j$ you get that
$$\hat{\beta}_j-\beta_j\sim N(0, \sigma^2 v_j)$$
where $v_j$ are the diagonal elements of $(X^TX)^{-1}$. We can write this alternatively as
$$\frac{\hat{\beta}_j-\beta_j}{\sqrt{v_j}}\sim N(0,\sigma^2),$$
which is the same as
$$\left(\frac{\hat{\beta}_j-\beta_j}{\sqrt{v_j}}\right)^2= (\hat{\beta}_j-\beta_j)(v_j)^{-1}(\hat{\beta}_j-\beta_j)\sim \sigma^2\chi_1^2.$$
Note that those $\beta_j$ that satisfy the condition
$$\left(\frac{\hat{\beta}_j-\beta_j}{\sqrt{v_j}}\right)^2\le \sigma^2\chi_{1,1-\alpha}^2$$
fall in the confidence interval described in equation 3.14. Hence the confidence interval is a set on the real line. Now similarly we get
$$(X^TX)^{1/2}(\hat\beta-\beta)\sim N(0, \sigma^2 I),$$
so
$$(\hat\beta-\beta)^TX^TX(\hat\beta-\beta)\sim \sigma^2\chi_{p+1}^2,$$
where $p$ is the number of regressors. Using the same analogy we can look for vector points $\beta\in\mathbb{R}^{p+1}$ which satisfy the condition
$$(\hat\beta-\beta)^TX^TX(\hat\beta-\beta)\le \sigma^2\chi_{p+1,1-\alpha}^2.$$
For $p=1$ this set will be the interior of an ellipse. The confidence set is used since it accounts for interactions between $\beta_i$ and $\beta_j$. Look at a scatter plot of two independent normal variables (which would be the case for orthogonal regressors with the same variance); the circular shape is evident. Using the univariate confidence intervals, the confidence set would be a square, and this illustrates that it would estimate the confidence region incorrectly.
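A small R sketch of this comparison, under simplifying assumptions (known $\sigma^2 = 1$ and two orthogonal standardized regressors, so the joint region is a disc): it checks empirically that the box formed by the two marginal 95% intervals does not have 95% joint coverage.

set.seed(42)
nsim <- 5000; n <- 100
X <- qr.Q(qr(matrix(rnorm(n * 2), n, 2))) * sqrt(n)   # orthogonal columns, X'X = n I
beta <- c(0, 0); sigma <- 1

inside_box <- inside_ellipse <- logical(nsim)
for (i in 1:nsim) {
    y <- X %*% beta + rnorm(n, sd = sigma)
    bhat <- solve(crossprod(X), crossprod(X, y))      # least-squares estimate
    q <- crossprod(bhat - beta, crossprod(X) %*% (bhat - beta)) / sigma^2
    inside_ellipse[i] <- q <= qchisq(0.95, df = 2)    # joint chi-square region
    z <- sqrt(n) * (bhat - beta) / sigma              # per-coefficient z statistics
    inside_box[i] <- all(abs(z) <= qnorm(0.975))      # both marginal 95% intervals
}
mean(inside_ellipse)   # close to 0.95
mean(inside_box)       # close to 0.95^2 = 0.9025, not 0.95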
45,149
Confidence set for parameter vector in linear regression
To supplement: if $X\in \mathbb{R}^{N\times (p+1)}$ and $\hat\beta$ is the LS estimate of $\beta$ in the linear regression model $Y=X\beta+\epsilon$ with $\epsilon\sim\mathcal{N}(0,\sigma^2 I)$, then $$ \frac{(\hat{\beta}-\beta)^TX^TX(\hat{\beta}-\beta)}{\hat{\sigma}^2}\sim \chi_{p+1}^2 $$ holds asymptotically as $N\to+\infty$. To see this, we first have \begin{align} (\hat{\beta}-\beta)^TX^TX(\hat{\beta}-\beta)\sim & \ \sigma^2\chi^2_{p+1}\quad\mbox{(from $\hat\beta\sim\mathcal{N}(\beta, \sigma^2(X^TX)^{-1})$)}\\ (N-p-1)\hat{\sigma}^2\sim & \ \sigma^2\chi^2_{N-p-1} \end{align} which gives $$ \frac{(\hat{\beta}-\beta)^TX^TX(\hat{\beta}-\beta)}{(p+1)\hat{\sigma}^2}\sim F_{p+1,N-p-1}. $$ On the other hand, one can prove that if $S\sim F_{m,n}$, then $mS\to\chi_m^2$ in distribution as $n\to+\infty$, by directly computing the limit of the PDF of $mS$, with the help of the relation between the gamma function and the beta function, and Stirling's formula. With this claim, we have $$ \frac{(\hat{\beta}-\beta)^TX^TX(\hat{\beta}-\beta)}{\hat{\sigma}^2}\sim \chi_{p+1}^2\quad(N\to+\infty). $$
45,150
Confidence set for parameter vector in linear regression
Because (under the assumption made in the text that the errors are Gaussian with mean zero and variance $\sigma^2$) the vector $\hat\beta-\beta$ is multivariate normal with mean zero and covariance matrix $(X^T X)^{-1}\sigma^2$, the squared Mahalanobis distance of $\hat\beta$ from $\beta$ is (by definition) $$\frac{1}{\sigma^2}(\hat\beta-\beta)^T(X^T X)(\hat\beta-\beta),$$ and as per https://en.wikipedia.org/wiki/Mahalanobis_distance#Normal_distributions, that random variable is $\chi^2$ with $p+1$ degrees of freedom, where $p+1$ is the number of dimensions of $\beta$. So $(\hat\beta-\beta)^T(X^T X)(\hat\beta-\beta)$ is distributed like $\sigma^2 \chi_{p+1}^2$. Thus, under the assumptions in the text, there is a concise answer to the original question.
45,151
Scripts for Mixed-Effects Models in S and S-PLUS
> library(nlme)
> system.file("scripts", package = "nlme")
[1] "/Library/Frameworks/R.framework/Versions/2.14/Resources/library/nlme/scripts"
> list.files(system.file("scripts", package = "nlme"))
[1] "ch01.R"   "ch02.R"   "ch03.R"   "ch04.R"   "ch05.R"   "ch06.R"   "ch08.R"
[8] "sims.rda"
> file.show(system.file("scripts", "ch01.R", package = "nlme"))

Magic!
45,152
Scripts for Mixed-Effects Models in S and S-PLUS
They're in the package itself, in the inst/scripts directory. I'm sure it's possible to get at it from an existing installation but I'm not sure how; I downloaded the source package and looked inside.
45,153
Sampling random numbers from a distribution with asymmetric confidence intervals generated by a bootstrapped estimate
It seems to me that your question is ill-posed (as Aniko pointed out already). If you know the mean, your uncertainty about it is zero, so the confidence interval should have zero length. Assuming that you somehow have the mean right, in whatever sense is suitable for you, and a confidence interval that comes from another source, you can reverse-engineer Johnson's (1978) procedure to come up with a measure of skewness of the original distribution (see also Chen's (1995) extension), and then pick, say, a skew-normal or a (shifted, if needed) gamma distribution with the required properties.

UPDATE: Let us look at Johnson's (1978) formula (2.7) for the confidence interval: $$\bar x + \frac{\kappa}{6 s^2 n} \pm t_{\alpha}(n-1)\,\frac{s}{\sqrt{n}},$$ where I refer to the skewness of the original distribution as $\kappa$. If you are given the mean xbar, the lower limit cl, the upper limit cu, and the sample size n (we'd have to assume i.i.d. data there), then

talpha <- qt(p = 0.975, df = n - 1)
s      <- (cu - cl) * sqrt(n) / (2 * talpha)   # back out the standard deviation
kappa  <- 6 * s * s * n * (cl - xbar + talpha * s / sqrt(n))  # solve (2.7) for kappa
gamma.shape <- 4 / (kappa * kappa)
gamma.scale <- s / sqrt(gamma.shape)
gamma.shift <- xbar - gamma.shape * gamma.scale
simulated.data <- rgamma(n = simulated.n, shape = gamma.shape,
                         scale = gamma.scale) + gamma.shift

See if it produces reasonable results. I like the skew-normal distribution better, as the normal distribution, a standard reference, can be produced with skew.normal.shape = 0 rather than gamma.shape = infinity in the gamma case, but the computations are more cumbersome.
45,154
Sampling random numbers from a distribution with asymmetric confidence intervals generated by a bootstrapped estimate
I don't have a full answer for you, but several issues are worth pointing out:

- You cannot draw random values from a confidence interval of a parameter, because it is a frequentist concept, and parameters do not have distributions in frequentist statistics. The most you can do is to try to sample from the sampling distribution of the parameter estimate.
- If you want to bootstrap a meta-analysis, bootstrap the studies that go into it.
- Most asymmetric confidence intervals are symmetric on some other scale, typically the log scale. For example, the typical confidence interval for an odds ratio (OR) is constructed as symmetric for log(OR) and then exponentiated. So I would certainly check whether a log transform would make the confidence interval symmetric.
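As a toy illustration of the last point (the numbers are made up), an OR interval that is asymmetric on the raw scale can be exactly symmetric on the log scale:

or <- 2.0
ci <- c(1.25, 3.20)
ci - or               # -0.75  1.20 : asymmetric around the estimate
log(ci) - log(or)     # -0.47  0.47 : symmetric on the log scale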
45,155
Sampling random numbers from a distribution with asymmetric confidence intervals generated by a bootstrapped estimate
Here is a half-baked idea for trying to avoid the "impossible" problem of sampling from a confidence interval. If you want to do a bootstrap analysis of those meta-analyses, you should be bootstrapping the result of each meta-analysis, not its potential results. The only problem is that those meta-analyses have different precision, so you probably want them to have different weights in the bootstrap sample. The weight might depend on the total number of subjects that went into each meta-analysis, or the width of the confidence interval (which are, of course, related). In many similar situations inverse-variance weighting turns out to be optimal, so I think weighting by sample size or by the inverse square of the width of the confidence interval would be reasonable choices. Of course, for the confidence interval width it would be better to have a scale on which the interval is somewhat symmetric, but for ratios the log transform would (approximately) do that, even for a bootstrap interval. Again, I am not sure how well this would work - you might want to run some simulation studies, but it might be a more straightforward approach.
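A minimal sketch of that weighting idea (the estimates and limits below are toy numbers, and the log scale is used per the earlier remark about ratio intervals):

est   <- c(1.8, 2.1, 1.4, 2.6)                # one ratio estimate per meta-analysis
ci.lo <- c(1.2, 1.5, 0.9, 1.3)                # lower 95% limits
ci.hi <- c(2.7, 2.9, 2.2, 5.2)                # upper 95% limits
w <- 1 / (log(ci.hi) - log(ci.lo))^2          # inverse squared CI width, log scale
boot.means <- replicate(10000, {
  idx <- sample(seq_along(est), replace = TRUE, prob = w / sum(w))
  mean(log(est[idx]))
})
exp(quantile(boot.means, c(0.025, 0.975)))    # back-transformed bootstrap interval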
45,156
Sampling random numbers from a distribution with asymmetric confidence intervals generated by a bootstrapped estimate
Unfortunately, there are an infinite number of distributions that could have resulted in the confidence interval given. One option to generate from one of those distributions would be to generate a random uniform: if it is below 0.025 (assuming a 95% confidence interval), choose a value slightly less than the lower confidence limit; if it is higher than 0.975, choose a value slightly higher than the upper confidence limit; otherwise, choose a uniform between the two confidence limits. Something a little more realistic would be to use the logspline package in R. The oldlogspline function allows you to specify data as interval-censored, so you could specify that 5 points came from below the lower confidence limit, 5 points came from above the upper confidence limit, and 190 points came from between the two confidence limits. This would then give a smooth curve with approximately the confidence limits that you have; you could then change some of the 190 points from interval-censored to actual values close to the mean to get the mean and asymmetry, and tweak the values of those points until the mean and quantiles are close enough. Then the roldlogspline function will generate data from the distribution that you created. It probably will not be the exact distribution that generated the bootstrap mean and interval, but it is one that could have, and it would have good properties.
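A hedged sketch of this recipe, assuming the oldlogspline()/roldlogspline() interface described above (the limits and the 5/190/5 split are illustrative, mimicking the tails of a notional sample of 200):

library(logspline)
cl <- 1.2; cu <- 3.4                               # illustrative 95% limits
fit <- oldlogspline(left     = rep(cl, 5),         # 5 values known only to lie below cl
                    right    = rep(cu, 5),         # 5 values known only to lie above cu
                    interval = cbind(rep(cl, 190), # 190 values censored into [cl, cu]
                                     rep(cu, 190)))
sim <- roldlogspline(1000, fit)                    # draw from the fitted density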
45,157
Is it valid to select a model based upon AUC?
AUROC is one of many ways of evaluating a model -- in fact, it judges how good a ranking (or "sureness" measure) your method produces. Whether to use it rather than precision-recall, simple accuracy, or the F-measure depends only on the particular application. Model selection is a problematic issue on its own -- generally you should use the score you believe fits the application best, and take care that your selection is significant (usually it is not, and other factors may matter, even computational time). About AUC in R -- I see you use ROCR, which makes nice plots but is also terribly bloated, thus slow and difficult to integrate. Try colAUC from the caTools package -- it is rocket fast and trivial to use. Oh, and bigger AUC is better.
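Minimal colAUC usage, with simulated scores and labels standing in for real model output:

library(caTools)
set.seed(1)
scores <- runif(200)                  # predicted "sureness" values
labels <- rbinom(200, 1, scores)      # binary ground truth
colAUC(scores, labels)                # also accepts a matrix, one column per model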
45,158
Is it valid to select a model based upon AUC?
As mbq wrote, the answer to whether you should use AUC depends on what you are trying to do. Two points are worth considering:

- AUROC is insensitive to changes in class distribution. It places even emphasis on the different classes, which means it can poorly reflect an algorithm's performance if there is a big imbalance in the distribution of classes. On the other hand, if you are more interested in identifying characteristics of the classes rather than their prevalence, this is a strength.
- AUROC does not capture the different costs of different outcomes, and it is seldom the case that you care equally about false positives and false negatives.

I find AUROC sensible. The curves are easy to read: they are like an intuitive version of a confusion matrix. But it is important to know what we're reading and what's left off. See also: Evaluating and combining methods based on ROC and PR curves.
45,159
Is it valid to select a model based upon AUC?
As you are using ROCR, you can get the point of the ROC curve that maximizes the rectangle under it and use this to determine the corresponding threshold:

my_prediction <- predict.gbm(object = gbm_mod, newdata = X, 100)
pred <- prediction(my_prediction, Y)
perf <- performance(pred, 'tpr', 'fpr')
tpr   <- unlist(perf@y.values)
fpr   <- unlist(perf@x.values)
alpha <- unlist(perf@alpha.values)   # the cutoffs, aligned elementwise with tpr/fpr
r <- tpr * (1 - fpr)                 # area of the rectangle at each cutoff
threshold <- alpha[which.max(r)]

You can think of this optimization simply as finding the point that makes the largest possible rectangle under the ROC curve.
45,160
Reference or book on simulation of experimental design data in R
Statistical Models in S, by Chambers and Hastie (Chapman and Hall, 1991; the so-called White Book), and to a lesser extent Modern Applied Statistics with S, by Venables and Ripley (Springer, 2002, 4th ed.), include some material about DoE and the analysis of common designs in S and R. Vikneswaran wrote An R companion to "Experimental Design"; it is not very complete (IMHO), but there are a lot of other textbooks in the Contributed section on CRAN that might help you get started. Apart from textbooks, the CRAN Task View on Design of Experiments (DoE) & Analysis of Experimental Data lists some good packages that ease the creation and analysis of various experimental designs; I can think of dae, agricolae, or AlgDesign (which comes with a nice vignette), to name a few.
45,161
Reference or book on simulation of experimental design data in R
I have the feeling that the fairly recent "Introduction to Scientific Programming and Simulation Using R" by Owen Jones, Robert Maillardet, and Andrew Robinson (2009) could be what you are looking for. There is also a very positive review of it in the Journal of Statistical Software. Although this book is not specifically targeted at simulating experimental data, it will probably get you in the direction you want to go.
45,162
Reference or book on simulation of experimental design data in R
Here is an example of some code that I wrote for this purpose. The experimental design is: four levels of nitrogen with six replicates at each level. These data could be tested using a one-way ANOVA, but as the levels are continuous, I tested the fit of different curves.

set.seed(1)

### Below is a set of practice data
## 1. four levels of Nitrogen: 0, 1, 4, 10
N <- c(rep(0, 6), rep(1, 6), rep(4, 6), rep(10, 6))

## 2. common standard deviation of the simulated errors
s <- 2

## 3. Data simulated to provide examples of the
##    various hypothesized responses of Y to N
## 3.1 asymptotic increase, Y = 10*N/(1+N) + 10
asym <- c(rnorm(6, 10, s), rnorm(6, 15, s), rnorm(6, 18, s), rnorm(6, 19, s))
## 3.2 Y = 0*N + 10, the null model
m0 <- c(rnorm(24, 10, s))
## 3.3 Y = 0.2*N + 10, a shallow slope
m1 <- c(rnorm(6, 10, s), rnorm(6, 10.2, s), rnorm(6, 10.8, s), rnorm(6, 12, s))
## 3.4 Y = 1*N + 10, a steeper slope
m4 <- c(rnorm(6, 10, s), rnorm(6, 14, s), rnorm(6, 26, s), rnorm(6, 50, s))
## 3.5 Y = 4*log10(N) + 10, a log-linear response
lm4 <- c(rnorm(6, 10, s), rnorm(6, 12.4, s), rnorm(6, 15.6, s), rnorm(6, 18.2, s))
## 3.6 'Hump' with max at N = 1 g m-2 yr
hump <- c(rnorm(6, 10, s), rnorm(6, 20, s), rnorm(6, 9, s), rnorm(6, 8, s))

## A function to plot each data set and compare the fit of five models by BIC:
fn.BIC.lmnls <- function(x, y, shape) {
  plot(x, y, main = shape)                      # show the simulated response
  foo.null  <- lm(y ~ 1)                        # intercept-only null model
  foo.poly1 <- lm(y ~ x)                        # linear
  foo.poly2 <- lm(y ~ x + I(x^2))               # quadratic
  foo.poly3 <- lm(y ~ x + I(x^2) + I(x^3))      # cubic
  foo.mm <- nls(y ~ (a * x) / (b + x), start = list(a = 1, b = 1))  # Michaelis-Menten
  bic <- BIC(foo.null, foo.poly1, foo.poly2, foo.poly3, foo.mm)
  print(bic)
  return(bic)
}

### now, plot data and print BIC values for each of the data sets
par(mfrow = c(3, 2))
fn.BIC.lmnls(N, m0,   "Y = 0*N + 10")
fn.BIC.lmnls(N, m1,   "Y = 0.2*N + 10")
fn.BIC.lmnls(N, m4,   "Y = 1*N + 10")
fn.BIC.lmnls(N, lm4,  "Y = 4*log10(N) + 10")
fn.BIC.lmnls(N, asym, "Y = 10 + 10*N/(1+N)")  # Y = 20*N/(5+N)
fn.BIC.lmnls(N, hump, "Y = [10, 20, 9, 8]")
45,163
Implementing the 'kernel trick' for a support vector machine in R
Basically anything that is not separable with a line (ok, hyperplane) -- for instance, 2D data where one class surrounds the other. The kernel trick will effectively project such a situation into a (higher-dimensional) space in which linear separation is possible; see this movie for the effect of a Gaussian kernel on such data. Look for a kernel argument in your svm function ;-) Note that using a kernel usually introduces new parameters to the outer optimization.
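A minimal sketch with e1071 (one common svm() implementation in R); the ring-shaped classes and the gamma/cost values are illustrative, and the last comment shows the "new parameters" at work:

library(e1071)
set.seed(1)
x <- matrix(rnorm(200 * 2), ncol = 2)
y <- factor(rowSums(x^2) > 1.5)            # classes separated by a circle, not a line
fit.lin <- svm(x, y, kernel = "linear")
fit.rbf <- svm(x, y, kernel = "radial", gamma = 1, cost = 1)  # extra tuning parameters
mean(predict(fit.lin, x) == y)             # the linear kernel struggles here
mean(predict(fit.rbf, x) == y)             # the radial kernel separates the rings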
45,164
Implementing the 'kernel trick' for a support vector machine in R
You should take a look at the kernlab R package. They even have a very nice vignette.
45,165
How to visualize/summarize a matrix with number of rows $\gg$ number of columns?
Find one-dimensional multidimensional scaling solutions for the rows and for the columns (separately), using whatever similarity measures you like (such as correlation). Sort the rows and columns according to their MDS positions. This will bring similar genes together and similar samples together. The whole thing can then easily be visualized as an array plot (e.g., normalize the values to the range 0..255 and display it as a grayscale image). A 50 by 6 array of standard normal variates was processed in this way, using Euclidean distances as the proximity measures. There's not much to see in the reordered array plot--after all, these data are iid--but look at the correlation matrices of the reordered columns and rows (red = positive, blue = negative): the concentrations of positive correlations along the diagonals and negative correlations off the diagonals demonstrate the method has worked as advertised. (With the original data, the correlation matrices are random, too, causing the red and blue cells to be more evenly interspersed throughout.) In my experience, when there are even subtle nonzero correlations among rows and/or columns, this method does an excellent job of bringing them out in the grayscale array plot and providing a visual display of clustering along both dimensions. Larger blocks along the diagonals of the corresponding correlation matrix plots help identify strongly clustered groups of rows or columns.
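A minimal sketch of this seriation recipe in base R (the 50 by 6 matrix is illustrative, and Euclidean distances stand in for whatever proximity measure you prefer):

set.seed(1)
X <- matrix(rnorm(50 * 6), nrow = 50)    # genes-by-samples matrix
row.pos <- cmdscale(dist(X),    k = 1)   # 1-D MDS of the rows
col.pos <- cmdscale(dist(t(X)), k = 1)   # 1-D MDS of the columns
Xs <- X[order(row.pos), order(col.pos)]  # reorder both dimensions
image(t(Xs), axes = FALSE)               # array plot of the reordered matrix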
45,166
How to visualize/summarize a matrix with number of rows $\gg$ number of columns?
I was about to suggest something along @whuber's answer (I used this reordering technique, but in a context of feature selection, so I was mainly concerned with the "variables view"). So let me suggest two other directions (the first one is close to the already proposed one). As far as heatmaps are concerned, you can display them after a slight rearrangement of rows (samples) and/or columns (genes) through hierarchical clustering (yet another aggregation method based on a (dis)similarity measure). There are a lot of R packages that can do this, for example the cim() function in mixOmics. Another package that might be of interest is MADE4; it relies on the very good ade4 package for multivariate data analysis and visualization. If you are concerned with the rather large number of variables, you might also consider some reduction method for gene clustering. One that I've heard about is PCA gene shaving (Hastie et al., 2000), which is described at length in Izenman (2008). In essence, this is a two-stage iterative procedure where (a) for feature selection, we single out genes whose correlation with the first principal component is below a distribution-based threshold (say, the 10% of genes having the lowest correlation at each step), and (b) for clustering, we seek to maximize an $R^2$ measure (between-cluster/within-cluster variances) for $j$ successive clusters of size $k_j$, where the optimal $k_j$ is obtained by a permutation technique and the use of the gap statistic (after the effects of the former cluster have been removed by residualization). More detailed information can be found in the paper referenced below, but the general idea is to cluster genes into small and potentially overlapping subsets of correlated genes that vary as much as possible across individuals. References Hastie, T., Tibshirani, R., Eisen, M.B., Alizadeh, A., Levy, R., Staudt, L., Chan, W.C., Botstein, D., and Brown, P.O. (2000). 'Gene shaving' as a method for identifying distinct sets of genes with similar expression patterns. Genome Biology, 1(2). Izenman, A.J. (2008). Modern Multivariate Statistical Techniques. Springer.
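As a base-R alternative sketch of the first idea, stats::heatmap() reorders rows and columns by hierarchical clustering before drawing the array plot (the matrix below is illustrative):

X <- matrix(rnorm(50 * 6), nrow = 50)    # genes-by-samples matrix
heatmap(X, scale = "none", distfun = dist, hclustfun = hclust)  # clustered heatmap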
45,167
Is there an anova procedure that doesn't assume equal variance?
The latest version of ez lets you pass a white.adjust argument to car::Anova(), which implements a correction for heteroscedasticity. See ?car::Anova() for details.
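A minimal usage sketch (the toy data are illustrative; "hc3" is one of the adjustments white.adjust accepts):

library(car)
set.seed(1)
d <- data.frame(g = gl(3, 30))
d$y <- rnorm(90, mean = as.numeric(d$g), sd = as.numeric(d$g))  # unequal variances
fit <- lm(y ~ g, data = d)
Anova(fit, white.adjust = "hc3")   # heteroscedasticity-consistent F test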
45,168
Is there an anova procedure that doesn't assume equal variance?
There is a function named oneway.test() in the base stats package, which implements the Welch correction for a one-way ANOVA. Its use is similar to the standard t.test() function. It is also referred to as the O'Brien transformation (Biometrics 40 (1984), 1079--1087) and can be applied with two or more independent samples.
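Minimal usage on the same kind of toy data (g a grouping factor, y a numeric response):

set.seed(1)
g <- gl(3, 30)
y <- rnorm(90, mean = as.numeric(g), sd = as.numeric(g))  # unequal group variances
oneway.test(y ~ g, var.equal = FALSE)   # Welch-corrected one-way ANOVA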
45,169
Regression selection using all possible subsets selection and automatic selection techniques
For the second part, you must interpret the output as the steps towards your final model. For example, in the forward case you begin with

Start:  AIC=377.95
cars$MidrangePrice ~ 1

                   Df Sum of Sq    RSS    AIC
+ cars$Horsepower   1    4979.3 3054.9 300.66
+ cars$Wheelbase    1    3172.3 4862.0 338.76
+ cars$Length       1    2448.8 5585.4 350.14
+ cars$Width        1    1969.2 6065.0 356.89
+ cars$Uturn        1    1450.2 6584.0 363.63
+ cars$Luggage      1    1079.6 6954.7 368.12
<none>                          8034.2 377.95

Your current model considers only the constant, cars$MidrangePrice ~ 1. Each row in the table indicates that, in case you add that variable (for example, Horsepower), you will get the stated results regarding the RSS (residual sum of squares) and AIC (Akaike information criterion). In the other cases you must read the results the same way. Hope this helps :)
45,170
Regression selection using all possible subsets selection and automatic selection techniques
Stepwise regression in the absence of penalization is fraught with so many difficulties that I'm surprised people are still using it. The web has long lists of problems, starting with the extremely low probability of finding the "right" model.
45,171
Statistical validation of RandomForest models
I like the idea of parsimony: the smaller the number of variables in the model, the better -- unless you are driven theoretically, of course. Feature selection refers to the process of choosing which variables to use in the model (getting the best combination of variables). There are lots of different options for feature selection (worth a read). With that said, the rf algorithm has a built-in variable importance measure that you can generate as a starting point (but be very careful with this, because there are noted biases in it -- see Strobl et al. in the R Journal). I trust you have varied the number of variables randomly sampled at each node (this is mtry in R), the depth of the trees, the splitting criteria, etc.

In terms of appearance, the second model looks slightly better to me, simply because of the reproduced accuracy in the test and train results. It always concerns me that if my test-set accuracy is notably lower, there may be something wrong with the model. I trust you have made sure that your test and train sets are balanced, at least on the dependent variable you are looking to classify. If this is binary (0, 1), your models are not really doing much better than chance (50/50). A very important thing to look at is the sensitivity (the proportion of true positives that are correctly classified) and the specificity (the proportion of true negatives that are correctly classified).

If possible, I would compare this model against other machine learning algorithms such as boosted trees, support vector machines (which do OK on gene data), etc. I am not sure what package you are using -- hope that helps. If you are using R, look up caret on CRAN (a really good intro to some of the ideas here, and great for getting out some alternative measures of performance). Paul D
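A hedged sketch of these checks with the randomForest and caret packages (train_x, train_y, test_x, test_y are illustrative names for your own split):

library(randomForest)
library(caret)
rf_fit <- randomForest(x = train_x, y = train_y, importance = TRUE)
## tuneRF(train_x, train_y) can help pick mtry, the variables tried per split
varImpPlot(rf_fit)   # variable importance as a feature-selection starting point
                     # (interpret with Strobl et al.'s caveats in mind)
confusionMatrix(predict(rf_fit, test_x), test_y)  # reports sensitivity/specificity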
45,172
Statistical validation of RandomForest models
It just seems those two variants are equivalent; still, a better test should be made to confirm this, at least cross-validation. Also, if the NF and HF sets have some attributes in common, it may suggest that only this common part is useful -- I would invest some time in feature selection.
45,173
Binary or Multinomial Logistic Regression?
Binary or Multinomial: Perhaps the following rules will simplify the choice: If you have only two levels to your dependent variable, then you use binary logistic regression. If you have three or more unordered levels to your dependent variable, then you'd look at multinomial logistic regression.

A few points: Satisfaction with sexual needs ranges from 4 to 16 (i.e., 13 distinct values). Such a variable is typically treated as a metric predictor (i.e., it goes in the covariate box in SPSS). Possibly your dependent variable is causing some confusion because, as you phrase it, it is not a standard dichotomy. It sounds like a frequency item that could range from never, to occasionally, to sometimes, to often, to always, etc. However, I'm guessing that either you have explicitly collapsed categories or you have required the respondent to implicitly collapse the categories down to a binary choice. As a side note, if you did have an ordered set of frequency categories, then you might want to use a model that incorporates that order.

SPSS: I posted some links to tutorials in SPSS and R for conducting binary logistic regression.
45,174
Binary or Multinomial Logistic Regression?
If you're collapsing the response and it had more values in its range, such as "frequently" and "always", then you should actually be doing ordinal regression. The ordinal package in R is quite nice for this.
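A minimal sketch with the ordinal package, assuming a data frame dat with a frequency response freq and a predictor satisfaction (names hypothetical):

library(ordinal)

# The response must be an ordered factor
dat$freq <- factor(dat$freq,
                   levels = c("never", "occasionally", "sometimes",
                              "frequently", "always"),
                   ordered = TRUE)

fit <- clm(freq ~ satisfaction, data = dat)  # cumulative link (logit) model
summary(fit)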
45,175
Start time requirements or assumptions for survival analysis
The starting time of the study is immaterial: it's just an origin for the clock. What you want to consider are the states in which the subjects can be found and the ages at which they transition from one to another. In this situation a minimum set of states would be

[Born]: "Born with gene." This always happens at age 0, of course.
[Enrolled]: "Enrolled in study."
[Endpoint]: "Cardiovascular event identified." (This framework allows multiple "endpoint" states to be modeled.)

The multistate analysis supposes there is a transition probability from some of these states to others. The relevant ones would be

[Born] --> Death. These account for people who never enrolled.
[Born] --> [Endpoint]. Are you considering these people? Are they even allowed into the study?
[Born] --> [Enrolled]. These are all the people selected for the study (who haven't died and don't already exhibit the cardiovascular disease).
[Enrolled] --> [Endpoint]. These are people in the study diagnosed with a cardiovascular disease.
[Enrolled] --> Death. These people died in the study without a diagnosis of cardiovascular disease.

The Nelson-Aalen estimator can be generalized to estimate the rates of these transitions. It's a simple estimator, summing the ratios of events occurring to the numbers of people at risk for them to occur. The conclusion of the recent TAS article "Two Pitfalls in Survival Analyses of Time-Dependent Exposure" is that if you get your multistate model wrong, you will miscount the number of people at risk in various states, and that will bias the results. Its message is clear: get the multistate model right. If the study truly is prospective--that is, if you identify people with the gene at birth and follow them--then there is no question about the right model. Similarly, if enrollment in the study is independent of the presence of the gene, there will be no bias. Otherwise, this framework calls out for incorporating the study selection probabilities into the model and shows how to account for deaths and prior disease before enrollment was possible.

This paper also illustrates a nice tool for analyzing these subtleties: the Lexis diagram. (Look at the figures at the end of this rather technical paper.) I believe these diagrams can be produced with the Epi package in R. You might find them helpful for having discussions with your colleagues about the appropriate model to adopt. ASA members and people with university library privileges probably already have online access to this article: it's worth reading.
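As a rough sketch of the Lexis-diagram suggestion -- everything here (the registry data frame and its columns doe, dox, dob as numeric calendar years, and event as a 0/1 indicator) is an illustrative assumption:

library(Epi)

# Hypothetical registry: date of entry (doe), date of exit (dox),
# date of birth (dob), event = 1 for a cardiovascular event, 0 censored
lex <- Lexis(entry = list(per = doe, age = doe - dob),
             exit  = list(per = dox),
             exit.status = event,
             data  = registry)

plot(lex)   # one life-line per subject in the calendar-time x age plane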
45,176
Start time requirements or assumptions for survival analysis
You need to be careful to distinguish two different "start times" in studies such as this:

1. The origin for the time variable, i.e. the point which you're calling t = 0 for each participant.
2. The time at which an individual enters the study, i.e. the time from which you would record an event if one happened.

In the simple cases first taught in survival analysis, these times are assumed to be the same. For long-term cohort studies, it's usually much better to allow them to differ. The most suitable time origin for cohort studies of chronic diseases (such as cardiovascular disease here) is usually date of birth, as Srikant suggests above. That's because for chronic diseases the baseline hazard varies strongly with age. But (unless it's a birth cohort) individuals don't enter the study when they are born. If they have an event before they enter the study you wouldn't record it, and that could cause bias if you don't handle it properly by distinguishing entry time from time origin. This is sometimes called delayed entry.
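In R's survival package, delayed entry is handled with the counting-process form of Surv(); a sketch, assuming hypothetical columns entry_age, exit_age, event, and gene in a data frame registry:

library(survival)

# Age is the time scale; each subject is "at risk" in the model only
# between their age at enrollment and their age at event/censoring.
fit <- coxph(Surv(entry_age, exit_age, event) ~ gene, data = registry)
summary(fit)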
45,177
Start time requirements or assumptions for survival analysis
Since the individuals were born with the 'genetic change', I would use their birth as the starting time instead of the time at which they entered the registry. The following is my reasoning:

First, ignore the effect of other variables on survival such as gender, income levels, exercise levels, etc. For the sake of illustration we will assume that these variables have no differential impact on the time at which a cardio event occurs.

Second, I am assuming that you wish to investigate the following question: does the difference in gene types (broadly speaking) result in a differential impact on the time it takes for a person to have a cardio event? I suppose there is an underlying theory which states that the answer to the above question is 'yes'.

Now, consider two individuals, both of whom had the gene change. One enters the registry just before a cardio event, and the other has a cardio event several years after entering the registry. However, for the sake of discussion, assume that the ages of both people are the same. Thus, the time to cardio event is technically the same (their age). However, if you use the starting time as the time at which they enter the registry, you would draw different conclusions about how long it takes for the cardio event to happen, which would bias your conclusions.

I am assuming that my interpretation of your goals and the situation is correct. Please correct me if I am wrong about some aspect.
45,178
Start time requirements or assumptions for survival analysis
@Srikant, you lost me on your last comment, but I agree with whuber that by using birth as the start you are distorting your sample, as it doesn't take into account people with that gene change who already had the event or died from it. On top of that, once people enter the registry there is a chance that they change their behaviour to compensate for the higher risk. I suggest you use entry into the registry as the start time, with age at entry as a covariate, or age as a time-dependent covariate. Let me know how you go.
45,179
Conditional Expectation of Product of Normals given a Linear Combination
Comment on your attempt: the idea looks great but unfortunately $\xi - \eta \perp \xi + \eta$ of course does not imply $\xi - 2\eta \perp \xi + 2\eta$. However, the "product-to-sum" identity $\xi\eta = \frac{1}{8}\left[(\xi + 2\eta)^2 - (\xi - 2\eta)^2\right]$ is still very useful in approaching this problem. From there, you only need to apply some very basic properties of conditional expectation and the multivariate normal distribution to get the job done. By the linearity and the "pulling out known factors" property of conditional expectation, \begin{align} E[\xi\eta|\xi - 2\eta] = \frac{1}{8}E[(\xi + 2\eta)^2|\xi - 2\eta] - \frac{1}{8}(\xi - 2\eta)^2. \tag{1} \end{align} So it remains to evaluate $E[(\xi + 2\eta)^2|\xi - 2\eta]$, which is tractable thanks to $(\xi, \eta) \sim N_2(0, I_{(2)})$. Because of this, it follows by the affine transformation property of the multivariate normal distribution that \begin{align} \begin{bmatrix} \xi + 2\eta \\ \xi - 2\eta \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 1 & -2 \end{bmatrix} \begin{bmatrix} \xi \\ \eta \end{bmatrix} \sim N_2\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 5 & -3 \\ -3 & 5 \end{bmatrix} \right), \end{align} which implies, by the conditional distribution of the multivariate normal distribution, that \begin{align} & E[\xi + 2\eta | \xi - 2\eta] = -\frac{3}{5}(\xi - 2\eta), \\ & \operatorname{Var}(\xi + 2\eta | \xi - 2\eta) = 5 - 9 \times \frac{1}{5} = \frac{16}{5}, \end{align} whence \begin{align} E[(\xi + 2\eta)^2|\xi - 2\eta] &= \operatorname{Var}(\xi + 2\eta | \xi - 2\eta) + (E[\xi + 2\eta | \xi - 2\eta])^2 \\ &= \frac{16}{5} + \frac{9}{25}(\xi - 2\eta)^2. \tag{2} \end{align} Substituting $(2)$ into $(1)$ gives \begin{align} E[\xi\eta|\xi - 2\eta] = \frac{2}{5} + \frac{9}{200}(\xi - 2\eta)^2 - \frac{1}{8}(\xi - 2\eta)^2 = \frac{2}{5} - \frac{2}{25}(\xi - 2\eta)^2. \end{align} Now, to get the hang of the key operations in solving this problem, try re-solving it using the decomposition \begin{align} \xi\eta = (\xi - 2\eta + 2\eta)\eta = \eta(\xi - 2\eta) + 2\eta^2. \end{align}
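As a quick hedge against sign slips, the closed form can be checked numerically in R: since $E[\xi\eta \mid \xi - 2\eta]$ depends on $W = \xi - 2\eta$ only through $W^2$ and is exactly linear in it, regressing $\xi\eta$ on $W^2$ should recover an intercept near $2/5$ and a slope near $-2/25$:

set.seed(1)
n   <- 1e6
xi  <- rnorm(n)
eta <- rnorm(n)
W2  <- (xi - 2 * eta)^2

# E[xi*eta | W] = 2/5 - (2/25) W^2, so the coefficients should come out
# close to (0.40, -0.08)
coef(lm(xi * eta ~ W2))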
45,180
Wildly different answers replicating a GEE model from SPSS
They're less wildly different once you correct for the different contrasts the two programs use. SPSS has 1 as the reference level of the two variables and statsmodels has 0. Here are the fitted values for the four combinations of the two binary variables:

          statsmodels     SPSS
neither       -0.0284   -0.024
white         -0.0134   -0.021
right         -0.0156   -0.015
both           0.1117    0.115

That's still more different than I'd expect, and it's a bad sign that the statsmodels estimate hasn't converged. So I ran the model with two different R implementations (gee and geeM). They also give different answers, but more importantly they agree there's an estimation problem. The working correlation parameter is trying to be more negative than is possible given the cluster size, giving a non-positive-definite working correlation matrix. (I note that neither your SPSS nor statsmodels output shows the estimated working correlation.)

So, I think neither result is really reliable for this dataset, and the exchangeable working correlation model isn't stable. If the estimates haven't converged in 1000 iterations, they aren't going to (and looking at the R version, they aren't showing any signs of converging). I would suggest falling back to working independence. For working independence, SPSS and R give the same answers (I didn't check statsmodels). There's some potential efficiency gain from the exchangeable working correlation, but not if the correlation parameter can't be estimated reliably.
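A sketch of the working-independence fallback with the gee package mentioned above; the formula, cluster id, and data frame names are hypothetical stand-ins for the poster's setup:

library(gee)

# Working independence: coefficient estimates remain consistent, and the
# robust (sandwich) standard errors still account for clustering.
fit <- gee(y ~ white * right, id = cluster, data = dat,
           family = gaussian, corstr = "independence")
summary(fit)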
45,181
Smoothing spline seems to fit too precisely?
More specifically, you are right that something is being done to select the smoothing parameters for the spline, and that by default this is GCV. It is known (from the spline/GAM literature) that GCV can undersmooth, and I believe this is what you are seeing here. Choosing another method, such as REML smoothness selection, leads to a more reasonable fit:

# reusing objects from your post
m_reml <- ss(x, y, method = "REML")
p_reml <- predict(m_reml, x)$y

plot(x, y)
lines(x, pred_y, col = "red", lwd = 2)
lines(x, p_reml, col = "blue", lwd = 2)

which produces a visibly smoother curve (figure not reproduced here). As you have a variable that is ordered in time, there is the additional complication that if there is some signal contaminated by autoregressive noise, this violates the assumptions used to select the smoothing parameters and can lead you to overfit. One option in that case would be to model the autocorrelated noise:

library("mgcv")
df <- data.frame(y = y, x = x)
m_gam <- gamm(y ~ s(x, k = 6), data = df, method = "REML",
              correlation = corAR1(form = ~ x))
p_gam <- predict(m_gam$gam, newdata = df)

plot(x, y)
lines(x, pred_y, col = "red", lwd = 2)
lines(x, p_reml, col = "blue", lwd = 2)
lines(x, p_gam, col = "green", lwd = 2)

In this instance it doesn't help (for some definition of "help"), as the model has assigned all the variation to the autocorrelation process (the AR(1)) and the resulting trend is linear, with some aspect of the model fit becoming non-positive definite -- a sure sign that the model is over-fit or too complex. This often happens because a trend and an autocorrelation process like the AR(p) are not always identifiable from data.
45,182
Smoothing spline seems to fit too precisely?
The number of knots is a hyperparameter. You can tune it. There are many approaches, although for time series you will need to be cautious about how you partition the data; see this online textbook passage for guidance on splitting. Essentially, you'll have to sequentially break the data up into blocks, as sketched below.
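A minimal sketch of blocked, forward-chaining tuning; base R's smooth.spline stands in here purely for illustration, and the block count and df grid are assumptions to adapt:

# Forward-chaining evaluation: fit on an expanding window of earlier
# blocks, score on the next block, pick the df with the lowest mean MSE.
tune_df <- function(x, y, df_grid = 2:10, n_blocks = 5) {
  blocks <- split(seq_along(x), cut(seq_along(x), n_blocks, labels = FALSE))
  mse <- sapply(df_grid, function(k) {
    errs <- sapply(2:n_blocks, function(b) {
      train <- unlist(blocks[1:(b - 1)])
      test  <- blocks[[b]]
      fit   <- smooth.spline(x[train], y[train], df = k)
      mean((y[test] - predict(fit, x[test])$y)^2)
    })
    mean(errs)
  })
  df_grid[which.min(mse)]
}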
45,183
How to make regression results to be integers?
What you're doing is an ordinal regression task, which TensorFlow seems to support, and I recommend looking into this approach. At the same time, remember Box's famous quote: "All models are wrong, but some are useful." Perhaps you can get a useful model by forcing this into the wrong approach. Once you accept that you're forcing the problem into the wrong machinery to solve it, the easiest way to predict one of your ordinal levels is to force your data into a standard regression problem and round the predictions to the nearest integer, constrained to your range. TensorFlow allows for custom loss functions; this article seems to explain it decently. I would write the loss function to first round the prediction to the nearest integer, then use an if statement to constrain that rounded value to 1-5, and then compute square or absolute loss based on the rounded and constrained prediction. Alternatively, you may prefer to use a continuous loss function that already exists in your software, such as MSE or MAE, and then just evaluate the rounded and constrained predictions.
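A sketch of the round-and-constrain step in R, for consistency with the rest of this page (yhat is a hypothetical vector of raw model predictions and y the observed levels; note that rounding is flat almost everywhere, so applying it only at evaluation time sidesteps the zero-gradient issue inside a training loss):

# Round raw predictions to the nearest integer and clamp to the 1-5 range
to_levels <- function(yhat, lo = 1, hi = 5) pmin(pmax(round(yhat), lo), hi)

pred <- to_levels(yhat)
mean(abs(pred - y))   # MAE on the constrained integer predictions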
45,184
Sample without replacement from 1 to N and stop when the value is less than the previous one
Let $K$ be the random variable given by the length, so that $1\le K \le n.$ Its survival function is $$S(k) = \Pr(K \gt k).$$ The event $K\gt k$ can be characterized as $X_1 \lt X_2 \lt \cdots \lt X_k.$ Since all $k!$ possible orderings are equally likely with random sampling, this event has probability $1/k!.$ Thus $$S(k) = \frac{1}{k!}, \ k = 1, 2, \ldots, n-1.$$ Trivially, $S(0) = 1$ and $S(k) = 0$ for integral $k \ge n$ (because the sequence $(X_i)$ must stop after $n$ observations: there's nothing left to sample). This simple formula describes the entire distribution of $K.$ According to the general formula for the expectation of a non-negative integral variable, $E[K] = \sum_{k=0}^\infty S(k),$ the answer is $$E[K] = 1 + 1 + 1/2 + 1/3! + \cdots + 1/(n-1)!.$$ For large $n$ this is extremely close to, but less than, $e = \exp(1) \approx 2.71828\ldots.$ The value $e - 1 \approx 1.71828$ (one less than the limit of $E[K]$) is likely the number your simulation was estimating.

Here is an R simulation that tracks $K$ for many samples and (when $n \gt 2,$ because for $n \le 2$ the length is always $n$) performs a chi-squared test to compare the observed distribution to this calculation:

n <- 3
s <- tabulate(replicate(1e4, {
  x <- sample.int(n)         # A sample
  d <- diff(x)               # The successive changes
  min(n, which(d < 0) + 1)   # The length, including the first drop (if any)
}), n)
if (n > 2) {
  p <- c(-diff(1 / factorial(1:(n-1))), 1 / factorial(n-1))  # Computed distribution
  chisq.test(s[-1], p = p)   # (drops s[1], which is always zero)
}

Upon running this I found $5078$ instances where $K=2$ and $4922$ where $K=3.$ The chi-squared statistic has a p-value of $0.12$: no significant evidence that the formula is wrong. Runs with larger values of $n$ continue to confirm the correctness of the answer.
45,185
Predicting with a GLM
It matters what you mean by prediction. Unfortunately, this term can be somewhat ambiguous, especially since the linear combination of covariates in the regression model is often referred to as a linear predictor. The typical purpose of a generalized linear model is to estimate the population mean and to perform inference on the mean. This would be the proportion in a Bernoulli model and the mean in a Poisson or gamma model. The word prediction is best reserved for when interest surrounds a future sampled observation. Of course our best point prediction of a future observation is the estimated mean of the population. For a gamma model one would report the sample mean as the point prediction for a future observation. For a Bernoulli model one would report the value 0 or 1 that has the largest estimated proportion, since an individual observation can only take on these discrete values. For a Poisson model one could report the mean rounded to the nearest integer, since the support of the Poisson distribution is the non-negative integers. One could also use the floor or ceiling function on the mean to produce a point prediction.

One might also be interested in presenting the estimated percentiles of the population. It is important that these be presented with tolerance intervals (confidence intervals for population percentiles). Alternatively one might be interested in quantifying the uncertainty regarding the point prediction for a single future observation. This would require the use of a prediction interval, which is not the same as the estimated percentiles. Here is a related thread that discusses prediction intervals.

Addendum: Splitting the data into training and test sets is for the purpose of validating the out-of-sample prediction ability of a model. My preferred approach is not to split the data into training and test sets. Rather, I suggest bootstrapping (sampling with replacement) $n$ observations from the data set as if it is the population, fitting the model, and constructing a point prediction or interval prediction for a particular prediction target (a single future $y$ [$m=1$ observation] or a future $\bar{y}$ based on $m$ observations). Then bootstrap a sample of size $m$ and tally i) the discrepancy between the point prediction and the target, and ii) whether the prediction interval covered the target. Repeat this 10,000 times, plot the histogram of point prediction errors, and calculate the coverage rate for prediction intervals. This validates the performance of the model based on operating characteristics.

Sampling with replacement from your data set treats it as a much larger population. It is likely the percentiles of your data set do not match the theoretical percentiles of the glm model you posit. This means there is slight model misspecification, so don't be surprised if the prediction intervals do not cover at the nominal level and if the histogram of prediction errors shows small bias (is not centered at zero). You can also perform this type of validation through simulation by randomly generating observations from the theoretical model that matches your glm, e.g. gamma or Poisson. Here you should find the prediction intervals perform close to the nominal level and your point prediction is asymptotically unbiased for the target. This type of approach can also be used to validate point and interval estimation of a population parameter.
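A compressed sketch of this operating-characteristics check for an intercept-only Poisson GLM; the data frame dat with count column y, the plug-in quantile interval, and $m = 1$ are all illustrative assumptions, not the only way to build the interval:

set.seed(42)
n <- nrow(dat); B <- 10000
err <- numeric(B); covered <- logical(B)

for (b in 1:B) {
  boot <- dat[sample(n, n, replace = TRUE), ]   # resample the "population"
  fit  <- glm(y ~ 1, family = poisson, data = boot)
  lam  <- predict(fit, type = "response")[1]    # estimated mean
  pi   <- qpois(c(0.025, 0.975), lam)           # crude 95% prediction interval
  target <- sample(dat$y, 1)                    # a "future" observation
  err[b]     <- round(lam) - target             # point-prediction error
  covered[b] <- (pi[1] <= target) && (target <= pi[2])
}

hist(err)       # should be roughly centered at zero if bias is small
mean(covered)   # empirical coverage to compare with the nominal 95%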
45,186
Predicting with a GLM
Quoting from Wikipedia, the GLM consists of three elements:

1. An exponential family of probability distributions.
2. A linear predictor $\eta = X\beta$.
3. A link function $g$ such that $E(Y|X) = \mu = g^{-1}(\eta)$.

There is no threshold inherent in a GLM. Once you have the model, you can make predictions of $\mu$ (sometimes called the "mean function") for any set of covariates $X$. For a binomial model that could be translated into a probability of class membership. For a Poisson model, you are modeling counts directly. Your application of a binomial GLM might then involve a threshold for making class predictions. Your application of a Poisson count model might involve translating counts into a rate per unit time, length, or area. But those applications should be thought of as outside the GLM itself.
45,187
Predicting with a GLM
The commonality is that all these models predict conditional expectations. If your target class is coded 1 and your nontarget class is coded 0, then a predicted probability $\hat{p}$ for a new instance to belong to the target class is just the conditional expectation of the new instance's code. (Thresholding is iffy and loses a lot of information. Only do it if you know what you are doing.) Your Poisson regression will also predict the conditional expectation, as long as your prediction is on the response scale, not the linear-predictor scale. You can feed this predicted expectation $\hat{\lambda}$ into a Poisson calculator to get predicted probabilities of each possible count. Note that this procedure is a shortcut that completely disregards the uncertainty in your estimate of $\hat{\lambda}$ -- take a look here for a more stringent approach. Note that there are other models that predict other functionals of the target variable's distribution, e.g., quantile regression, which aims at predicting a certain quantile.
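In R the shortcut looks like this, where fit is a fitted Poisson glm and new_obs a hypothetical one-row data frame of covariates; type = "response" returns $\hat{\lambda}$ rather than the linear predictor:

# Predicted conditional expectation for a new instance
lam_hat <- predict(fit, newdata = new_obs, type = "response")

# Plug-in probabilities for each count 0, 1, ..., 10 -- this ignores
# the uncertainty in lam_hat, as noted above
dpois(0:10, lambda = lam_hat)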
45,188
Benefit of parametric bootstrap over nonparametric bootstrap
Fictitious data. Suppose you have a sample x of size $n = 50$ from a population with an unknown mean and distribution. Then in R we have:

x
 [1]   7.1  26.9  41.1  22.8  18.2  19.5  37.7  39.1  17.5   3.3
[11]   6.1   2.3  12.5  11.7  29.1   9.5   6.5  26.1  33.0   9.5
[21]   6.5   0.5   8.0  24.1  79.4   4.3  39.8   0.3  36.8   2.2
[31]   2.1   3.0   9.9   5.0   9.4 181.3   0.7   4.3  14.8   0.4
[41]   3.1   7.3   4.7   1.6  26.5   6.9   2.7   3.6  10.1   0.4

summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  0.300   3.375   8.700  17.584  23.775 181.300

stripchart(x, pch="|")

There are many styles of nonparametric and parametric bootstrap confidence intervals. I will compare three of them with two "traditional" CIs.

Questionable t CI. Obviously, the observations are strongly right-skewed. But suppose we believe, somewhat too naively and strongly, in the legendary robustness of t methods against departures from normality. So we try a 95% t confidence interval, which is $(9.57, 25.59).$ In R, this is part of the t.test procedure.

t.test(x)$conf.int
[1]  9.574129 25.593871
attr(,"conf.level")
[1] 0.95

Nonparametric bootstrap CI. Not knowing the family of distributions from which this sample was randomly chosen, we might try a 95% nonparametric confidence interval for the population mean $\mu$ (which we assume exists). To get an idea how variable the sample mean $\bar X$ is as an estimate of $\mu,$ we re-sample many samples of size $50$ from x with replacement. For each re-sample, we find the distance between the observed mean $\bar X = 17.584$ and the mean of the re-sample. The distribution of these many differences d.re can be used to find the 95% nonparametric bootstrap CI $(9.12, 23.82).$

set.seed(2021)   # non-parametric bootstrap, re-sample from sample
a.obs = mean(x);  a.obs
[1] 17.584
d.re = replicate(3000, mean(sample(x, 50, rep=T)) - a.obs)
UL = quantile(d.re, c(.975, .025))
a.obs - UL
   97.5%     2.5%
 9.12105 23.81885

Parametric bootstrap CI. Now suppose that we know that the population is exponentially distributed, with $X_i \stackrel{iid}{\sim}\mathsf{EXP}(\mathrm{rate}=1/\mu).$ Then we can make a 95% parametric CI for $\mu$ by taking re-samples from an exponential population with rate $1/\bar X = 1/17.584$ (that is, with mean $\bar X = 17.584$). [Instead of re-sampling from the sample x, we re-sample from an exponential distribution 'suggested by' the sample x.] Of course, it would be better to know the exact $\mu,$ but knowing $\hat\mu = 17.584$ is better than nothing. For my fictitious data x the resulting 95% parametric bootstrap CI is $(12.44, 22.13).$ This interval is narrower than the nonparametric bootstrap CI because it is based on the additional information that the population is exponential. [I did more re-samples here because parametric bootstrap CIs with larger numbers of re-samples may be noticeably more accurate.]

set.seed(2021)   # parametric bootstrap, sample 50 from EXP(rate=1/a.obs)
a.obs = mean(x);  a.obs
[1] 17.584
d.re = replicate(10000, mean(rexp(50, 1/a.obs)) - a.obs)
UL = quantile(d.re, c(.975, .025))
a.obs - UL
   97.5%     2.5%
12.44381 22.13479

Parametric CI, treating the mean as a scale parameter. For some right-skewed distributions, the mean $\mu$ is more accurately viewed as a scale parameter than a location parameter. If we take this point of view, it makes more sense to look at ratios of re-sampled means to the observed mean, $\bar X^*/\bar X_{obs},$ rather than differences $\bar X^* - \bar X_{obs},$ for each re-sample. This style of parametric bootstrap gives the result $(13.66, 23.77).$

set.seed(2021)   # parametric bootstrap of ratios, sample 50 from EXP(rate=1/a.obs)
r.re = replicate(3000, mean(rexp(50, 1/a.obs)) / a.obs)
UL = quantile(r.re, c(.975, .025))
a.obs / UL
   97.5%     2.5%
13.66134 23.76732

If you know it: Exact CI. However, if the population is known to be exponential, then one can show that $\frac{\bar X}{\mu} \sim\mathsf{Gamma}(\mathrm{shape}=n, \mathrm{rate}=n)$ and 'pivot' this relationship to make an exact 95% CI for $\mu$ of the form $\left(\frac{\bar X}{U}, \frac{\bar X}{L}\right),$ where $L$ and $U$ cut probability $0.025$ from the lower and upper tails of $\mathsf{Gamma}(50, 50).$ This exact 95% CI for $\mu$ is $(13.57, 23.69).$

mean(x)/qgamma(c(.975, .025), 50, 50)
[1] 13.57196 23.69111

Of course, this is the best 95% CI of the five on this page because it is strictly based on statistical theory. Sometimes one may not know (or remember) that an exact CI is available.

Note: The following R code was used to sample the fictitious data used in this illustration:

set.seed(1203)
x = round(rexp(50, 1/20), 1)
45,189
Purpose of expectation of loglikelihood
The maximum likelihood estimator $$\hat\theta(z_1,\ldots,z_n)$$ is the solution of the maximisation program $$\arg\max_\theta\sum_{i=1}^n \log \{f_Z(z_i;\theta)\}\tag{1}$$ It is therefore a random variable, since it depends on one realisation of the sample $(Z_1,\ldots,Z_n)$. The justification for using the maximum likelihood estimator is the following: since the true value $\theta_0$ of the parameter (i.e. the one value behind the generation of $(z_1,\ldots,z_n)$) is the solution of $$\theta_0 = \arg\max_\theta \mathbb E_{\theta_0}[\log \{f_Z(Z;\theta)\}]\tag{2}$$ (this holds because $\mathbb E_{\theta_0}[\log f_Z(Z;\theta)] - \mathbb E_{\theta_0}[\log f_Z(Z;\theta_0)]$ is minus a Kullback–Leibler divergence and hence non-positive for every $\theta$), and since $$\frac{1}{n}\sum_{i=1}^n\log \{f_Z(z_i;\theta)\} \approx \mathbb E_{\theta_0}[\log \{f_Z(Z;\theta)\}]$$ thanks to the Law of Large Numbers, the solutions of (1) and (2) should be close: $$\hat\theta(z_1,\ldots,z_n)\approx\theta_0$$ (which can be shown rigorously, of course).
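A minimal numerical illustration of this argument (my addition, using an exponential model with a hypothetical true rate $\theta_0 = 2$): the average log-likelihood converges to its expectation by the Law of Large Numbers, so its maximiser drifts towards $\theta_0$ as $n$ grows.

set.seed(1)
theta0 <- 2                                     # hypothetical true rate
avg.loglik <- function(theta, z) mean(dexp(z, rate=theta, log=TRUE))
for (n in c(50, 500, 5000)) {
  z <- rexp(n, rate=theta0)                     # one realisation of the sample
  fit <- optimize(avg.loglik, c(0.01, 10), z=z, maximum=TRUE)
  cat("n =", n, "  MLE =", round(fit$maximum, 3), "\n")  # approaches theta0 = 2
}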
45,190
How many coin flips are needed to reliably know a coin of weight w is unfair?
There are various ways you could examine this problem analytically, but a typical way is to frame the problem as a hypothesis test for a stipulated probability for the coin. Suppose we let $X_1,X_2,X_3, ... \sim \text{IID Bern}(\theta)$ denote the outcomes of the coin-flips, where $\theta$ is the probability of flipping a head (here denoted by a one). We can use a classical binomial proportion test to test the hypotheses: $$H_0: \theta = \tfrac{1}{2} \quad \quad \quad \quad \quad H_\text{A}: \theta \neq \tfrac{1}{2}.$$ There are various types of binomial test in statistical analysis, but the simplest is the Wald test, which uses the normal approximation to the binomial proportion. If you want to know how many flips you need to reliably detect an unfair coin, the usual thing to do would be to find out how many flips you need to obtain some minimum stipulated power against a specified value for the parameter that is close to the null value. To do this you will need to specify three things: (1) the significance level for your hypothesis test; (2) the parameter value at which you want to compute the power (presumably a value close to your null value); and (3) the minimum power you will consider to be sufficient to constitute a "reliable" test. In the section below I give an example of this using the power function for the Wald test.

Computing sample size via power of the Wald binomial test: The two-sided Wald test for the null hypothesis $H_0: \theta = \tfrac{1}{2}$ uses the following test statistic, with its approximate null distribution: $$Z_n \equiv \sqrt{n} \cdot \frac{p_n - \tfrac{1}{2}}{\sqrt{p_n (1-p_n)}} \overset{\text{approx}}{\sim} \text{N}(0,1).$$ At significance level $0 < \alpha < 1$, the test has acceptance region $-z_{\alpha/2} \leqslant Z_n \leqslant z_{\alpha/2}$, which can be rewritten as: $$\bigg( p_n - \frac{1}{2} \bigg)^2 \leqslant z_{\alpha/2}^2 \cdot \frac{p_n (1-p_n)}{n},$$ which can be shown to be equivalent to $L(\alpha,n) \leqslant n p_n \leqslant U(\alpha,n)$ with the lower and upper bounds: $$L(\alpha,n) \equiv \frac{n}{2} \Bigg[ 1 - \sqrt{\frac{z_{\alpha/2}^2}{n + z_{\alpha/2}^2}} \Bigg] \quad \quad \quad \quad \quad U(\alpha,n) \equiv \frac{n}{2} \Bigg[ 1 + \sqrt{\frac{z_{\alpha/2}^2}{n + z_{\alpha/2}^2}} \Bigg] .$$ Consequently, the exact power function for the test is: $$\begin{align} \text{Power}_\alpha(\theta) &= 1 - \mathbb{P} ( \text{Accept } H_0 | \theta ) \\[6pt] &= 1 - \mathbb{P} ( L(\alpha,n) \leqslant n p_n \leqslant U(\alpha,n) | \theta ) \\[6pt] &= 1 - \sum_{L(\alpha,n) \leqslant x \leqslant U(\alpha,n)} \text{Bin} (x|n, \theta). \\[6pt] \end{align}$$ We can program this power function in R as follows (we have vectorised this function with respect to the input n to make the next step easier).

#Create power function for the Wald binomial test
power.binom.test <- function(n, prob, alpha = 0.05) {
  z2 <- qnorm(1-alpha/2)^2
  OUT <- rep(0, length(n))
  for (i in 1:length(n)) {
    nn    <- n[i]
    TERM  <- sqrt(z2/(nn+z2))
    LOWER <- ceiling((nn/2)*(1-TERM))
    UPPER <- floor((nn/2)*(1+TERM))
    OUT[i] <- 1 - sum(dbinom(LOWER:UPPER, size = nn, prob = prob))
  }
  OUT
}

To compute the required sample size we need to specify the three elements discussed above. For illustrative purposes, let's stipulate that we are using a test with a 5% significance level, we want to compute the power at the point $\theta = 0.51$, and we require that the power at this point must be at least 90%.
#Set parameters for the computation
ALPHA <- 0.05
THETA.ALT <- 0.51
MIN.POWER <- 0.9

#Compute required sample size
POWER <- power.binom.test(n = 1:30000, prob = THETA.ALT, alpha = ALPHA)
SAMP.SIZE <- min(which(POWER >= MIN.POWER))

#Show required sample size
SAMP.SIZE
[1] 26226

In this case we see that we require a minimum sample size of $n = 26,226$ to have 90% power in detecting the alternative value $\theta_1 = 0.51$ using a Wald test with 5% significance level. This is just one example of this type of calculation, and you could use different numbers if you prefer.
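As a rough cross-check (my addition, not part of the original answer), the standard normal-approximation sample-size formula $n \approx \big((z_{\alpha/2}+z_{\beta})\cdot 0.5/(\theta_1 - 0.5)\big)^2$ gives nearly the same answer under the same stipulations:

alpha <- 0.05;  power <- 0.90;  theta1 <- 0.51
n.approx <- ((qnorm(1-alpha/2) + qnorm(power)) * 0.5 / (theta1 - 0.5))^2
ceiling(n.approx)
[1] 26269

which is within a fraction of a percent of the exact value $n = 26,226.$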
45,191
How many coin flips are needed to reliably know a coin of weight w is unfair?
We can simplify the power calculation by approximating the sampling distributions as normal distributions with $\sigma \approx \sqrt{pq/n}$, using the approximation $pq \approx 0.5^2$ such that $\sigma\approx \frac{0.5}{\sqrt{n}}$, and approximating the power by treating the entire left tail as non-rejection of the hypothesis (which is not entirely true, because a tiny part of that tail lies below the lower boundary, but this part is very small). So then we need the distance $p-0.5$ to equal $(1.96+1.65)\sigma$. This leads to $$p-0.5 = (1.96+1.65)\frac{0.5}{\sqrt{n}}$$ or $$n = \left(\frac{(1.96+1.65)}{2p-1} \right)^2$$ These values $1.96$ and $1.65$ are computed with the quantile function of the normal distribution and relate to the upper $2.5\%$ and $5\%$ tail probabilities (i.e. a 5% significance level and 95% power). If we get rid of the second approximation (in which the standard deviation is computed with $p=0.5$ under both the null and the alternative hypothesis), then the solution becomes $$n = \left(\frac{0.5 \cdot 1.96 + \sqrt{p(1-p)} \cdot 1.65}{p-0.5} \right)^2$$

Computational comparison

In the graph below we compare the approximation with exact computations. The code used is in R, but I imagine it is easy to read and can be easily converted into Python.

###
### function to compute power
### for given sample size n
### and given effect p
###
power = function(n,p) {
  ### hypothesis test boundaries based on binomial distribution quantiles
  lower = qbinom(0.025,n,0.5)   ### this gives a lower/left tail of at least 2.5%
  upper = n-(lower+1)           ### make a symmetric upper/right tail
  ### compute power as probabilities of rejection, in two parts:
  ### either we are below the lower boundary or above the upper boundary
  pbelow = pbinom(lower-1,n,p)  # reject when below 'lower'
  pabove = 1-pbinom(upper,n,p)  # reject when above 'upper'
  ### return total probability of rejection
  return(pbelow+pabove)
}

### function to get required 'n'
### such that type 2 error is below 5% (or power above 95%)
get_n = function(p,start_n) {
  ### get the value of the necessary 'n' with a loop:
  ### we start with start_n and keep increasing n until the power is above 0.95
  n_test = start_n
  while (power(n_test,p) < 0.95) {
    n_test = n_test + 1
  }
  return(n_test)
}

### plot the theoretic curve
n = 1:30000
plot(0.5+(qnorm(0.975)+qnorm(0.95))*sqrt(n*0.5^2)/n, n,
     type = "l", xlim = c(0.5,1),
     xlab = "p", ylab = "", yaxt = "n")

### y axis tags and label
axis(2, at = 5000*c(0:10), las = 2)
## mtext("n", 2, line=4, las = 2)

n_current = 1
### add computed points
for (p in c(0.9, 0.8, 0.7, 0.6, 0.55, 0.54, 0.53, 0.525, 0.520, 0.515, 0.510)) {
  ### compute the necessary n,
  ### using the old n_current to optimize the loop in the get_n function
  n_current = get_n(p, n_current)
  points(p, n_current)
}
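To connect this with the other answers (my addition, not part of the original), evaluating both closed-form approximations at, say, $p = 0.51$ gives

p <- 0.51
((qnorm(.975) + qnorm(.95))/(2*p - 1))^2                    # crude:   about 32487
((0.5*qnorm(.975) + sqrt(p*(1-p))*qnorm(.95))/(p - 0.5))^2  # refined: about 32481

Both are larger than the $26,226$ found in the other answer because this answer targets 95% power rather than 90%.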
45,192
How many coin flips are needed to reliably know a coin of weight w is unfair?
We use the likelihood method. Suppose that you have that $$\mathcal{P}(X_n = 1 ) = p$$ for some $p \in [0,1]$. Then the likelihood that you will see the sequence of coinflips $X=X_1...X_n = s_1...s_n=s$ is $$\mathcal{P}^{(n)}(X = s) = p^{\mathcal{N}_1(s)}(1-p)^{\mathcal{N}_0(s)} $$ where $\mathcal{N}_i(s)$ is the number of $i$'s in $s$. Thus the likelihood function for $X$ is $$\mathcal{L}^{(n)}(s,p) = p^{\mathcal{N}_1(s)}(1-p)^{\mathcal{N}_0(s)} $$ and we further have that $$ -\frac{1}{n}\ell(s,p)= -\frac{1}{n}\log_2\mathcal{L}^{(n)}(s,p) = -\frac{\mathcal{N}_1(s)}{n}\log_2 p -\frac{\mathcal{N}_0(s)}{n}\log_2(1-p) .$$ Now let $$\mathcal{H}(p ) = -p \log_2 p -(1-p)\log_2(1-p) $$ be the binary entropy and let $$\mathcal{A}_{\epsilon}^{(n)} = \{ s \in \{0,1\}^n \mid -\tfrac{1}{n}\ell(s,p) \in ( \mathcal{H}(p ) - \epsilon , \mathcal{H}(p ) + \epsilon ) \}.$$ Then, by the asymptotic equipartition property of the typical set, the probability $$\mathcal{P}(X \in \mathcal{A}_{\epsilon}^{(n)}) \geq 1 - \epsilon $$ holds for large $n$. How large does $n$ have to be?

Edit: Use Theorem 11.2.1 in "Elements of Information Theory" by Cover & Thomas to get that the probability that the string of coinflips will be atypical is bounded as $$ P(X \notin \mathcal{A}_{\epsilon}^{(n)}) \leq 2^{2\log_2(n+1)-n\frac{\mathcal{H}(0.5)-\mathcal{H}(p)}{2}} $$ if I interpret the test as follows: I flip the coin $n$ times; if the string of coinflips $s$ is in $\mathcal{A}_{\epsilon}^{(n)}$ then we declare it fair, if not we declare it unfair. By choosing $\epsilon < \frac{\mathcal{H}(0.5)-\mathcal{H}(p)}{2}$ we guarantee that the two sets $\mathcal{A}_{\epsilon}^{(n)}$ for the fair and the unfair coin are disjoint, and thus the previous bound holds regardless of whether the coin is fair or not, since $ P(X \notin \mathcal{A}_{\epsilon}^{(n)}) \leq 2^{2\log_2(n+1)-n\epsilon } $ by Theorem 11.2.1 in "Elements of Information Theory". The graph you gave is then given by $$ f(p)= \min \{n \in \mathbb{N} \mid 2^{2\log_2(n+1)-n\frac{\mathcal{H}(0.5)-\mathcal{H}(p)}{2}} \leq 0.05\}. $$ The analytic result you are looking for can probably be proven by looking for an analytic proof that $$ |p-0.5|< \Delta^{-1} \implies f(p)=\mathcal{\Omega}(2^{\Delta}) $$ or some other interesting lower bound $B$; i.e., $f(p)= \mathcal{\Omega}(B({\Delta}))$.

Edit #2: The following Python code produces the plot:

import matplotlib.pyplot as plt
import math

def h(p):
    # binary entropy; the second term mistakenly used log(p,2) in the original
    return -p*math.log(p,2) - (1-p)*math.log(1-p,2)

def solve(f,bound):
    # find the smallest integer n with f(n) <= bound by successive doubling
    # followed by a binary search over the bracketed range
    k = 0
    sol = 2**k
    while f(sol) > bound:
        k += 1
        sol = 2**k
    if k == 0:
        return 1
    sol = sol // 2
    for i in range(k):
        if f(sol + 2**(k-i-1)) > bound:
            sol += 2**(k-i-1)
    if f(sol) > bound:
        sol += 1
    return sol

def failure_probability(n,p):
    return 2**(2*math.log(n+1,2) - n*((1-h(p))/2))

def fail_prob_for_n(p):
    return lambda n: failure_probability(n,p)

bound = 0.05
d = 0.0001
min_prob = 0.5006
max_prob = 0.9
num_ints = int((max_prob - min_prob)/d)
x = []
y = []
for i in range(num_ints):
    p = max_prob - d*i
    n = solve(fail_prob_for_n(p), bound)
    x.append(p)
    y.append(n)
plt.plot(x,y)
plt.savefig('coin_flip.png')

The resulting plot is nearly identical to your plot, and it gives a precise information-theoretic provable bound. This Python code finds the integer that satisfies the $0.05$ bound you are looking for in $\mathcal{O}(\log(n)^2)$ time, since it uses successive doubling followed by a binary search.
Edit #3: We can now prove that $$ |p-0.5|< 2^{-k} , \ x>0\implies f(p)=\mathcal{O}(2^{(2+x)k}) $$ as well as $$ |p-0.5|< 2^{-k} \implies f(p) = \mathcal{\Omega}(2^{2k}). $$ To this end, notice that $|p-0.5|< 2^{-k}$ and $n = 2^{(2+x)k}$ imply $$ P(X \notin \mathcal{A}_{\epsilon}^{(n)}) \leq 2^{2\log_2(2^{(2+x)k}+1)-2^{(2+x)k-1}(1+(0.5+2^{-k})\log_2 (0.5+2^{-k}) + (0.5-2^{-k})\log_2(0.5-2^{-k}))} $$ and that $$\lim_{k \to \infty } \frac{1+(0.5+2^{-k})\log_2(0.5+2^{-k})+(0.5-2^{-k})\log_2(0.5-2^{-k})}{2^{-2k}} = \frac{2}{\ln 2} \approx 2.88539 $$ implies that $$ \lim_{k \to \infty} 2^{2\log_2(2^{(2+x)k}+1)-2^{(2+x)k-1}(1+(0.5+2^{-k})\log_2 (0.5+2^{-k}) + (0.5-2^{-k})\log_2(0.5-2^{-k}))} = 0 ; $$ thus, we have that $$ f(p)=\mathcal{O}(2^{(2+x)k}) $$ for any $x>0$, as was desired. We further see that the limit fails when $x=0$, so that $$ f(p)=\mathcal{\Omega}(2^{2k}), $$ and thus we have that $$ |p-0.5|< \Delta^{-1} \implies n = \mathcal{O}(\Delta^{2+x}) $$ and that $$ |p-0.5|< \Delta^{-1} \implies n = \mathcal{\Omega}(\Delta^{2}), $$ which gives upper and lower bounds on the number of coinflips.
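To see where the constant $2.88539$ comes from (a derivation sketch I am adding, not in the original): Taylor-expanding the binary entropy around $p = \tfrac12$ with $p = \tfrac12 + \delta$ gives

$$\mathcal{H}\!\left(\tfrac12+\delta\right) = 1 - \frac{2}{\ln 2}\,\delta^2 + \mathcal{O}(\delta^4),$$

so with $\delta = 2^{-k}$ we get

$$\frac{\mathcal{H}(0.5) - \mathcal{H}(0.5 + 2^{-k})}{2^{-2k}} \;\longrightarrow\; \frac{2}{\ln 2} \approx 2.88539,$$

which is exactly the limit used above and explains the (essentially) quadratic dependence of $n$ on $\Delta$.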
45,193
Diagnostic probability plots in logistic regression
Eventually, I have found a comprehensive description of the algorithm for creating a calibration plot in J. Esarey, A. Pierce: "Assessing Fit Quality and Testing for Misspecification in Binary Dependent Variable Models." Political Analysis 20.4, pp. 480-500, 2012. The article compares it with classification-based evaluation. Here is a summary of the ideas, together with my comments and R code for creating a calibration plot.

When comparing the probability predicted by the model with the "observed" probability, there is the problem that no probabilities are observed, but only zeros and ones, i.e. (non-)occurrences of the response. These values can be smoothed out to probabilities by a distance-weighted average in the "neighborhood" of each value, e.g. with a LOESS local regression. The distance for establishing the "neighborhood" and the weights can be measured in different spaces. Two obvious possible choices are the distance on the link scale, i.e. on $\eta_i=\beta_0 + \langle\vec{\beta},\vec{x}_i\rangle$, where $\vec{x}_i$ are the predictor variable values for the $i$-th observation and $\vec{\beta}$ are the model parameters; and the distance on the probability scale, i.e. on $p_i=P(Y=1|\vec{x}_i) = 1 / (1+e^{-\eta_i})$.

A LOESS fit through the points $(y_i,\eta_i)$ or $(y_i,p_i)$ will then yield an estimator $\hat{p}_i$ for each $y_i$, which can be compared to the probability $p_i$ predicted by the model. There are two caveats, however. First, for degrees greater than zero, the LOESS fit can yield values outside $[0,1]$. For this reason, the first value is missing in both of the above plots: its estimated probability $\hat{p}_i$ is negative. This can easily be corrected by cutting off the probabilities at zero and one. Second, LOESS only takes a certain percentage (parameter span) of neighbors into account. The above plots have been created with the default span=0.75. Esarey & Pierce suggest two different optimization methods and link to a reference implementation in a footnote, but that link no longer works. I have therefore implemented a very simple optimization criterion: the minimum MSE between $\hat{p}_i$ and $p_i$, i.e. $\sum_i(\hat{p}_i - p_i)^2$.

The result on the Challenger Space Shuttle O-ring dataset can be seen here: I have also included the 95% prediction interval for $p_i$ as predicted by the model. Esarey & Pierce also compute the percentage of values that lie outside an 80% confidence interval by means of a parametric bootstrap, but this might be computed more easily directly from the confidence intervals for $p_i$.
Here is the code to produce the calibration plot on the link level (right hand side):

# Challenger Space Shuttle O-ring data: ok vs temp
data <- data.frame(y=c(0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1,
                       0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1),
                   x=c(53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70,
                       70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81))
fit <- glm(y ~ x, data=data, family=binomial)

# the optimization function for estimating span
# (defined first, so that it is available for optimize() below)
resub.mse <- function(span, y, x, p.model) {
  fit <- loess(y ~ x, family="gaussian", degree=1, span=span)
  return(sum((fit$fitted - p.model)^2))
}

#
# calibration plot on link level
#
link.model <- predict(fit, data, type="link", se.fit=TRUE)
p.model <- plogis(link.model$fit)  # model probabilities (the original snippet
                                   # used p.model below without defining it)
sort.key <- order(link.model$fit)
x <- link.model$fit[sort.key]

# prediction interval for probability
plot(link.model$fit, data$y, main="link level")
p.lower <- plogis(link.model$fit - qnorm(1-0.05/2) * link.model$se.fit)[sort.key]
p.upper <- plogis(link.model$fit + qnorm(1-0.05/2) * link.model$se.fit)[sort.key]
polygon(c(x,rev(x)), c(p.lower, rev(p.upper)), col="#dddddd", border=NA)
points(link.model$fit, data$y)     # replot overplotted points
lines(x, plogis(x), col="red")

# LOESS fit
optim.span <- optimize(resub.mse, c(0.1,1.0), y=data$y, x=link.model$fit,
                       p.model=p.model)
span <- optim.span$minimum
p.fit <- loess(y ~ x, data=data.frame(y=data$y, x=link.model$fit),
               family="gaussian", degree=1, span=span)
p.cutfit <- predict(p.fit, data.frame(x=x))
p.cutfit[p.cutfit < 0] <- 0
p.cutfit[p.cutfit > 1] <- 1
lines(x, p.cutfit)
legend("topleft", c("model", sprintf("LOESS (span=%4.2f)", span)),
       col=c("red","black"), lty=1)
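For a quick look without the span optimization, here is a bare-bones variant (my addition, not from Esarey & Pierce): smooth the 0/1 outcomes directly against the model probabilities with base R's lowess and compare the smooth with the diagonal.

p.model <- predict(fit, type="response")   # model probabilities, reusing 'fit' from above
plot(p.model, data$y, xlim=c(0,1), ylim=c(0,1),
     xlab="model probability", ylab="observed / smoothed")
lines(lowess(p.model, data$y, f=2/3))      # nonparametric smooth of the outcomes
abline(0, 1, col="red")                    # perfect calibration: the identity line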
45,194
Diagnostic probability plots in logistic regression
Another approach, apparently not discussed in the literature, is the conditional density plot as provided out-of-the-box by the R function cdplot. The conditional density plot directly estimates $P(Y=\omega_i|x)$ for an arbitrary number of levels $\omega_i$ non-parametrically, without assuming a statistical model. In the case of logistic regression, there are only two levels (0 and 1) and the regression fits a parametric model for $P(Y=1|x)$. The two estimators can thus be directly compared to see whether the logistic model matches the data.

cdplot estimates $P(Y=1|x)$ by means of Bayes' Theorem $$P(Y=1|x) = \frac{f(x|Y=1)\cdot P(Y=1)}{f(x)}$$ where $f$ denotes the probability densities, which are estimated by a kernel density estimator from the data. The only tricky part in this estimation is that the estimators for $f(x)$ and for $f(x|Y=1)$ must use the same kernel bandwidth.

Compared to the LOESS approach, this has two conceptual advantages: The result is guaranteed to be a probability, and it cannot happen (as it can for LOESS) that the value lies outside the range $[0,1]$. And it does not require a numerical interpretation of the levels as 0 and 1 in order to make numerical sense or to be applicable at all. Like for the LOESS approach, the predictor must be a scalar value, for which, in complete analogy, the link value $\eta$ can be used.

The kernel density estimator requires choosing a bandwidth, for which the "plug-in method" (bw="SJ" in the R function cdplot) is generally recommended in the literature (and in the documentation of density, too, although it uses a different default). For comparison, I have implemented an additional bandwidth selection method that chooses the bandwidth which brings the cdplot closest to the logistic prediction. This can serve as a baseline for the best that can be said about the logistic model ;-)

And here the code, with the plot of the 95% confidence band from the logistic model omitted for better legibility:

# Space Shuttle Challenger temp vs oring-ok
data <- data.frame(y=c(0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1,
                       0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1),
                   x=c(53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70,
                       70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81))
fit <- glm(y ~ x, data=data, family=binomial)

# helper function for finding the bandwidth
# that is closest to the logistic model
resub.mse <- function(bw, y, x, p.model) {
  cdfit <- cdplot(x, y, bw=bw, plot=FALSE)
  # use the function's own y argument (the original referenced the global y.factor)
  return(sum((cdfit[[levels(y)[1]]](x) - p.model)^2))
}

#
# logistic prediction vs. link
#
link.model <- predict(fit, data, type="link", se.fit=TRUE)
p.model <- plogis(link.model$fit)
sort.key <- order(link.model$fit)
x <- link.model$fit[sort.key]
plot(link.model$fit, data$y, main="link level")
lines(x, plogis(x), col="red")

# cdplot vs. link
# note that we must code the level of interest
# as FIRST level (for cdplot)
y.factor <- factor(data$y, levels=c(1,0))
optim.bw <- optimize(resub.mse, c(bw.nrd0(x)/10, (max(x)-min(x))/2),
                     y=y.factor, x=link.model$fit, p.model=p.model)
bw <- optim.bw$minimum
p.kernel <- cdplot(link.model$fit, y.factor, bw="SJ", plot=FALSE)
lines(x, p.kernel$'1'(x))
p.kernel <- cdplot(link.model$fit, y.factor, bw=bw, plot=FALSE)
lines(x, p.kernel$'1'(x), col="blue", lty=2)
legend("topleft", c("model", "cdplot (bw='SJ')",
                    sprintf("closest cdplot (bw=%4.2f)", bw)),
       col=c("red", "black", "blue"), lty=c(1,1,2))
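To make the Bayes-theorem construction concrete, here is a hand-rolled version (my sketch, not cdplot's actual implementation), reusing data and link.model from the snippet above and one common bandwidth for both densities:

bw   <- bw.SJ(link.model$fit)                        # common plug-in bandwidth
f.x  <- density(link.model$fit, bw=bw)               # estimate of f(x)
f.x1 <- density(link.model$fit[data$y == 1], bw=bw)  # estimate of f(x | Y=1)
p1   <- mean(data$y)                                 # estimate of P(Y=1)
post <- approx(f.x1$x, f.x1$y, xout=f.x$x, rule=2)$y * p1 / f.x$y  # Bayes' theorem
lines(f.x$x, pmin(post, 1), col="darkgreen", lty=3)

The result need not agree with cdplot exactly, but it should trace a similar curve.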
45,195
Diagnostic probability plots in logistic regression
Calibration

For completeness' sake, here are two other ways to produce calibration plots. The first uses calibration belts, introduced by Nattino et al. (2014)$^{1}$, Nattino et al. (2016)$^{2}$ and Nattino et al. (2017)$^{3}$. Briefly, they fit an $m$-th-order polynomial logistic function (with $m\geq 2$) to the observed outcomes, using the predicted probabilities of the model to be assessed. The parameter $m$ is selected using a standard forward selection procedure controlled by a likelihood-ratio statistic that accounts for the forward process used to select $m$. The calibration belt can be used for internal and external calibration. The procedure is implemented in Stata (calibrationbelt) and R (package givitiR). Here is the example using the Challenger data:

# Challenger Space Shuttle O-ring data: ok vs temp
dat <- data.frame(y=c(0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1,
                      0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1),
                  x=c(53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70,
                      70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81))
fit <- glm(y ~ x, data=dat, family=binomial)
preds_p <- predict(fit, type = "response")

cb <- givitiCalibrationBelt(o = dat$y, e = preds_p, devel = "internal",
                            confLevels = c(0.95, 0.8))
plot(cb, main = "", las = 1,
     ylab = "Observed probabilities", xlab = "Model probabilities",
     table = FALSE)

The identity line is displayed in red. The light gray area is the 80% confidence calibration interval, whereas the dark gray area is a 95% confidence interval. Ideally, the red line is inside the belt over the whole range of probabilities. In the example, the confidence intervals are huge: the calibration belt shows a large uncertainty with respect to calibration. As the red line lies within the interval, we cannot reject the hypothesis of a well-calibrated model. To better illustrate the calibration belt, let's look at a calibration belt of a well-calibrated model: Here, the confidence intervals are much narrower. Because the red identity line lies within the belt over the whole range, there is little evidence for miscalibration.

Like some of the other answers, the second method, implemented in the R package rms, relies on a nonparametric smoother fitted to the predicted and observed probabilities. It also plots bias-corrected estimates based on the bootstrap. Details can be found in Harrell (2015)$^{4}$.

library(rms)
mod <- lrm(y ~ x, data = dat, x = TRUE, y = TRUE)
res <- calibrate(mod, B = 10000)
plot(res)

The model seems to underestimate probabilities lower than $0.75$ and overestimate probabilities over $0.75$. But again, due to the small sample size, the uncertainty is large.

Residuals

There are many types of residuals for generalized linear models, but their interpretation is often difficult. One possibility is to look at simulation-based quantile residuals as implemented in the DHARMa package for R. Here is the example using the same data as above:

fit <- glm(y ~ x, data=dat, family=binomial)
simres <- simulateResiduals(fit, n = 1e4, seed = 142857)
plot(simres)

The nice thing about these residuals is that they can be interpreted as the "usual" residuals from linear regression models. On the left, a Q-Q plot of the residuals is shown. On the right, the residuals are plotted against the predicted values. In both cases, there seems to be little evidence for a problem.

$[1]:$ Nattino, G., Finazzi, S., & Bertolini, G. (2014). A new calibration test and a reappraisal of the calibration belt for the assessment of prediction models based on dichotomous outcomes. Statistics in Medicine, 33(14), 2390-2407.
$[2]:$ Nattino, G., Finazzi, S., & Bertolini, G. (2016). A new test and graphical tool to assess the goodness of fit of logistic regression models. Statistics in Medicine, 35(5), 709-720. $[3]:$ Nattino, G., Lemeshow, S., Phillips, G., Finazzi, S., & Bertolini, G. (2017). Assessing the calibration of dichotomous outcome models with the calibration belt. The Stata Journal, 17(4), 1003-1014. $[4]:$ Harrell, F. E. (2015). Regression modeling strategies: with applications to linear models, logistic and ordinal regression, and survival analysis (Vol. 3). New York: Springer.
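As a small addendum to the rms example above (my addition, not from the original answer): the same package's val.prob function produces a calibration plot together with summary indices directly from predicted probabilities and observed outcomes.

library(rms)
preds_p <- predict(fit, type="response")   # reusing 'fit' from the snippets above
val.prob(preds_p, dat$y)                   # calibration plot plus indices (Dxy, C, Brier, ...)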
45,196
Examples of Defective Distributions
Well, certainly. For instance, the arctangent is monotonically increasing from $\lim_{x\to-\infty}\arctan x=-\frac{\pi}{2}$ to $\lim_{x\to\infty}\arctan x=\frac{\pi}{2}$. So something like $$ F: x\mapsto \frac{1}{2}+\frac{1-2\epsilon}{\pi}\arctan x $$ will satisfy $\lim_{x\to-\infty}F(x) = \epsilon$ and $\lim_{x\to\infty}F(x) = 1-\epsilon$ for any (small, but) positive $\epsilon$. R code:

xx <- seq(-10, 10, by=.1)
epsilon <- 0.1
plot(xx, 1/2 + (1-2*epsilon)*atan(xx)/pi, type="l", las=1,
     ylim=c(0,1), xlab="x", ylab="F(x)")
abline(h=c(epsilon, 1-epsilon), lty=2)
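A trivial numerical check of the two limits (my addition):

epsilon <- 0.1
F <- function(x) 1/2 + (1 - 2*epsilon)*atan(x)/pi
F(c(-1e8, 1e8))   # numerically close to epsilon and 1 - epsilon
[1] 0.1 0.9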
45,197
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the variation of the data?
It's an expression that is often used as shorthand or conventional jargon. Anyone who finds it puzzling should feel that way! Some people say "accounts for" instead, as a usage supposedly a little softer.

What is meant by explanation anyway? This is a long-standing topic in philosophy (epistemology and philosophy of science) going back at least to Aristotle. Most disciplines have their own literature around the topic. There continue to be entire books published on that. At a statistical end of the subject, even introductory texts often emphasise that there can be nonsense correlations (e.g. when variables change over time in the same way, for quite different reasons), which alone underlines that high correlations don't necessarily point to anything substantive and so aren't themselves explanatory. Or, rather, there is an explanation, but it is not interesting: ministers' salaries and the consumption of alcohol are both increasing over time, so a correlation is unsurprising, but it's still a coincidence. Often an explanation, to be satisfying, would depend also on evidence on cause, process, mechanism or behaviour: choose wording to taste.

At this point it is customary to assert that "correlation doesn't prove causation", which is also, and fairly, often mentioned in introductory accounts. But although bang on, that put-down can be a little cheap. The truth is that proof of causation is often very difficult; meanwhile, correlations are often what we have, and what we start from. It often took centuries of hard science before sound explanations emerged for simple phenomena. Many diseases weren't explained until bacteria and viruses were identified, and many still remain mysterious at least in part. Although the behaviour was familiar to early humans, a framework for understanding the path of a dropped or thrown stone awaited Galileo and Newton. Also, why single out correlation? The popularity of this saying depends on assonance as well as appropriateness. Generalized linear models are not causation. Support vector machines are not causation. Don't trip off the tongue so easily, do they?

To the main point: there is no implication that a formula, even one that fits the pattern of variation well, has explanatory content in any subject-matter sense. But as people say near where I live, "Owt's better than nowt" (Something's better than nothing). $R^2$ with a certain number is more precise than "there is a strong relationship", just as $R^2$ with a low number and a claim of a strong relationship gives you guidance on what to think. In some areas of physics an $R^2$ that isn't very high indicates incompetent experimenters; in some areas of social science a very high $R^2$ indicates faking of data or a silly question.

More subtly, the converse can be true. A formula can be derived in theory and then found to fit the data, and that's first-rate science. But in some fields theory is weaker than the existence of equations may appear to imply. For example, a linear relation may be postulated on the grounds that variables are known or presumed to change together, a linear relation is the simplest we can think of, and we don't have good reason to suggest a different functional form. (This can be true in several fields, but somehow it seems common in econometrics.) The other way round, if you say $R^2 = 0.81$, then that can be quite enough for many readers who know about it.
The "explains" or "accounts for" wording is for people who ask for more comment on what you're measuring, which can be reasonable and unreasonable at the same time. It's the old business of explaining in "plain English" what you''re doing, to which a riposte is that you would not need statistics if plain English would do. (Or any other language, naturally.) As for whether it is good jargon to use, look and ask around you. Practices vary! Some fields or groups just avoid "explains" altogether. Commentary might run that $R^2$ is whatever number, and encouraging, surprising, disappointing, whatever, but writers know what $R^2$ is and consider that to be covered in texts and courses, so no need to explain further. Some use "explains" as a term of art, and if you object that the equation is not an explanation in any other sense, people may reply "Indeed" or "Of course not. Everyone knows that". Some would regard any use of "explains" as obfuscating technical jargon manifesting lack of insight or empathy into what is going on underneath the data. This is most common in some social sciences in which there are groups of researchers whose attitude to statistical analysis varies between suspicion and hostility. If so, or if your readers feel uncomfortable or puzzled by the usage for any other reason, it is really better avoided. There is a different debate about whether and how far $R^2$ is useful at all, which I will leave on one side.
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\text%$ of the v
It's an expression that is often used as short-hand or conventional jargon. Anyone who finds it puzzling should feel that way! Some people say "accounts for" instead as a usage supposedly a little sof
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the variation of the data? It's an expression that is often used as short-hand or conventional jargon. Anyone who finds it puzzling should feel that way! Some people say "accounts for" instead as a usage supposedly a little softer. What is meant by explanation anyway? This is a long-standing topic in philosophy (epistemology and philosophy of science) going back at least to Aristotle. Most disciplines have their own literature around the topic. There continue to be entire books published on that. At the statistical end of the subject, even introductory texts often emphasise that there can be nonsense correlations (e.g. when variables change over time in the same way, for quite different reasons), which alone underline that high correlations don't necessarily point to anything substantive and so aren't themselves explanatory. Or rather, there is an explanation, but it is not interesting: ministers' salaries and the consumption of alcohol are both increasing over time, so a correlation is unsurprising, but it's still a coincidence. Often an explanation, to be satisfying, would depend also on evidence on cause, process, mechanism or behaviour: choose wording to taste. At this point it is customary to assert that "correlation doesn't prove causation", which is also, fairly enough, often mentioned in introductory accounts. But although bang on, that put-down can be a little cheap. The truth is that proof of causation is often very difficult; meanwhile, correlations are often what we have, and what we start from. It often took centuries of hard science before sound explanations emerged for simple phenomena. Many diseases weren't explained until bacteria and viruses were identified, and many still remain mysterious at least in part. Although the behaviour was familiar to early humans, a framework for understanding the path of a dropped or thrown stone awaited Galileo and Newton. Also, why single out correlation? The popularity of this saying depends on assonance as well as appropriateness. Generalized linear models are not causation. Support vector machines are not causation. Don't trip off the tongue so easily, do they? To the main point: there is no implication that a formula, even one that fits the pattern of variation well, has explanatory content in any subject-matter sense. But as people say near where I live, "Owt's better than nowt" (something's better than nothing). $R^2$ with a definite number is more precise than "there is a strong relationship", just as $R^2$ with a low number alongside a claim of a strong relationship gives you guidance on what to think. In some areas of physics an $R^2$ that isn't very high indicates incompetent experimenters; in some areas of social science a very high $R^2$ indicates faking of data or a silly question. More subtly, the converse can be true: a formula can be derived in theory and then found to fit the data, and that's first-rate science. But in some fields theory is weaker than the existence of equations may appear to imply. For example, a linear relation may be postulated on the grounds that variables are known or presumed to change together, a linear relation is the simplest we can think of, and we don't have good reason to suggest a different functional form. (This can be true in several fields, but somehow it seems common in econometrics.) The other way round, if you say $R^2 = 0.81$, then that can be quite enough for many readers who know about it.
The "explains" or "accounts for" wording is for people who ask for more comment on what you're measuring, which can be reasonable and unreasonable at the same time. It's the old business of explaining in "plain English" what you're doing, to which a riposte is that you would not need statistics if plain English would do. (Or any other language, naturally.) As for whether it is good jargon to use, look and ask around you. Practices vary! Some fields or groups just avoid "explains" altogether. Commentary might run that $R^2$ is whatever number, and encouraging, surprising, disappointing, whatever, but writers know what $R^2$ is and consider that to be covered in texts and courses, so no need to explain further. Some use "explains" as a term of art, and if you object that the equation is not an explanation in any other sense, people may reply "Indeed" or "Of course not. Everyone knows that". Some would regard any use of "explains" as obfuscating technical jargon manifesting lack of insight or empathy into what is going on underneath the data. This is most common in some social sciences in which there are groups of researchers whose attitude to statistical analysis varies between suspicion and hostility. If so, or if your readers feel uncomfortable or puzzled by the usage for any other reason, it is really better avoided. There is a different debate about whether and how far $R^2$ is useful at all, which I will leave on one side.
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the v It's an expression that is often used as short-hand or conventional jargon. Anyone who finds it puzzling should feel that way! Some people say "accounts for" instead as a usage supposedly a little sof
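To make the nonsense-correlation point above concrete, here is a minimal Python sketch (purely simulated data; the salary/alcohol labels are only illustrative): two independently generated upward-trending series can easily show a high $R^2$ despite having no substantive connection.

```python
import numpy as np

# Two independent random walks with upward drift -- think salaries and
# alcohol consumption both rising over time (simulated, purely illustrative).
rng = np.random.default_rng(42)
salaries = np.cumsum(rng.normal(0.5, 1.0, 200))
alcohol = np.cumsum(rng.normal(0.5, 1.0, 200))

r = np.corrcoef(salaries, alcohol)[0, 1]
print(f"R^2 = {r**2:.2f}")  # often large, though the series share no mechanism
```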
45,198
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the variation of the data?
Say you fit the model: $$ Y_i=\beta_0+\beta_1X_i+\epsilon_i $$ and you get: $$ R^2=0.81 $$ This means that your independent variable $X$ accounts for 81% of the variability in your dependent variable $Y$. Or, in other words, 81% of the variability in $Y$ is explained by the variability in $X$. This is easy to see if you look at how $R^2$ is calculated: $$ R^2=1-\frac{SS_{residuals}}{SS_{total}} $$ where $SS_{total}$ is the total sum of squares, which represents the total variability of your data (in particular, the total variability of your dependent variable $Y$); and $SS_{residuals}$ is the residual sum of squares, which is a measure of the discrepancy between your data and the model. You can see that the lower $SS_{residuals}$ is, the better the fit, i.e. the higher $R^2$ will be. Finally, recall that the total sum of squares can be partitioned as follows: $$ SS_{total}=SS_{regression}+SS_{residuals} $$ You can see that $SS_{residuals}$ represents the variability that's left after you remove the variability explained by the model, because $SS_{residuals}=SS_{total}-SS_{regression}$. Hence, $\frac{SS_{residuals}}{SS_{total}}$ is the proportion of the total variability that isn't explained by the model, and therefore $1-\frac{SS_{residuals}}{SS_{total}}$ is the proportion of the total variability that is explained by the model.
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the v
Say you fit the model: $$ Y_i=\beta_0+\beta_1X_i+\epsilon_i $$ and you get: $$ R^2=0.81 $$ This means that your independent variable $X$ accounts for 81% of the variability in your dependent variable $Y$. Or, in
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the variation of the data? Say you fit the model: $$ Y_i=\beta_0+\beta_1X_i+\epsilon_i $$ and you get: $$ R^2=0.81 $$ This means that your independent variable $X$ accounts for 81% of the variability in your dependent variable $Y$. Or, in other words, 81% of the variability in $Y$ is explained by the variability in $X$. This is easy to see if you look at how $R^2$ is calculated: $$ R^2=1-\frac{SS_{residuals}}{SS_{total}} $$ where $SS_{total}$ is the total sum of squares, which represents the total variability of your data (in particular, the total variability of your dependent variable $Y$); and $SS_{residuals}$ is the residual sum of squares, which is a measure of the discrepancy between your data and the model. You can see that the lower $SS_{residuals}$ is, the better the fit, i.e. the higher $R^2$ will be. Finally, recall that the total sum of squares can be partitioned as follows: $$ SS_{total}=SS_{regression}+SS_{residuals} $$ You can see that $SS_{residuals}$ represents the variability that's left after you remove the variability explained by the model, because $SS_{residuals}=SS_{total}-SS_{regression}$. Hence, $\frac{SS_{residuals}}{SS_{total}}$ is the proportion of the total variability that isn't explained by the model, and therefore $1-\frac{SS_{residuals}}{SS_{total}}$ is the proportion of the total variability that is explained by the model.
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the v Say you fit the model: $$ Y_i=\beta_0+\beta_1X_i+\epsilon_i $$ and you get: $$ R^2=0.81 $$ This means that your independent variable $X$ accounts for 81% of the variability in your dependent variable $Y$. Or, in
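A quick numerical check of the identities above, with simulated data (the variable names and numbers are made up for illustration; any OLS fit would do):

```python
import numpy as np

# Simulate data from Y = b0 + b1*X + noise and fit by ordinary least squares.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 + 3.0 * x + rng.normal(size=100)

b1, b0 = np.polyfit(x, y, 1)  # np.polyfit returns (slope, intercept) for degree 1
y_hat = b0 + b1 * x

ss_total = np.sum((y - y.mean()) ** 2)    # total variability of Y
ss_resid = np.sum((y - y_hat) ** 2)       # discrepancy between data and model
ss_reg = np.sum((y_hat - y.mean()) ** 2)  # variability captured by the model

print(np.isclose(ss_total, ss_reg + ss_resid))  # the partition holds for OLS
print(1 - ss_resid / ss_total)                  # R^2: proportion explained
```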
45,199
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the variation of the data?
$R^2$ is a comparison of the variance of the error terms of two models. One model is naïve and always makes the same guess of $\bar y$ each time; this is how we get the "total sum of squares". The other model uses some features in an attempt to have less variance in the error term; this is how we get the "residual sum of squares". In that regard, $R^2$ is a measure of the extent to which the error term variance is reduced. Some proportion of the original variance (from the naïve model) remains, but some proportion is explained by considering features. Let's do an example. You want to know the height of a person you select from your family. Knowing nothing about the person you select, the best guess (in terms of minimizing square loss) is the mean height of people in your family; this is the naïve model. However, you know that adults tend to be taller than children. By considering age, you reduce the variability in the heights; this is the regression model of interest. Some of the reason the raw height values vary is because the subjects have different ages; age explains some of the variability in heights. In math, and (shamelessly) taking some of a previous post of mine... Notation: $y_i$ is observation $i$ of some response variable $Y$; $\hat{y}_i$ is the value of $y_i$ predicted by the regression; $\bar{y}$ is the average of all observations of the response variable. $$ y_i-\bar{y} = (y_i - \hat{y}_i + \hat{y}_i - \bar{y}) = (y_i - \hat{y}_i) + (\hat{y}_i - \bar{y}) $$ $$( y_i-\bar{y})^2 = \Big[ (y_i - \hat{y}_i) + (\hat{y}_i - \bar{y}) \Big]^2 = (y_i - \hat{y}_i)^2 + (\hat{y}_i - \bar{y})^2 + 2(y_i - \hat{y}_i)(\hat{y}_i - \bar{y}) $$ $$SSTotal := \sum_i ( y_i-\bar{y})^2 = \sum_i(y_i - \hat{y}_i)^2 + \sum_i(\hat{y}_i - \bar{y})^2 + 2\sum_i\Big[ (y_i - \hat{y}_i)(\hat{y}_i - \bar{y}) \Big] =: SSRes + SSReg + Other$$ In OLS regression with an intercept, $Other = 0$, so $$R^2 = 1-\dfrac{SSRes}{SSTotal} = \dfrac{SSTotal - SSRes - Other}{SSTotal} = \dfrac{SSReg}{SSTotal}.$$ Equivalently, dividing $SSRes$ and $SSTotal$ by $n$, $$R^2 = 1-\dfrac{SSRes/n}{SSTotal/n} = 1-\dfrac{\operatorname{Var}(\epsilon_{model})}{\operatorname{Var}(\epsilon_{naïve})},$$ since $SSRes/n$ is the variance of the regression model's errors and $SSTotal/n$ is the variance of the naïve model's errors. The "proportion of variance explained" reading only applies in-sample (out of sample, the $Other$ term need not vanish); and if we consider the sample to be a population, then dividing by $n$ gives the exact variances rather than estimates.
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the v
$R^2$ is a comparison of the variance of the error terms of two models. One model is naïve and always makes the same guess of $\bar y$ each time; this is how we get the "total sum of squares". The oth
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the variation of the data? $R^2$ is a comparison of the variance of the error terms of two models. One model is naïve and always makes the same guess of $\bar y$ each time; this is how we get the "total sum of squares". The other model uses some features in an attempt to have less variance in the error term; this is how we get the "residual sum of squares". In that regard, $R^2$ is a measure of the extent to which the error term variance is reduced. Some proportion of the original variance (from the naïve model) remains, but some proportion is explained by considering features. Let's do an example. You want to know the height of a person you select from your family. Knowing nothing about the person you select, the best guess (in terms of minimizing square loss) is the mean height of people in your family; this is the naïve model. However, you know that adults tend to be taller than children. By considering age, you reduce the variability in the heights; this is the regression model of interest. Some of the reason the raw height values vary is because the subjects have different ages; age explains some of the variability in heights. In math, and (shamelessly) taking some of a previous post of mine... Notation: $y_i$ is observation $i$ of some response variable $Y$; $\hat{y}_i$ is the value of $y_i$ predicted by the regression; $\bar{y}$ is the average of all observations of the response variable. $$ y_i-\bar{y} = (y_i - \hat{y}_i + \hat{y}_i - \bar{y}) = (y_i - \hat{y}_i) + (\hat{y}_i - \bar{y}) $$ $$( y_i-\bar{y})^2 = \Big[ (y_i - \hat{y}_i) + (\hat{y}_i - \bar{y}) \Big]^2 = (y_i - \hat{y}_i)^2 + (\hat{y}_i - \bar{y})^2 + 2(y_i - \hat{y}_i)(\hat{y}_i - \bar{y}) $$ $$SSTotal := \sum_i ( y_i-\bar{y})^2 = \sum_i(y_i - \hat{y}_i)^2 + \sum_i(\hat{y}_i - \bar{y})^2 + 2\sum_i\Big[ (y_i - \hat{y}_i)(\hat{y}_i - \bar{y}) \Big] =: SSRes + SSReg + Other$$ In OLS regression with an intercept, $Other = 0$, so $$R^2 = 1-\dfrac{SSRes}{SSTotal} = \dfrac{SSTotal - SSRes - Other}{SSTotal} = \dfrac{SSReg}{SSTotal}.$$ Equivalently, dividing $SSRes$ and $SSTotal$ by $n$, $$R^2 = 1-\dfrac{SSRes/n}{SSTotal/n} = 1-\dfrac{\operatorname{Var}(\epsilon_{model})}{\operatorname{Var}(\epsilon_{naïve})},$$ since $SSRes/n$ is the variance of the regression model's errors and $SSTotal/n$ is the variance of the naïve model's errors. The "proportion of variance explained" reading only applies in-sample (out of sample, the $Other$ term need not vanish); and if we consider the sample to be a population, then dividing by $n$ gives the exact variances rather than estimates.
How is the relationship between two variables $X$ and $Y$ supposed to "explain" $R^2\%$ of the v $R^2$ is a comparison of the variance of the error terms of two models. One model is naïve and always makes the same guess of $\bar y$ each time; this is how we get the "total sum of squares". The oth
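The decomposition above, including the $Other$ cross term and the error-variance form of $R^2$, can likewise be checked numerically (simulated data; the age/height framing is only illustrative):

```python
import numpy as np

# Simulated "height vs age" data, fit by OLS with an intercept.
rng = np.random.default_rng(1)
age = rng.uniform(0, 10, 60)
height = 100 + 7 * age + rng.normal(0, 8, 60)

slope, intercept = np.polyfit(age, height, 1)
pred = intercept + slope * age
resid = height - pred

other = 2 * np.sum(resid * (pred - height.mean()))
print(np.isclose(other, 0.0))  # the cross term vanishes in-sample for OLS

# Error-variance form: the naive model guesses the mean, so its errors are
# height - mean(height); np.var divides by n, matching the SS/n terms above.
r2 = 1 - np.sum(resid**2) / np.sum((height - height.mean())**2)
print(np.isclose(r2, 1 - np.var(resid) / np.var(height)))
```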
45,200
Math behind applying elastic net penalties to logistic regression
The elastic net penalty terms are added to the negative log-likelihood cost function, i.e. the final cost function is: $\sum_{i = 1}^{N}\bigg[- \big(y_i\log(p_i) + (1-y_i)\log(1-p_i)\big)\bigg] + \lambda_1 \sum_{j=1}^{k}|w_j| + \lambda_2 \sum_{j=1}^{k}w_j^2$ where $p_i$ is the predicted probability for observation $i$ and $w_1, \dots, w_k$ are the coefficients. The first term is the negative log-likelihood, the second term is the $l_1$-norm part of the elastic net, and the third term is the (squared) $l_2$-norm part. That is, the fit minimizes the negative log-likelihood while also keeping the weights small.
Math behind applying elastic net penalties to logistic regression
The elastic net penalty terms are added to the negative log-likelihood cost function, i.e. the final cost function is: $\sum_{i = 1}^{N}\bigg[- \big(y_i\log(p_i) + (1-y_i)\log(1-p_i)\big)\bigg] + \lambda_1 \sum_{j=1}^{k}|w_j| + \l
Math behind applying elastic net penalties to logistic regression The elastic net penalty terms are added to the negative log-likelihood cost function, i.e. the final cost function is: $\sum_{i = 1}^{N}\bigg[- \big(y_i\log(p_i) + (1-y_i)\log(1-p_i)\big)\bigg] + \lambda_1 \sum_{j=1}^{k}|w_j| + \lambda_2 \sum_{j=1}^{k}w_j^2$ where $p_i$ is the predicted probability for observation $i$ and $w_1, \dots, w_k$ are the coefficients. The first term is the negative log-likelihood, the second term is the $l_1$-norm part of the elastic net, and the third term is the (squared) $l_2$-norm part. That is, the fit minimizes the negative log-likelihood while also keeping the weights small.
Math behind applying elastic net penalties to logistic regression The elastic net penalty terms are added to the negative log-likelihood cost function, i.e. the final cost function is: $\sum_{i = 1}^{N}\bigg[- \big(y_i\log(p_i) + (1-y_i)\log(1-p_i)\big)\bigg] + \lambda_1 \sum_{j=1}^{k}|w_j| + \l
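As a rough sketch of the cost function above in code (the function name, data, and penalty values are made up; this evaluates the objective rather than optimizing it):

```python
import numpy as np

def penalized_nll(w, b, X, y, lam1, lam2):
    """Elastic-net-penalized negative log-likelihood for logistic regression."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    eps = 1e-12                             # guard against log(0)
    nll = -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return nll + lam1 * np.sum(np.abs(w)) + lam2 * np.sum(w ** 2)

# Toy usage on simulated data.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(float)
print(penalized_nll(np.zeros(3), 0.0, X, y, lam1=0.1, lam2=0.1))
```

In practice one would hand this objective to an off-the-shelf optimizer; if memory serves, scikit-learn's LogisticRegression supports penalty='elasticnet' with the 'saga' solver, though it parameterizes the penalty through C and l1_ratio rather than separate $\lambda_1$ and $\lambda_2$.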