24,801 | KNN: 1-nearest neighbor | The bias is low, because you fit your model only to the single nearest point. This means your model will be really close to your training data.
The variance is high, because fitting to only the single nearest point means that the probability that you model the noise in your data is really high. Following your definition above, your model will depend highly on the subset of data points that you choose as training data. If you randomly reshuffle the data points you choose, the model will be dramatically different in each iteration. So
expected divergence of the estimated prediction function from its average value (i.e. how dependent the classifier is on the random sampling made in the training set)
will be high, because each time your model will be different.
Example
In general, a k-NN model fits a specific point using the k nearest data points in your training set. For 1-NN the fit depends on only a single other point. E.g. suppose you want to split your samples into two groups (classification), red and blue, and you evaluate the model at a certain point p whose nearest 4 neighbors are red, blue, blue, blue (in ascending distance from p). Then 4-NN classifies p as blue (3 blue votes to 1 red), but 1-NN classifies it as red, because red is the nearest point. This means that your model stays really close to your training data, so the bias is low: if you compute the RSS between your model and your training data, it is close to 0. In contrast, the variance of your model is high, because the model is extremely sensitive and wiggly. As pointed out above, randomly resampling your training set would be likely to change the model dramatically. A 10-NN model would be more robust in such cases, but could be too stiff. Which k to choose depends on your data set; this is exactly the bias-variance tradeoff.
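The high variance of 1-NN can be checked numerically. The sketch below (a hand-rolled 1-D k-NN using only NumPy, with made-up data, not the poster's setup) refits 1-NN and 10-NN regressions on many independent training sets and compares the variance of their predictions at a fixed query point:

```python
import numpy as np

def knn_predict(x_train, y_train, x0, k):
    """Average the y-values of the k training points nearest to x0."""
    idx = np.argsort(np.abs(x_train - x0))[:k]
    return y_train[idx].mean()

rng = np.random.default_rng(0)
x0 = 0.5                       # fixed query point
preds_1nn, preds_10nn = [], []
for _ in range(500):           # many independent training sets
    x = rng.uniform(0, 1, 100)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.5, 100)  # noisy target
    preds_1nn.append(knn_predict(x, y, x0, k=1))
    preds_10nn.append(knn_predict(x, y, x0, k=10))

var_1nn = np.var(preds_1nn)    # tracks the noise of a single point
var_10nn = np.var(preds_10nn)  # averaging 10 points damps the noise
print(var_1nn, var_10nn)
```

The 1-NN prediction inherits roughly the full noise variance of a single training point, while 10-NN averages it away at the cost of some smoothing bias.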
24,802 | KNN: 1-nearest neighbor | You should keep in mind that the 1-Nearest Neighbor classifier is actually the most complex nearest neighbor model. By most complex, I mean it has the most jagged decision boundary, and is most likely to overfit. If you use an N-nearest neighbor classifier (N = number of training points), you'll classify everything as the majority class. Different permutations of the data will get you the same answer, giving you a set of models that have zero variance (they're all exactly the same), but a high bias (they're all consistently wrong). Reducing the setting of K gets you closer and closer to the training data (low bias), but the model will be much more dependent on the particular training examples chosen (high variance).
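The k = N limiting case is easy to verify: when every training point votes, the prediction is the majority class regardless of where the query sits. A minimal sketch (NumPy assumed, toy data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 101)
labels = np.array([0] * 60 + [1] * 41)   # class 0 is the majority

def knn_classify(x_train, y_train, x0, k):
    idx = np.argsort(np.abs(x_train - x0))[:k]
    return np.bincount(y_train[idx]).argmax()

# With k = len(x), every query point gets the majority label.
preds = [knn_classify(x, labels, x0, k=len(x)) for x0 in (-5.0, 0.0, 5.0)]
print(preds)  # every prediction is 0
```

Since all N points vote for every query, the vote count is identical everywhere, which is exactly the zero-variance, high-bias extreme described above.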
24,803 | KNN: 1-nearest neighbor | Here is a very interesting blog post about bias and variance. The section 3.1 deals with the knn algorithm and explains why low k leads to high variance and low bias.
Figure 5 is very interesting: you can see in real time how the model is changing while k is increasing. For low k, there's a lot of overfitting (some isolated "islands") which leads to low bias but high variance. For very high k, you've got a smoother model with low variance but high bias. In this example, a value of k between 10 and 20 will give a decent model which is general enough (relatively low variance) and accurate enough (relatively low bias).
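The pattern described here, overfitting at low k and oversmoothing at very high k, can be reproduced without the figure. The sketch below (NumPy only; the two-class data and sizes are made up for illustration) sweeps k on a simple 1-D classification problem and compares test errors:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_data(n):
    # Two overlapping 1-D Gaussian classes; class 0 is slightly more common.
    y = (rng.uniform(size=n) < 0.4).astype(int)
    x = rng.normal(1.5 * y, 1.0)
    return x, y

def knn_error(x_tr, y_tr, x_te, y_te, k):
    d = np.abs(x_te[:, None] - x_tr[None, :])   # pairwise distances
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest training points
    votes = y_tr[idx].mean(axis=1) > 0.5        # majority vote
    return np.mean(votes.astype(int) != y_te)

x_tr, y_tr = make_data(300)
x_te, y_te = make_data(2000)
errs = {k: knn_error(x_tr, y_tr, x_te, y_te, k) for k in (1, 15, 300)}
print(errs)
```

Typically the moderate k wins: k = 300 (all points) collapses to the majority class and pays its full error rate, while k = 1 pays for fitting the noise.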
24,804 | Why does the product of the bivariate regression coefficients of the $y$-on-$x$ line and $x$-on-$y$ line equal the square of the correlation? | $b = r \; \text{SD}_y / \text{SD}_x$ and $d = r \; \text{SD}_x / \text{SD}_y$, so $b\times d = r^2$.
Many statistics textbooks would touch on this; I like Freedman et al., Statistics. See also here and this Wikipedia article.
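Karl's identity is easy to confirm numerically. A quick sketch (NumPy assumed, simulated data) fits both simple regressions and compares the product of the slopes with the squared correlation:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=1000)
y = 0.7 * x + rng.normal(size=1000)   # a correlated pair

b = np.polyfit(x, y, 1)[0]            # slope of the y-on-x regression
d = np.polyfit(y, x, 1)[0]            # slope of the x-on-y regression
r = np.corrcoef(x, y)[0, 1]

print(b * d, r**2)                    # the two agree
```

Both slopes are least-squares ratios $S_{xy}/S_{xx}$ and $S_{xy}/S_{yy}$, so the agreement is exact up to floating point, not merely approximate.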
24,805 | Why does the product of the bivariate regression coefficients of the $y$-on-$x$ line and $x$-on-$y$ line equal the square of the correlation? | Have a look at Thirteen Ways to Look at the Correlation Coefficient - and especially ways 3, 4, 5 will be of most interest for you.
Rodgers, J.L., & Nicewander, W.A. (1988). Thirteen ways to look at the correlation coefficient. The American Statistician, 42, 1, pp. 59-66.
24,806 | Why does the product of the bivariate regression coefficients of the $y$-on-$x$ line and $x$-on-$y$ line equal the square of the correlation? | $\DeclareMathOperator{\Cov}{Cov}$
$\DeclareMathOperator{\Corr}{Corr}$
$\DeclareMathOperator{\SD}{SD}$
$\DeclareMathOperator{\Var}{Var}$
$\DeclareMathOperator{\sgn}{sgn}$
$\DeclareMathOperator{\nsum}{\sum_{i=1}^{n}}$
Recall that many introductory texts define
$$S_{xy} = \nsum (x_i - \bar x)(y_i - \bar y)$$
Then by setting $y$ as $x$ we have $S_{xx} = \nsum (x_i - \bar x)^2$ and similarly $S_{yy} = \nsum (y_i - \bar y)^2$.
Formulae for the correlation coefficient $r$, the slope of the $y$-on-$x$ regression (your $b$) and the slope of the $x$-on-$y$ regression (your $d$) are often given as:
$$
\begin{align}
r &= \frac{S_{xy}}{\sqrt{S_{xx}S_{yy}}} \tag{1} \\
\hat \beta_{y\text{ on }x} &= \frac{S_{xy}}{S_{xx}} \tag{2} \\
\hat \beta_{x\text{ on }y} &= \frac{S_{xy}}{S_{yy}} \tag{3}
\end{align}
$$
Then multiplying $(2)$ and $(3)$ clearly gives the square of $(1)$:
$$\hat \beta_{y\text{ on }x} \cdot \hat \beta_{x\text{ on }y} = \frac{S_{xy}^2}{S_{xx}S_{yy}} = r^2 $$
Alternatively the numerators and denominators of the fractions in $(1)$, $(2)$ and $(3)$ are often divided by $n$ or $(n-1)$ so that things are framed in terms of sample or estimated variances and covariances. For instance, from $(1)$, the estimated correlation coefficient is just the estimated covariance, scaled by the estimated standard deviations:
$$\begin{align}
r &= \widehat \Corr(X,Y) = \frac{\widehat \Cov(X,Y)}{\widehat{\SD(X)}\widehat{\SD(Y)}} \tag{4} \\
\hat \beta_{y\text{ on }x} &= \frac{\widehat \Cov(X,Y)}{\widehat{\Var(X)}} \tag{5} \\
\hat \beta_{x\text{ on }y} &= \frac{\widehat \Cov(X,Y)}{\widehat{\Var(Y)}} \tag{6}
\end{align}$$
We then immediately find from multiplying $(5)$ and $(6)$ that
$$\hat \beta_{y\text{ on }x} \hat \beta_{x\text{ on }y} =
\frac{\widehat \Cov(X,Y)^2}{\widehat{\Var(X)}\widehat{\Var(Y)}} =
\left( \frac{\widehat \Cov(X,Y)}{\widehat{\SD(X)}\widehat{\SD(Y)}} \right)^2 =
r^2 $$
We might instead have rearranged $(4)$ to write the covariance as a "scaled-up" correlation:
$$\widehat \Cov(X,Y) = r\cdot \widehat{\SD(X)} \widehat{\SD(Y)} \tag{7}$$
Then by substituting $(7)$ into $(5)$ and $(6)$ we could rewrite the regression coefficients as $\hat \beta_{y\text{ on }x} = r \frac{\widehat \SD(y)}{\widehat \SD(x)}$ and $\hat \beta_{x\text{ on }y} = r \frac{\widehat \SD(x)}{\widehat \SD(y)}$. Multiplying these together would also produce $r^2$, and this is @Karl's solution. Writing the slopes in this way helps explain how we can see the correlation coefficient as a standardized regression slope.
Finally note that in your case $r = \sqrt{bd} =\sqrt{\hat \beta_{y\text{ on }x} \hat \beta_{x\text{ on }y}}$ but this was because your correlation was positive. If your correlation were negative, then you would have to take the negative root.
To work out whether your correlation is positive or negative, you simply need to look at the sign (plus or minus) of either regression coefficient; it doesn't matter whether you use the $y$-on-$x$ or the $x$-on-$y$ slope, as their signs will be the same. So you can use the formula:
$$ r = \sgn(\hat \beta_{y\text{ on }x}) \sqrt{\hat \beta_{y\text{ on }x} \hat \beta_{x\text{ on }y}}$$
where $\sgn$ is the signum function, i.e. $+1$ if the slope is positive and $-1$ if the slope is negative.
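The sign caveat in the final formula can be checked with a negatively correlated pair. A short sketch (NumPy assumed, simulated data), using np.sign for $\sgn$:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=1000)
y = -0.8 * x + rng.normal(size=1000)   # negatively correlated pair

b_yx = np.polyfit(x, y, 1)[0]          # y-on-x slope (negative)
b_xy = np.polyfit(y, x, 1)[0]          # x-on-y slope (same sign)
r = np.corrcoef(x, y)[0, 1]

# The bare square root loses the sign; the signum factor restores it.
r_recovered = np.sign(b_yx) * np.sqrt(b_yx * b_xy)
print(r, r_recovered)
```

With the sign factor, the recovered value matches $r$ exactly up to floating point, even though $\sqrt{bd}$ alone would return $|r|$.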
24,807 | What are the use cases for Propensity Score Matching? | You need to distinguish between uses of propensity scores for matching of cases versus for more general adjustments.
The discussion on this page suggests that there isn't much of a use case for propensity score matching. Among other problems, there is seldom much to be gained by throwing away information. Yet that is what matching cases does, with additional problems introduced by using propensity scores for the matching.
That said, restricting yourself to regression to control for covariates can fail if the regression model for outcome, including the treatment effect of interest and the covariates, is incomplete or incorrect. And there's no a priori way to know whether that's the case.
Inverse propensity score weighting provides another way to achieve effective covariate balance between treated and control groups. Cases with a lower probability of getting the treatment get higher weight, providing a more graded balance between treatment groups. That helps to estimate what would have happened had the individuals with the same characteristics been equally represented in control and treatment groups.
You can combine both types of control, via regression and propensity scores, to get what's sometimes called "doubly robust" estimation. If either the regression or the propensity-score model is OK, you can get a reliable measure of treatment effect--provided, as Björn rightly notes in a comment, that there isn't heterogeneity of unobserved covariates affecting outcome between treatment groups.
The issues you raise are much more than a couple of paragraphs can cover. Read the Causal Inference book by Hernán and Robins for a thorough recent treatment.
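Inverse propensity weighting is easy to illustrate by simulation. In the sketch below (NumPy only; for brevity the weights use the true propensity model rather than an estimated one, which is an idealization) the naive group difference is confounded, while the weighted estimate recovers the true treatment effect of 2:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
x = rng.normal(size=n)                        # confounder
p = 1.0 / (1.0 + np.exp(-x))                  # true propensity P(T=1 | x)
t = (rng.uniform(size=n) < p).astype(float)   # treatment assignment
y = 2.0 * t + 3.0 * x + rng.normal(size=n)    # true treatment effect is 2

naive = y[t == 1].mean() - y[t == 0].mean()   # biased: treated units have higher x

# Horvitz-Thompson style inverse propensity weighting
ipw = np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))

print(naive, ipw)   # naive is far from 2, ipw is close
```

Units with a low probability of their observed assignment get large weights, which is exactly the "more graded balance" described above.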
24,808 | What are the use cases for Propensity Score Matching? | Propensity score (PS) analysis has many problems in general, and matching is especially problematic. I prefer covariate adjustment for a spline function of the logit of PS if you need propensity scores, and you must also include pre-specified individual strong covariates to absorb outcome heterogeneity. If the sample size is large in relation to the number of model parameters, ordinary covariate adjustment without PS works just fine. Problems with PS and matching are detailed in links here.
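The covariate-adjustment alternative can be sketched in the same simulation style. For brevity the sketch below (NumPy only, simulated data) enters the logit of the true propensity linearly rather than through a spline, and recovers the treatment effect by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20_000
x = rng.normal(size=n)                        # confounder
p = 1.0 / (1.0 + np.exp(-x))                  # true propensity
t = (rng.uniform(size=n) < p).astype(float)
y = 2.0 * t + 3.0 * x + rng.normal(size=n)    # true treatment effect is 2

logit_ps = np.log(p / (1 - p))                # equals x by construction here

# OLS of y on [1, t, logit(PS)]; the coefficient on t is the adjusted effect.
X = np.column_stack([np.ones(n), t, logit_ps])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta[1])   # close to 2
```

In real applications the PS would be estimated and the logit expanded in a spline basis, as recommended above; the point here is only that adjustment, unlike matching, uses every observation.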
24,809 | What are the use cases for Propensity Score Matching? | Complementary to the answers from EdM and Frank Harrell (+1 to both).
One might want to consider extensions that go beyond using propensity scores as the direct probability of treatment-group assignment. Usually such work aims to re-weight the sample at hand so that certain features are "balanced". A prime example is entropy balancing (Hainmueller (2012) Entropy Balancing for Causal Effects: A Multivariate Reweighting Method to Produce Balanced Samples in Observational Studies - see the R package ebal). Balancing here refers to using weights such that the moments of selected covariates are approximately equal across the two groups of interest (e.g. both groups have similar mean and variance in terms of age and of years of education). There are a few other covariate balancing approaches you might want to consider too (e.g. covariate balancing propensity scores (Imai & Ratkovic (2013) Covariate balancing propensity score) or targeted stable balancing weights (Zubizarreta (2015) Stable Weights that Balance Covariates for Estimation With Incomplete Outcome Data) - see the R packages CBPS and optweight respectively).
We can use these weights directly or within an IPTW or doubly-robust approach (as EdM suggests). Please note though that no matching method shields us against unmeasured confounding variables.
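The core of entropy balancing, exponential-tilting weights on the control group so that its covariate moments match the treated group, can be sketched in a few lines for a single covariate and a single moment. The sketch below (NumPy only, simulated data; real use would call ebal and balance several moments at once) solves the one-dimensional dual problem by bisection:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=5000)
p = 1.0 / (1.0 + np.exp(-x))
treated = rng.uniform(size=5000) < p
x_t, x_c = x[treated], x[~treated]
target = x_t.mean()                 # treated mean, to be matched by controls

def tilted_mean(lam, xs):
    w = np.exp(lam * (xs - xs.mean()))      # stabilized exponential tilt
    return np.sum(w * xs) / np.sum(w)

# The tilted mean is increasing in lambda, so bisection finds the root.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if tilted_mean(mid, x_c) < target:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
w = np.exp(lam * (x_c - x_c.mean()))
w /= w.sum()                                 # normalized balancing weights
print(np.sum(w * x_c), target)               # weighted control mean matches treated mean
```

The exponential form keeps all weights positive and, among such weights, minimally perturbs the uniform weighting, which is the "entropy" in entropy balancing.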
24,810 | How can I obtain a Cauchy distribution from two standard normal distributions? | This can be done with a minimum of computation, relying only on (a) simple algebra and (b) basic knowledge of distributions associated with statistical tests. As such, the demonstration may have substantial pedagogical value--which is a fancy way of saying it's worth studying.
Let $Z=X/(X+Y),$ so that
$$Z - \frac{1}{2} = \frac{X}{X+Y} - \frac{X/2+Y/2}{X+Y} = \frac{1}{2}\frac{X-Y}{X+Y} = \frac{1}{2}\frac{(X-Y)/\sqrt{2}}{(X+Y)/\sqrt{2}} = \frac{1}{2}\frac{U}{V}$$
where $$(U,V) = \left(\frac{X-Y}{\sqrt{2}}, \frac{X+Y}{\sqrt{2}}\right).$$ Because $(U,V)$ is a linear transformation of the bivariate Normal variable $(X,Y),$ it too is bivariate Normal, and an easy calculation (ultimately requiring, apart from arithmetical definitions, only the fact that $1+1=2$) shows the variances of $U$ and $V$ are unity and $U$ and $V$ are uncorrelated: that is, $(U,V)$ also has a standard bivariate Normal distribution.
In particular, $U$ and $V$ are both symmetrically distributed (about $0$), implying $U/V$ has the same distribution as $U/|V|.$ But $|V| = \sqrt{V^2}$ has, by definition, a $\chi(1)$ distribution. Since $U$ and $V$ are independent, so are $U$ and $|V|,$ whence (also by definition) $U/|V| = U/\sqrt{V^2/1}$ has a Student t distribution with one degree of freedom.
The conclusion, after no integration and only the simplest of algebraic calculations, is
$W = 2Z-1 = U/V$ has a Student t distribution with one degree of freedom.
That's just another name for the (standard) Cauchy distribution. Since $Z = W/2 + 1/2$
is just a rescaled and shifted version of $W$, $Z$ has a Cauchy distribution (once again by definition), QED.
Summary of facts used
Every one of the facts used in the foregoing analysis is of interest and well worth knowing.
These are basic theorems:
Linear transformations of bivariate Normal variables are bivariate Normal. (This could also be considered a definition.)
Uncorrelated bivariate Normal variables are independent.
The covariance is a quadratic form. (This, too, can be part of the definition of covariance, but that would be a little unusual.)
When two variables are independent, functions of each of them (separately) are also independent.
These are all definitions:
A sum of the squares of $n$ independent standard Normal variables has a $\chi^2(n)$ distribution.
The ratio of a standard Normal variable to the square root of $1/n$ times an independent $\chi^2(n)$ variable has a Student t distribution with $n$ degrees of freedom. See also A normal divided by the $\sqrt{\chi^2(s)/s}$ gives you a t-distribution -- proof.
A Cauchy distribution is a scaled, translated version of the Student t distribution with 1 degree of freedom. | How can I obtain a Cauchy distribution from two standard normal distributions? | This can be done with a minimum of computation, relying only on (a) simple algebra and (b) basic knowledge of distributions associated with statistical tests. As such, the demonstration may have subs | How can I obtain a Cauchy distribution from two standard normal distributions?
This can be done with a minimum of computation, relying only on (a) simple algebra and (b) basic knowledge of distributions associated with statistical tests. As such, the demonstration may have substantial pedagogical value--which is a fancy way of saying it's worth studying.
Let $Z=X/(X+Y),$ so that
$$Z - \frac{1}{2} = \frac{X}{X+Y} - \frac{X/2+Y/2}{X+Y} = \frac{1}{2}\frac{X-Y}{X+Y} = \frac{1}{2}\frac{(X-Y)/\sqrt{2}}{(X+Y)/\sqrt{2}} = \frac{1}{2}\frac{U}{V}$$
where $$(U,V) = \left(\frac{X-Y}{\sqrt{2}}, \frac{X+Y}{\sqrt{2}}\right).$$ Because $(U,V)$ is a linear transformation of the bivariate Normal variable $(X,Y),$ it too is bivariate Normal, and an easy calculation (ultimately requiring, apart from arithmetical definitions, only the fact that $1+1=2$) shows the variances of $U$ and $V$ are unity and $U$ and $V$ are uncorrelated: that is, $(U,V)$ also has a standard Normal distribution.
In particular, $U$ and $V$ are both symmetrically distributed (about $0$), implying $U/V$ has the same distribution as $U/|V|.$ But $|V| = \sqrt{V^2}$ has, by definition, a $\chi(1)$ distribution. Since $U$ and $V$ are independent, so are $U$ and $|V|,$ whence (also by definition) $U/|V| = U/\sqrt{V^2/1}$ has a Student t distribution with one degree of freedom.
The conclusion, after no integration and only the simplest of algebraic calculations, is
$W = 2Z-1 = U/V$ has a Student t distribution with one degree of freedom.
That's just another name for the (standard) Cauchy distribution. Since $Z = W/2 + 1/2$
is just a rescaled and shifted version of $W$, $Z$ has a Cauchy distribution (once again by definition), QED.
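As a quick numerical sanity check of this conclusion (a Monte Carlo sketch in Python, not part of the original argument; the sample size and seed are arbitrary choices): the quartiles of a Cauchy distribution sit at location ± scale, so for $Z$ they should be near $0$ and $1$, with median near $1/2$.

```python
import random

def simulate_z(n, seed=42):
    """Draw Z = X/(X+Y) for independent standard Normal X and Y."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x, y = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        if x + y != 0.0:  # the event X + Y = 0 has probability zero
            out.append(x / (x + y))
    return sorted(out)

zs = simulate_z(100_000)
m = len(zs)
q1, med, q3 = zs[m // 4], zs[m // 2], zs[3 * m // 4]
# Cauchy(location 1/2, scale 1/2) has Q1 = 0, median = 1/2, Q3 = 1
print(f"Q1={q1:.3f}  median={med:.3f}  Q3={q3:.3f}")
```

The empirical quartiles land close to the theoretical Cauchy(1/2, 1/2) values, as the derivation predicts.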
Summary of facts used
Every one of the facts used in the foregoing analysis is of interest and well worth knowing.
These are basic theorems:
Linear transformations of bivariate Normal variables are bivariate Normal. (This could also be considered a definition.)
Uncorrelated bivariate Normal variables are independent.
The covariance is a quadratic form. (This, too, can be part of the definition of covariance, but that would be a little unusual.)
When two variables are independent, functions of each of them (separately) are also independent.
These are all definitions:
A sum of the squares of $n$ independent standard Normal variables has a $\chi^2(n)$ distribution.
The ratio of a standard Normal variable to the square root of $1/n$ times an independent $\chi^2(n)$ variable has a Student t distribution with $n$ degrees of freedom. See also A normal divided by the $\sqrt{\chi^2(s)/s}$ gives you a t-distribution -- proof.
A Cauchy distribution is a scaled, translated version of the Student t distribution with 1 degree of freedom. | How can I obtain a Cauchy distribution from two standard normal distributions?
This can be done with a minimum of computation, relying only on (a) simple algebra and (b) basic knowledge of distributions associated with statistical tests. As such, the demonstration may have subs |
24,811 | How can I obtain a Cauchy distribution from two standard normal distributions? | Correction: the Jacobian of the transform is $|V|$, not $V$, which implies that
$$f_{U,V}(u,v)=f_{X,Y}(uv,v-uv)|J|=\frac{|v|}{2\pi}\exp\left\{\frac{-v^2}{2}(2u^2-2u+1)\right\}$$
Hence
\begin{align}f_U(u)&=\frac{1}{2\pi}\int_{-\infty}^{\infty}|v|e^{\frac{-v^2}{2}(2u^2-2u+1)}\text{d}v\\
&=\frac{2}{2}\frac{1}{\pi}\int_{0}^{\infty}ve^{-\overbrace{\frac{v^2}{2}(2u^2-2u+1)}^y}\text{d}v\\
&=\frac{1}{\pi(2u^2-2u+1)}\int_0^{\infty}e^{-y}\text{d}y\\
&=\frac{1}{\pi}\frac{1}{2u^2-2u+1}\\
&=\frac{1}{\pi}\frac{1}{2(u-½)^2+½}\\
&=\frac{1}{½\pi}\frac{1}{4(u-½)^2+1}\\
&=\frac{1}{½\pi}\frac{1}{(2[u-½])^2+1}\\
&=\frac{1}{½\pi}\left(\left[\frac{u-½}{½}\right]^2+1\right)^{-1}\end{align}
which is the density of a Cauchy distribution with location ½ (which is also the median) and scale ½ (which is also the MAD). (The last equality in the question is erroneously using 2 instead of ½ as scale and missing the ½ in the first fraction denominator.)
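As a numerical cross-check of the final expression (an illustrative Python sketch; the integration interval and grid are arbitrary choices), the derived density should integrate over any interval to the Cauchy(½, ½) probability of that interval, whose CDF is $F(u) = \frac{1}{2} + \frac{1}{\pi}\arctan\big(\frac{u - 1/2}{1/2}\big)$:

```python
import math

def f_u(u):
    # the last line of the derivation: (1/(pi/2)) * ((2(u - 1/2))^2 + 1)^(-1)
    return (2.0 / math.pi) / ((2.0 * (u - 0.5)) ** 2 + 1.0)

def cauchy_cdf(u, loc=0.5, scale=0.5):
    return 0.5 + math.atan((u - loc) / scale) / math.pi

# trapezoidal rule on [-5, 6] versus the exact Cauchy(1/2, 1/2) probability
a, b, steps = -5.0, 6.0, 110_000
h = (b - a) / steps
numeric = h * (0.5 * (f_u(a) + f_u(b)) + sum(f_u(a + k * h) for k in range(1, steps)))
exact = cauchy_cdf(b) - cauchy_cdf(a)
print(numeric, exact)  # agree to several decimal places
```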
Check Pillai and Meng (2016) for further surprising properties of
the Cauchy distribution. | How can I obtain a Cauchy distribution from two standard normal distributions? | Correction: the Jacobian of the transform is $|V|$, not $V$, which implies that
$$f_{U,V}(u,v)=f_{X,Y}(uv,v-uv)|J|=\frac{|v|}{2\pi}\exp\left\{\frac{-v^2}{2}(2u^2-2u+1)\right\}$$
Hence
\begin{align}f_U | How can I obtain a Cauchy distribution from two standard normal distributions?
Correction: the Jacobian of the transform is $|V|$, not $V$, which implies that
$$f_{U,V}(u,v)=f_{X,Y}(uv,v-uv)|J|=\frac{|v|}{2\pi}\exp\left\{\frac{-v^2}{2}(2u^2-2u+1)\right\}$$
Hence
\begin{align}f_U(u)&=\frac{1}{2\pi}\int_{-\infty}^{\infty}|v|e^{\frac{-v^2}{2}(2u^2-2u+1)}\text{d}v\\
&=\frac{2}{2}\frac{1}{\pi}\int_{0}^{\infty}ve^{-\overbrace{\frac{v^2}{2}(2u^2-2u+1)}^y}\text{d}v\\
&=\frac{1}{\pi(2u^2-2u+1)}\int_0^{\infty}e^{-y}\text{d}y\\
&=\frac{1}{\pi}\frac{1}{2u^2-2u+1}\\
&=\frac{1}{\pi}\frac{1}{2(u-½)^2+½}\\
&=\frac{1}{½\pi}\frac{1}{4(u-½)^2+1}\\
&=\frac{1}{½\pi}\frac{1}{(2[u-½])^2+1}\\
&=\frac{1}{½\pi}\left(\left[\frac{u-½}{½}\right]^2+1\right)^{-1}\end{align}
which is the density of a Cauchy distribution with location ½ (which is also the median) and scale ½ (which is also the MAD). (The last equality in the question is erroneously using 2 instead of ½ as scale and missing the ½ in the first fraction denominator.)
Check Pillai and Meng (2016) for further surprising properties of
the Cauchy distribution. | How can I obtain a Cauchy distribution from two standard normal distributions?
Correction: the Jacobian of the transform is $|V|$, not $V$, which implies that
$$f_{U,V}(u,v)=f_{X,Y}(uv,v-uv)|J|=\frac{|v|}{2\pi}\exp\left\{\frac{-v^2}{2}(2u^2-2u+1)\right\}$$
Hence
\begin{align}f_U |
24,812 | Does the difference between two symmetric r.v.'s also have a symmetric distribution? | Let $X \sim f(x)$ and $Y \sim g(y)$ be PDFs symmetric about medians $a$ and $b$ respectively. As long as $X$ and $Y$ are independent, the probability distribution of the difference $Z = X - Y$ is the convolution of $X$ and $-Y$, i.e.
$$
p(z) = \int_{-\infty}^\infty f(z + y) g(y) dy,
$$
or equivalently, writing $h(y) = g(-y)$ for the PDF of $-Y$ (which has median $-b$), $p(z) = \int_{-\infty}^\infty f(z - y) h(y) dy.$
Intuitively, we would expect the result to be symmetric about $a - b$ so let's try that.
$$
\begin{split}
p(a - b - z) &= \int_{-\infty}^\infty f(a - b - z + y) g(y) dy \\
&= \int_{-\infty}^\infty f(a + b + z - y) g(2 b - y) dy \\
&= \int_{-\infty}^\infty f(a - b + z + v) g(v) dv \\
&= p(a - b + z).
\end{split}
$$
In the second line I used both the symmetry of $f(x)$ about $a$ and the symmetry of $g(y)$ about $b$ (so that $g(y) = g(2b - y)$). In the third line, I used the substitution $v = 2 b - y$ in the integral. This proves that $p(z)$ is symmetric about $a - b$ if $f(x)$ is symmetric about $a$ and $g(y)$ is symmetric about $b.$
If $X$ and $Y$ were not independent, and $f$ and $g$ were simply marginal distributions, then we would need to know the joint distribution, $X,Y \sim h(x,y).$ Then, in the integral, we would have to replace $f(z + y) g(y)$ with $h(z + y, y).$ However, just because the marginal distributions are symmetric, that does not imply that the joint distribution is symmetric about each of its arguments. So you could not apply similar reasoning. | Does the difference between two symmetric r.v.'s also have a symmetric distribution? | Let $X \sim f(x)$ and $Y \sim g(y)$ be PDFs symmetric about medians $a$ and $b$ respectively. As long as $X$ and $Y$ are independent, the probability distribution of the difference $Z = X - Y$ is the | Does the difference between two symmetric r.v.'s also have a symmetric distribution?
Let $X \sim f(x)$ and $Y \sim g(y)$ be PDFs symmetric about medians $a$ and $b$ respectively. As long as $X$ and $Y$ are independent, the probability distribution of the difference $Z = X - Y$ is the convolution of $X$ and $-Y$, i.e.
$$
p(z) = \int_{-\infty}^\infty f(z + y) g(y) dy,
$$
or equivalently, writing $h(y) = g(-y)$ for the PDF of $-Y$ (which has median $-b$), $p(z) = \int_{-\infty}^\infty f(z - y) h(y) dy.$
Intuitively, we would expect the result to be symmetric about $a - b$ so let's try that.
$$
\begin{split}
p(a - b - z) &= \int_{-\infty}^\infty f(a - b - z + y) g(y) dy \\
&= \int_{-\infty}^\infty f(a + b + z - y) g(2 b - y) dy \\
&= \int_{-\infty}^\infty f(a - b + z + v) g(v) dv \\
&= p(a - b + z).
\end{split}
$$
In the second line I used both the symmetry of $f(x)$ about $a$ and the symmetry of $g(y)$ about $b$ (so that $g(y) = g(2b - y)$). In the third line, I used the substitution $v = 2 b - y$ in the integral. This proves that $p(z)$ is symmetric about $a - b$ if $f(x)$ is symmetric about $a$ and $g(y)$ is symmetric about $b.$
If $X$ and $Y$ were not independent, and $f$ and $g$ were simply marginal distributions, then we would need to know the joint distribution, $X,Y \sim h(x,y).$ Then, in the integral, we would have to replace $f(z + y) g(y)$ with $h(z + y, y).$ However, just because the marginal distributions are symmetric, that does not imply that the joint distribution is symmetric about each of its arguments. So you could not apply similar reasoning. | Does the difference between two symmetric r.v.'s also have a symmetric distribution?
Let $X \sim f(x)$ and $Y \sim g(y)$ be PDFs symmetric about medians $a$ and $b$ respectively. As long as $X$ and $Y$ are independent, the probability distribution of the difference $Z = X - Y$ is the |
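The result can also be checked by simulation (a Python sketch with arbitrarily chosen distributions: $X$ Normal with median $a = 2$, $Y$ uniform and symmetric about $b = 5$, independent): if $Z = X - Y$ is symmetric about $a - b = -3$, mirrored sample quantiles must average to $-3$.

```python
import random

rng = random.Random(7)
n = 200_000
# X ~ Normal(2, 1) (median a = 2), Y ~ Uniform(4, 6) (symmetric about b = 5)
z = sorted(rng.gauss(2.0, 1.0) - rng.uniform(4.0, 6.0) for _ in range(n))

def quantile(sorted_xs, p):
    return sorted_xs[int(p * (len(sorted_xs) - 1))]

# midpoints of mirrored quantile pairs; all should be close to a - b = -3
mids = [0.5 * (quantile(z, p) + quantile(z, 1 - p)) for p in (0.05, 0.25, 0.5)]
print([round(m, 3) for m in mids])
```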
24,813 | Does the difference between two symmetric r.v.'s also have a symmetric distribution? | This is going to depend on the relationship between $x$ and $y$, here is a counter example where $x$ and $y$ are symmetric, but $x-y$ is not:
$$x=[-4, -2, 0, 2, 4]$$
$$y=[-1, -3, 0, 1, 3]$$
$$x-y = [-3, 1, 0, 1, 1]$$
So here the median of $x-y$ is not the same as the difference in the medians and $x-y$ is not symmetric.
Edit
This may be clearer in @whuber's notation:
Consider the discrete uniform distribution where $x$ and $y$ are related such that you can only select one of the following pairs:
$$(x,y)=(-4,-1); (-2,-3); (0,0); (2,1); (4,3)$$
If you insist on thinking in a full joint distribution then consider the case where $x$ can take on any of the values $(-4, -2, 0, 2, 4)$ and $y$ can take the values $(-3, -1, 0, 1, 3)$ and the combination can take on any of the 25 pairs. But the probability of the given pairs above is 16% each and all the other possible pairs have probability of 1% each. The marginal distribution of $x$ will be discrete uniform with each value having 20% probability and therefore symmetric about the median of 0; the same is true for $y$. Take a large sample from the joint distribution and look at just $x$ or just $y$ and you will see a uniform marginal distribution (symmetric), but take the difference $x-y$ and the result will not be symmetric. | Does the difference between two symmetric r.v.'s also have a symmetric distribution? | This is going to depend on the relationship between $x$ and $y$, here is a counter example where $x$ and $y$ are symmetric, but $x-y$ is not:
$$x=[-4, -2, 0, 2, 4]$$
$$y=[-1, -3, 0, 1, 3]$$
$$x-y = [- | Does the difference between two symmetric r.v.'s also have a symmetric distribution?
This is going to depend on the relationship between $x$ and $y$, here is a counter example where $x$ and $y$ are symmetric, but $x-y$ is not:
$$x=[-4, -2, 0, 2, 4]$$
$$y=[-1, -3, 0, 1, 3]$$
$$x-y = [-3, 1, 0, 1, 1]$$
So here the median of $x-y$ is not the same as the difference in the medians and $x-y$ is not symmetric.
Edit
This may be clearer in @whuber's notation:
Consider the discrete uniform distribution where $x$ and $y$ are related such that you can only select one of the following pairs:
$$(x,y)=(-4,-1); (-2,-3); (0,0); (2,1); (4,3)$$
If you insist on thinking in a full joint distribution then consider the case where $x$ can take on any of the values $(-4, -2, 0, 2, 4)$ and $y$ can take the values $(-3, -1, 0, 1, 3)$ and the combination can take on any of the 25 pairs. But the probability of the given pairs above is 16% each and all the other possible pairs have probability of 1% each. The marginal distribution of $x$ will be discrete uniform with each value having 20% probability and therefore symmetric about the median of 0; the same is true for $y$. Take a large sample from the joint distribution and look at just $x$ or just $y$ and you will see a uniform marginal distribution (symmetric), but take the difference $x-y$ and the result will not be symmetric. | Does the difference between two symmetric r.v.'s also have a symmetric distribution?
This is going to depend on the relationship between $x$ and $y$, here is a counter example where $x$ and $y$ are symmetric, but $x-y$ is not:
$$x=[-4, -2, 0, 2, 4]$$
$$y=[-1, -3, 0, 1, 3]$$
$$x-y = [- |
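The counterexample above can be checked mechanically (a small Python sketch; `symmetric_about` is a helper introduced here, not part of the original answer): both marginals are symmetric about 0, yet the differences are not symmetric about their own mean, and for a finite multiset the mean is the only possible centre of symmetry.

```python
# the five equally likely (x, y) pairs from the counterexample
pairs = [(-4, -1), (-2, -3), (0, 0), (2, 1), (4, 3)]
xs = sorted(x for x, _ in pairs)          # [-4, -2, 0, 2, 4]
ys = sorted(y for _, y in pairs)          # [-3, -1, 0, 1, 3]
diffs = sorted(x - y for x, y in pairs)   # [-3, 0, 1, 1, 1]

def symmetric_about(values, c):
    """True iff reflecting the multiset through c maps it onto itself."""
    return sorted(2 * c - v for v in values) == sorted(values)

print(symmetric_about(xs, 0), symmetric_about(ys, 0))   # True True
print(symmetric_about(diffs, sum(diffs) / len(diffs)))  # False
```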
24,814 | Does the difference between two symmetric r.v.'s also have a symmetric distribution? | You'll need to assume independence between X and Y for this to hold in general. The result follows directly since the distribution of $X-Y$ is a convolution of symmetric functions, which is also symmetric. | Does the difference between two symmetric r.v.'s also have a symmetric distribution? | You'll need to assume independence between X and Y for this to hold in general. The result follows directly since the distribution of $X-Y$ is a convolution of symmetric functions, which is also symme | Does the difference between two symmetric r.v.'s also have a symmetric distribution?
You'll need to assume independence between X and Y for this to hold in general. The result follows directly since the distribution of $X-Y$ is a convolution of symmetric functions, which is also symmetric. | Does the difference between two symmetric r.v.'s also have a symmetric distribution?
You'll need to assume independence between X and Y for this to hold in general. The result follows directly since the distribution of $X-Y$ is a convolution of symmetric functions, which is also symme |
24,815 | Gibbs sampler examples in R [closed] | Problem
Suppose $Y \sim \text{N}(\text{mean} = \mu, \text{Var} = \frac{1}{\tau})$.
Based on a sample, obtain the posterior distributions of $\mu$ and $\tau$ using the Gibbs sampler.
Notation
$ \mu$ = population mean
$ \tau$ = population precision (1 / variance)
$n$ = sample size
$\bar{y}$ = sample mean
$s^2$ = sample variance
Gibbs sampler
[Casella, G. & George, E. I. (1992). Explaining the Gibbs Sampler. The American Statistician, 46, 167–174.]
At iteration $i$ ($i = 1, \dots, N$):
sample $\mu^{(i)}$ from $f(\mu \,|\, \tau^{(i - 1)}, \text{data})$ (see below)
sample $\tau^{(i)}$ from $f(\tau \,|\, \mu^{(i)}, \text{data})$ (see below)
The theory ensures that after a sufficiently large number of iterations, $T$, the set $\{(\mu^{(i)}, \tau^{(i)}) : i = T+1, \dots, N\}$ can be seen as a random sample from the joint posterior distribution.
Priors
$f(\mu, \tau) = f(\mu) \times f(\tau)$, with
$f(\mu) \propto 1$
$f(\tau) \propto \tau^{-1}$
Conditional posterior for the mean, given the precision
$$(\mu \,|\, \tau, \text{data}) \sim \text{N}\Big(\bar{y}, \frac{1}{n\tau}\Big)$$
Conditional posterior for the precision, given the mean
$$(\tau \,|\, \mu, \text{data}) \sim \text{Gam}\Big(\frac{n}{2}, \frac{2}{(n-1)s^2 + n(\mu - \bar{y})^2} \Big)$$
(quick) R implementation
# summary statistics of sample
n <- 30
ybar <- 15
s2 <- 3
# sample from the joint posterior (mu, tau | data)
mu <- rep(NA, 11000)
tau <- rep(NA, 11000)
T <- 1000 # burnin
tau[1] <- 1 # initialisation
for(i in 2:11000) {
mu[i] <- rnorm(n = 1, mean = ybar, sd = sqrt(1 / (n * tau[i - 1])))
tau[i] <- rgamma(n = 1, shape = n / 2, scale = 2 / ((n - 1) * s2 + n * (mu[i] - ybar)^2))
}
mu <- mu[-(1:T)] # remove burnin
tau <- tau[-(1:T)] # remove burnin
hist(mu)
hist(tau) | Gibbs sampler examples in R [closed] | Problem
Suppose $Y \sim \text{N}(\text{mean} = \mu, \text{Var} = \frac{1}{\tau})$.
Based on a sample, obtain the posterior distributions of $\mu$ and $\tau$ using the Gibbs sampler.
Notation
$ \mu$ = | Gibbs sampler examples in R [closed]
Problem
Suppose $Y \sim \text{N}(\text{mean} = \mu, \text{Var} = \frac{1}{\tau})$.
Based on a sample, obtain the posterior distributions of $\mu$ and $\tau$ using the Gibbs sampler.
Notation
$ \mu$ = population mean
$ \tau$ = population precision (1 / variance)
$n$ = sample size
$\bar{y}$ = sample mean
$s^2$ = sample variance
Gibbs sampler
[Casella, G. & George, E. I. (1992). Explaining the Gibbs Sampler. The American Statistician, 46, 167–174.]
At iteration $i$ ($i = 1, \dots, N$):
sample $\mu^{(i)}$ from $f(\mu \,|\, \tau^{(i - 1)}, \text{data})$ (see below)
sample $\tau^{(i)}$ from $f(\tau \,|\, \mu^{(i)}, \text{data})$ (see below)
The theory ensures that after a sufficiently large number of iterations, $T$, the set $\{(\mu^{(i)}, \tau^{(i)}) : i = T+1, \dots, N\}$ can be seen as a random sample from the joint posterior distribution.
Priors
$f(\mu, \tau) = f(\mu) \times f(\tau)$, with
$f(\mu) \propto 1$
$f(\tau) \propto \tau^{-1}$
Conditional posterior for the mean, given the precision
$$(\mu \,|\, \tau, \text{data}) \sim \text{N}\Big(\bar{y}, \frac{1}{n\tau}\Big)$$
Conditional posterior for the precision, given the mean
$$(\tau \,|\, \mu, \text{data}) \sim \text{Gam}\Big(\frac{n}{2}, \frac{2}{(n-1)s^2 + n(\mu - \bar{y})^2} \Big)$$
(quick) R implementation
# summary statistics of sample
n <- 30
ybar <- 15
s2 <- 3
# sample from the joint posterior (mu, tau | data)
mu <- rep(NA, 11000)
tau <- rep(NA, 11000)
T <- 1000 # burnin
tau[1] <- 1 # initialisation
for(i in 2:11000) {
mu[i] <- rnorm(n = 1, mean = ybar, sd = sqrt(1 / (n * tau[i - 1])))
tau[i] <- rgamma(n = 1, shape = n / 2, scale = 2 / ((n - 1) * s2 + n * (mu[i] - ybar)^2))
}
mu <- mu[-(1:T)] # remove burnin
tau <- tau[-(1:T)] # remove burnin
hist(mu)
hist(tau) | Gibbs sampler examples in R [closed]
Problem
Suppose $Y \sim \text{N}(\text{mean} = \mu, \text{Var} = \frac{1}{\tau})$.
Based on a sample, obtain the posterior distributions of $\mu$ and $\tau$ using the Gibbs sampler.
Notation
$ \mu$ = |
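For readers who prefer Python, the same sampler can be sketched with only the standard library (`random.gammavariate` takes shape and scale, matching the Gam(shape, scale) parameterisation used above; the summary statistics are the ones from the R snippet):

```python
import random
import statistics

def gibbs(n, ybar, s2, n_iter=11_000, burnin=1_000, seed=1):
    rng = random.Random(seed)
    tau = 1.0  # initialisation
    mu_draws, tau_draws = [], []
    for _ in range(n_iter):
        mu = rng.gauss(ybar, (1.0 / (n * tau)) ** 0.5)
        tau = rng.gammavariate(n / 2.0,
                               2.0 / ((n - 1) * s2 + n * (mu - ybar) ** 2))
        mu_draws.append(mu)
        tau_draws.append(tau)
    return mu_draws[burnin:], tau_draws[burnin:]  # drop the burn-in

mu_draws, tau_draws = gibbs(n=30, ybar=15.0, s2=3.0)
print(round(statistics.mean(mu_draws), 2))   # close to the sample mean, 15
print(round(statistics.mean(tau_draws), 2))  # close to 1/s2
```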
24,816 | rpart complexity parameter confusion | As far as I know, the complexity parameter is not the error in that particular node. It is the amount by which splitting that node improved the relative error. So in your example, splitting the original root node dropped the relative error from 1.0 to 0.5, so the CP of the root node is 0.5. The CP of the next node is only 0.01 (which is the default limit for deciding when to consider splits). So splitting that node only resulted in an improvement of 0.01, so the tree building stopped there. | rpart complexity parameter confusion | As far as I know, the complexity parameter is not the error in that particular node. It is the amount by which splitting that node improved the relative error. So in your example, splitting the origi | rpart complexity parameter confusion
As far as I know, the complexity parameter is not the error in that particular node. It is the amount by which splitting that node improved the relative error. So in your example, splitting the original root node dropped the relative error from 1.0 to 0.5, so the CP of the root node is 0.5. The CP of the next node is only 0.01 (which is the default limit for deciding when to consider splits). So splitting that node only resulted in an improvement of 0.01, so the tree building stopped there. | rpart complexity parameter confusion
As far as I know, the complexity parameter is not the error in that particular node. It is the amount by which splitting that node improved the relative error. So in your example, splitting the origi |
24,817 | rpart complexity parameter confusion | The complexity parameter $\alpha$ specifies how the cost of a tree $C(T)$ is penalized by the number of terminal nodes $|T|$, resulting in a regularized cost $C_{\alpha}(T)$ (see http://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf, Section 4).
$C_{\alpha}(T) = C(T) + \alpha |T|$
Small $\alpha$ results in larger trees and potential overfitting, large $\alpha$ in small trees and potential underfitting. | rpart complexity parameter confusion | The complexity parameter $\alpha$ specifies how the cost of a tree $C(T)$ is penalized by the number of terminal nodes $|T|$, resulting in a regularized cost $C_{\alpha}(T)$ (see http://cran.r-project | rpart complexity parameter confusion
The complexity parameter $\alpha$ specifies how the cost of a tree $C(T)$ is penalized by the number of terminal nodes $|T|$, resulting in a regularized cost $C_{\alpha}(T)$ (see http://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf, Section 4).
$C_{\alpha}(T) = C(T) + \alpha |T|$
Small $\alpha$ results in larger trees and potential overfitting, large $\alpha$ in small trees and potential underfitting. | rpart complexity parameter confusion
The complexity parameter $\alpha$ specifies how the cost of a tree $C(T)$ is penalized by the number of terminal nodes $|T|$, resulting in a regularized cost $C_{\alpha}(T)$ (see http://cran.r-project |
24,818 | rpart complexity parameter confusion | It is not particularly easy to follow the rpart calculations for classification. In addition, although the 'Long Intro' suggests that gini is used for classification, it seems that cost complexity pruning (and hence the values for cp) is reported based on accuracy rather than gini. In this case, we can work through the calculations and replicate the 0.4 queried in the original question. Firstly, create the tree
df <- data.frame(x=c(1,2,3,3,3), y=factor(c("a", "a", "b", "a", "b")))
mytree <- rpart(y ~ x, data = df, minbucket = 1, minsplit=1, method="class")
and then typing
print(mytree)
we get
node), split, n, loss, yval, (yprob)
1) root 5 2 a (0.6000000 0.4000000)
2) x< 2.5 2 0 a (1.0000000 0.0000000) *
3) x>=2.5 3 1 b (0.3333333 0.6666667) *
The 'loss' column is not gini (which you might have expected it to be). It is the number of errors made.
The point at which this one split tree collapses (based on accuracy) is when
$$ 2 + \alpha * 1 = 1 + \alpha * 2$$
(where the first 2 above is the loss in the pruned tree and the second 2 is the number of terminal nodes in the full tree).
Solving for alpha, gives an alpha of 1.
As mentioned in an answer above, in the cptable, the error in the top line is scaled to 1 and then cp is scaled by the same amount.
The error in the top line is the number of errors in a tree with no splits, i.e. 2.
Hence the alpha of 1 is scaled by dividing by 2 to give 0.50.
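The same arithmetic in code form (an illustrative Python sketch of the cost-complexity comparison, not rpart's actual implementation):

```python
# regularised cost of a tree: (misclassification count) + alpha * (number of leaves)
loss_pruned, leaves_pruned = 2, 1  # root only: 2 errors, 1 leaf
loss_split, leaves_split = 1, 2    # after the split: 1 error, 2 leaves

# the split stops paying off when 2 + alpha*1 == 1 + alpha*2
alpha = (loss_pruned - loss_split) / (leaves_split - leaves_pruned)
root_error = 2                     # errors of the no-split tree, used for scaling
print(alpha, alpha / root_error)   # 1.0 0.5 -- the CP reported by rpart
```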
It is hard to read the C code in rpart, but the above is what I think it is doing. | rpart complexity parameter confusion | It is not particularly easy to follow the rpart calculations for classification. In addition, although the 'Long Intro' suggests that gini is used for classification, it seems that cost complexity pr | rpart complexity parameter confusion
It is not particularly easy to follow the rpart calculations for classification. In addition, although the 'Long Intro' suggests that gini is used for classification, it seems that cost complexity pruning (and hence the values for cp) is reported based on accuracy rather than gini. In this case, we can work through the calculations and replicate the 0.4 queried in the original question. Firstly, create the tree
df <- data.frame(x=c(1,2,3,3,3), y=factor(c("a", "a", "b", "a", "b")))
mytree <- rpart(y ~ x, data = df, minbucket = 1, minsplit=1, method="class")
and then typing
print(mytree)
we get
node), split, n, loss, yval, (yprob)
1) root 5 2 a (0.6000000 0.4000000)
2) x< 2.5 2 0 a (1.0000000 0.0000000) *
3) x>=2.5 3 1 b (0.3333333 0.6666667) *
The 'loss' column is not gini (which you might have expected it to be). It is the number of errors made.
The point at which this one split tree collapses (based on accuracy) is when
$$ 2 + \alpha * 1 = 1 + \alpha * 2$$
(where the first 2 above is the loss in the pruned tree and the second 2 is the number of terminal nodes in the full tree).
Solving for alpha, gives an alpha of 1.
As mentioned in an answer above, in the cptable, the error in the top line is scaled to 1 and then cp is scaled by the same amount.
The error in the top line is the number of errors in a tree with no splits, i.e. 2.
Hence the alpha of 1 is scaled by dividing by 2 to give 0.50.
It is hard to read the C code in rpart, but the above is what I think it is doing. | rpart complexity parameter confusion
It is not particularly easy to follow the rpart calculations for classification. In addition, although the 'Long Intro' suggests that gini is used for classification, it seems that cost complexity pr |
24,819 | rpart complexity parameter confusion | I write to further validate the answers of both @joran and @fernando, and to help someone like myself further clarify how to interpret the cp matrix in an rpart object. If you observe the following code you will find that I have fitted a classification tree with two possible outcomes, introducing my own loss matrix (FN are twice the cost of the FP).
Classification tree:
rpart(formula = Result ~ ., data = data, method = "class", parms = list(loss
= PenaltyMatrix))
Root node error: 174/343 = 0.50729
n= 343
CP nsplit rel error xerror xstd
1 0.066092 0 1.00000 0.50000 0.046311
2 0.040230 2 0.86782 0.73563 0.065081
3 0.034483 4 0.78736 0.91379 0.075656
4 0.022989 5 0.75287 1.01149 0.080396
5 0.019157 7 0.70690 1.17241 0.086526
6 0.011494 10 0.64943 1.21264 0.087764
7 0.010000 12 0.62644 1.31609 0.090890
Now if you observe, for example, step 6 in the matrix, you can see that the number of splits increased by 3 (from 7 to 10), resulting in a relative error of 0.64943. If you subtract this error from the respective one in the previous step, you will find an improvement of 0.05747, which in turn, divided by the number of extra splits between steps 5-6 (which is three), gives approximately 0.01957, which is the complexity parameter of step 5. This can be validated between all steps!
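This bookkeeping can be verified for every pair of consecutive rows (a Python sketch; the numbers are transcribed from the cptable printed above):

```python
# (CP, nsplit, rel error) rows from the cptable above
cptable = [
    (0.066092,  0, 1.00000),
    (0.040230,  2, 0.86782),
    (0.034483,  4, 0.78736),
    (0.022989,  5, 0.75287),
    (0.019157,  7, 0.70690),
    (0.011494, 10, 0.64943),
    (0.010000, 12, 0.62644),
]
checks = []
for (cp, k0, e0), (_, k1, e1) in zip(cptable, cptable[1:]):
    per_split = (e0 - e1) / (k1 - k0)  # rel error improvement per extra split
    checks.append((cp, per_split))
    print(f"CP={cp:.6f}  implied={per_split:.6f}")
# each CP matches the improvement per split achieved by the next batch of splits
```

(The final row's CP of 0.010000 is just the default threshold, so it is not compared.)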
Now, if I may, I have a two-fold question to address to the community.
It still baffles me: what does it mean for the xerror to continuously increase as the tree size grows?
If I follow the so-called rule of thumb, I have to select the tree with the smallest size that has an xerror within one standard deviation of the tree with the minimum xerror across the table. This in my case would be the tree in step 2 (because this is the one with the smallest xerror and 0.73563+0.065081=0.800711, which is not met by any other tree in the table). Is this correct? | rpart complexity parameter confusion | I write to further validate the answers of both @joran and @fernando, and help someone like myself to further clarify how to interpret the cp matrix in an rpart object. If you observe the following co | rpart complexity parameter confusion
I write to further validate the answers of both @joran and @fernando, and to help someone like myself further clarify how to interpret the cp matrix in an rpart object. If you observe the following code you will find that I have fitted a classification tree with two possible outcomes, introducing my own loss matrix (FN are twice the cost of the FP).
Classification tree:
rpart(formula = Result ~ ., data = data, method = "class", parms = list(loss
= PenaltyMatrix))
Root node error: 174/343 = 0.50729
n= 343
CP nsplit rel error xerror xstd
1 0.066092 0 1.00000 0.50000 0.046311
2 0.040230 2 0.86782 0.73563 0.065081
3 0.034483 4 0.78736 0.91379 0.075656
4 0.022989 5 0.75287 1.01149 0.080396
5 0.019157 7 0.70690 1.17241 0.086526
6 0.011494 10 0.64943 1.21264 0.087764
7 0.010000 12 0.62644 1.31609 0.090890
Now if you observe, for example, step 6 in the matrix, you can see that the number of splits increased by 3 (from 7 to 10), resulting in a relative error of 0.64943. If you subtract this error from the respective one in the previous step, you will find an improvement of 0.05747, which in turn, divided by the number of extra splits between steps 5-6 (which is three), gives approximately 0.01957, which is the complexity parameter of step 5. This can be validated between all steps!
Now, if I may, I have a two-fold question to address to the community.
It still baffles me: what does it mean for the xerror to continuously increase as the tree size grows?
If I follow the so-called rule of thumb, I have to select the tree with the smallest size that has an xerror within one standard deviation of the tree with the minimum xerror across the table. This in my case would be the tree in step 2 (because this is the one with the smallest xerror and 0.73563+0.065081=0.800711, which is not met by any other tree in the table). Is this correct? | rpart complexity parameter confusion
I write to further validate the answers of both @joran and @fernando, and help someone like myself to further clarify how to interpret the cp matrix in an rpart obejct. If you observe the following co |
24,820 | How to apply coefficient term for factors and interactive terms in a linear equation? | This is not a problem specific to R. R uses a conventional display of coefficients.
When you read such regression output (in a paper, textbook, or from statistical software), you need to know which variables are "continuous" and which are "categorical":
The "continuous" ones are explicitly numeric and their numeric values were used as-is in the regression fitting.
The "categorical" variables can be of any type, including those that are numeric! What makes them categorical is that the software treated them as "factors": that is, each distinct value that is found is considered an indicator of something distinct.
Most software will treat non-numerical values (such as strings) as factors. Most software can be persuaded to treat numerical values as factors, too. For example, a postal service code (ZIP code in the US) looks like a number but really is just a code for a set of mailboxes; it would make no sense to add, subtract, and multiply ZIP codes by other numbers! (This flexibility is the source of a common mistake: if you are not careful, or unwitting, your software may treat a variable you consider to be categorical as continuous, or vice-versa. Be careful!)
Nevertheless, categorical variables have to be represented in some way as numbers in order to apply the fitting algorithms. There are many ways to encode them. The codes are created using "dummy variables." Find out more about dummy variable encoding by searching on this site; the details don't matter here.
In the question we are told that h and f are categorical ("discrete") values. By default, log(d) and a are continuous. That's all we need to know. The model is
$$\eqalign{
y &= \color{red}{-0.679695} & \\
&+ \color{RoyalBlue}{1.791294}\ \log(d) \\
&+ 0.870735 &\text{ if }h=h_1 \\
& -0.447570 &\text{ if }h=h_2 \\
&+ \color{green}{0.542033} &\text{ if }h=h_3 \\
&+ \color{orange}{0.037362}\ a \\
& -0.588362 &\text{ if }f=f_1 \\
&+ \color{purple}{0.816825} &\text{ if }f=f_2 \\
&+ 0.534440 &\text{ if }f=f_3 \\
& -0.085658\ a &\text{ if }h=h_1 \\
& -0.034970\ a &\text{ if }h=h_2 \\
& -\color{brown}{0.040637}\ a &\text{ if }h=h_3 \\
}$$
The rules applied here are:
The "intercept" term, if it appears, is an additive constant (first line).
Continuous variables are multiplied by their coefficients, even in "interactions" like the h1:a, h2:a, and h3:a terms. (This answers the original question.)
Any categorical variable (or factor) is included only for cases where the value of that factor appears.
For example, suppose that $\log(d)=2$, $h=h_3$, $a=-1$, and $f=f_2$. The fitted value in this model is
$$\hat{y} = \color{red}{-0.6797} + \color{RoyalBlue}{1.7913}\times (2) + \color{green}{0.5420} + \color{orange}{0.0374}\times (-1) + \color{purple}{0.8168} -\color{brown}{0.0406}\times (-1).$$
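The plug-in rule is easy to script (an illustrative Python sketch; the keys such as `h3` and `h3:a` simply index the coefficient table above):

```python
coef = {  # coefficients transcribed from the fitted model above
    "(Intercept)": -0.679695, "log(d)": 1.791294,
    "h1": 0.870735, "h2": -0.447570, "h3": 0.542033,
    "a": 0.037362,
    "f1": -0.588362, "f2": 0.816825, "f3": 0.534440,
    "h1:a": -0.085658, "h2:a": -0.034970, "h3:a": -0.040637,
}

def fitted(log_d, h, a, f):
    # only the dummy-coded rows matching the observed levels of h and f contribute
    return (coef["(Intercept)"] + coef["log(d)"] * log_d
            + coef[h] + coef["a"] * a + coef[f] + coef[h + ":a"] * a)

print(round(fitted(log_d=2.0, h="h3", a=-1.0, f="f2"), 4))  # 4.265
```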
Notice how most of the model coefficients simply do not appear in the calculation, because h can take on exactly one of the three values $h_1$, $h_2$, $h_3$ and therefore only one of the three coefficients $(0.870735, -0.447570, 0.542033)$ applies to h and only one of the three coefficients $(-0.085658, -0.034970, -0.040637)$ will multiply a in the h:a interaction; similarly, only one coefficient applies to f in any particular case. | How to apply coefficient term for factors and interactive terms in a linear equation? | This is not a problem specific to R. R uses a conventional display of coefficients.
24,821 | How to apply coefficient term for factors and interactive terms in a linear equation? | This is just a comment but it won't fit as such in the limited edit boxes we have at our disposal.
I like seeing a regression equation clearly written in plain text, as @whuber did in his reply. Here is a quick way to do this in R, with the Hmisc package. (I'll be using rms too, but that does not really matter.) Basically, it only assumes that a $\LaTeX$ typesetting system is available on your machine.
Let's simulate some data first,
n <- 200
x1 <- runif(n)
x2 <- runif(n)
x3 <- runif(n)
g1 <- gl(2, 100, n, labels=letters[1:2])
g2 <- cut2(runif(n), g=4)
y <- x1 + x2 + rnorm(200)
then fit a regression model,
f <- ols(y ~ x1 + x2 + x3 + g1 + g2 + x1:g1)
which yields the following results:
Linear Regression Model
ols(formula = y ~ x1 + x2 + x3 + g1 + g2 + x1:g1)
Model Likelihood Discrimination
Ratio Test Indexes
Obs 200 LR chi2 35.22 R2 0.161
sigma 0.9887 d.f. 8 R2 adj 0.126
d.f. 191 Pr(> chi2) 0.0000 g 0.487
Residuals
Min 1Q Median 3Q Max
-3.1642 -0.7109 0.1015 0.7363 2.7342
Coef S.E. t Pr(>|t|)
Intercept 0.0540 0.2932 0.18 0.8541
x1 1.1414 0.3642 3.13 0.0020
x2 0.8546 0.2331 3.67 0.0003
x3 -0.0048 0.2472 -0.02 0.9844
g1=b 0.2099 0.2895 0.73 0.4692
g2=[0.23278,0.553) 0.0609 0.1988 0.31 0.7598
g2=[0.55315,0.777) -0.2615 0.1987 -1.32 0.1896
g2=[0.77742,0.985] -0.2107 0.1986 -1.06 0.2901
x1 * g1=b -0.2354 0.5020 -0.47 0.6396
Then, to print the corresponding regression equation, just use the generic latex function, like this:
latex(f)
Upon conversion of the dvi to png, you should get something like that
IMO, this has the merit of showing how to compute predicted values depending on actual or chosen values for numerical and categorical predictors. For the latter, factor levels are indicated in brackets near the corresponding coefficient.
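For readers without a $\LaTeX$ toolchain, the same idea — turning a coefficient table into a readable plain-text equation — can be sketched in a few lines. (Python is used here only because the point is string formatting; the function name is made up, and the coefficients shown are just a few taken from the ols() output above.)

```python
def equation(intercept, coefs):
    """Format an intercept and a name -> coefficient mapping as a plain-text equation."""
    terms = [f"{intercept:+.4f}"] + [f"{beta:+.4f}*{name}" for name, beta in coefs.items()]
    return "y = " + " ".join(terms)

# A few coefficients from the regression output above, for illustration.
print(equation(0.0540, {"x1": 1.1414, "x2": 0.8546, "[g1=b]": 0.2099}))
```

The bracketed `[g1=b]` mimics how the factor level is shown next to its coefficient.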
24,822 | How to apply coefficient term for factors and interactive terms in a linear equation? | You can check that your "contrasts" are set to the defaults by calling options() and looking for:
$contrasts
unordered ordered
"contr.treatment" "contr.poly"
If your unordered contrasts are set as contr.treatment (as they should be unless you've changed them), then the first level of each of your factors will be set as a baseline. You will only be given estimates for the coefficients in front of the dummy variables created for other levels of the factor. In effect, those coefficients will be "how different on average is the response variable at this level of the factor, compared to the baseline level of the factor, having controlled for everything else in the model".
I am guessing from your output there is an h0 and an f0 which are the baseline levels for h and f (unless you have a non-default option for contrasts, in which case there are several possibilities; try ?contr.treatment for some help).
It's similar with the interaction. If my previous paragraph is correct, the estimate given for a will really be the slope for a when h=h0. The estimates given in the summary that apply to the interactions are how much that slope changes for different levels of h.
So in your example where h=h1 and f=f2, try:
log(c) = 1.791294(log(d)) + (0.037362 - 0.085658) (a) + 0.870735 + 0.816825 -0.679695
Oh, and you can use predict() to do a lot of useful things too... if you actually want to predict something (rather than write out the equation for a report). Try ?predict.lm to see what predict() does to an object created by lm.
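The slope and constant in that last equation can be collapsed numerically. A quick check (Python, since only arithmetic is involved):

```python
# From the equation above: at h = h1 and f = f2, the slope for a is the main
# effect plus the h1:a adjustment, and the constant collects the intercept
# together with the h1 and f2 level effects.
slope_a = 0.037362 + (-0.085658)
constant = -0.679695 + 0.870735 + 0.816825
print(round(slope_a, 6), round(constant, 6))
```

So at h=h1 and f=f2 the equation reduces to log(c) = 1.791294*log(d) - 0.048296*a + 1.007865.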
24,823 | How to apply coefficient term for factors and interactive terms in a linear equation? | Rather than thinking of some of the coefficients being included and some not, resulting in a number of different equations depending on the values of the variables, another way to think about it is that all coefficients are included in a single equation. But, they are multiplied by either 1 or 0 depending on whether that condition is true.
That is, each possible value of each factor variable (i.e. discrete variable) appears in the final equation as either a 0 or a 1 depending on whether the variable has that value, and its coefficient is applied to it.
So, as already mentioned, each factor variable is split into dummy variables, one for each level in the factor (e.g. h becomes h1, h2, h3, ... hn), representing all possible conditions of the variable. Then, each dummy variable gets its own unique coefficient.
So lm(log(c) ~ log(d) + h + a + f + h:a) becomes
Coefficients:
Estimate
(Intercept) -0.679695
log(d) 1.791294
h1 0.870735
h2 -0.447570
h3 0.542033
a 0.037362
f1 -0.588362
f2 0.816825
f3 0.534440
h1:a -0.085658
h2:a -0.034970
h3:a -0.040637
which becomes
log(c) == -0.679695 + 1.791294*log(d) + 0.870735*h1 - 0.447570*h2 + 0.542033*h3 + 0.037362*a - 0.588362*f1 + 0.816825*f2 + 0.534440*f3 - 0.085658*h1*a - 0.034970*h2*a - 0.040637*h3*a
Now plug in ones and zeroes:
If h equals h1, then h1 equals 1 (or TRUE), else h1 equals 0 ( or FALSE).
If h equals h2, then h2 equals 1, else h2 equals 0.
... etc.
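The single-equation form above can be evaluated directly once the dummies are filled in. A sketch (Python, with a hypothetical helper that hard-codes the coefficients from the summary above; booleans stand in for the 0/1 dummies):

```python
def log_c(log_d, a, h, f):
    """Evaluate the full equation; each dummy is 1 when its level matches, else 0."""
    h1, h2, h3 = (h == "h1"), (h == "h2"), (h == "h3")
    f1, f2, f3 = (f == "f1"), (f == "f2"), (f == "f3")
    return (-0.679695 + 1.791294 * log_d
            + 0.870735 * h1 - 0.447570 * h2 + 0.542033 * h3
            + 0.037362 * a
            - 0.588362 * f1 + 0.816825 * f2 + 0.534440 * f3
            - 0.085658 * h1 * a - 0.034970 * h2 * a - 0.040637 * h3 * a)

# whuber's example values: log(d) = 2, h = h3, a = -1, f = f2.
print(round(log_c(2, -1, "h3", "f2"), 6))
```

This agrees (to rounding) with the hand calculation earlier in the thread, about 4.265.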
24,824 | If random variables are drawn from an identical distribution, why doesn't this guarantee they are independent? | If random variables are drawn from an identical distribution, why doesn't this guarantee that they are independent?
Since you don't specify how the random variables are drawn, the question has no meaning. It is the manner of drawing that is important. Consider a classical example of an urn with one ball marked $0$ and one ball marked $1$. The random variable $X$ is the number on the ball drawn from the urn, and is a Bernoulli random variable with parameter $p = P\{X = 1\} = \frac{1}{2}$. Now let $X_1$ denote the number on the first ball drawn and $X_2$ the number on the second ball drawn.
Case I: drawing with replacement There are 4 equally likely outcomes of the experiment and they can be written as $00, 01, 10, 11$.
$X_1$ and $X_2$ are both
Bernoulli$\left(\frac{1}{2}\right)$ random variables and are independent.
Case II: drawing without replacement Now there are only two
equally likely outcomes $01, 10$ but $X_1$ and $X_2$ clearly are
Bernoulli$\left(\frac{1}{2}\right)$ random variables just as before,
and just as clearly are not independent.
Thus, just getting random variables with identical distribution does
not by any means guarantee that they are independent.
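Both cases are small enough to check by enumerating the equally likely outcomes. A sketch in Python (enumeration, not simulation, so the probabilities are exact):

```python
from itertools import product

# Case I (with replacement): 4 equally likely outcomes 00, 01, 10, 11.
with_repl = list(product([0, 1], repeat=2))
# Case II (without replacement): only 01 and 10 are possible.
without_repl = [(0, 1), (1, 0)]

def marginals_and_joint(outcomes):
    """Return P(X1=1), P(X2=1) and P(X1=1, X2=1) over equally likely outcomes."""
    n = len(outcomes)
    p1 = sum(x1 for x1, _ in outcomes) / n
    p2 = sum(x2 for _, x2 in outcomes) / n
    p12 = sum(1 for x1, x2 in outcomes if x1 == x2 == 1) / n
    return p1, p2, p12

print(marginals_and_joint(with_repl))     # joint 0.25 == 0.5 * 0.5: independent
print(marginals_and_joint(without_repl))  # joint 0.0 != 0.25: dependent
```

In both cases the marginals are Bernoulli(1/2); only the joint distribution differs.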
24,825 | Linear regression with factors in R | To elaborate on @John's answer: in R's formulas, you have a few operators you can apply to the terms: "+" simply adds them, ":" means that you add a term (or several terms) that refer to their interaction (see below), "*" means both, that is: the "main effects" are added, and the interaction term(s) are added as well.
So what does this interaction mean? Well, in the case of continuous variables, it is indeed a term that is added that is simply the multiple of the two variables. If you'd have height and weight as predictors, and use out ~ height * weight as formula, the linear model will thus contain three 'variables', namely weight, height and their product (it also contains the intercept, but that is of less interest here).
Although I suggest otherwise above: this works exactly the same way for categorical variables, but now the 'product' applies to the (set of) dummy variable(s) for each categorical variable. Suppose your height and weight are now categorical, each with three categories (S(mall), M(edium) and L(arge)). Then in linear models, each of these is represented by a set of two dummy variables that are either 0 or 1 (there are other ways of coding, but this is the default in R and the most commonly used). Let's say we use S as the reference category for both, then we have each time two dummies height.M and height.L (and similar for weight).
So now, the model out ~ height * weight contains the 4 dummies + all the products of all dummy-combinations (I'm not explicitly writing the coefficients here, they are implied):
(intercept) + height.M + height.L + weight.M + weight.L +
height.M * weight.M + height.L * weight.M + height.M *
weight.L + height.L * weight.L.
In the line above, '*' now again refers to a simple product, but this time of the dummies, so each product itself is also either 1 (when all factors are 1) or 0 (when at least one is not).
In this case the 8 'variables' enable different (mean) outcomes in all combinations of the two variables: the effect of having large weight is now no longer the same for small people (for them the effect is simply formed by the term weight.L) as for large people (here, the effect is weight.L + height.L * weight.L).
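The expansion can be made concrete by writing out the encoding for single observations. A sketch in Python (hand-rolled treatment coding with S as the reference level for both factors — purely illustrative, since R constructs these columns for you):

```python
def encode(height, weight):
    """Treatment-code two factors with levels S, M, L (S is the reference).
    Returns the 8 columns: 4 main-effect dummies, then the 4 interaction products."""
    hM, hL = int(height == "M"), int(height == "L")
    wM, wL = int(weight == "M"), int(weight == "L")
    return [hM, hL, wM, wL, hM * wM, hL * wM, hM * wL, hL * wL]

print(encode("S", "S"))  # all zeros: the reference cell is carried by the intercept
print(encode("L", "L"))  # height.L, weight.L and their product switch on
```

Only the dummies whose level matches are 1, so each observation picks out exactly the coefficients that apply to it.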
24,826 | Linear regression with factors in R | To follow up on John's answer, the formulae in lm don't use arithmetic notation, they're using a compact symbolic notation to describe linear models (specifically Wilkinson-Rogers notation, there's a good short summary here https://www.mathworks.com/help/stats/wilkinson-notation.html).
Basically, including A*B in the model formula means you're fitting A, B, and A:B (the interaction of A and B). If the interaction term is statistically significant, it suggests that the effect of the treatment is different for each of the types.
24,827 | Linear regression with factors in R | Perhaps looking up 'formula' in help would be of assistance. You aren't multiplying, you're saying you want the two main effects and their interaction as well.
24,828 | Statistics library with knapsack constraint | All of Statistics: A Concise Course in Statistical Inference - US$ 79.11
Statistical Models: Theory and Practice - US$ 40.00
Data Analysis Using Regression and Multilevel/Hierarchical Models - US$ 47.99
Grand total of US$ 167.10
As chl suggested, you still have money left to print nearly all of Hastie (~746 pages, or $37).
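A trivial check that the list respects the $200 'knapsack' (just summing the quoted prices; Python for the arithmetic):

```python
prices = {
    "All of Statistics": 79.11,
    "Statistical Models: Theory and Practice": 40.00,
    "Data Analysis Using Regression and Multilevel/Hierarchical Models": 47.99,
}
total = sum(prices.values())
print(round(total, 2), "spent;", round(200 - total, 2), "left for printing")
```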
24,829 | Statistics library with knapsack constraint | You might want to spend $1.20 printing out Matthias Vallentin's probability and statistics cheat sheet.
24,830 | Statistics library with knapsack constraint | Harrell, FE. Regression Modeling Strategies (Springer, 2010, 2nd ed.)
Izenman, AJ. Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning (Springer, 2008)
You should have money left to print part of The Handbook of Computational Statistics (Gentle et al., Springer 2004) and The Elements of Statistical Learning (Hastie et al., Springer 2009 2nd ed.) that are circulating on the web. As the latter mostly covers the same topics as Izenman's book (as pointed out by @kwak), either may be replaced by one of the Handbook of Statistics volumes published by North-Holland, depending on your field of interests.
24,831 | Statistics library with knapsack constraint | As a social scientist I would have to vouch for the Sage Green Books. If you are a bargain shopper you would be able to get between 10 to 20 books for 200 dollars (assuming no shipping). For those not familiar, these are all introductions to various methodologies aimed at people with no more than an introduction to statistical inference in most circumstances.
24,832 | Statistics library with knapsack constraint | What topics are you interested in? I learned from KNNL ($157.50), but oh gosh I couldn't imagine carrying this thing around -- you'd be asking for a reading list on scoliosis correction.
"General Statistics" is certainly an area of interest, but I'm curious if you're more interested in depth, breadth, or some mix of both.
24,833 | Statistics library with knapsack constraint | A tad pricey, but Bruning and Kintz's Computational Handbook of Statistics ($95.80) would certainly fit in your knapsack.
24,834 | Interpreting log-log regression with log(1+x) as independent variable | Definitely not, except when $x$ is much larger than $1.$ This is one reason why the automatic reflex to "just add $1$ to values that might be zero before taking the log" is difficult to justify.
Let's see what is really going on. Suppose you are modeling a response $y$ in the form
$$\log y = \cdots + \beta \log(1 + x) + \cdots$$
where the missing stuff doesn't vary with $x.$ Then increasing $x$ by $100\delta\%$ changes $y$ to
$$\log y^\prime = \cdots + \beta \log(1 + x(1 + \delta)) + \cdots = \log y + \beta(\log(1 + x + x\delta) - \log(1 + x)),$$
showing that
$\log y$ changes by $\beta(\log(1 + x + x\delta) - \log(1 + x)).$
That's as nasty to understand as it looks, even when (as is usual) $\delta$ is taken to be very small. For small values of $x\delta$ (that is, $x$ isn't huge) we can approximate it by
$$\beta(\log(1 + x + x\delta) - \log(1 + x)) \approx \beta x\delta;\quad |x\delta| \ll 1+|x|.$$
This is approximately a $100 \beta x \delta \%$ change in $y$ itself. This covers the case of small negative values of $x,$ too -- but of course they cannot be $-1$ or less, for then $\log(1+x)$ would be undefined.
For large $x \gg 1$ we can approximate this change by neglecting $1$ in comparison to $x,$ giving
$$\beta(\log(1 + x + x\delta) - \log(1 + x)) \approx \beta \delta;\quad x \gg 1.$$
This is close to the usual log-log relation, reflecting approximately a $100 \beta \delta\%$ change in $y.$
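Both regimes are easy to verify numerically. The following sketch (not part of the original answer; $\beta = 1/2$ and a 1% change $\delta = 0.01$ are arbitrary illustrative choices) checks the exact change in $\log y$ against the two approximations:

```python
import math

beta, delta = 0.5, 0.01  # illustrative slope and a 1% change in x

def dlogy(x):
    """Exact change in log y when x increases by 100*delta percent."""
    return beta * (math.log(1 + x * (1 + delta)) - math.log(1 + x))

# Small x: the change is close to beta * x * delta, so it depends on x.
assert abs(dlogy(0.05) - beta * 0.05 * delta) / (beta * 0.05 * delta) < 0.06

# Large x: the change is close to beta * delta, the usual log-log elasticity.
assert abs(dlogy(1000.0) - beta * delta) / (beta * delta) < 0.01
```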
Here, to illustrate and help the intuition, are log-log plots of two such relationships (with $\beta=1/2$):
The straight-line (dotted red) plot is $\log y = \beta \log(x).$ Because it is a line, we can interpret a change in $\log x$ as being related to a fixed multiple of that change in $\log y.$ But the black graph of $\log y = \beta\log(1+x)$ departs pretty strongly from this linear shape when $x$ is small to medium in size. Its changing curvature means that the relationship between any change in $\log x$ and the value of $\log y$ changes with $x:$ it's small for smaller $x$ but grows as $x$ gets larger.
Thus, the answer is it's complicated: when $x$ ranges from smallish to largish values, the change in $\log y$ ranges from a multiple $x\delta$ to a multiple $\delta$ of $\beta.$ The value depends on $x$ itself, at least until $x$ is sufficiently large. There is no fixed, simple relationship.
24,835 | Interpreting log-log regression with log(1+x) as independent variable | $$\begin{array}{rcll}
\frac{y^\prime/y}{x^\prime/x} &=& a & \quad \text{if $y=x^a$}\\
\frac{y^\prime/y}{x^\prime/x} &=& a \frac{x}{1+x} & \quad \text{if $y=(1+x)^a$}
\end{array}$$
You can interpret it as follows: a $q\%$ change in $x$ leads to a $q \frac{x}{1+x}\%$ change in $(1+x)$.
If $x$ is large they have a similar interpretation but not when $x$ is small.
For instance, when $x=1$, then a doubling of $x$
is the same as a $50\%$ increase of $1+x$. The $x$ changes from $1$ to $2$ and the $x+1$ from $2$ to $3$.
When $x=100$, then a doubling of $x$
is nearly the same as a doubling of $1+x$. The $x$ changes from $100$ to $200$ and the $x+1$ from $101$ to $201$.
Or the other way: a $q\%$ change in $1+x$ leads to a $q (1+1/x)\%$ change in $x$.
For instance, when $x=1$, then a doubling of $x+1$
is the same as a $200\%$ increase of $x$. The $x+1$ changes from $2$ to $4$ and the $x$ from $1$ to $3$.
When $x=100$, then a doubling of $x+1$
is nearly the same as a doubling of $x$. The $x+1$ changes from $101$ to $202$ and the $x$ from $100$ to $201$.
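The worked examples above can be checked mechanically; note that the relation "a $q\%$ change in $x$ gives a $q\frac{x}{1+x}\%$ change in $1+x$" is exact, not just a small-change approximation (a quick sketch, not part of the original answer):

```python
def pct_change_in_1px(x, q):
    """Percent change in (1 + x) when x changes by q percent."""
    x_new = x * (1 + q / 100)
    return 100 * ((1 + x_new) - (1 + x)) / (1 + x)

# Doubling x (q = 100) at x = 1 raises 1+x by 50%: from 2 to 3.
assert pct_change_in_1px(1, 100) == 50.0

# Doubling x at x = 100 raises 1+x by q * x/(1+x) = 100 * 100/101 percent.
assert abs(pct_change_in_1px(100, 100) - 100 * 100 / 101) < 1e-9
```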
What if $x$ has both negative and positive values, with absolute value less than 1 so that $1+x$ is always positive?
Transformations like $\log(x+c)$ are seen when $x$ can have zero or negative values. But it changes the interpretation as you see above.
It might be better to solve the problem directly without the transformation. E.g. the goal may be to do some sort of regression, and the function is linearised with the transformation only for that purpose. Instead, one can also use non-linear least squares regression or a generalized linear model. You don't need to transform data in order to fit a function like $y=a x^b$.
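As a sketch of that last point (not from the original answer; it uses scipy.optimize.curve_fit on synthetic data with true values $a = 2$ and $b = 0.5$), the power law $y = a x^b$ can be fitted directly by non-linear least squares, with no log transformation at all:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0.1, 10, 200)
y = 2 * x**0.5 + rng.normal(0, 0.05, size=x.size)  # synthetic y = 2 * x^0.5 + noise

# Fit y = a * x^b directly; no need for log(y) or log(1+x).
a_hat, b_hat = curve_fit(lambda x, a, b: a * x**b, x, y, p0=(1.0, 1.0))[0]
print(a_hat, b_hat)  # close to the true values 2 and 0.5
```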
24,836 | Interpreting log-log regression with log(1+x) as independent variable | I have always used the following intuitive explanation where log(1+x) (for positive x) is the transformation:
For small x (<< 1), $\log(1+x) \approx x$. So the effects are still linear.
For large x (>> 1), $\log(1+x) \approx \log(x)$. So the effects are multiplicative.
Therefore, the transformation log(1+x) moves from being linear at one extreme to multiplicative at the other, and is somewhere between the two for "middle-sized" values of x.
By way of a concrete example, if the beta-value was +0.3, then this would be saying
"for small response variables, an increase of one unit of the predictor variable would lead to an additive increase of +0.3. For large response variables, an increase of one unit of the predictor would lead to a multiplication by 1.3 (or +30%). For response variables which are neither small nor large, the effect will be somewhere between the two." | Interpreting log-log regression with log(1+x) as independent variable | I have always used the following intuitive explanation where log(1+x) (for positive x) is the transformation:
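A two-line numerical confirmation of the two regimes (a sketch, not part of the original answer; math.log1p(x) computes log(1+x) accurately):

```python
import math

# Small x: log(1+x) is nearly x itself, so effects are additive/linear.
assert abs(math.log1p(0.01) - 0.01) < 1e-4

# Large x: log(1+x) is nearly log(x), so effects are multiplicative.
assert abs(math.log1p(1000.0) - math.log(1000.0)) < 1e-3
```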
24,837 | Bayesian analysis used merely as a computational tool? | The set of methods called "frequentist" statistics is quite broad. It allows you to specify any proposed estimator you want and then investigate its long-run properties conditional on the true values of the parameters. This method only counts an estimator out completely if it is "inadmissible", meaning that it is dominated by another available estimator (i.e., it gives equal/higher risk over every possible value of the parameter and higher risk over at least some parameter values).
Now, there is a famous theorem that says that, under wide conditions, Bayesian estimators are admissible --- i.e., they are not dominated by other estimators. Bayesian estimators tend to be biased (since they incorporate prior information) but they are also consistent under fairly wide conditions. This means that they are estimators that will tend to perform well in terms of the frequentist criteria. Consequently, frequentists usually consider these estimators as one option that can be used in their analysis.
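This frequentist evaluation of a Bayesian estimator can be made concrete with a small sketch (not from the original answer; the Beta(2, 2) prior and n = 20 are arbitrary choices). For X ~ Binomial(n, p), it compares the exact mean squared error of the MLE X/n with that of the posterior mean (X + 2)/(n + 4) across true values of p:

```python
import math

n = 20
xs = range(n + 1)

def risk(est, p):
    """Frequentist risk (mean squared error) of an estimator at true value p."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * (est[k] - p)**2
               for k in xs)

mle = [k / n for k in xs]                # unbiased; risk is p(1-p)/n
bayes = [(k + 2) / (n + 4) for k in xs]  # posterior mean under a Beta(2,2) prior

# Neither dominates the other: the Bayes rule wins near p = 0.5,
# the MLE wins near the extremes -- so the Bayes rule is not counted out.
assert risk(bayes, 0.5) < risk(mle, 0.5)
assert risk(bayes, 0.05) > risk(mle, 0.05)
```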
By definition, "pure Bayesians" are going to adopt the Bayesian methodology in all cases. Most pure Bayesians are going to have adopted this methodology by being convinced of its underlying philosophical and mathematical superiority. However, part of the motivation for adoption of Bayesian methods may be the knowledge that even under the frequentist paradigm, these methods tend to perform well according to frequentist criteria. As to what a pure Bayesian would think of a frequentist using a Bayesian estimator, I suppose it is somewhat like what a priest would think of an atheist who decides one day to pray for spiritual guidance (e.g., on the basis that it can't do any harm even under their own philosophy). They would likely see this as a desirable change in behaviour, improperly motivated, but also possibly a useful entry-point to try to convince them that the general philosophy underpinning that activity is coherent and desirable. | Bayesian analysis used merely as a computational tool? | The set of methods called "frequentist" statistics is quite broad. It allows you to specify any proposed estimator you want and then investigate its long-run properties conditional on the true values | Bayesian analysis used merely as a computational tool?
24,838 | Bayesian analysis used merely as a computational tool? | In a 2002 paper with Arnaud Doucet and Simon Godsill, Marginal maximum a posteriori estimation using Markov chain Monte Carlo, we use an MCMC approach to derive the maximum likelihood estimator in latent variable models where the observed likelihood is not available, by replicating the latent variables an increasing number of times in a simulated annealing spirit. Similar proposals were made subsequently by
Gaetan and Yao (2003) under the name of multiple imputation Metropolis-EM
Lele et al. (2007) under the name of data cloning
Jacquier, Johannes and Polson (2007) under the name of MCMC maximum likelihood
24,839 | Bayesian analysis used merely as a computational tool? | Comment: Here are a few reasons why a frequentist statistician might use a Bayesian approach.
Computational convenience, as @Fiodor1234 says, may not be high on the list.
One such example might be use of the Jeffreys posterior probability interval as a confidence interval for a binomial proportion. For example, if you have $x = 42$ successes in $n=100$ trials, the asymptotic Wald interval is not the best
choice because of the small sample size. The Agresti-Coull interval is easy to compute and comes close to more accurate intervals that are somewhat intricate to compute. The Jeffreys interval, based on the noninformative Bayesian prior $\mathsf{Beta}(.5, .5)$, is easy to compute in R and has good frequentist properties.
p.hat = 42/100
CI.Wald = p.hat + qnorm(c(.025,.975))*sqrt(p.hat*(1-p.hat)/100)
round(CI.Wald,4)
## [1] 0.3233 0.5167
p.est = (42+2)/(100+4)    # Agresti-Coull: add 2 successes and 2 failures
CI.Agr = p.est + qnorm(c(.025,.975))*sqrt(p.est*(1-p.est)/104)
round(CI.Agr,4)
## [1] 0.3281 0.5180
CI.Jeff = qbeta(c(.025,.975), 42+.5, 100-42+.5)    # quantiles of the Beta posterior
round(CI.Jeff,4)
## [1] 0.3267 0.5179
Proper support of distribution. In an attempt to find
the prevalence of a disease from screening test data, traditional methods can give an interval for prevalence
that extends beyond $(0,1).$ By using a Gibbs sampler with a beta prior distribution, it is possible to get a useful
interval estimate for prevalence. (Since the beta prior has the unit interval as support, the posterior distribution does too.) See example.
'Simulate' latent data. Sometimes one wants to test or to give
a parameter estimate for latent data, which can be reliably reconstructed using a Gibbs sampler. One simple example is to know the variability of groups in a one-way random-effects ANOVA. Observed values from the groups are available, but the components of variance due to the various groups (separate from overall variance) are typically latent.
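The screening-test prevalence example above can be sketched with a toy Gibbs sampler (hypothetical numbers, not from the original answer; the classic setup assumes the test's sensitivity and specificity are known). It alternates between imputing the latent true-disease count and drawing prevalence from its conjugate Beta posterior, so every draw, and hence the interval, stays inside (0, 1):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 60 positive tests out of 1000 screened,
# with known sensitivity 0.90 and specificity 0.95.
n_pos, n_neg = 60, 940
se, sp = 0.90, 0.95
a, b = 1.0, 1.0          # Beta(1, 1) prior on prevalence

pi, draws = 0.5, []
for t in range(6000):
    # P(truly diseased | test result), given the current prevalence draw
    p_pos = pi * se / (pi * se + (1 - pi) * (1 - sp))
    p_neg = pi * (1 - se) / (pi * (1 - se) + (1 - pi) * sp)
    y = rng.binomial(n_pos, p_pos) + rng.binomial(n_neg, p_neg)  # latent count
    pi = rng.beta(a + y, b + n_pos + n_neg - y)                  # conjugate update
    if t >= 1000:                                                # drop burn-in
        draws.append(pi)

lo, hi = np.percentile(draws, [2.5, 97.5])
print(lo, hi)  # an interval necessarily inside (0, 1)
```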
24,840 | Bayesian analysis used merely as a computational tool? | Other reasons to use Bayesian approaches include
getting more accurate inference when the log-likelihood is very non-Gaussian. For example, in binary logistic regression standard p-values and confidence intervals may be inaccurate whereas Bayesian quantities are exact.
getting accurate uncertainty intervals for complex derived parameters. In the frequentist world we frequently have to resort to the delta method to get approximate confidence intervals. Not only is this labor intensive, but the result is often very unsatisfactory because such intervals are forced to be symmetric when they should have been asymmetric in order to have accurate coverage with respect to both tails. One example is state occupancy probabilities in a state transition model, which involve recursive matrix multiplications and are a mess to deal with in the frequentist domain. With MCMC (let's say you have 4000 posterior draws from the multivariate distribution of all parameters together) you just compute the complex derived parameter 4000 times and estimate the highest posterior density interval from those 4000 numbers.
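A sketch of the posterior-draw recipe (not from the original answer; the draws are faked with a random number generator, and a simple percentile interval stands in for a true highest-posterior-density interval):

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend these are 4000 posterior draws of (mu, sigma) from an MCMC fit.
mu = rng.normal(1.0, 0.05, 4000)
sigma = np.abs(rng.normal(0.5, 0.03, 4000))

# Derived parameter: the mean of a lognormal, exp(mu + sigma^2 / 2).
# Just transform every draw -- no delta method, no forced symmetry.
derived = np.exp(mu + sigma**2 / 2)
lo, hi = np.percentile(derived, [2.5, 97.5])
print(lo, hi)  # an asymmetric interval comes out automatically if warranted
```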
24,841 | How can the probability of each point be zero in continuous random variable? [duplicate] | A guess at your point of confusion:
Zero probability does not mean an event cannot occur! It means the probability measure gives the event (a set of outcomes) a measure zero.
As @Aksakal's answer points out, the union of an infinite number of zero-width points can form a positive-width line segment and similarly, the union of an infinite number of zero-probability events can form a positive-probability event.
More explanation:
Our intuition from discrete probability is that if an outcome has zero probability, then the outcome is impossible. If the probability of drawing the ace of spades from a deck is equal to zero, it means that the ace of spades is not in the deck!
With continuous random variables (or more generally, an infinite number of possible outcomes) that intuition is flawed.
Probability measure zero events can happen. Measure one events need not
happen. If an event has probability measure 1, you say that it occurs almost surely. Notice the critical word almost! It doesn't happen surely.
If you want to say an event is impossible, you may say it is "outside the support." What's inside and outside the support is a big distinction.
Loosely, uncountably many measure-zero events can combine into something with positive measure. You need uncountably many, though: a countable union of measure-zero events still has measure zero. Each point on a line segment has zero width, but collectively, they have positive width.
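A numerical illustration of the point (a sketch, not part of the original answer): for a standard normal X, the probability of an interval around a point shrinks to zero with the interval's width, even though every point in the support remains a possible outcome.

```python
import math

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# P(1 <= X <= 1 + eps) shrinks roughly like density(1) * eps as eps -> 0.
for eps in (1e-1, 1e-3, 1e-5):
    print(eps, Phi(1 + eps) - Phi(1))
```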
24,842 | How can the probability of each point be zero in continuous random variable? [duplicate] | It's really not a statistics question. It's a real analysis question. For instance, it's almost the same as asking "what's the width of a point on a line?" (the answer is zero, by the way)
This is an interesting situation though. In mathematics the line is defined as a set of points. There are certain geometric constraints on the points, so that they form a line and not a circle, for instance. However, that's not what's important.
What's important is this. If the width of each point is zero, and the line is a set of points, how come the sum of widths of all its points is NOT zero? You add two zeros and it gives you a zero. If I keep adding this way shouldn't the length of a line be zero? Apparently, not!
This is the same question you're asking. How is it that each point's probability is zero, yet the total probability is one? The reason this question is the same is that probabilities are intimately linked to the concept of the length of a line between two points. The central concept of modern probability theory is the concept of a measure. Unsurprisingly, it has its roots in the simplest of all measures: length in geometry.
If you want a shortcut to understanding this mind-boggling stuff then look up the concepts of countable and uncountable sets. Note the difference between infinite countable sets and uncountable sets. Both have an infinite number of points in them, yet the latter has more points (totally crazy!). So discrete and continuous random variables (and their distributions) are related to these two kinds of sets.
UPDATE
Example: In English there are countable and uncountable nouns, such as apple vs. milk. I could ask you how much an apple weighs, and you could say that it's half a pound in this batch. However, if I asked how much milk weighs, it wouldn't make sense without specifying an amount such as a pint or a quart.
In this regard the discrete random variables and their probabilities are like apples and their weights. You could say that the probability that a Poisson variable equals 1 is 10%, for instance.
The continuous random variables are like milk. It's pointless asking what's the probability of a given value; you need to specify the bucket. Say, for a standard normal (Gaussian) variable you could ask what's the probability that its value is between 0 and 1, and the answer would be something like 34%. However, the probability of exactly 1 is pretty much meaningless in a practical sense. You can calculate the density at $x=1$, but what are you going to do with it? It's not the probability. In the same way, if you're interested in the weight of milk, the density of milk is not an answer; you need to specify the container size, then we can tell you the weight using its density. That's why the probability density function is actually called a density: it originates from densities of bodies.
24,843 | How can the probability of each point be zero in continuous random variable? [duplicate] | I think it is helpful to imagine the area under the point. The probability for a continuous distribution is the integral of the PDF over (a, b). If you pick a single point, i.e. the interval (a, a), is there any area? Imagine a simple PDF like the uniform distribution and do the math.
PS No, but if you ask enough mathematicians 1/20 will say yes. However, I'll accept the null with an $\alpha$ of 5%.
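Doing the suggested math in code (my own sketch, standard library only; `uniform_prob` is a helper name I introduce): for a Uniform(0, 1) density the integral over (a, b) is just the interval length, so the degenerate interval (a, a) has no area.

```python
def uniform_prob(a, b, lo=0.0, hi=1.0):
    """P(a < X < b) for X ~ Uniform(lo, hi): integral of the flat PDF."""
    density = 1.0 / (hi - lo)
    left, right = max(a, lo), min(b, hi)
    return max(0.0, (right - left) * density)

print(uniform_prob(0.25, 0.75))  # 0.5
print(uniform_prob(0.4, 0.4))    # 0.0: a single point has no area
```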
24,844 | Bayesian uninformative priors vs. frequentist null hypotheses: what's the relationship? | The null hypothesis isn't equivalent to a Bayesian uninformative prior for the simple reason that Bayesians can also use null hypotheses and perform hypothesis tests using Bayes' factors. If they were equivalent, Bayesians wouldn't use null hypotheses.
However, both frequentist and Bayesian hypothesis testing incorporate an element of self-skepticism, in that we are required to show that there is some evidence that our alternative hypothesis is in some way a more plausible explanation for the observations than random chance. Frequentists do this by having a significance level; Bayesians do this by having a scale of interpretation for the Bayes factor, such that we wouldn't strongly promulgate a hypothesis unless the Bayes factor over the null hypothesis were sufficiently high.
Now the reason why frequentist hypothesis tests are counter-intuitive is because a frequentist cannot assign a non-trivial probability to the truth of a hypothesis, which sadly is generally what we actually want. The closest they can get to this is to compute the p-value (the probability, under H0, of observations at least as extreme as those made) and then draw a subjective conclusion from this as to whether H0 or H1 are plausible. The Bayesian can assign a probability to the truth of a hypothesis, and so can work out the ratio of these probabilities to provide an indication of their relative plausibilities, or at least of how the observations change the ratio of these probabilities (which is what a Bayes factor does).
In my opinion it is a bad idea to try to draw too close a parallel between frequentist and Bayesian hypothesis testing methods, as they are fundamentally different and answer fundamentally different questions. Treating them as if they were equivalent encourages a Bayesian interpretation of the frequentist test (e.g. the p-value fallacy), which is potentially dangerous (for example, climate skeptics often assume that a lack of a statistically significant trend in global mean surface temperature means that there has been no warming - which is not at all correct).
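To make the Bayes-factor side of this concrete, here is a toy computation (my own illustration with made-up numbers, not from the answer): testing a point null H0: theta = 1/2 for a coin against H1: theta ~ Uniform(0, 1), after observing k heads in n tosses.

```python
from math import comb

n, k = 20, 15  # hypothetical data: 15 heads in 20 tosses

# Marginal likelihood under H0 is the binomial probability at theta = 1/2.
m0 = comb(n, k) * 0.5 ** n

# Under H1, integrating the binomial likelihood against the uniform prior
# gives C(n, k) * B(k + 1, n - k + 1) = 1 / (n + 1).
m1 = 1 / (n + 1)

bf10 = m1 / m0
print(round(bf10, 2))  # about 3.22: modest evidence against the null
```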
24,845 | Bayesian uninformative priors vs. frequentist null hypotheses: what's the relationship? | The reason you don't have the same epiphanic look on your face as that guy is I think that . . . the statement isn't true.
A null hypothesis is the hypothesis that any difference between the control and experimental conditions is due to chance.
An uninformative prior is meant to state that you have prior data on a question, but that it doesn't tell you anything about what to expect this next time round. A Bayesian is likely to maintain that there's information in any prior, even the uniform distribution.
So the null hypothesis says that there's no difference between control and experimental; an uninformative prior, on the other hand, may or may not be possible, and if it did exist it would indicate nothing about the difference between control and experimental (which is different from indicating that any difference is due to chance).
Perhaps I am lacking in my understanding of uninformative priors, though. I look forward to other answers.
24,846 | Bayesian uninformative priors vs. frequentist null hypotheses: what's the relationship? | See this Wikipedia article:
For the case of a single parameter and data that can be summarized in
a single sufficient statistic, it can be shown that the credible
interval and the confidence interval will coincide if the unknown
parameter is a location parameter (...) with a prior that is a uniform
flat distribution (...) and also if the unknown parameter is a scale
parameter (...) with a Jeffreys' prior.
In fact, the reference points to Jaynes:
Jaynes, E.T. (1976), Confidence Intervals vs Bayesian Intervals.
On page 185 we can find:
If case (I) arises (and it does more often than realized), the
Bayesian and orthodox tests are going to lead us to exactly the same
results and the same conclusion, with a verbal disagreement as to whether
we should use 'probability' or 'significance' to describe them.
So, in fact there are similar cases, but I wouldn't say the statement in the image is true if you are, for example, using a Cauchy distribution as likelihood...
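The location-parameter case quoted above can be made concrete with a small sketch (my own numbers, not from the answer): for a normal mean with known sigma and a flat prior, the posterior is N(xbar, sigma^2/n), so the 95% credible interval is numerically identical to the usual 95% confidence interval.

```python
from math import sqrt

xbar, sigma, n = 10.0, 2.0, 25  # hypothetical sample summary

# Frequentist 95% interval for the mean with known sigma:
half_width = 1.96 * sigma / sqrt(n)
confidence = (xbar - half_width, xbar + half_width)

# With a flat prior, the posterior is N(xbar, sigma^2 / n), so the central
# 95% credible interval uses the very same centre and half-width:
post_mean, post_sd = xbar, sigma / sqrt(n)
credible = (post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd)

print(confidence == credible)  # True: the two intervals coincide here
```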
24,847 | Bayesian uninformative priors vs. frequentist null hypotheses: what's the relationship? | I'm the one that created the graphic, though as noted in the accompanying post it's not originally my insight. Let me provide some context for how it came up and do my best to explain how I understand it. The realization occurred during a discussion with a student who had mostly learned the Bayesian approach to inference up to that point. He was having a hard time understanding the whole hypothesis testing paradigm, and I was doing my best to explain this decidedly confusing approach (if you consider “difference” to be a negative - as in not equal to - then the standard null hypothesis approach is a triple negative: the researchers’ goal is to show that there is not no difference). In general, and as stated in another response, the researchers usually expect some difference to exist; what they really hope to find is convincing evidence to “reject” the null. To be unbiased, though, they begin by essentially feigning ignorance, as in, “Well, maybe this drug has zero effect on people.” Then they proceed to demonstrate through data collection and analysis (if they can), that this null hypothesis, given the data, was a bad assumption.
To a Bayesian, this must seem like a convoluted starting point. Why not just begin by announcing your prior beliefs directly, and be clear about what you are (and aren't) assuming by encoding it in a prior? A key point here is that a uniform prior is not the same as an uninformative prior. If I toss a coin 1000 times and get 500 heads, my new prior assigns equal (uniform) weight to both heads and tails, but its distribution curve is very steep. I am encoding additional information that is highly informative! A true uninformative prior (taken to the limit) would carry no weight at all. It means, in effect, starting from scratch and, to use a frequentist expression, letting the data speak for itself. The observation made by "Clarence" was that the frequentist way to encode this lack of info is with the null hypothesis. It's not exactly the same as an uninformative prior; it's the frequentist approach to expressing maximal ignorance in an honest way, one that doesn't presume what you wish to prove.
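The coin-toss point above can be checked numerically (my own sketch, standard library only): with a flat Beta(1, 1) prior, 500 heads in 1000 tosses yields a Beta(501, 501) posterior, which is still symmetric in heads vs. tails but extremely concentrated, i.e. highly informative.

```python
from math import sqrt

a, b = 1 + 500, 1 + 500  # Beta(1, 1) prior updated with 500 heads, 500 tails

mean = a / (a + b)
sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))  # Beta standard deviation

print(mean)          # 0.5: still "uniform" between heads and tails
print(round(sd, 4))  # about 0.0158: nearly all mass very close to 0.5
```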
24,848 | Exponentiated logistic regression coefficient different than odds ratio | If you're only putting that lone predictor into the model, then the odds ratio between the predictor and the response will be exactly equal to the exponentiated regression coefficient. I don't think a derivation of this result is present on the site, so I will take this opportunity to provide it.
Consider a binary outcome $Y$ and a single binary predictor $X$:
$$ \begin{array}{c|cc}
\phantom{} & Y = 1 & Y = 0 \\
\hline
X=1 & p_{11} & p_{10} \\
X=0 & p_{01} & p_{00} \\
\end{array}
$$
Then, one way to calculate the odds ratio between $X$ and $Y$ is
$$ {\rm OR} = \frac{ p_{11} p_{00} }{p_{01} p_{10}} $$
By definition of conditional probability, $p_{ij} = P(Y = i | X = j) \cdot P(X = j)$. In the ratio, the marginal probabilities involving $X$ cancel out and you can rewrite the odds ratio in terms of the conditional probabilities of $Y|X$:
$${\rm OR} = \frac{ P(Y = 1| X = 1) }{P(Y = 0 | X = 1)} \cdot \frac{ P(Y = 0 | X = 0) }{ P(Y = 1 | X = 0)} $$
In logistic regression, you model these probabilities directly:
$$ \log \left( \frac{ P(Y_i = 1|X_i) }{ P(Y_i = 0|X_i) } \right) = \beta_0 + \beta_1 X_i $$
So we can calculate these conditional probabilities directly from the model. The first ratio in the expression for ${\rm OR}$ above is:
$$
\frac{ P(Y_i = 1| X_i = 1) }{P(Y_i = 0 | X_i = 1)}
=
\frac{ \left( \frac{1}{1 + e^{-(\beta_0+\beta_1)}} \right) }
{\left( \frac{e^{-(\beta_0+\beta_1)}}{1 + e^{-(\beta_0+\beta_1)}}\right)}
= \frac{1}{e^{-(\beta_0+\beta_1)}} = e^{(\beta_0+\beta_1)}
$$
and the second is:
$$
\frac{ P(Y_i = 0| X_i = 0) }{P(Y_i = 1 | X_i = 0)}
=
\frac{ \left( \frac{e^{-\beta_0}}{1 + e^{-\beta_0}} \right) }
{ \left( \frac{1}{1 + e^{-\beta_0}} \right) } = e^{-\beta_0}$$
Plugging this back into the formula, we have ${\rm OR} = e^{(\beta_0+\beta_1)} \cdot e^{-\beta_0} = e^{\beta_1}$, which is the result.
Note: When you have other predictors, call them $Z_1, ..., Z_p$, in the model, the exponentiated regression coefficient (using a similar derivation) is actually
$$
\frac{ P(Y = 1| X = 1, Z_1, ..., Z_p) }{P(Y = 0 | X = 1, Z_1, ..., Z_p)} \cdot \frac{ P(Y = 0 | X = 0, Z_1, ..., Z_p) }{ P(Y = 1 | X = 0, Z_1, ..., Z_p)} $$
so it is the odds ratio conditional on the values of the other predictors in the model and, in general, is not equal to
$$ \frac{ P(Y = 1| X = 1) }{P(Y = 0 | X = 1)} \cdot \frac{ P(Y = 0 | X = 0) }{ P(Y = 1 | X = 0)}$$
So, it is no surprise that you're observing a discrepancy between the exponentiated coefficient and the observed odds ratio.
Note 2: I derived a relationship between the true $\beta$ and the true odds ratio, but note that the same relationship holds for the sample quantities, since the fitted logistic regression with a single binary predictor will exactly reproduce the entries of a two-by-two table. That is, the fitted means exactly match the sample means, as with any GLM. So, all of the logic used above applies with the true values replaced by sample quantities.
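A quick numerical check of the single-predictor result (my own sketch with hypothetical cell counts): because the fitted model reproduces the two-by-two table, the slope MLE is the difference in sample log-odds, and exponentiating it recovers the sample odds ratio exactly.

```python
from math import exp, log

# Hypothetical 2x2 counts
n11, n10 = 30, 10  # X = 1: counts with Y = 1 and Y = 0
n01, n00 = 20, 40  # X = 0: counts with Y = 1 and Y = 0

sample_or = (n11 * n00) / (n01 * n10)

# MLE of the slope: logit(P(Y=1|X=1)) - logit(P(Y=1|X=0))
beta1 = log(n11 / n10) - log(n01 / n00)

print(sample_or, exp(beta1))  # both equal 6.0 (up to floating point)
```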
24,849 | Exponentiated logistic regression coefficient different than odds ratio | You have a really nice answer from @Macro (+1), who has pointed out that the simple (marginal) odds ratio calculated without reference to a model and the odds ratio taken from a multiple logistic regression model ($\exp(\beta)$) are in general not equal. I wonder if I can still contribute a little bit of related information here, in particular explaining when they will and will not be equal.
Beta values in logistic regression, like in OLS regression, specify the ceteris paribus change in the parameter governing the response distribution associated with a 1-unit change in the covariate. (For logistic regression, this is a change in the logit of the probability of 'success', whereas for OLS regression it is the mean, $\mu$.) That is, it is the change all else being equal. Exponentiated betas are similarly ceteris paribus odds ratios. Thus, the first issue is to be sure that it is possible for this to be meaningful. Specifically, the covariate in question should not exist in other terms (e.g., in an interaction, or a polynomial term) elsewhere in the model. (Note that here I am referring to terms that are included in your model, but there are also problems if the true relationship varies across levels of another covariate but an interaction term was not included, for example.) Once we've established that it's meaningful to calculate an odds ratio by exponentiating a beta from a logistic regression model, we can ask the questions of when will the model-based and marginal odds ratios differ, and which should you prefer when they do?
The reason that these ORs will differ is because the other covariates included in your model are not orthogonal to the one in question. For example, you can check by running a simple correlation between your covariates (it doesn't matter what the p-values are, or if your covariates are $0/1$ instead of continuous, the point is simply that $r\ne0$). On the other hand, when all of your other covariates are orthogonal to the one in question, $\exp(\beta)$ will equal the marginal OR.
If the marginal OR and the model-based OR differ, you should use / interpret the model-based version. The reason is that the marginal OR does not account for the confounding amongst your covariates, whereas the model does. This phenomenon is related to Simpson's Paradox, which you may want to read about (SEP also has a good entry, there is a discussion on CV here: Basic-simpson's-paradox, and you can search on CV's simpsons-paradox tag). For the sake of simplicity and practicality, you may want to just use the model-based OR, since it will be either clearly preferable or the same.
You have a really nice answer from @Macro (+1), who has pointed out that the simple (marginal) odds ratio calculated without reference to a model and the odds ratio taken from a multiple logistic regression model ($\exp(\beta)$) are in general not equal. I wonder if I can still contribute a little bit of related information here, in particular explaining when they will and will not be equal.
Beta values in logistic regression, like in OLS regression, specify the ceteris paribus change in the parameter governing the response distribution associated with a 1-unit change in the covariate. (For logistic regression, this is a change in the logit of the probability of 'success', whereas for OLS regression it is the mean, $\mu$.) That is, it is the change all else being equal. Exponentiated betas are similarly ceteris paribus odds ratios. Thus, the first issue is to be sure that it is possible for this to be meaningful. Specifically, the covariate in question should not exist in other terms (e.g., in an interaction, or a polynomial term) elsewhere in the model. (Note that here I am referring to terms that are included in your model, but there are also problems if the true relationship varies across levels of another covariate but an interaction term was not included, for example.) Once we've established that it's meaningful to calculate an odds ratio by exponentiating a beta from a logistic regression model, we can ask the questions of when will the model-based and marginal odds ratios differ, and which should you prefer when they do?
The reason that these ORs will differ is because the other covariates included in your model are not orthogonal to the one in question. For example, you can check by running a simple correlation between your covariates (it doesn't matter what the p-values are, or if your covariates are $0/1$ instead of continuous, the point is simply that $r\ne0$). On the other hand, when all of your other covariates are orthogonal to the one in question, $\exp(\beta)$ will equal the marginal OR.
If the marginal OR and the model-based OR differ, you should use / interpret the model-based version. The reason is that the marginal OR does not account for the confounding amongst your covariates, whereas the model does. This phenomenon is related to Simpson's Paradox, which you may want to read about (SEP also has a good entry, there is a discussion on CV here: Basic-simpson's-paradox, and you can search on CV's simpsons-paradox tag). For the sake of simplicity and practicality, you may want to just only use the model based OR, since it will be either clearly preferable or the same. | Exponentiated logistic regression coefficient different than odds ratio
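To make the confounding point concrete, here is a small numeric sketch (in Python rather than the thread's R, and with purely hypothetical counts) of how the within-stratum odds ratio can differ from the marginal odds ratio once you collapse over a correlated covariate:

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a, b = successes/failures in group 1; c, d in group 2."""
    return (a * d) / (b * c)

# Hypothetical counts, split over two strata of a confounder Z
strata = [
    # (treat_success, treat_fail, control_success, control_fail)
    (80, 20, 50, 50),   # Z = 0
    (40, 60, 10, 60),   # Z = 1
]

conditional = [odds_ratio(*s) for s in strata]            # OR is 4.0 in each stratum
a, b, c, d = (sum(s[k] for s in strata) for k in range(4))
marginal = odds_ratio(a, b, c, d)                         # collapsing over Z gives 2.75
print(conditional, marginal)
```

Both strata carry the same odds ratio of 4, yet the collapsed 2x2 table gives 2.75 — the marginal OR and the (stratum-adjusted) model-style OR disagree exactly as described above.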
24,850 | Good book about theoretical approach to statistics | I've been studying Mathematical Statistics by Jun Shao this summer. It certainly takes a theoretical approach. The exposition is extremely clear and there are tons of exercises.
24,851 | Good book about theoretical approach to statistics | A bit late, but anyway...
"Theoretical Statistics"
Keener, Robert W.
1st Edition, 2010, XVII, 538 p.
Hardcover, ISBN 978-0-387-93838-7
About the book...
Intended as the text for a sequence of advanced courses, this book covers major topics in theoretical statistics in a concise and rigorous fashion. The discussion assumes a background in advanced calculus, linear algebra, probability, and some analysis and topology.
Measure theory is used, but the notation and basic results needed are presented in an initial chapter on probability, so prior knowledge of these topics is not essential. The presentation is designed to expose students to as many of the central ideas and topics in the discipline as possible, balancing various approaches to inference as well as exact, numerical, and large sample methods. Moving beyond more standard material, the book includes chapters introducing bootstrap methods, nonparametric regression, equivariant estimation, empirical Bayes, and sequential design and analysis.
The book has a rich collection of exercises. Several of them illustrate how the theory developed in the book may be used in various applications. Solutions to many of the exercises are included in an appendix. Robert Keener is Professor of Statistics at the University of Michigan and a fellow of the Institute of Mathematical Statistics.
24,852 | Good book about theoretical approach to statistics | Casella and Berger's Statistical Inference is theory-heavy, and it's the standard text for a first graduate course in statistics.
24,853 | Good book about theoretical approach to statistics | It depends on what kind of statistics book you want to learn. Mathematical Statistics and Data Analysis by John A. Rice is recommended if you want to learn some fundamental statistics; it basically covers frequentist statistics. Besides that, Bayesian concepts are also an important part of statistical theory, and Bayesian Data Analysis by Andrew Gelman is a more advanced book on that side.
24,854 | How do I vertically stack two graphs with the same x scale, but a different y scale in R? | You can use par(new=TRUE) to plot into the same graph using two different y-axes! This should also solve your problem.
Next you will find a simple example that plots two random normal variables, one with mean 0 and the other with mean 100 (both with sd = 1), in the same plot. The first one is in red on the left y-axis, the second one in blue on the right y-axis. Then axis labels are added.
Here you go:
x <- 1:10
y1 <- rnorm(10)
y2 <- rnorm(10)+100
plot(x,y1,pch=0,type="b",col="red",yaxt="n",ylim=c(-8,2))
par(new=TRUE)
plot(x,y2,pch=1,type="b",col="blue",yaxt="n",ylim=c(98,105))
axis(side=2)
axis(side=4)
The result then looks like this (remember: red on the left axis, blue on the right axis): [plot not reproduced here]
UPDATE:
Based on comments I produced an updated version of my graph. Now I dig a little deeper into base graph functionality, using par(mar=c(a,b,c,d)) to create a bigger margin around the graph (needed for the right axis label), mtext to show the axis labels, and more advanced use of the axis function:
x <- 1:100
y1 <- rnorm(100)
y2 <- rnorm(100)+100
par(mar=c(5,5,5,5))
plot(x,y1,pch=0,type="b",col="red",yaxt="n",ylim=c(-8,2),ylab="")
axis(side=2, at=c(-2,0,2))
mtext("red line", side = 2, line=2.5, at=0)
par(new=TRUE)
plot(x,y2,pch=1,type="b",col="blue",yaxt="n",ylim=c(98,108), ylab="")
axis(side=4, at=c(98,100,102), labels=c("98%","100%","102%"))
mtext("blue line", side=4, line=2.5, at=100)
As you see, it is pretty straightforward. You can define the position of your data with ylim in the plot function, then use at in the axis function to select which axis ticks you want to see. Furthermore, you can even provide the labels for the axis ticks (pretty useful for a nominal x-axis) via labels in the axis function (done here on the right axis). To add axis labels, use mtext with at for vertical positioning (line for horizontal positioning).
Make sure to check ?plot, ?par, ?axis, and ?mtext for further info.
Great web resources are: Quick-R for Graphs: 1, 2, and 3.
24,855 | How do I vertically stack two graphs with the same x scale, but a different y scale in R? | I think you can get what you want using ggplot2. Using the code below, I can produce:
Obviously, things like line colours can be changed to whatever you want. On the x-axis I specified major lines on years and minor lines on months.
require(ggplot2)
t = as.Date(0:1000, origin="2008-01-01")
y1 = rexp(1001)
y2 = cumsum(y1)
df = data.frame(t=t, values=c(y2,y1), type=rep(c("Bytes", "Changes"), each=1001))
g = ggplot(data=df, aes(x=t, y=values)) +
geom_line() +
facet_grid(type ~ ., scales="free") +
scale_y_continuous(trans="log10") +
    scale_x_date(date_breaks="1 year", date_minor_breaks="1 month") +  # the old major=/minor= args were removed in later ggplot2
ylab("Log values")
g
24,856 | What is an intuitive interpretation for the softmax transformation? | Intuition is a funky concept. For an ex-physicist like myself, seeing softmax for the first time was "Ok, this is the Boltzmann distribution." For a statistician it would be "Oh, isn't this mlogit?"
Physicist's intuition
Softmax is literally the case of the canonical ensemble:
$$ p_i=\frac 1 Q e^{- {\varepsilon}_i / (k T)}=\frac{e^{- {\varepsilon}_i / (kT)}}{\sum_{j=1}^{n}{e^{- {\varepsilon}_j / (k T)}}}$$
The denominator is called the canonical partition function; it's basically a normalizing constant that makes sure the probabilities add up to 100%. But it has a physical meaning too: the system can only be in one of its $n$ states, which is why the probabilities must add up. This stuff is straight out of statistical mechanics.
The probability of a state $i$ is defined by its energy $\varepsilon_i$ relative to the energies of all other states. You see, in physics systems always try to minimize the energy, so the probability of the state with the lowest energy must be the highest. However, if the temperature of the system $T$ is high, then the difference in probabilities of the lowest energy state and other states will vanish:
$$\lim_{T\to\infty}p_{min}/p_i=\lim_{T\to\infty}e^{ ({\varepsilon_i- \varepsilon}_{min}) / (k T)}=1$$
So, in OP's equation the energy $\varepsilon=-z$ and the temperature is $T\sim 1/\lambda$. He also isolates the base state, and sets its probability with 1 instead of the exponential. This doesn't change anything for intuition, it only sets all energies relative to a chosen base state.
This is VERY intuitive to a physicist.
Statistician's intuition
A statistician will immediately recognize the multinomial logit regression. For those who only know bivariate logit regression, here's how mlogit works.
Estimate $n-1$ bivariate logits of $n-1$ states vs a chosen base state on the censored data set. So, you create a dataset from a base state, say 1, and one of the states $i\in[2,n]$. This way you get $n-1$ logits for each $i$, conditional ones:
$$\ln\frac{Pr[i|i\cup 1]}{Pr[1|i\cup 1]}\sim X_i$$
This equation is more recognizable as:
$$\ln\frac{p}{1-p}\sim X_i$$
This is how it is usually presented in bivariate cases, where there are only two categories to choose from, as in our censored subset of the full dataset with $n$ categories.
Using Bayes theorem we know that: $$Pr[i|i\cup 1]=\frac{Pr[i]}{Pr[i]+Pr[1]}$$
So, we can trivially combine $n-1$ bivariate regressions into a single one to get unconditional probabilities:
$$Pr[i]=\frac{e^{X_i\beta_i}}{1+\sum_i e^{X_i\beta_i}}$$
This gets us OP's equation.
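The temperature behaviour in the physics reading is easy to check numerically. A minimal Python sketch (the thread's examples are in R, but the arithmetic is the same), taking the energy as $-z$ and the scaling $\lambda \sim 1/(kT)$:

```python
import math

def softmax(z, lam=1.0):
    """Softmax with scaling lambda; in the physics reading, lambda ~ 1/(kT)."""
    w = [math.exp(lam * zi) for zi in z]
    s = sum(w)
    return [wi / s for wi in w]

z = [1.0, 2.0, 3.0]
print(softmax(z, lam=1.0))   # graded probabilities; largest z gets the most mass
print(softmax(z, lam=0.01))  # "high temperature": nearly uniform, differences vanish
print(softmax(z, lam=50.0))  # "low temperature": essentially all mass on max(z)
```

With a small $\lambda$ (high temperature) the probabilities are nearly uniform, matching the limit above; with a large $\lambda$ (low temperature) the lowest-energy (largest-$z$) state takes essentially all the mass.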
24,857 | What is an intuitive interpretation for the softmax transformation? | The presence of the exponentials in this function makes it fairly easy to construct an intuitive meaning for the transformation in terms of exponential growth of a set of quantities. Consequently, I will give an intuitive description for the function in terms of a simple financial portfolio earning returns over time. This can be modified or generalised to refer to any similar example involving a set of quantities affected by exponential growth.
A simple intuitive interpretation: Suppose you have an initial investment portfolio consisting of a set of $n$ investment items with the same value. (Without loss of generality, set the initial values of each item to one.) The first item always earns zero return, and the remaining items earn fixed continuous rates-of-return of $z_1, ..., z_{n-1}$ in each time period (these returns may be positive or negative or zero). Now, after $\lambda$ time units the first item will have a value of one, and the remaining items will have respective values $\exp(\lambda z_1), ..., \exp(\lambda z_{n-1})$. The total value of the portfolio is $1 + \sum_{k=1}^{n-1} \exp(\lambda z_k)$. Consequently, the softmax function gives you the vector of proportions of the size of each item in the portfolio after $\lambda$ time units have elapsed.
$$\mathbf{S}(\mathbf{z}) = \text{Proportion vector for size of items in portfolio after } \lambda \text{ time units}.$$
This gives a simple intuitive interpretation of the softmax transformation. It is also worth noting that one can easily construct a corresponding intuitive interpretation for the inverse-softmax transformation. The latter transformation would take in the proportion vector showing the relative sizes of the items in the portfolio, and it would figure out the continuous rates-of-return that led to that outcome over $\lambda$ time units.
This is just one intuitive interpretation for the softmax function, using a finance context. One can easily construct corresponding interpretations for any finite set of initial items that are each subject to exponential growth over time (with one item fixed to have zero growth).
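A quick Python sketch of this reading, with hypothetical rates of return: the portfolio shares are exactly the softmax output, and the inverse-softmax direction (log share-ratios against the base item, divided by $\lambda$) recovers the rates, as described above:

```python
import math

lam = 2.0                 # elapsed time units
z = [0.5, -0.3, 1.2]      # hypothetical continuous rates of return (base item earns 0)

# Item values after lam time units: the base item stays at 1
values = [1.0] + [math.exp(lam * zi) for zi in z]
total = sum(values)
shares = [v / total for v in values]   # this is exactly the softmax output

# Inverse softmax: log of share-ratios vs the base item, divided by lam,
# recovers the original rates of return (up to float error)
recovered = [math.log(s / shares[0]) / lam for s in shares[1:]]
print([round(r, 10) for r in recovered])   # [0.5, -0.3, 1.2]
```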
24,858 | What is an intuitive interpretation for the softmax transformation? | At least for deep learning purposes, one shouldn't overthink the importance of the exact terms in the softmax function, nor that it maps onto the probability simplex. What matters is that it's a function $s_\lambda : \mathbb{R}^n \to [0,1]^n$ with the properties
$$\begin{align}
\frac{\partial (s_\lambda(\mathbf{z}))_i}{\partial z_i} >&\: 0
& \text{for all $i$}
\\
\frac{\partial (s_\lambda(\mathbf{z}))_i}{\partial z_j} <&\: 0
& \text{for all $j\neq i$}
\end{align}$$
and
$$
s_\lambda(\mathbf{z}) \approx \{0,0,...,\underbrace{1}_{i},0,...\}
$$
if $z_i \gg z_j$ for all $j\neq i$. As a result, tweaking the inputs sufficiently far in some direction will eventually give you a nearly categorical output, and the gradients of mismatching results are always propagated in a direction that's suitable.
So why is this exponential form the softmax function that everyone uses? It's mostly just that $\exp$ happens to be both monotone and strictly convex, and the above properties already follow from that. In addition, it grows very fast, which means the convergence to an almost black-and-white result does not take too long.
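Those sign conditions are easy to verify numerically with finite differences (analytically they follow from $\partial s_i/\partial z_i = s_i(1-s_i) > 0$ and $\partial s_i/\partial z_j = -s_i s_j < 0$). A Python sketch:

```python
import math

def softmax(z):
    w = [math.exp(v) for v in z]
    s = sum(w)
    return [v / s for v in w]

def partial(i, j, z, h=1e-6):
    """Central finite-difference estimate of d softmax(z)_i / d z_j."""
    zp = list(z); zp[j] += h
    zm = list(z); zm[j] -= h
    return (softmax(zp)[i] - softmax(zm)[i]) / (2 * h)

z = [0.3, -1.2, 2.0]
for i in range(3):
    for j in range(3):
        d = partial(i, j, z)
        assert (d > 0) if i == j else (d < 0)   # signs as stated above
print("sign pattern verified")
```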
24,859 | What is an intuitive interpretation for the softmax transformation? | One way to think about the softmax function is that it gives you an output that can be interpreted as a probability distribution (i.e., all numbers are in the range [0,1], and they sum to 1). This is useful, because then the output of the softmax can be interpreted as a "probability" of each class/category (conditioned on the features).
Why does its output always have this property? Well, the softmax is essentially the composition of two steps:
Apply the exp function to each value. This makes all values strictly positive.
Normalize the values so they sum to 1 (by dividing by the sum). This makes all values sum to 1.
After both of these steps, you are guaranteed that all values are non-negative and they sum to 1, which means they can be interpreted as a probability distribution.
The generalized softmax with scaling $\lambda$ just amounts to multiplying all values by $\lambda$, then applying the softmax, so it is not very different from the normal softmax.
Another way to think about the softmax is that it is a natural generalization of the standard logistic function $f(x) = e^x/(1+e^x)$, used in logistic regression. Logistic regression is used when you want to do two-class classification. When you want to do multi-class classification, you replace the standard logistic function with the softmax function. If you apply the softmax function with two classes, the result reduces to the standard logistic function that you're used to in (two-class) logistic regression.
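A minimal Python sketch of both points: softmax as exponentiate-then-normalize, and its two-class reduction to the logistic function (fixing the second score at 0):

```python
import math

def softmax(z):
    w = [math.exp(v) for v in z]   # step 1: exponentiate -> all values positive
    s = sum(w)
    return [v / s for v in w]      # step 2: normalize -> values sum to 1

def logistic(x):
    return math.exp(x) / (1.0 + math.exp(x))

# With two classes (second score fixed at 0), softmax reduces to the logistic function
for x in (-2.0, 0.0, 3.5):
    assert abs(softmax([x, 0.0])[0] - logistic(x)) < 1e-12
print("softmax([x, 0])[0] == logistic(x)")
```

The reduction is immediate algebraically: $e^x/(e^x + e^0) = e^x/(1 + e^x)$.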
24,860 | p-values from t.test and prop.test differ considerably | You're correct that the tests should be more similar. They are both tests of means, for a light-tailed distribution, so you should expect them to agree. What's more, the estimated variance $\hat p(1-\hat p)/n$ for a binomial distribution is extremely close to $s^2/n$
> var(x)/100
[1] 0.0009090909
> .1*(.9)/100
[1] 9e-04
> .2*(.8)/100
[1] 0.0016
> var(y)/100
[1] 0.001616162
What you're seeing is the continuity correction. If you try it without, the $p$-values are almost identical
> t.test(x,y)
Welch Two Sample t-test
data: x and y
t = -1.99, df = 183.61, p-value = 0.04808
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.1991454034 -0.0008545966
sample estimates:
mean of x mean of y
0.1 0.2
> prop.test(c(10,20),c(100, 100),correct=FALSE)
2-sample test for equality of proportions without continuity correction
data: c(10, 20) out of c(100, 100)
X-squared = 3.9216, df = 1, p-value = 0.04767
alternative hypothesis: two.sided
95 percent confidence interval:
-0.197998199 -0.002001801
sample estimates:
prop 1 prop 2
0.1 0.2
The continuity correction for the chi-squared test is a bit controversial. It does dramatically reduce the number of situations where the test is anti-conservative, but at the price of making the test noticeably conservative. Not using the 'correction' gives p-values that are closer to a uniform distribution under the null hypothesis. And, as you see here, not using the correction gives you something closer to the t-test.
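The two p-values in the R output above can be reproduced with a few lines of arithmetic. A Python sketch of the 1-df chi-squared test with and without the Yates correction, using the identity $P(\chi^2_1 > x) = \operatorname{erfc}(\sqrt{x/2})$:

```python
from math import erfc, sqrt

def two_prop_chisq(x1, n1, x2, n2, correct=True):
    """2-sample equality-of-proportions test as a 1-df chi-squared statistic,
    optionally with the Yates continuity correction (as R's prop.test applies)."""
    observed = [x1, n1 - x1, x2, n2 - x2]
    p_pool = (x1 + x2) / (n1 + n2)
    expected = [n1 * p_pool, n1 * (1 - p_pool), n2 * p_pool, n2 * (1 - p_pool)]
    yates = 0.5 if correct else 0.0
    chi2 = sum(max(0.0, abs(o - e) - yates) ** 2 / e
               for o, e in zip(observed, expected))
    # For 1 df, the upper tail of the chi-squared distribution is erfc(sqrt(chi2/2))
    return chi2, erfc(sqrt(chi2 / 2))

print(two_prop_chisq(10, 100, 20, 100, correct=False))  # ~ (3.9216, 0.0477)
print(two_prop_chisq(10, 100, 20, 100, correct=True))   # ~ (3.1765, 0.0747)
```

Note that the uncorrected p-value sits below 0.05 while the corrected one sits above it — the same flip driving the discrepancy in the question.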
You're correct that the tests should be more similar. They are tests of means, and for a light-tailed distribution, so you should expect them to agree. What's more, the estimated variance $\hat p(1-\hat p)/n$ for a binomial distribution is extremely close to $s^2/n$
> var(x)/100
[1] 0.0009090909
> .1*(.9)/100
[1] 9e-04
> .2*(.8)/100
[1] 0.0016
> var(y)/100
[1] 0.001616162
What you're seeing is the continuity correction. If you try it without, the $p$-values are almost identical
> t.test(x,y)
Welch Two Sample t-test
data: x and y
t = -1.99, df = 183.61, p-value = 0.04808
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.1991454034 -0.0008545966
sample estimates:
mean of x mean of y
0.1 0.2
> prop.test(c(10,20),c(100, 100),correct=FALSE)
2-sample test for equality of proportions without continuity correction
data: c(10, 20) out of c(100, 100)
X-squared = 3.9216, df = 1, p-value = 0.04767
alternative hypothesis: two.sided
95 percent confidence interval:
-0.197998199 -0.002001801
sample estimates:
prop 1 prop 2
0.1 0.2
The continuity correction for the chi-squared test is a bit controversial. It does dramatically reduce the number of situations where the test is anti-conservative, but at the price of making the test noticeably conservative. Not using the 'correction' gives p-values that are closer to a uniform distribution under the null hypothesis. And, as you see here, not using the correction gives you something closer to the t-test. | p-values from t.test and prop.test differ considerably
You're correct that the tests should be more similar. They are tests of means, and for a light-tailed distribution, so you should expect them to agree. What's more, the estimated variance $\hat p(1- |
24,861 | p-values from t.test and prop.test differ considerably | A difference between p-values of 0.048 and 0.074 is not large. This can easily happen between tests that don't do exactly the same but a similar thing.
The theory of the t-test is for normally distributed data, which your data obviously are not. You're right that the t-test can be justified as an approximation, but there's no reason to use an approximation if a more precise test (namely the proportion test) is available. There is certainly no reason to expect the t-test to have better power, except possibly when it is anticonservative, which is not a good thing (being an approximation, one would probably need to simulate its finite-sample characteristics in this situation).
Edited after looking up the reference Agostino et al. ("The Appropriateness of Some Common Procedures for Testing the Equality of Two Independent Binomial Populations", Am. Stat. 1988) given by cdalitz. This reference states that prop.test with continuity correction is too conservative whereas the t-test as well as the prop.test without continuity correction are normally closer to the nominal level, if occasionally anticonservative (which in my view does not necessarily justify an overall recommendation). This was also mentioned in the answer by Thomas Lumley.
If we're ignoring the continuity correction for a moment, there are two differences between the t-test and prop.test (which is not fully documented but I think it does the z-test based on normal approximation).
(a) prop.test uses the knowledge that the variance of the Binomial is $np(1-p)$ rather than using a sample variance based on normality. In my view what prop.test does here should clearly do better, as it is based on information about the specific setup used here.
(b) prop.test uses a normal approximation whereas the t-test uses a t-approximation. Now both of these, applied to the Binomial situation, are asymptotic in nature (the t-distribution is only precise if the underlying data are normal, which they aren't here), and actually they are asymptotically equivalent. Although the normal approximation looks more intuitive based on the Central Limit Theorem, this doesn't imply by any means that the normal works better than the t in the finite sample situation (and the t is as well justified by the CLT, if only indirectly). The t-distribution is motivated by the normal assumption, but in fact it may also be the case that the asymptotic normal distribution of prop.test underestimates the finite sample variability because it ignores the variability in the variance estimation, and the t-distribution, despite not being precisely justified here, may do a better job at that.
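Point (a) can be checked directly: for 0/1 data the unbiased sample variance used by the t-test equals $\frac{n}{n-1}\,\hat p(1-\hat p)$ exactly, so the two variance estimates differ only by the factor $n/(n-1)$. A hypothetical Python sketch (not from the original answer):

```python
from statistics import variance

x = [1] * 10 + [0] * 90        # 10 successes out of n = 100, as in the question
n = len(x)
p_hat = sum(x) / n
plug_in = p_hat * (1 - p_hat)  # binomial plug-in variance: 0.09
sample = variance(x)           # unbiased sample variance used by the t-test

# For 0/1 data: sample == n/(n-1) * p_hat * (1 - p_hat), a negligible difference
print(plug_in, sample)
```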
So I now believe that potentially (as could be confirmed by simulations, maybe somebody has done that?) the best thing to do could be using the test statistic of prop.test, i.e., the "correct" variance estimation, but replacing the asymptotic normal distribution by a t-distribution, which in some sense may put together the advantages of them both. | p-values from t.test and prop.test differ considerably | A difference between p-values of 0.048 and 0.074 is not large. This can easily happen between tests that don't do exactly the same but a similar thing.
The theory of the t-test is for normally distri | p-values from t.test and prop.test differ considerably
A difference between p-values of 0.048 and 0.074 is not large. This can easily happen between tests that don't do exactly the same but a similar thing.
The theory of the t-test is for normally distributed data, which your data obviously are not. You're right that the t-test can be justified as an approximation, but there's no reason to use an approximation if a more precise test (namely the proportion test) is available. For sure there is no reason to expect the t-test to have a better power, or only in case that it is anticonservative, which is not a good thing (being an approximation, one would probably need to simulate what its finite sample characteristics are in this situation).
Edited after looking up the reference Agostino et al. ("The Appropriateness of Some Common Procedures for Testing the Equality of Two Independent Binomial Populations", Am. Stat. 1988) given by cdalitz. This reference states that prop.test with continuity correction is too conservative whereas the t-test as well as the prop.test without continuity correction are normally closer to the nominal level, if occasionally anticonservative (which in my view does not necessarily justify an overall recommendation). This was also mentioned in the answer by Thomas Lumley.
If we're ignoring the continuity correction for a moment, there are two differences between the t-test and prop.test (which is not fully documented but I think it does the z-test based on normal approximation).
(a) prop.test uses the knowledge that the variance of the Binomial is $np(1-p)$ rather than using a sample variance based on normality. In my view what prop.test does here should clearly do better, as it is based on information about the specific setup used here.
(b) prop.test uses a normal approximation whereas the t-test uses a t-approximation. Now both of these, applied to the Binomial situation, are asymptotic in nature (the t-distribution is only precise if the underlying data are normal, which they aren't here), and actually they are asymptotically equivalent. Although the normal approximation looks more intuitive based on the Central Limit Theorem, this doesn't imply by any means that the normal works better than the t in the finite sample situation (and the t is as well justified by the CLT, if only indirectly). The t-distribution is motivated by the normal assumption, but in fact it may also be the case that the asymptotic normal distribution of prop.test underestimates the finite sample variability because it ignores the variability in the variance estimation, and the t-distribution, despite not being precisely justified here, may do a better job at that.
So I now believe that potentially (as could be confirmed by simulations, maybe somebody has done that?) the best thing to do could be using the test statistic of prop.test, i.e., the "correct" variance estimation, but replacing the asymptotic normal distribution by a t-distribution, which in some sense may put together the advantages of them both. | p-values from t.test and prop.test differ considerably
A difference between p-values of 0.048 and 0.074 is not large. This can easily happen between tests that don't do exactly the same but a similar thing.
The theory of the t-test is for normally distri |
24,862 | p-values from t.test and prop.test differ considerably | The t-test can be quite robust to deviations from the normality assumption, particularly when sample sizes are large, so I understand why one might want to use a t-test for this task.
However, you know the parametric family; since the outcome is either $0$ or $1$, the distribution is completely characterized by the relative proportion, thus Bernoulli. Consequently, you can rely on a parametric test designed for a Bernoulli variable, which the t-test is not.
Methods that are robust to deviations from parametric assumptions are wonderful, since we typically do not know the type of population distribution. (If we did, why did we not determine the population parameters when we had the chance!?) However, the case of a binary variable is unique in how it is completely defined by the relative proportion and must be Bernoulli (or easy to represent as Bernoulli, such as calling “heads” and “tails” of a coin $0$ and $1$, respectively). | p-values from t.test and prop.test differ considerably | The t-test can be quite robust to deviations from the normality assumption, particularly when sample sizes are large, so I understand why one might want to use a t-test for this task.
However, you kno | p-values from t.test and prop.test differ considerably
The t-test can be quite robust to deviations from the normality assumption, particularly when sample sizes are large, so I understand why one might want to use a t-test for this task.
However, you know the parametric family; since the outcome is either $0$ or $1$, the distribution is completely characterized by the relative proportion, thus Bernoulli. Consequently, you can rely on a parametric test designed for a Bernoulli variable, which the t-test is not.
Methods that are robust to deviations from parametric assumptions are wonderful, since we typically do not know the type of population distribution. (If we did, why did we not determine the population parameters when we had the chance!?) However, the case of a binary variable is unique in how it is completely defined by the relative proportion and must be Bernoulli (or easy to represent as Bernoulli, such as calling “heads” and “tails” of a coin $0$ and $1$, respectively). | p-values from t.test and prop.test differ considerably
The t-test can be quite robust to deviations from the normality assumption, particularly when sample sizes are large, so I understand why one might want to use a t-test for this task.
However, you kno |
24,863 | Why isn't bootstrapping done in the following manner? | The idea of the bootstrap is to estimate the sampling distribution of your estimate without making actual assumptions about the distribution of your data.
You usually go for the sampling distribution when you are after the estimates of the standard error and/or confidence intervals. However, your point estimate is fine. Given your data set and without knowing the distribution, the sample mean is still a very good guess about the central tendency of your data. Now, what about the standard error? The bootstrap is a good way of getting that estimate without imposing a probability distribution on the data.
More technically, when building a standard error for a generic statistic, if you knew that the sampling distribution of your estimate $\hat \theta$ is $F$, and you wanted to see how far you can be from its mean $\mu$, the quantity $\hat \theta$ estimates, you could look at the differences from the mean of the sampling distribution $\mu$, namely $\delta$, and make that the focus of your analysis, not $\hat \theta$
$$
\delta = \hat \theta - \mu
$$
Now, since we know that $\hat \theta \sim F$, we know that $\delta$ should be related to $F$ minus the constant $\mu$, a type of "standardization" as we do with the normal distribution. With that in mind, just compute the 80% confidence interval such that
$$
P_F(\delta_{.9} \le \hat \theta - \mu \le \delta_{.1} | \mu) = 0.8 \leftrightarrow
P_F(\hat \theta - \delta_{.9} \ge \mu \ge \hat \theta - \delta_{.1} | \mu) = 0.8
$$
So we just build the CI as $\left[\hat \theta - \delta_{.1}, \hat \theta - \delta_{.9} \right]$. Keep in mind that we don't know $F$, so we can't know $\delta_{.1}$ or $\delta_{.9}$. And we don't want to assume that it is normal and just look at the percentiles of a standard normal distribution either.
The bootstrap principle helps us estimate the sampling distribution $F$ by resampling our data. Our point estimate will forever be $\hat \theta$. There isn't anything wrong with it. But if I take a resample I can build $\hat \theta^*_1$. And then another resample, $\hat \theta^*_2$. And then another, $\hat \theta^*_3$. I think you get the idea.
The set of estimates $\hat \theta^*_1, \dots, \hat \theta^*_n$ has a distribution $F^*$ which approximates $F$. We can then compute
$$
\delta^*_i = \hat \theta^*_i - \hat \theta
$$
Notice that the point estimate for the $\mu$ is replaced by our best guess $\hat \theta$. And look at the empirical distribution of $\theta^*$ to compute $\left[\hat \theta - \delta^*_{.1}, \hat \theta - \delta^*_{.9} \right]$.
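As a concrete illustration of this $\delta^*$ construction, here is a hypothetical Python sketch (standard library only, made-up data; an 80% interval as in the derivation above):

```python
import random

rng = random.Random(42)
data = [rng.gauss(5.0, 2.0) for _ in range(50)]  # made-up sample
theta_hat = sum(data) / len(data)                # the point estimate stays theta_hat

B = 5000
deltas = []
for _ in range(B):
    resample = [rng.choice(data) for _ in data]  # resample with replacement
    theta_star = sum(resample) / len(resample)
    deltas.append(theta_star - theta_hat)        # delta*_i = theta*_i - theta_hat

deltas.sort()
d_10 = deltas[int(0.9 * B)]    # delta*_{.1}: exceeded with probability 0.1
d_90 = deltas[int(0.1 * B)]    # delta*_{.9}: exceeded with probability 0.9
lo, hi = theta_hat - d_10, theta_hat - d_90      # 80% bootstrap CI
```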
Now, this explanation is heavily based on this MIT class on the bootstrap. I highly recommend you give it a read. | Why isn't bootstrapping done in the following manner? | The idea of the bootstrap is to estimate the sampling distribution of your estimate without making actual assumptions about the distribution of your data.
You usually go for the sampling distribution | Why isn't bootstrapping done in the following manner?
The idea of the bootstrap is to estimate the sampling distribution of your estimate without making actual assumptions about the distribution of your data.
You usually go for the sampling distribution when you are after the estimates of the standard error and/or confidence intervals. However, your point estimate is fine. Given your data set and without knowing the distribution, the sample mean is still a very good guess about the central tendency of your data. Now, what about the standard error? The bootstrap is a good way of getting that estimate without imposing a probability distribution on the data.
More technically, when building a standard error for a generic statistic, if you knew that the sampling distribution of your estimate $\hat \theta$ is $F$, and you wanted to see how far you can be from its mean $\mu$, the quantity $\hat \theta$ estimates, you could look at the differences from the mean of the sampling distribution $\mu$, namely $\delta$, and make that the focus of your analysis, not $\hat \theta$
$$
\delta = \hat \theta - \mu
$$
Now, since we know that $\hat \theta \sim F$, we know that $\delta$ should be related to $F$ minus the constant $\mu$, a type of "standardization" as we do with the normal distribution. With that in mind, just compute the 80% confidence interval such that
$$
P_F(\delta_{.9} \le \hat \theta - \mu \le \delta_{.1} | \mu) = 0.8 \leftrightarrow
P_F(\hat \theta - \delta_{.9} \ge \mu \ge \hat \theta - \delta_{.1} | \mu) = 0.8
$$
So we just build the CI as $\left[\hat \theta - \delta_{.1}, \hat \theta - \delta_{.9} \right]$. Keep in mind that we don't know $F$, so we can't know $\delta_{.1}$ or $\delta_{.9}$. And we don't want to assume that it is normal and just look at the percentiles of a standard normal distribution either.
The bootstrap principle helps us estimate the sampling distribution $F$ by resampling our data. Our point estimate will forever be $\hat \theta$. There isn't anything wrong with it. But if I take a resample I can build $\hat \theta^*_1$. And then another resample, $\hat \theta^*_2$. And then another, $\hat \theta^*_3$. I think you get the idea.
The set of estimates $\hat \theta^*_1, \dots, \hat \theta^*_n$ has a distribution $F^*$ which approximates $F$. We can then compute
$$
\delta^*_i = \hat \theta^*_i - \hat \theta
$$
Notice that the point estimate for the $\mu$ is replaced by our best guess $\hat \theta$. And look at the empirical distribution of $\theta^*$ to compute $\left[\hat \theta - \delta^*_{.1}, \hat \theta - \delta^*_{.9} \right]$.
Now, this explanation is heavily based on this MIT class on the bootstrap. I highly recommend you give it a read. | Why isn't bootstrapping done in the following manner?
The idea of the bootstrap is to estimate the sampling distribution of your estimate without making actual assumptions about the distribution of your data.
You usually go for the sampling distribution |
24,864 | Why isn't bootstrapping done in the following manner? | That's not OK. You would need to use the double bootstrap to get a correct confidence interval from a new estimator that is a function of many bootstrap estimates. The bootstrap was not created to provide new estimators, except in rare cases such as the Harrell-Davis quantile estimator. The main function of the bootstrap is to study the performance of an existing estimator, or to tell how bad the estimator is (e.g., in terms of variance or bias). The bootstrap can also provide confidence intervals for strange quantities such as the number of modes in a continuous distribution. | Why isn't bootstrapping done in the following manner? | That's not OK. You would need to use the double bootstrap to get a correct confidence interval from a new estimator that is a function of many bootstrap estimates. The bootstrap was not created to | Why isn't bootstrapping done in the following manner?
That's not OK. You would need to use the double bootstrap to get a correct confidence interval from a new estimator that is a function of many bootstrap estimates. The bootstrap was not created to provide new estimators, except in rare cases such as the Harrell-Davis quantile estimator. The main function of the bootstrap is to study the performance of an existing estimator, or to tell how bad the estimator is (e.g., in terms of variance or bias). The bootstrap can also provide confidence intervals for strange quantities such as the number of modes in a continuous distribution. | Why isn't bootstrapping done in the following manner?
That's not OK. You would need to use the double bootstrap to get a correct confidence interval from a new estimator that is a function of many bootstrap estimates. The bootstrap was not created to |
24,865 | Why isn't bootstrapping done in the following manner? | The reason you typically take the statistic calculated from all data as your point estimate is that (at least for a mean) with the number of bootstrap samples going to infinity, you will get that same answer. I.e. any deviation is just due to the number of bootstrap samples and you might just as well use the known exact answer.
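This hypothetical Python sketch (standard library only, made-up data) shows the averaged bootstrap mean collapsing back onto the sample mean as the number of resamples grows:

```python
import random

rng = random.Random(0)
data = [rng.uniform(0.0, 1.0) for _ in range(40)]  # made-up sample
sample_mean = sum(data) / len(data)

B = 5000
boot_means = []
for _ in range(B):
    resample = [rng.choice(data) for _ in data]    # resample with replacement
    boot_means.append(sum(resample) / len(resample))

avg_boot = sum(boot_means) / B
# avg_boot differs from sample_mean only by Monte Carlo noise (shrinking as B grows),
# so you might as well report the known exact answer: the sample mean itself.
```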
In the second part of your question, what do you mean by calculating the confidence around the mean "using the original data"? The main reason you use bootstrapping is usually that there's no simple formula for just getting a CI from the original data. If you mean taking the variation in the original data (e.g. take 1.96 $\times$ SD of original data), then that's not a confidence interval for the mean, but rather an interval that also describes the variation in the outcome. | Why isn't bootstrapping done in the following manner? | The reason you typically take the statistic calculated from all data as your point estimate is that (at least for a mean) with the number of bootstrap samples going to infinity, you will get that same | Why isn't bootstrapping done in the following manner?
The reason you typically take the statistic calculated from all data as your point estimate is that (at least for a mean) with the number of bootstrap samples going to infinity, you will get that same answer. I.e. any deviation is just due to the number of bootstrap samples and you might just as well use the known exact answer.
In the second part of your question, what do you mean by calculating the confidence around the mean "using the original data"? The main reason you use bootstrapping is usually that there's no simple formula for just getting a CI from the original data. If you mean taking the variation in the original data (e.g. take 1.96 $\times$ SD of original data), then that's not a confidence interval for the mean, but rather an interval that also describes the variation in the outcome. | Why isn't bootstrapping done in the following manner?
The reason you typically take the statistic calculated from all data as your point estimate is that (at least for a mean) with the number of bootstrap samples going to infinity, you will get that same |
24,866 | Why isn't bootstrapping done in the following manner? | On the first question: if the statistic you are interested in is not the mean, then there are cases where taking the mean statistic from all the resampling trials is arguably better than taking the single statistic from the original trial.
For example, suppose you are interested in the median of a distribution. The distribution turns out to be bimodal with narrow peaks at 0 and 1. You have 99 points in your sample, of which 50 are near 0 and 49 are near 1. It's too close to call whether the population median is nearer 0 or 1. Your sample median is close to 0, but if you wanted to minimise the mean squared error of your estimate of the population median, you would want your estimate to be something close to 0.5. | Why isn't bootstrapping done in the following manner? | On the first question: if the statistic you are interested in is not the mean, then there are cases where taking the mean statistic from all the resampling trials is arguably better than taking the si | Why isn't bootstrapping done in the following manner?
On the first question: if the statistic you are interested in is not the mean, then there are cases where taking the mean statistic from all the resampling trials is arguably better than taking the single statistic from the original trial.
For example, suppose you are interested in the median of a distribution. The distribution turns out to be bimodal with narrow peaks at 0 and 1. You have 99 points in your sample, of which 50 are near 0 and 49 are near 1. It's too close to call whether the population median is nearer 0 or 1. Your sample median is close to 0, but if you wanted to minimise the mean squared error of your estimate of the population median, you would want your estimate to be something close to 0.5. | Why isn't bootstrapping done in the following manner?
On the first question: if the statistic you are interested in is not the mean, then there are cases where taking the mean statistic from all the resampling trials is arguably better than taking the si |
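The 50/49 bimodal-median example above can be simulated directly. In this hypothetical Python sketch (standard library, made-up peak widths), the sample median sits near 0, while bootstrap medians flip between the two peaks, so their average lands near the middle:

```python
import random

rng = random.Random(1)
# 50 points in a narrow peak near 0 and 49 in a narrow peak near 1
data = ([rng.gauss(0.0, 0.01) for _ in range(50)] +
        [rng.gauss(1.0, 0.01) for _ in range(49)])

def median(xs):
    ys = sorted(xs)
    return ys[len(ys) // 2]          # middle order statistic (odd length)

sample_med = median(data)            # near 0: the 0-peak has one extra point

B = 2000
boot_meds = [median([rng.choice(data) for _ in data]) for _ in range(B)]
avg_boot_med = sum(boot_meds) / B    # near 0.5: resampled medians flip between peaks
```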
24,867 | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised learning? | All unsupervised algorithms, e.g.
clustering,
dimension reduction (PCA, t-sne, autoencoder,...),
missing value imputation,
outlier detection,
...
Some of them might internally use regression or classification elements, but the algorithm itself is neither. | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised | All unsupervised algorithms, e.g.
clustering,
dimension reduction (PCA, t-sne, autoencoder,...),
missing value imputation,
outlier detection,
...
Some of them might internally use regression or cla | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised learning?
All unsupervised algorithms, e.g.
clustering,
dimension reduction (PCA, t-sne, autoencoder,...),
missing value imputation,
outlier detection,
...
Some of them might internally use regression or classification elements, but the algorithm itself is neither. | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised
All unsupervised algorithms, e.g.
clustering,
dimension reduction (PCA, t-sne, autoencoder,...),
missing value imputation,
outlier detection,
...
Some of them might internally use regression or cla |
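The clustering case can be made concrete with a bare-bones one-dimensional k-means (a hypothetical Python sketch, standard library only): it never sees labels, and although it computes means internally, it is neither regression nor classification in the supervised sense.

```python
import random

def kmeans_1d(xs, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: alternate assignment and mean-update steps."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)                  # random initial centers
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:                             # assignment step
            nearest = min(range(k), key=lambda j: abs(x - centers[j]))
            groups[nearest].append(x)
        centers = [sum(g) / len(g) if g else centers[j]  # update step
                   for j, g in enumerate(groups)]
    return sorted(centers)

centers = kmeans_1d([0.1, 0.2, 0.3, 9.9, 10.0, 10.1])
print(centers)  # two centers, near 0.2 and 10.0
```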
24,868 | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised learning? | No, it's much broader than that. You should at least read about the following:
Clustering
Dimensionality Reduction
Reinforcement Learning | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised | No, it's much broader than that. You should at least read about the following:
Clustering
Dimensionality Reduction
Reinforcement Learning | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised learning?
No, it's much broader than that. You should at least read about the following:
Clustering
Dimensionality Reduction
Reinforcement Learning | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised
No, it's much broader than that. You should at least read about the following:
Clustering
Dimensionality Reduction
Reinforcement Learning |
24,869 | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised learning? | Generally speaking "supervised" learning", "classification" and "regression" are actually very different levels of meaning.
Supervised learning is a high level categorization of ML problems which defines all challenges where we have at least some solved/labeled data. This is opposed to unsupervised learning (we don't know the solution) and reinforcement learning (data and labels are generated procedurally).
Classification is a specific goal of ML which you can compare to targets like prediction, outlier detection, dimension reduction, etc.
Finally, regression is a specific mathematical algorithm which can help us achieve such tasks and might be contrasted with algorithms such as a Neural Net, Naive Bayes, etc.
A specific ML model can be described in all three terms:
An unsupervised classification problem solved with a K-Means clustering algorithm
A supervised prediction problem solved with a linear regression
A reinforcement learning optimization problem solved with a monte carlo model. | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised | Generally speaking "supervised" learning", "classification" and "regression" are actually very different levels of meaning.
Supervised learning is a high level categorization of ML problems which defi | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised learning?
Generally speaking "supervised" learning", "classification" and "regression" are actually very different levels of meaning.
Supervised learning is a high level categorization of ML problems which defines all challenges where we have at least some solved/labeled data. This is opposed to unsupervised learning (we don't know the solution) and reinforcement learning (data and labels are generated procedurally).
Classification is a specific goal of ML which you can compare to targets like prediction, outlier detection, dimension reduction, etc.
Finally regression is a specific mathematical algorithm which can help us achieve tasks and might be opposed to algorithms such as a Neural Net, Naive Bayes, etc.
A specific ML model can be described in all three terms:
An unsupervised classification problem solved with a K-Means clustering algorithm
A supervised prediction problem solved with a linear regression
A reinforcement learning optimization problem solved with a monte carlo model. | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised
Generally speaking "supervised" learning", "classification" and "regression" are actually very different levels of meaning.
Supervised learning is a high level categorization of ML problems which defi |
24,870 | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised learning? | First you need to know that machine learning algorithms are broadly divided into three categories-
Supervised Learning
Unsupervised Learning
Reinforcement Learning
But you should know that most production level machine learning pipelines use a combination of two or all of the three kinds of algorithms.
Supervised Learning takes advantage of already known labels, like whether an email is reported as spam or not, how much rainfall has occurred in the last 7 days, whether a lump in the body is carcinogenic or not, etc.
Whereas in Unsupervised Learning, the data is not labeled, i.e. there are no clearly defined target variables (the nature of the email, the amount of rainfall and the nature of the tumor are the target variables in the previous cases).
Reinforcement Learning algorithms are complex and advanced; the model learns from its previous predictions and their correctness.
So, whenever there is a clearly defined target variable, you can apply a supervised learning algorithm. Regression and Classification fall into the supervised learning domain, and cannot be classified as unsupervised learning models.
And, there are many supervised learning algorithms which are not regression or classification, for example-
Naive Bayes Classifier
Decision Tree
Random Forest
Support Vector Machine
etc.
These are just some examples of the supervised learning algorithms. And these, along with regression and classification, do not fall under unsupervised learning algorithms. Some of the most common unsupervised learning algorithms are-
Clustering
Neural Networks
Anomaly Detection
etc.
Here's a diagram-
Machine Learning Algorithms
|
|
---------------------------------------------------------------------------------
| | |
supervised learning unsupervised learning reinforcement learning
| |
|--->Naive Bayes Classifier |--->Clustering
|--->Support Vector Machine |--->Neural Networks
|--->Decision Tree |--->Anomaly Detection
|--->Random Forest
|--->Regression
|--->Classification
These questions are better suited for the Data Science Stack Exchange site. | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised | First you need to know that machine learning algorithms are broadly divided into three categories-
Supervised Learning
Unsupervised Learning
Reinforcement Learning
But you should know that most prod | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised learning?
First you need to know that machine learning algorithms are broadly divided into three categories-
Supervised Learning
Unsupervised Learning
Reinforcement Learning
But you should know that most production level machine learning pipelines use a combination of two or all of the three kinds of algorithms.
Supervised Learning takes advantage of already known labels, like whether an email is reported as spam or not, how much rainfall has occurred in the last 7 days, whether a lump in the body is carcinogenic or not, etc.
Whereas in Unsupervised Learning, the data is not labeled, i.e. there are no clearly defined target variables (the nature of the email, the amount of rainfall and the nature of the tumor are the target variables in the previous cases).
Reinforcement Learning algorithms are complex and advanced where the model learns from its previous predictions and correctness.
So, whenever there is a clearly defined target variable, you can apply a supervised learning algorithm. Regression and Classification fall into the supervised learning domain, and cannot be classified as unsupervised learning models.
And, there are many supervised learning algorithms which are not regression or classification, for example-
Naive Bayes Classifier
Decision Tree
Random Forest
Support Vector Machine
etc.
These are just some examples of the supervised learning algorithms. And these, along with regression and classification, do not fall under unsupervised learning algorithms. Some of the most common unsupervised learning algorithms are-
Clustering
Neural Networks
Anomaly Detection
etc.
Here's a diagram-
Machine Learning Algorithms
|
|
---------------------------------------------------------------------------------
| | |
supervised learning unsupervised learning reinforcement learning
| |
|--->Naive Bayes Classifier |--->Clustering
|--->Support Vector Machine |--->Neural Networks
|--->Decision Tree |--->Anomaly Detection
|--->Random Forest
|--->Regression
|--->Classification
These questions are better suited for the Data Science Stack Exchange site. | Are all Machine Learning algorithms divided into Classification and Regression, not just supervised
First you need to know that machine learning algorithms are broadly divided into three categories-
Supervised Learning
Unsupervised Learning
Reinforcement Learning
But you should know that most prod |
24,871 | what is vanishing gradient? | If you do not carefully choose the range of the initial values for the weights, and if you do not control the range of the values of the weights during training, the vanishing gradient problem can occur; it is one of the main barriers to learning deep networks. Neural networks are trained using the gradient descent algorithm:
$$w^{new} := w^{old} - \eta \frac{\partial L}{\partial w}$$
where $L$ is the loss of the network on the current training batch. It is clear that if $\frac{\partial L}{\partial w}$ is very small, learning will be very slow, since the changes in $w$ will be very small. So, if the gradients vanish, learning becomes extremely slow.
The reason for the vanishing gradient is that during backpropagation, the gradients of the early layers (layers near the input layer) are obtained by multiplying the gradients of the later layers (layers near the output layer). So, for example, if the gradients of the later layers are less than one, their product shrinks very fast.
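As a back-of-the-envelope illustration (my sketch, not part of the original answer; the per-layer factor 0.25 is a made-up stand-in for a typical per-layer gradient scale), the multiplicative shrinkage can be seen directly:

```python
# Each early-layer gradient is (roughly) the output gradient multiplied by one
# factor per later layer; if every factor is below 1, the product shrinks
# geometrically with depth.

def early_layer_gradient(per_layer_factor, n_later_layers, output_grad=1.0):
    g = output_grad
    for _ in range(n_later_layers):
        g *= per_layer_factor
    return g

for depth in (1, 5, 10, 20):
    print(depth, early_layer_gradient(0.25, depth))
# the magnitude falls from 0.25 at depth 1 to roughly 9e-13 at depth 20
```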
With these explanations in mind, here are the answers to your questions:
The gradient is the gradient of the loss with respect to each trainable parameter (weights and biases).
Vanishing gradient does not mean the gradient vector is all zero (except for numerical underflow), but it means the gradients are so small that the learning will be very slow.
24,872 | what is vanishing gradient? | Consider the following feedforward neural network:
Let $w^l_{j,k}$ be the weight for the connection from the $k^{\text{th}}$ neuron in the $(l-1)^{\text{th}}$ layer to the $j^{\text{th}}$ neuron in the $l^{\text{th}}$ layer.
Let $b^l_j$ be the bias of the $j^{\text{th}}$ neuron in the $l^{\text{th}}$ layer.
Let $C$ be the cost function. We consider the inputs and desired outputs of training examples as constants while we train our network, so in our simple network, $C$ is a function of the weights and biases in the network. (I.e. weights and biases of hidden layers and the output layer.)
Let $\delta^l\equiv\left(\begin{gathered}\frac{\partial C}{\partial w_{1,1}^{l}}\\
\\
\frac{\partial C}{\partial w_{1,2}^{l}}\\
\\
\frac{\partial C}{\partial w_{2,1}^{l}}\\
\\
\frac{\partial C}{\partial w_{2,2}^{l}}\\
\\
\frac{\partial C}{\partial b_{1}^{l}}\\
\\
\frac{\partial C}{\partial b_{2}^{l}}
\end{gathered}
\right)$ be "the gradient in the $l^{\text{th}}$ layer".
(I use the notation used by Michael Nielsen in the excellent chapter How the backpropagation algorithm works in the book Neural Networks and Deep Learning, except for "the gradient in the $l^{\text{th}}$ layer", which I define slightly differently.)
I am not aware of a strict definition of the vanishing gradient problem, but I think Nielsen's definition (from the chapter Why are deep neural networks hard to train? in the same book) is quite clear:
[...] in at least some deep neural networks, the gradient tends to get smaller as we move backward through the hidden layers. This means that neurons in the earlier layers learn much more slowly than neurons in later layers. [...] The phenomenon is known as the vanishing gradient problem.
E.g. in our network, if $||\delta^2||\ll||\delta^4||\ll||\delta^6||$, then we say we have a vanishing gradient problem.
If we use Stochastic Gradient Descent, then the size of the change to every parameter $\alpha$ (e.g. a weight, a bias, or any other parameter in more sophisticated networks) in each step taken by the algorithm (we might call this size "the speed of learning of $\alpha$") is proportional to an approximation of $-\frac{\partial C}{\partial\alpha}$ (based on a mini-batch of training examples).
Thus, in case of a vanishing gradient problem, we can say that the speed of learning of parameters of neurons becomes lower and lower, as you move to earlier layers.
So it doesn't necessarily mean that gradients in earlier layers are actually zero, or that they are stuck in any manner, but their speed of learning is low enough to significantly increase the training time, which is why it is called "vanishing gradient problem".
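To make the slowdown concrete, here is a hedged toy sketch of my own (one sigmoid neuron per layer, rather than the two-per-layer network above; the weight 0.8, zero biases, and the input are arbitrary choices): backpropagating through a six-layer chain shows the gradient magnitude shrinking as you move toward layer 1.

```python
import math

# Chain network: z_l = w_l * a_{l-1} + b_l, a_l = sigmoid(z_l),
# cost C = 0.5 * (a_L - y)^2.  Backprop gives dC/dw_l = delta_l * a_{l-1},
# with delta_L = (a_L - y) * s'(z_L) and delta_l = delta_{l+1} * w_{l+1} * s'(z_l),
# where s'(z) = s(z) * (1 - s(z)).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_gradients(weights, biases, x, y):
    a = [x]                       # forward pass, storing activations
    for w, b in zip(weights, biases):
        a.append(sigmoid(w * a[-1] + b))
    grads = [0.0] * len(weights)  # backward pass
    delta = (a[-1] - y) * a[-1] * (1.0 - a[-1])
    grads[-1] = delta * a[-2]
    for l in range(len(weights) - 2, -1, -1):
        delta *= weights[l + 1] * a[l + 1] * (1.0 - a[l + 1])
        grads[l] = delta * a[l]
    return grads

g = layer_gradients([0.8] * 6, [0.0] * 6, x=1.0, y=0.0)
print([f"{v:.2e}" for v in g])    # magnitudes shrink toward the input layer
```

Each step back multiplies the delta by $|w \cdot s'(z)|$, here at most $0.8 \times 0.25 = 0.2$, so the earliest layer's gradient ends up orders of magnitude smaller than the last layer's.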
See this answer for a more rigorous explanation of the problem.
24,873 | what is vanishing gradient? | Continuing from comments, when you use a sigmoid activation function, which squashes its input into the small output range $(0,1)$, you further multiply by a small learning rate and by more and more partial derivatives (chain rule) as you go back through the layers. The delta to be updated diminishes, and thus earlier layers get little or no update. If the updates are small, training will require a very long time; if they are zero, then only changing the activation function (AF) will help. ReLUs are currently among the best AFs for avoiding this problem.
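A small hedged illustration (mine, not from the answer): the sigmoid derivative $s'(z) = s(z)(1 - s(z))$ is at most $0.25$ (at $z = 0$), so a product of many such factors shrinks geometrically, whereas an active ReLU contributes a factor of exactly 1 and leaves the product alone.

```python
import math

def sigmoid_grad(z):
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)          # maximised at z = 0, where it equals 0.25

def relu_grad(z):
    return 1.0 if z > 0 else 0.0

layers = 10
sig_product = 1.0
relu_product = 1.0
for _ in range(layers):
    sig_product *= sigmoid_grad(0.0)   # best case for sigmoid
    relu_product *= relu_grad(1.0)     # an active ReLU unit

print(sig_product)    # 0.25 ** 10, i.e. less than 1e-6
print(relu_product)   # 1.0
```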
24,874 | How to start building a regression model when the most strongly associated predictor is binary | Many people believe that you should use some strategy like starting with the most highly associated variable, and then adding additional variables in turn until one is not significant. However, there is no logic that compels this approach. Moreover, this is a kind of 'greedy' variable selection / search strategy (cf., my answer here: Algorithms for automatic model selection). You do not have to do this, and really, you shouldn't. If you want to know the relationship between pm, and temp and rain, just fit a multiple regression model with all three variables. You will still need to assess the model to determine if it is reasonable and the assumptions are met, but that's it. If you want to test some a-priori hypothesis, you can do so with the model. If you want to assess the model's out of sample predictive accuracy, you can do that with cross-validation.
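Concretely (a hedged sketch of mine; the names pm, temp, and rain mirror the question's variables, but the data below is invented), "just fit a multiple regression model with all three variables" means solving the normal equations $(X'X)\beta = X'y$ in one step:

```python
# Ordinary least squares via the normal equations, in plain Python.

def fit_ols(X, y):
    """X: rows of [1, x1, x2, ...]; returns [intercept, b1, b2, ...]."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # Gaussian elimination w/ pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):            # back-substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

temp = [10, 15, 20, 25, 30, 12, 18]
rain = [1, 0, 0, 1, 0, 1, 0]                  # the binary predictor enters as-is
pm   = [40, 35, 30, 28, 22, 42, 33]
X = [[1.0, t, r] for t, r in zip(temp, rain)]
print([round(c, 3) for c in fit_ols(X, pm)])  # [intercept, b_temp, b_rain]
```

A binary predictor like rain needs no special handling here: its coefficient is simply the adjusted shift in pm between rainy and dry observations.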
You needn't really worry about multicollinearity either. The correlation between temp and rain is listed as 0.044 in your correlation matrix. That is a very low correlation and shouldn't cause any problems.
24,875 | How to start building a regression model when the most strongly associated predictor is binary | While this doesn't directly address your already gathered data set, another thing you could try the next time you are gathering data like this is to avoid recording "rain" as a binary. Your data would probably be more informative if you had instead measured rain rate (cm/hour), which would give you a variable distributed continuously (up to your precision of measurement) from 0...max_rainfall.
This would let you correlate not just "is it raining" but also "how much is it raining" with the other variables.
24,876 | Variance of a distribution of multi-level categorical data | I think what you probably want is (Shannon's) entropy. It is calculated like this:
$$
H(x) = -\sum_{x_i} p(x_i)\log_2 p(x_i)
$$
This represents a way of thinking about the amount of information in a categorical variable.
In R, we can calculate this as follows:
City = c("Moscow", "Moscow", "Paris", "London", "London",
"London", "NYC", "NYC", "NYC", "NYC")
table(City)
# City
# London Moscow NYC Paris
# 3 2 4 1
entropy = function(cat.vect){
px = table(cat.vect)/length(cat.vect)
lpx = log(px, base=2)
ent = -sum(px*lpx)
return(ent)
}
entropy(City) # [1] 1.846439
entropy(rep(City, 10)) # [1] 1.846439
entropy(c( "Moscow", "NYC")) # [1] 1
entropy(c( "Moscow", "NYC", "Paris", "London")) # [1] 2
entropy(rep( "Moscow", 100)) # [1] 0
entropy(c(rep("Moscow", 9), "NYC")) # [1] 0.4689956
entropy(c(rep("Moscow", 99), "NYC")) # [1] 0.08079314
entropy(c(rep("Moscow", 97), "NYC", "Paris", "London")) # [1] 0.2419407
From this, we can see that the length of the vector doesn't matter. Increasing the number of possible options ('levels' of a categorical variable) makes the entropy increase. If there is only one possibility, the value is $0$ (as low as you can get). For any given number of possibilities, the value is largest when the probabilities are equal.
Somewhat more technically, with more possible options, it takes more information to represent the variable while minimizing error. With only one option, there is no information in your variable. Even with more options, but where almost all actual instances are a particular level, there is very little information; after all, you can just guess "Moscow" and nearly always be right.
your.metric = function(cat.vect){
px = table(cat.vect)/length(cat.vect)
spx2 = sum(px^2)
return(spx2)
}
your.metric(City) # [1] 0.3
your.metric(rep(City, 10)) # [1] 0.3
your.metric(c( "Moscow", "NYC")) # [1] 0.5
your.metric(c( "Moscow", "NYC", "Paris", "London")) # [1] 0.25
your.metric(rep( "Moscow", 100)) # [1] 1
your.metric(c(rep("Moscow", 9), "NYC")) # [1] 0.82
your.metric(c(rep("Moscow", 99), "NYC")) # [1] 0.9802
your.metric(c(rep("Moscow", 97), "NYC", "Paris", "London")) # [1] 0.9412
Your suggested metric is the sum of squared probabilities. In some ways it behaves similarly (e.g., notice that it is invariant to the length of the variable), but note that it decreases as the number of levels increases and increases as the variable becomes more imbalanced (compare the examples above). It moves inversely to entropy, but the units—size of the increments—differ. Your metric is bounded between $0$ and $1$, whereas entropy ranges from $0$ to infinity. Here is a plot of their relationship:
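The R results above can be cross-checked in Python (my sketch, mirroring the two R functions on the same City vector):

```python
import math
from collections import Counter

def entropy(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def sum_sq_props(values):
    n = len(values)
    return sum((c / n) ** 2 for c in Counter(values).values())

city = ["Moscow", "Moscow", "Paris", "London", "London",
        "London", "NYC", "NYC", "NYC", "NYC"]
print(round(entropy(city), 6))        # 1.846439, matching entropy(City) in R
print(round(sum_sq_props(city), 6))   # 0.3, matching your.metric(City) in R
```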
24,877 | Variance of a distribution of multi-level categorical data | The sum of the squares of the fractions (to let your text align with your arithmetic) is indeed a much re-discovered or re-invented measure of the concentration of distributions divided into distinct categories. It is now in its second century at least, allowing a little latitude to include under the same umbrella its complement and its reciprocal: all three versions have easy interpretations and uses. There are (wild guess) perhaps twenty different names for it in common use. Let's write generically $p$ for proportion or probability, where necessarily $1 \ge p_s \ge 0$ and $\sum_{s=1}^S p_s \equiv 1$.
Your measure is $\sum_{s=1}^S p_s^2 =: R$. At least for biologists the index $s=1, \dots, S$ is mnemonic for species. Then that sum is for ecologists the Simpson index (after E.H. Simpson, 1922-2019, the person for whom Simpson's paradox is named); for economists it's the Herfindahl-Hirschman index; and so on. It has a long history in cryptography, often clouded in secrecy for decades by its use in classified problems, but most famously featuring A.M. Turing. I.J. Good (who like Simpson worked with Turing in World War II) called it the repeat rate, which motivates the symbol $R$ above; for D.J.C. MacKay it is the match probability.
Suppose we rank the proportions $p_1 \ge \dots \ge p_S$. Then at one extreme $p_1$ grows to $1$ and the other $p_s$ shrink to $0$ and then $R = 1$. Another extreme is equal probabilities $1/S$ so that $R = S (1/S^2) = 1/S$. The two limits naturally coincide for $S = 1$. Thus for $2, 10, 100$ species $R \ge 0.5, 0.1, 0.01$ respectively.
The complement $1 - R$ was one of various measures of heterogeneity used by Corrado Gini, but beware serious overloading of terms in various literatures: the terms Gini index or coefficient have been applied to several distinct measures. It features in machine learning as a measure of impurity of classifications; conversely $R$ measures purity. Ecologists usually talk of diversity: $R$ measures diversity inversely and $1 - R$ measures it directly.
For geneticists $1 - R$ is the heterozygosity.
The reciprocal $1/R$ has a 'numbers equivalent' interpretation. Imagine as above any case in which $S$ species are equally common with each $p_s = 1/S$. Then $1/R = 1/\sum_{s=1}^S (1/S)^2 = S$. By extension $1/R$ measures an equivalent number of equally common categories, so that for example the squares of $1/6, 2/6, 3/6$ give $1/R \approx 2.57$ which matches an intuition that the distribution is between $2/6, 2/6, 2/6$ and $3/6,3/6, 0$ in concentration or diversity.
(The numbers equivalent for Shannon entropy $H$ is just its antilogarithm, say $2^H, \exp(H)$ or $10^H$ for bases $2, e = \exp(1)$ and $10$ respectively.)
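The numbers-equivalent arithmetic above can be sketched quickly (my illustration; $R$ is the repeat rate and $H$ is Shannon entropy in nats here):

```python
import math

# 1/R for the repeat rate R = sum(p^2), and exp(H) for entropy in nats;
# S equally common categories give exactly S under both measures.

def repeat_rate(p):
    return sum(q * q for q in p)

def shannon_nats(p):
    return -sum(q * math.log(q) for q in p if q > 0)

p = [1 / 6, 2 / 6, 3 / 6]
print(round(1 / repeat_rate(p), 3))          # 2.571, as in the text
print(round(math.exp(shannon_nats(p)), 3))   # the entropy-based equivalent
```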
There are various generalisations of entropy which make this measure one of a wider family; a simple one given by I.J. Good defines the menagerie $\sum_{s} p_s^a\ [\ln (1/p_s)]^b$, from which $a = 2, b = 0$ gives our measure; $a = 1, b = 1$ is Shannon entropy (in nats); and $a = 0, b = 0$ returns $S$, the number of species present, which is the simplest measurement of diversity possible and one with several merits.
24,878 | Variance of a distribution of multi-level categorical data | Interesting question... It really depends what you want to do with this metric - if you just want to rank a list by "most variable" a lot of things might work. The metric you made up seems reasonable. I wouldn't say you need mathematical "proof": proof of what? You could ask a question like "is this dataset likely to come from a uniform distribution?". I find some intuitive appeal in "what is the probability that two random draws from this list are equal?". You could do that in R like so:
set.seed(1)
cities <- c("Moscow", "Moscow", "NYC", "London")
# Gives .3525
prob_equal = mean(sample(rep(cities, 100)) == sample(rep(cities, 100)))
citiesTwo <- c(rep("Moscow", 100), rep("NYC", 100))
mean(sample(rep(citiesTwo, 100)) == sample(rep(citiesTwo, 100)))  # Gave .497
citiesTwo <- c(rep("Moscow", 100), rep("NYC", 10))
mean(sample(rep(citiesTwo, 100)) == sample(rep(citiesTwo, 100)))  # Gave .833
Here the 'mean' part gives the mean of a vector of a few hundred random entries like TRUE, TRUE, FALSE, TRUE, FALSE, ..., which becomes the mean of 1, 1, 0, 1, 0, etc.
1 minus that probability might give a better notion of "variance" though (i.e. the probability that two random draws are different, so a higher number means more diverse). Some such quantity could probably be calculated without too much effort. It's probably something like P(a random selection is Moscow) * P(a second is Moscow) + P(a random selection is NYC) * P(a second is NYC) + ..., so I think it's just proportion_moscow ^ 2 + proportion_nyc ^ 2, which in fact is exactly what you came up with!
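The exact value behind the simulation can be computed directly (a hedged Python cross-check of that closing intuition: the expected match probability is the sum of squared proportions, and the R figures above are noisy estimates of it):

```python
from collections import Counter

def match_probability(values):
    # probability that two independent random draws from the list are equal
    n = len(values)
    return sum((c / n) ** 2 for c in Counter(values).values())

print(match_probability(["Moscow", "Moscow", "NYC", "London"]))  # 0.375
cities_two = ["Moscow"] * 100 + ["NYC"] * 10
print(round(match_probability(cities_two), 4))                   # 0.8347
```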
Interesting question... It really depends what you want to do with this metric - if you just want to rank a list by "most variable" a lot of things might work. The metric you made up seems reasonable. I wouldn't say you need mathematical "proof": proof of what? You could ask a question like "is this dataset likely to come from a uniform distribution?". I find some intuitive appeal in "what is the probability that two random draws from this list are equal?". You could do that in R like so:
set.seed(1)
cities <- c("Moscow", "Moscow", "NYC", "London")
# Gives .3525
prob_equal = mean(sample(rep(cities, 100)) == sample(rep(cities, 100)))
citiesTwo <- c(rep("Moscow", 100), rep("NYC", 100)) # Gave .497
citiesTwo <- c(rep("Moscow", 100), rep("NYC", 10)) # Gave .833
Where the 'mean' part gives the mean of a vector few hundred random entries like TRUE, TRUE, FALSE, TRUE, FALSE ..., which becomes the mean of 1, 1, 0, 1, 0, etc
1 minus that probability might give a better notion of "variance" though (i.e. prob two random are different, thus higher number means more diverse). Some such quantity could probably be calculated without too much effort. It's probably something like P(a random selection is moscow) * P(a second is moscow) + P(a random selection is NYC) * P(a second is NYC) + ..., so I think it's just proportion_moscow ^ 2 + proportion_nyc ^ 2, which in fact would be what you came up with! | Variance of a distribution of multi-level categorical data
Interesting question... It really depends what you want to do with this metric - if you just want to rank a list by "most variable" a lot of things might work. The metric you made up seems reasonable |
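As a cross-check of the closed form suggested above, here is a small Python sketch (the function name `prob_equal` is my own) that computes the match probability exactly, as the sum of squared proportions, instead of simulating it:

```python
from collections import Counter

def prob_equal(items):
    # probability that two independent draws from the empirical
    # distribution of `items` land on the same category:
    # the sum of squared proportions (1 minus this is the Gini impurity)
    n = len(items)
    return sum((c / n) ** 2 for c in Counter(items).values())

cities = ["Moscow", "Moscow", "NYC", "London"]
print(prob_equal(cities))  # 0.375 = (2/4)**2 + (1/4)**2 + (1/4)**2
```

For the 100/100 example this gives exactly 0.5 (the simulation above gave .497), and for the 100/10 example it gives 101/121 ≈ 0.835 (simulation: .833).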
24,879 | A way to maintain classifier's recall while improving precision | Precision and recall are a tradeoff. Typically to increase precision for a given model implies lowering recall, though this depends on the precision-recall curve of your model, so you may get lucky.
Generally, if you want higher precision you need to restrict the positive predictions to those with highest certainty in your model, which means predicting fewer positives overall (which, in turn, usually results in lower recall).
If you want to maintain the same level of recall while improving precision, you will need a better classifier. | A way to maintain classifier's recall while improving precision | Precision and recall are a tradeoff. Typically to increase precision for a given model implies lowering recall, though this depends on the precision-recall curve of your model, so you may get lucky.
| A way to maintain classifier's recall while improving precision
Precision and recall are a tradeoff. Typically to increase precision for a given model implies lowering recall, though this depends on the precision-recall curve of your model, so you may get lucky.
Generally, if you want higher precision you need to restrict the positive predictions to those with highest certainty in your model, which means predicting fewer positives overall (which, in turn, usually results in lower recall).
If you want to maintain the same level of recall while improving precision, you will need a better classifier. | A way to maintain classifier's recall while improving precision
Precision and recall are a tradeoff. Typically to increase precision for a given model implies lowering recall, though this depends on the precision-recall curve of your model, so you may get lucky.
|
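To make the tradeoff concrete, here is a self-contained Python sketch (the helper and the toy scores are invented for illustration): raising the decision threshold restricts positive predictions to the most certain cases, which raises precision but lowers recall.

```python
def precision_recall(y_true, scores, threshold):
    # predict positive whenever the score reaches the threshold
    preds = [s >= threshold for s in scores]
    tp = sum(p and t for p, t in zip(preds, y_true))
    fp = sum(p and not t for p, t in zip(preds, y_true))
    fn = sum(not p and t for p, t in zip(preds, y_true))
    precision = tp / (tp + fp) if tp + fp else 1.0  # convention when nothing is predicted
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.65, 0.6, 0.55, 0.3, 0.1]
print(precision_recall(y_true, scores, 0.5))   # precision 2/3, recall 1.0
print(precision_recall(y_true, scores, 0.75))  # precision 1.0, recall 0.5
```

Moving the threshold only slides along the precision-recall curve; beating the curve itself requires a better classifier, as the answer says.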
24,880 | A way to maintain classifier's recall while improving precision | I don't know which library you are using. But most ML libraries have model optimizers built in to help you with this task.
For instance, if you are using sklearn, you can use RandomizedSearchCV to look for a good combination of hyperparameters for you. For instance, if you are training a Random Forest classifier:
#imports (numpy and scikit-learn assumed installed)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
#model
MOD = RandomForestClassifier()
#Implement RandomizedSearchCV
m_params = {
    "RF": {
        "n_estimators": np.linspace(2, 500, 500, dtype="int"),
        "max_depth": [5, 20, 30, None],
        "min_samples_split": np.linspace(2, 50, 50, dtype="int"),
        "max_features": ["sqrt", "log2", 10, 20, None],
        "oob_score": [True],
        "bootstrap": [True]
    },
}
scoreFunction = {"recall": "recall", "precision": "precision"}
random_search = RandomizedSearchCV(MOD,
                                   param_distributions=m_params["RF"],
                                   n_iter=20,
                                   scoring=scoreFunction,
                                   refit="recall",
                                   return_train_score=True,
                                   random_state=42,
                                   cv=5,
                                   verbose=1)
#trains and optimizes the model (x_train and y_train are your training data)
random_search.fit(x_train, y_train)
#recover the best model
MOD = random_search.best_estimator_
Note that the parameters scoring and refit will tell the RandomizedSearchCV which metrics you are most interested in maximizing. This method will also save you the time of hand tuning (and potentially overfitting your model on your test data).
Good luck! | A way to maintain classifier's recall while improving precision | I don't know which library you are using. But most ML libraries have model optimizers built in to help you with this task.
For instance, if you are using sklearn, you can use RandomizedSearchCV to look for | A way to maintain classifier's recall while improving precision
I don't know which library you are using. But most ML libraries have model optimizers built in to help you with this task.
For instance, if you are using sklearn, you can use RandomizedSearchCV to look for a good combination of hyperparameters for you. For instance, if you are training a Random Forest classifier:
#imports (numpy and scikit-learn assumed installed)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
#model
MOD = RandomForestClassifier()
#Implement RandomizedSearchCV
m_params = {
    "RF": {
        "n_estimators": np.linspace(2, 500, 500, dtype="int"),
        "max_depth": [5, 20, 30, None],
        "min_samples_split": np.linspace(2, 50, 50, dtype="int"),
        "max_features": ["sqrt", "log2", 10, 20, None],
        "oob_score": [True],
        "bootstrap": [True]
    },
}
scoreFunction = {"recall": "recall", "precision": "precision"}
random_search = RandomizedSearchCV(MOD,
                                   param_distributions=m_params["RF"],
                                   n_iter=20,
                                   scoring=scoreFunction,
                                   refit="recall",
                                   return_train_score=True,
                                   random_state=42,
                                   cv=5,
                                   verbose=1)
#trains and optimizes the model (x_train and y_train are your training data)
random_search.fit(x_train, y_train)
#recover the best model
MOD = random_search.best_estimator_
Note that the parameters scoring and refit will tell the RandomizedSearchCV which metrics you are most interested in maximizing. This method will also save you the time of hand tuning (and potentially overfitting your model on your test data).
Good luck! | A way to maintain classifier's recall while improving precision
I don't know which library you are using. But most ML libraries have model optimizers built in to help you with this task.
For instance, if you are using sklearn, you can use RandomizedSearchCV to look for |
24,881 | A way to maintain classifier's recall while improving precision | (for problems like these, always have the two by two contingency table in mind; see wikipedia's Recall/Precision or Sensitivity/Specificity for details)
Recall (or sensitivity) is the same as P(test/classifier is positive | reality is true) or True Positives/(True Positives + False Negatives) or True positives/True items
Precision (or positive predictive value) is the same as P(reality is true | test/classifier is positive) or True Positives/(True Positives + False Positives) or True positives/Positive items
Better recall means more hits of reality (true things more likely included in positives), better precision means more hits of positives (if you classify positive, more likely to be true).
One can arbitrarily increase recall by making your classifier include more (sort of without caring if they're not true). You can have perfect recall by just saying everything is positive. There'll be no false negatives that way. Of course, you'll have lots of false positives. In the contingency table, it's like moving the horizontal line between positives and negatives down. It obviously increases recall (and may or may not affect precision).
Since everything here is dual we can say:
One can arbitrarily increase precision by increasing the leniency in considering your gold-standard to have more and more trues. You can have perfect precision by just saying everything is true. There'll be no false positives that way. Of course, you'll have lots of false negatives. It obviously increases precision (and may or may not affect recall). (Yes, this seems like a strange interpretation but bear with me)
But that seems a little arbitrary (for both).
When you increase one (by changing some cutoff), you will tend to decrease the other. What you're asking for is to avoid the decrease. That's surely better. The vague way to do that is to include things that should be found (turn an FN to an TP) but don't then also include things that shouldn't (don't turn TN into FP).
What do you have control over? The classifier itself? Or the feature space (the data points themselves)? If the classifier, well, that's part of the algorithm design, so I'll assume the feature space itself.
To make an over simplified example, let's consider a search engine. Suppose you want to find web pages that involve a concept X. If you start off with X, you'll be missing webpages that mention synonyms of X. If you add a synonym Y to a search, you turn some False Negatives to True positives (you'll collect more that you were missing before).
But that new word Y may be ambiguous and its alternate meaning may include more things that you don't want (increasing False Positives, reducing precision). To prevent that, you'll want to exclude contexts where Y has the unintended alternate meaning. | A way to maintain classifier's recall while improving precision | (for problems like these, always have the two by two contingency table in mind; see wikipedia's Recall/Precision or Sensitivity/Specificity for details)
Recall (or sensitivity) is the same as P(test/c | A way to maintain classifier's recall while improving precision
(for problems like these, always have the two by two contingency table in mind; see wikipedia's Recall/Precision or Sensitivity/Specificity for details)
Recall (or sensitivity) is the same as P(test/classifier is positive | reality is true) or True Positives/(True Positives + False Negatives) or True positives/True items
Precision (or positive predictive value) is the same as P(reality is true | test/classifier is positive) or True Positives/(True Positives + False Positives) or True positives/Positive items
Better recall means more hits of reality (true things more likely included in positives), better precision means more hits of positives (if you classify positive, more likely to be true).
One can arbitrarily increase recall by making your classifier include more (sort of without caring if they're not true). You can have perfect recall by just saying everything is positive. There'll be no false negatives that way. Of course, you'll have lots of false positives. In the contingency table, it's like moving the horizontal line between positives and negatives down. It obviously increases recall (and may or may not affect precision).
Since everything here is dual we can say:
One can arbitrarily increase precision by increasing the leniency in considering your gold-standard to have more and more trues. You can have perfect precision by just saying everything is true. There'll be no false positives that way. Of course, you'll have lots of false negatives. It obviously increases precision (and may or may not affect recall). (Yes, this seems like a strange interpretation but bear with me)
But that seems a little arbitrary (for both).
When you increase one (by changing some cutoff), you will tend to decrease the other. What you're asking for is to avoid the decrease. That's surely better. The vague way to do that is to include things that should be found (turn an FN to an TP) but don't then also include things that shouldn't (don't turn TN into FP).
What do you have control over? The classifier itself? Or the feature space (the data points themselves)? If the classifier, well, that's part of the algorithm design, so I'll assume the feature space itself.
To make an over simplified example, let's consider a search engine. Suppose you want to find web pages that involve a concept X. If you start off with X, you'll be missing webpages that mention synonyms of X. If you add a synonym Y to a search, you turn some False Negatives to True positives (you'll collect more that you were missing before).
But that new word Y may be ambiguous and its alternate meaning may include more things that you don't want (increasing False Positives, reducing precision). To prevent that, you'll want to exclude contexts where Y has the unintended alternate meaning. | A way to maintain classifier's recall while improving precision
(for problems like these, always have the two by two contingency table in mind; see wikipedia's Recall/Precision or Sensitivity/Specificity for details)
Recall (or sensitivity) is the same as P(test/c |
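The two formulas above can be written down directly; a tiny Python sketch of the contingency-table arithmetic (the helper names are mine), including the "say everything is positive" extreme described in the answer:

```python
def recall(tp, fn):
    # sensitivity: true positives over all items that are really true
    return tp / (tp + fn)

def precision(tp, fp):
    # positive predictive value: true positives over all predicted positives
    return tp / (tp + fp)

# "say everything is positive": no false negatives, so recall is perfect,
# but the many false positives drag precision down
print(recall(tp=30, fn=0))      # 1.0
print(precision(tp=30, fp=70))  # 0.3
```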
24,882 | A way to maintain classifier's recall while improving precision | One other alternative which is not stated here is to do oversampling and under-sampling of data points corresponding to different labels in your training dataset. In this way, you ensure that your model has balanced representation of training data. This is called class balancing and can improve recall considerably, while keeping precision fairly manageable. | A way to maintain classifier's recall while improving precision | One other alternative which is not stated here is to do oversampling and under-sampling of data points corresponding to different labels in your training dataset. In this way, you ensure that your mod
One other alternative which is not stated here is to do oversampling and under-sampling of data points corresponding to different labels in your training dataset. In this way, you ensure that your model has balanced representation of training data. This is called class balancing and can improve recall considerably, while keeping precision fairly manageable. | A way to maintain classifier's recall while improving precision
One other alternative which is not stated here is to do oversampling and under-sampling of data points corresponding to different labels in your training dataset. In this way, you ensure that your mod |
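A minimal sketch of the naive random-oversampling idea in Python (binary 0/1 labels assumed; the helper name and toy data are mine — in practice a library such as imbalanced-learn, or a classifier's class_weight option, does this more carefully):

```python
import random

def oversample_minority(X, y, seed=0):
    # duplicate randomly chosen minority-class rows until both classes
    # are equally represented in the training data
    rng = random.Random(seed)
    pos = [(x, t) for x, t in zip(X, y) if t == 1]
    neg = [(x, t) for x, t in zip(X, y) if t == 0]
    small, big = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    data = pos + neg + [rng.choice(small) for _ in range(len(big) - len(small))]
    rng.shuffle(data)
    return [x for x, _ in data], [t for _, t in data]

X = [[0], [1], [2], [3], [4], [5]]
y = [1, 0, 0, 0, 0, 0]
Xb, yb = oversample_minority(X, y)
print(sum(yb), len(yb) - sum(yb))  # 5 5: both classes now equally represented
```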
24,883 | Using Regression to project outside of the data range ok? never ok? sometimes ok? | Almost all answers and comments warn against the dangers of extrapolation. I would like to offer a more formal way of seeing whether prediction is prudent. The method is based on the projection matrix on the space spanned by the columns of $\mathbf{X}$ which we assume full rank, i.e. we assume the column space is p-dimensional. As you might remember,
$$\mathbf{H}=\mathbf{X}\left(\mathbf{X}^{T}\mathbf{X} \right)^{-1} \mathbf{X}$$
It can be shown that the diagonal elements of $\mathbf{H}$ satisfy $0<\mathbf{H}_{ii}<1,\ i=1,\ldots,n$, this is a consequence of idempotence by the way, and they can be interpreted as distances from the centroid of the predictor space. This is true because there is a one-to-one correspondence between the leverages $\mathbf{H}_{ii}$ and the squared Mahalanobis distances. A way to spot hidden extrapolations would then be to see how far the new obsevation lies from the centroid, right? This can be done by computing the new diagonal element. Recalling some basic rules of matrix multiplication, we have
$$\mathbf{H}_{new,new} = \mathbf{x}_{new}^{T} \left(\mathbf{X}^{T}\mathbf{X} \right)^{-1} \mathbf{x}_{new} $$
If $\mathbf{H}_{new,new}$ is much larger than the rest of the diagonal elements, then this tells you that your new observation lies quite far from the centroid and prediction is probably a risky move. It takes some judgement to decide how large is too large so of course the technique is not foolproof. Its beauty nevertheless is that it works in all dimensions, when you cannot look at a simple scatter plot that is.
I am not sure which software you are using but almost all of them will return the hat matrix with the right command. So I suggest you take a look before making up your mind. | Using Regression to project outside of the data range ok? never ok? sometimes ok? | Almost all answers and comments warn against the dangers of extrapolation. I would like to offer a more formal way of seeing whether prediction is prudent. The method is based on the projection matrix | Using Regression to project outside of the data range ok? never ok? sometimes ok?
Almost all answers and comments warn against the dangers of extrapolation. I would like to offer a more formal way of seeing whether prediction is prudent. The method is based on the projection matrix on the space spanned by the columns of $\mathbf{X}$ which we assume full rank, i.e. we assume the column space is p-dimensional. As you might remember,
$$\mathbf{H}=\mathbf{X}\left(\mathbf{X}^{T}\mathbf{X} \right)^{-1} \mathbf{X}^{T}$$
It can be shown that the diagonal elements of $\mathbf{H}$ satisfy $0<\mathbf{H}_{ii}<1,\ i=1,\ldots,n$, this is a consequence of idempotence by the way, and they can be interpreted as distances from the centroid of the predictor space. This is true because there is a one-to-one correspondence between the leverages $\mathbf{H}_{ii}$ and the squared Mahalanobis distances. A way to spot hidden extrapolations would then be to see how far the new observation lies from the centroid, right? This can be done by computing the new diagonal element. Recalling some basic rules of matrix multiplication, we have
$$\mathbf{H}_{new,new} = \mathbf{x}_{new}^{T} \left(\mathbf{X}^{T}\mathbf{X} \right)^{-1} \mathbf{x}_{new} $$
If $\mathbf{H}_{new,new}$ is much larger than the rest of the diagonal elements, then this tells you that your new observation lies quite far from the centroid and prediction is probably a risky move. It takes some judgement to decide how large is too large so of course the technique is not foolproof. Its beauty nevertheless is that it works in all dimensions, when you cannot look at a simple scatter plot that is.
I am not sure which software you are using but almost all of them will return the hat matrix with the right command. So I suggest you take a look before making up your mind. | Using Regression to project outside of the data range ok? never ok? sometimes ok?
Almost all answers and comments warn against the dangers of extrapolation. I would like to offer a more formal way of seeing whether prediction is prudent. The method is based on the projection matrix |
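The leverage check described above is easy to try with numpy; a sketch on simulated data (all variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=20)
X = np.column_stack([np.ones_like(x), x])   # intercept column plus one predictor

XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T                        # the hat matrix
leverages = np.diag(H)                       # each H_ii lies in (0, 1)

def new_leverage(x_new):
    # H_{new,new} = x_new' (X'X)^{-1} x_new for a candidate observation
    v = np.array([1.0, x_new])
    return v @ XtX_inv @ v

# a point inside the observed range vs. one far outside it
print(new_leverage(5.0), new_leverage(100.0), leverages.max())
```

If `new_leverage` for a candidate point is much larger than `leverages.max()`, the prediction is a hidden extrapolation in the sense of the answer.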
24,884 | Using Regression to project outside of the data range ok? never ok? sometimes ok? | The prediction error increases quadratically with the distance from the mean. The regression equation and results allow you to gauge the size of the error over the observed range of data, and the model is only adequate over that same range.
Outside of that range a lot of things can happen. First, the prediction gets worse and worse due to the increase of the prediction error.
Second, the model may break down completely. The easiest way to see that is to try to project a model relating price to time: You can't make predictions for negative time.
Third, the linear relationship may be inadequate. In your example, there almost certainly are economies of scale, which would become very noticeable if you try to predict far outside of the range of observed values.
A humorous example of this same effect appears in one of the works of Mark Twain, where he attempts to model the length of the Mississippi river over time --- it is/was quite windy and shortened each year due to erosion of some of the bends as well as man-made shortcuts --- and "predicts" that in so many years the distance between Cairo, Illinois, and New Orleans will have shrunk to about a mile and three quarters.
Finally, note that the range of observed values can be quite complicated if you have more than one predictor variable. (Due to correlations between the predictors you often cannot just take the box defined by the maxima and minima in each predictor.) | Using Regression to project outside of the data range ok? never ok? sometimes ok? | The prediction error increases quadratically with the distance from the mean. The regression equation and results allow you to gauge the size of the error over the observed range of data, and the mode | Using Regression to project outside of the data range ok? never ok? sometimes ok?
The prediction error increases quadratically with the distance from the mean. The regression equation and results allow you to gauge the size of the error over the observed range of data, and the model is only adequate over that same range.
Outside of that range a lot of things can happen. First, the prediction gets worse and worse due to the increase of the prediction error.
Second, the model may break down completely. The easiest way to see that is to try to project a model relating price to time: You can't make predictions for negative time.
Third, the linear relationship may be inadequate. In your example, there almost certainly are economies of scale, which would become very noticeable if you try to predict far outside of the range of observed values.
A humorous example of this same effect appears in one of the works of Mark Twain, where he attempts to model the length of the Mississippi river over time --- it is/was quite windy and shortened each year due to erosion of some of the bends as well as man-made shortcuts --- and "predicts" that in so many years the distance between Cairo, Illinois, and New Orleans will have shrunk to about a mile and three quarters.
Finally, note that the range of observed values can be quite complicated if you have more than one predictor variable. (Due to correlations between the predictors you often cannot just take the box defined by the maxima and minima in each predictor.) | Using Regression to project outside of the data range ok? never ok? sometimes ok?
The prediction error increases quadratically with the distance from the mean. The regression equation and results allow you to gauge the size of the error over the observed range of data, and the mode |
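The growth of the error with distance can be made explicit for simple regression: the variance of the estimated mean response at $x_0$ is $s^2(1/n + (x_0-\bar x)^2/S_{xx})$, which grows quadratically in the distance of $x_0$ from the mean of the observed $x$. A numpy sketch on simulated data (all names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=50)

# ordinary least squares by hand
Sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
s2 = np.sum(resid ** 2) / (len(x) - 2)      # residual variance estimate

def se_mean_prediction(x0):
    # standard error of the fitted mean response at x0; the second term
    # under the root grows quadratically with the distance of x0 from mean(x)
    return np.sqrt(s2 * (1.0 / len(x) + (x0 - x.mean()) ** 2 / Sxx))

print(se_mean_prediction(x.mean()), se_mean_prediction(50.0))
```

The second printed value is far larger: projecting to x = 50 when the data stop at 10 comes with a much wider error band, quite apart from the risk that the linear form itself no longer holds.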
24,885 | Using Regression to project outside of the data range ok? never ok? sometimes ok? | You cannot make data driven decisions for areas where you don't have data. End of story. The data can very well support a linear shape for the range of which your data is collected but you do not have data-driven reasons to believe this shape continues to be linear outside your range. It could be any shape under the sun!
You may assume the linear shape continues outside your data range but this is a subjective assumption not supported by the data you've collected. I would suggest consulting a subject matter expert to see, based on their subject matter expertise, how safe this assumption is. | Using Regression to project outside of the data range ok? never ok? sometimes ok? | You cannot make data driven decisions for areas where you don't have data. End of story. The data can very well support a linear shape for the range over which your data is collected but you do not have
You cannot make data driven decisions for areas where you don't have data. End of story. The data can very well support a linear shape for the range over which your data is collected but you do not have data-driven reasons to believe this shape continues to be linear outside your range. It could be any shape under the sun!
You may assume the linear shape continues outside your data range but this is a subjective assumption not supported by the data you've collected. I would suggest consulting a subject matter expert to see, based on their subject matter expertise, how safe this assumption is. | Using Regression to project outside of the data range ok? never ok? sometimes ok?
You cannot make data driven decisions for areas where you don't have data. End of story. The data can very well support a linear shape for the range of which your data is collected but you do not have |
24,886 | Similarities and differences between correlation and regression [duplicate] | OLS regression tells you more than the (linear) correlation coefficient. Also, the latter is one of the things you get from the former. Here's what you get with OLS:
A characterization of a linear trend describing how Y relates to X. This trend includes:
1a. The slope (aka beta, effect, coefficient, etc. depending on discipline) of that line, which tells you how much you estimate Y will change given a 1-unit increase in X.
1b. The Y-intercept, which may or may not be of interest, depending on the substantive nature of one's research questions.
A characterization of the strength of association... that is, does the line $Y = \beta_{0} + \beta_{X}X$ describe the data really well, or does it only kinda describe the data. In the former case, most of the observed data points lie on or close to the regression line; in the latter case the data points may lie quite a ways off the line. Usually, this is reported as $R^{2}$, which is the same thing as Pearson's $r^{2}$
One gets predictions of the value of Y given a value of X complete with an estimate of the uncertainty of that prediction.
Pearson's correlation coefficient gives one (2), but gives only the sign of the slope in (1a), and does not give intercepts (1b), or predictions (3). | Similarities and differences between correlation and regression [duplicate] | OLS regression tells you more than the (linear) correlation coefficient. Also, the latter is one of the things you get from the former. Here's what you get with OLS:
A characterization of a linear tr | Similarities and differences between correlation and regression [duplicate]
OLS regression tells you more than the (linear) correlation coefficient. Also, the latter is one of the things you get from the former. Here's what you get with OLS:
A characterization of a linear trend describing how Y relates to X. This trend includes:
1a. The slope (aka beta, effect, coefficient, etc. depending on discipline) of that line, which tells you how much you estimate Y will change given a 1-unit increase in X.
1b. The Y-intercept, which may or may not be of interest, depending on the substantive nature of one's research questions.
A characterization of the strength of association... that is, does the line $Y = \beta_{0} + \beta_{X}X$ describe the data really well, or does it only kinda describe the data. In the former case, most of the observed data points lie on or close to the regression line; in the latter case the data points may lie quite a ways off the line. Usually, this is reported as $R^{2}$, which is the same thing as Pearson's $r^{2}$
One gets predictions of the value of Y given a value of X complete with an estimate of the uncertainty of that prediction.
Pearson's correlation coefficient gives one (2), but gives only the sign of the slope in (1a), and does not give intercepts (1b), or predictions (3). | Similarities and differences between correlation and regression [duplicate]
OLS regression tells you more than the (linear) correlation coefficient. Also, the latter is one of the things you get from the former. Here's what you get with OLS:
A characterization of a linear tr |
24,887 | Similarities and differences between correlation and regression [duplicate] | To focus on just one aspect of the question (@Alexis' answer analyzes well the greater picture), the sample correlation coefficient between $Y$ and $X$ is
$$r = \frac { \operatorname{\hat Cov}(Y,X)}{\hat \sigma_y\hat \sigma_x}$$
while in a simple regression $Y = \beta_0 + \beta_1X+ u$, the OLS estimator for the slope coefficient is
$$\hat \beta_1 = \frac { \operatorname{\hat Cov}(Y,X)}{\hat \sigma_x^2}$$
Combining, we have the relation
$$\hat \beta_1 = \frac {\hat \sigma_y}{\hat \sigma_x}r$$
Pondering this last one, I believe it will provide useful intuition. | Similarities and differences between correlation and regression [duplicate] | To focus on just one aspect of the question (@Alexis' answer analyzes well the greater picture), the sample correlation coefficient between $Y$ and $X$ is
$$r = \frac { \operatorname{\hat Cov}(Y,X)}{\ | Similarities and differences between correlation and regression [duplicate]
To focus on just one aspect of the question (@Alexis' answer analyzes well the greater picture), the sample correlation coefficient between $Y$ and $X$ is
$$r = \frac { \operatorname{\hat Cov}(Y,X)}{\hat \sigma_y\hat \sigma_x}$$
while in a simple regression $Y = \beta_0 + \beta_1X+ u$, the OLS estimator for the slope coefficient is
$$\hat \beta_1 = \frac { \operatorname{\hat Cov}(Y,X)}{\hat \sigma_x^2}$$
Combining, we have the relation
$$\hat \beta_1 = \frac {\hat \sigma_y}{\hat \sigma_x}r$$
Pondering this last one, I believe it will provide useful intuition. | Similarities and differences between correlation and regression [duplicate]
To focus on just one aspect of the question (@Alexis' answer analyzes well the greater picture), the sample correlation coefficient between $Y$ and $X$ is
$$r = \frac { \operatorname{\hat Cov}(Y,X)}{\ |
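The relation $\hat\beta_1 = (\hat\sigma_y/\hat\sigma_x)\, r$ is easy to verify numerically; a numpy sketch (the data are simulated, names mine):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 2.0, 100)
y = -1.0 + 0.5 * x + rng.normal(0.0, 1.0, 100)

r = np.corrcoef(x, y)[0, 1]                               # sample correlation
beta1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)    # OLS slope
sy, sx = np.std(y, ddof=1), np.std(x, ddof=1)

print(beta1, (sy / sx) * r)  # the two agree: beta1 = (sy/sx) * r
```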
24,888 | Similarities and differences between correlation and regression [duplicate] | If I want to investigate how two continuous variables are linked, what is the difference between calculating the correlation coefficient (Pearson's r) versus calculating the (simple linear) regression coefficient?
The regression line is $E(Y|X=x)$. Correlation is a quite different object.
A regression slope is in units of Y/units of X, while a correlation is unitless.
I see people who, if the regression coefficient is significantly different from zero, talk about the two variables as if they are correlated, which is confusing as it suggests that the two coefficients (correlation, regression) are the same thing.
No, only that they are related, which they are. (Their p-values are effectively the same)
Having said that, isn't r a measure of the (regression line) slope anyway?
Not of slope, no, as mentioned above. If I change from measuring X in meters to measuring in mm, my slope changes by a factor of a thousand, but my correlation doesn't change at all. But they're related.
If I want to investigate how two continuous variables are linked, what is the difference between calculating the correlation coefficient (Pearson's r) versus calculating the (simple linear) regression coefficient?
The regression line is $E(Y|X=x)$. Correlation is a quite different object.
A regression slope is in units of Y/units of X, while a correlation is unitless.
I see people who, if the regression coefficient is significantly different from zero, talk about the two variables as if they are correlated, which is confusing as it suggests that the two coefficients (correlation, regression) are the same thing.
No, only that they are related, which they are. (Their p-values are effectively the same)
Having said that, isn't r a measure of the (regression line) slope anyway?
Not of slope, no, as mentioned above. If I change from measuring in meters to measuring in mm, my slope changes by a factor of a million, but my correlation doesn't change at all. But they're related. | Similarities and differences between correlation and regression [duplicate]
If I want to investigate how two continuous variables are linked, what is the difference between calculating the correlation coefficient (Pearson's r) versus calculating the (simple linear) regression |
24,889 | Similarities and differences between correlation and regression [duplicate] | On the intuitive side, I have been thinking about the following.
The Pearson correlation is a 2-dimensional linear approximation, while the linear regression is n-dimensional linear approximation. Therefore, the latter offers an estimate of the correlation that accounts for a lot of other features that might in/deflate the estimate obtained with the Pearson correlation.
See this example1, for the Pearson correlation. Consider a map without info on altitude on it and suppose you can move on it linearly (presence of rivers or cliffs do not matter). You know the time you left point A and reached B, then you compute the speed.
See this example2, for the linear regression. If instead you move on a map with info on altitude and you have to accounts for all a lot of other info on the ground you are facing (i.e., rivers or cliffs), but still the time you left point A and reached B is as in example 1, the value of the speed you will get will be different (very likely it will be higher).
Although the linear regression offers only an approximation of the average speed, it is still better than the initial approximation you got with the Pearson correlation.
Do some of you find something wrong in this example? (your answers will be very useful as I normally use this example in class)
In any case, I hope this example helped to understand the difference between the two techniques.
24,890 | Understanding this acf output | Let $x = (x_1, x_2, \ldots, x_n)$ be the series. Set
$$y_t = x_t - \bar{x}.$$
These are the residuals with respect to the estimated mean $\bar{x} = \frac{1}{n}\sum_{t=1}^n x_t$ of the series.
For $k=0, 1, 2, \ldots, n-1$ the acf function is computing
$$\text{acf}(x)_k = \frac{\sum_{t=1}^{n-k} y_t y_{t+k}}{\sum_{t=1}^n y_t^2}.$$
Notice that as the lag $k$ grows, there are fewer and fewer terms in the numerator as well as a shift of the indexes in the product. The reduction in number of terms in the numerator essentially forces a decrease in the value as $k$ increases. Most time series analyses consider only lags $k$ much smaller than $n$ for which this effect is negligible.
In your example where $x = (0, 1, 2, 3, 4, 5)$, $y = (-5/2, -3/2, -1/2, 1/2, 3/2, 5/2)$ initially has negative values and then moves into positive territory. For lags $k \ge 3$, the products $y_ty_{t+k}$ are pairing the early negative values with the later positive values, producing negative numbers.
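The formula above is easy to evaluate directly. Here is a small self-contained Python sketch (not the R implementation, just the definition as written) that reproduces the sign pattern for $x = (0, 1, \ldots, 5)$:

```python
def acf0(x):
    """acf_k = sum_t y_t * y_{t+k} / sum_t y_t^2, with y_t = x_t - mean(x)."""
    n = len(x)
    xbar = sum(x) / n
    y = [v - xbar for v in x]
    denom = sum(v * v for v in y)
    return [sum(y[t] * y[t + k] for t in range(n - k)) / denom
            for k in range(n)]

r = acf0([0, 1, 2, 3, 4, 5])
# r[0] is 1 by construction; lags 3, 4, 5 pair early negative residuals
# with late positive ones and therefore come out negative
```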
Edit: Intuitive Explanation
Intuitively, $\text{acf}(x)_k$ is supposed to be telling us the correlation between a series and its lag-$k$ version. The motivation for the question is that a series like $(0, 1, \ldots, n-1)$ is perfectly correlated with all its lags for $k=0$ right through $k=n-2$. How, then, can the ACF plot produce near zero and even negative values?
There are two factors in play here. They can be seen by comparing the ACF formula to that of the usual correlation coefficient. For two series $(u_t)$ and $(w_t)$ of the same length $n-k$, let $\upsilon_t = u_t - \bar{u}$ and $\omega_t = w_t - \bar{w}$ be their residuals. (In the ensuing discussion, $(u_t)$ will be the prefix $(x_1, x_2, \ldots, x_{n-k})$ and $(w_t)$ will be the suffix $(x_{k+1}, x_{k+2}, \ldots, x_n)$.) By definition, their correlation coefficient is the average product of standardized residuals,
$$\rho(u, w) = \frac{\sum_{t=1}^{n-k} \upsilon_t \omega_t}{\sqrt{\sum_{t=1}^{n-k} \upsilon_t^2 \sum_{t=1}^{n-k} \omega_t^2}}.$$
(The constants $\frac{1}{n-k}$ that usually appear in formulas for averages cancel in this ratio, so I have omitted them.)
When we are dealing with a single series $(x_t)$ of length $n$ and its (short) lags $k$, both $\upsilon_t$ and $\omega_t$ are essentially the same, apart from the shift of $k$ in their indexes: the first consists of the $(y_t)$ for $t$ from $1$ through $n-k$ (the high-$t$ end has been trimmed off) while the second consists of the same $(y_t)$ for $t$ from $k+1$ through $n$ (the low-$t$ end has been removed). If we ignore these slight differences, the denominator of $\rho(u, w)$ simplifies to
$$\sqrt{\sum_{t=1}^{n-k} \upsilon_t^2 \sum_{t=1}^{n-k} \omega_t^2} = \sqrt{\sum_{t=1}^{n-k} y_t^2 \sum_{t=1}^{n-k} y_{t+k}^2} \approx \sqrt{\sum_{t=1}^{n} y_t^2 \sum_{t=1}^n y_{t}^2} = \sqrt{\left(\sum_{t=1}^{n} y_t^2\right)^2 } = \sum_{t=1}^{n} y_t^2.$$
In making this approximation I have inserted the first $k$ terms $y_1^2 + \cdots + y_k^2$ into the sum for the suffix ($\omega_t$) and the last $k$ terms $y_{n-k+1}^2 + \cdots + y_{n}^2$ into the sum for the prefix ($\upsilon_t$). Because these are both sums of squares, they cannot decrease the denominator, and usually increase it a little bit. Accordingly, we see that using $\sum_{t=1}^n y_t^2$ in the denominator decreases the apparent correlation $\rho(u, w)$. The greater the lag $k$, the more the denominator will tend to increase, so this factor tends to reduce the high-lag values of the ACF no matter what.
The second factor has to do with the difference between the mean of the entire series $\bar{x}$ and the means of the prefix $\bar{\upsilon} = \frac{1}{n-k}\sum_{t=1}^{n-k} y_t$ and suffix $\bar{\omega} = \frac{1}{n-k}\sum_{t=k+1}^n y_t$. The ACF formula uses the former whereas the correlation coefficient formula uses the latter. We can work out the change in the numerator by comparing the ACF and correlation coefficient formulas, working algebraically to make the ACF numerator look like the $\rho$ numerator:
$$\eqalign{
\sum_{t=1}^{n-k} y_t y_{t+k} &= &\sum_{t=1}^{n-k} (x_t-\bar{x})(x_{t+k}-\bar{x}) \\
&= &\sum_{t=1}^{n-k} (x_t-\bar u + \bar u - \bar{x})(x_{t+k}-\bar w + \bar w - \bar{x}) \\
&= &\sum_{t=1}^{n-k} \left((x_t-\bar u)(x_{t+k}-\bar w) + (\bar u - \bar{x})(\bar w - \bar{x})\right) \\
&= &\left(\sum_{t=1}^{n-k} \upsilon_t \omega_t\right) + (n-k)(\bar u - \bar{x})(\bar w - \bar{x}).
}$$
(The cross terms disappeared after the second line for the usual reason: they sum to zero.)
Comparing to the formula for $\rho$, we see that the discrepancy in numerators depends on the lag (in terms of $n-k$) and the products of the changes in the means, $\bar u - \bar{x}$ and $\bar w - \bar{x}$. For a stationary series and large $k$ those changes ought to be small; for small $k$ we hope they will be small but perhaps not. In the example, for instance, at lag $k=1$ the mean after dropping off the last term decreases by $1/2$ and the mean after dropping off the first term similarly increases by $1/2$. The product
$$(n-k)(\bar u - \bar{x})(\bar w - \bar{x}) = (6-1)(-1/2)(1/2) = -5/4$$
decreases the numerator in the ACF compared to the numerator in $\rho$.
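The decomposition above can be verified numerically for the example ($n = 6$, $k = 1$); a short Python sketch of the identity:

```python
x = [0, 1, 2, 3, 4, 5]
n, k = len(x), 1
xbar = sum(x) / n
y = [v - xbar for v in x]

u, w = x[:n - k], x[k:]                      # prefix and suffix
ubar, wbar = sum(u) / (n - k), sum(w) / (n - k)

acf_num = sum(y[t] * y[t + k] for t in range(n - k))
rho_num = sum((u[t] - ubar) * (w[t] - wbar) for t in range(n - k))
shift = (n - k) * (ubar - xbar) * (wbar - xbar)
# acf_num == rho_num + shift, with shift == -5/4 exactly as in the text
```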
The net effect of these two factors in the example is that both conspire to decrease the apparent correlation: the denominator goes up, because it includes a few more positive terms overall, and the numerator goes down, because one end of the series tends to be less than the average and the other end tends to be greater than the average. (That's more or less what a "long term trend" means, suggesting there is some evidence of non stationarity in this series.)
To illustrate the formula for the ACF, here is direct (but less efficient) R code to compute acf:
acf.0 <- function(x) {
n <- length(x)
y <- x - mean(x)
sapply(1:n - 1, function(k) sum( y[1:(n-k)] * y[1:(n-k) + k] )) / sum(y * y)
}
As a test, compare the two results:
> sum((acf.0(0:5) - acf(0:5, plot=FALSE)$acf)^2)
[1] 6.162976e-33
The answers agree to within double precision floating point roundoff error.
24,891 | Understanding this acf output | I am gonna show it for the case when the time series is a special form of $Y_t=0,1,2, ...,n.$ First assume $n$ is odd. So the $\bar{Y}=n/2$ that will happen at the time $(n+1)/2$. The formula for the sample acf at lag $k$ is $r_k=\dfrac{\sum_{t=k+1}^n(Y_{t}-\bar{Y})(Y_{t-k}-\bar{Y})}{\sum_{t=1}^n(Y_{t}-\bar{Y})^2}$.
Now for $k\geq \bar{Y}$, the time index for $(Y_{t}-\bar{Y})$ in the numerator of $r_k$ goes from $\{\dfrac{n+1}{2}+1,...,n\}.$ Therefore, $(Y_{t}-\bar{Y})\geq 0$ for $t\in \{\dfrac{n+1}{2}+1,...,n\}$. On the other hand, the time index for the 2nd term in the numerator of $r_k$, i.e. $(Y_{t-k}-\bar{Y})$, goes from $\{1,2,...,n-k\}$. Note that since we assumed $k\geq \bar{Y}$, then $k\geq (n+1)/2$. So $n-k\leq (n-1)/2\leq (n+1)/2.$ Therefore, $(n-k)\leq \bar{Y}$. So $(Y_{t-k}-\bar{Y})\leq 0$ for $t \in \{1,2,...,n-k\}$. Hence the products $(Y_{t}-\bar{Y})(Y_{t-k}-\bar{Y})$ in the numerator of $r_k$ are all nonpositive for $t\in \{k+1,...,n\}$, and a sum of nonpositive numbers is nonpositive. So $r_k\leq 0$. (We don't care about the denominator because it is always positive.) The same argument can be applied for the case when $n$ is even.
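The argument can be checked numerically. A small Python sketch (plain standard library, not R) that computes $r_k$ from the formula above and confirms $r_k \leq 0$ for every $k \geq \bar{Y}$ on series of the form $0, 1, \ldots, n$:

```python
import math

def sample_acf(series):
    """r_k = sum_t (Y_t - Ybar)(Y_{t+k} - Ybar) / sum_t (Y_t - Ybar)^2."""
    m = len(series)
    ybar = sum(series) / m
    d = [v - ybar for v in series]
    denom = sum(v * v for v in d)
    return [sum(d[t] * d[t + k] for t in range(m - k)) / denom
            for k in range(m)]

for n in range(2, 12):
    r = sample_acf(list(range(n + 1)))      # Y_t = 0, 1, ..., n
    ybar = n / 2
    # every lag at or beyond the mean gives a nonpositive autocorrelation
    assert all(r[k] <= 1e-12 for k in range(math.ceil(ybar), n + 1))

r10 = sample_acf(list(range(11)))           # the y = 0:10 example below
```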
In your example, we have $Y=0,1,2,...,5$, so $\bar{Y}=2.5.$ Based on the above argument, when $k\geq 2.5$ (i.e., $k\geq 3$) we will have $r_k\leq 0$. Here is another example:
> y=0:10
> mean(y)
[1] 5
>
> acf(y,plot=FALSE)
Autocorrelations of series ‘y’, by lag
0 1 2 3 4 5 6 7 8 9 10
1.000 0.727 0.464 0.218 0.000 -0.182 -0.318 -0.400 -0.418 -0.364 -0.227
24,892 | How can I compute Pearson's $\chi^2$ test statistic for lack of fit on a logistic regression model in R? | The sum of the squared Pearson residuals is exactly equal to the Pearson $\chi^2$ test statistic for lack of fit. So if your fitted model (i.e., the glm object) is called logistic.fit, the following code would return the test statistic:
sum(residuals(logistic.fit, type = "pearson")^2)
See the documentation on residuals.glm for more information, including what other residuals are available. For example, the code
sum(residuals(logistic.fit, type = "deviance")^2)
will get you the $G^2$ test statistic, just the same as deviance(logistic.fit) provides.
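For ungrouped binary data, the quantity that the sum of squared Pearson residuals computes is simply $\sum_i (y_i - \hat p_i)^2 / (\hat p_i(1-\hat p_i))$. A language-neutral Python sketch, with made-up outcomes and fitted probabilities (hypothetical values, not from any model above):

```python
def pearson_chi2(y, p):
    """Sum of squared Pearson residuals for binary outcomes y and
    fitted probabilities p: sum_i (y_i - p_i)^2 / (p_i * (1 - p_i))."""
    return sum((yi - pi) ** 2 / (pi * (1 - pi)) for yi, pi in zip(y, p))

# hypothetical outcomes and fitted probabilities (made up for illustration)
y = [1, 0, 1, 1, 0]
p = [0.8, 0.3, 0.6, 0.9, 0.2]
x2 = pearson_chi2(y, p)
```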
24,893 | How can I compute Pearson's $\chi^2$ test statistic for lack of fit on a logistic regression model in R? | The Pearson statistic has a degenerate distribution so is not recommended in general for logistic model goodness-of-fit. I prefer structured tests (linearity, additivity). If you want an omnibus test see the single degree of freedom le Cessie - van Houwelingen - Copas - Hosmer unweighted sum of squares test as implemented in the R rms package residuals.lrm function.
24,894 | How can I compute Pearson's $\chi^2$ test statistic for lack of fit on a logistic regression model in R? | Thanks, I didn't realize it was as simple as:
sum(residuals(f1, type="pearson")^2)
However, please note that Pearson's residual varies depending on whether it is calculated by covariate group or by individual. A simple example:
m1 is a matrix (this one is the head of a larger matrix):
m1[1:4,1:8]
x1 x2 x3 obs pi lev yhat y
obs 1 1 44 5 0.359 0.131 1.795 2
obs 0 1 43 27 0.176 0.053 4.752 4
obs 0 1 53 15 0.219 0.062 3.285 1
obs 0 1 33 22 0.140 0.069 3.080 3
Where x1-3 are predictors, obs is no. observations in each group, pi is probability of group membership (predicted from regression equation), lev is leverage, the diagonal of the hat matrix, yhat the predicted no. (of y=1) in the group and y the actual no.
This will give you Pearson's by group. Note how it's different if y==0:
fun1 <- function(j){
  if (m1[j,"y"] == 0){ # y = 0 for this covariate pattern
    Pr1 <- sqrt( m1[j,"pi"] / (1 - m1[j,"pi"]) )
    Pr2 <- -sqrt( m1[j,"obs"] )
    res <- round( Pr1 * Pr2, 3)
    return(res)
  } else {
    Pr1 <- m1[j,"y"] - m1[j,"yhat"]
    Pr2 <- sqrt( m1[j,"yhat"] * ( 1 - m1[j,"pi"] ) )
    res <- round( Pr1/Pr2, 3)
    return(res)
  }
}
Thus
nr <- nrow(m1)
Pr <- sapply(seq_len(nr), FUN=fun1)
PrSj <- sum(Pr^2) # sum of squared Pearson residuals
If there are large numbers of subjects with y=0 covariate patterns, then the Pearson residual will be much larger when calculated using the 'by group' rather than the 'by individual' method.
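The by-group formula used in fun1 above can be written compactly; here is a Python sketch of the same computation (the group counts and probabilities below are made up for illustration):

```python
def group_pearson_residual(y, m, pi):
    """Pearson residual for a covariate group: y successes out of m trials
    with fitted probability pi, i.e. (y - m*pi) / sqrt(m * pi * (1 - pi))."""
    yhat = m * pi
    return (y - yhat) / (yhat * (1 - pi)) ** 0.5

# hypothetical group: 0 successes out of 10 trials with fitted pi = 0.2;
# this reduces to -sqrt(m * pi / (1 - pi)), the y == 0 branch of fun1
r0 = group_pearson_residual(0, 10, 0.2)
r1 = group_pearson_residual(2, 5, 0.359)   # a group like the first row above
```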
See e.g. Hosmer & Lemeshow, "Applied Logistic Regression", Wiley, 2000.
24,895 | How can I compute Pearson's $\chi^2$ test statistic for lack of fit on a logistic regression model in R? | You can also use c_hat(mod) that will give the same output as sum(residuals(mod, type = "pearson")^2).
24,896 | Confidence interval for a proportion when sample proportion is almost 1 or 0 | Use a Clopper-Pearson interval?
Wikipedia discribes how to do this here:
http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval
For example, if you take your 39-successes-in-40-trials example you get:
> qbeta(.025,39,2) #qbeta(alpha/2,x,n-x+1) x=num of successes and n=num of trials
[1] 0.8684141
> qbeta(1-.025,40,1) # upper limit: qbeta(1-alpha/2, x+1, n-x)
[1] 0.9993673
For your 40 out of 40 the upper limit is exactly 1 (every trial succeeded), and the lower limit is:
> qbeta(.025,40,1)
[1] 0.9119027
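As a sanity check on the all-successes case: when $x = n$, the Beta$(n, 1)$ distribution has CDF $p^n$, so the Clopper-Pearson lower limit has the closed form $(\alpha/2)^{1/n}$ and the upper limit is 1. A tiny Python sketch:

```python
# when x = n (all successes) the Beta(n, 1) CDF is p**n, so the
# Clopper-Pearson lower limit is (alpha/2)**(1/n) and the upper limit is 1
alpha, n = 0.05, 40
lower = (alpha / 2) ** (1 / n)   # the same number as qbeta(.025, 40, 1)
upper = 1.0
```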
24,897 | Confidence interval for a proportion when sample proportion is almost 1 or 0 | There are many confidence intervals for single proportions and most of them have poor performance for $p$ close to 0 or 1. The "exact" Clopper-Pearson interval mentioned above is very conservative in that setting, meaning that the actual coverage of the interval can be quite a bit larger than the nominal $1-\alpha$.
An interval that has pretty good performance for $p$ close to 0 or 1 is actually the Bayesian credible interval using the Jeffreys prior. See e.g. this paper by Brown, Cai and DasGupta (2002). It is simple to compute in R:
qbeta(c(alpha/2,1-alpha/2),x+0.5,n-x+0.5)
Nevermind that it is Bayesian by nature - it has been shown over and over again to have good frequentist performance!
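If you want to see the Jeffreys interval without R, here is a self-contained Python sketch that inverts the Beta$(x+\tfrac12,\, n-x+\tfrac12)$ CDF by crude numerical integration and bisection (a toy implementation for illustration, assuming both shape parameters are at least 1; not production code):

```python
import math

def beta_cdf(x, a, b, steps=4000):
    """Regularized incomplete beta I_x(a, b) by trapezoidal integration
    of t^(a-1) (1-t)^(b-1); crude, but adequate here since a, b >= 1."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    h = x / steps
    def f(t):
        return t ** (a - 1.0) * (1.0 - t) ** (b - 1.0)
    area = h * (0.5 * (f(0.0) + f(x)) + sum(f(i * h) for i in range(1, steps)))
    norm = math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))
    return area / norm

def beta_quantile(q, a, b):
    """Invert beta_cdf by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if beta_cdf(mid, a, b) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Jeffreys 95% interval for x = 39 successes in n = 40 trials
x, n, alpha = 39, 40, 0.05
lo = beta_quantile(alpha / 2, x + 0.5, n - x + 0.5)
hi = beta_quantile(1 - alpha / 2, x + 0.5, n - x + 0.5)
```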
(Although the Bayesian Jeffreys interval usually is recommended in this setting, it is possible to construct intervals that simultaneously give higher confidence and lower expected length for small $p$; see a recent manuscript of mine.)
24,898 | Confidence interval for a proportion when sample proportion is almost 1 or 0 | Why not just do this in a Bayesian way?
That is, set up a beta-distributed prior, and choose some interval whose integral is as big as you want it (working out from the mode, for example).
24,899 | Confidence interval for a proportion when sample proportion is almost 1 or 0 | Clopper-Pearson is an exact binomial method and can be used to get confidence intervals for p even when the number of successes is 0 out of N or N out of N. In the first case it will give an interval from 0 to A, and in the latter an interval from B to 1, where A and B depend on N and alpha.
24,900 | Predicting cluster of a new object with kmeans in R [closed] | One of your options is to use cl_predict from the clue package (note: I found this through googling "kmeans R predict").
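For intuition, prediction for k-means amounts to assigning each new observation to the nearest fitted centroid. A minimal Python sketch of that rule (this mimics the idea, not the clue package's API):

```python
def predict_cluster(point, centers):
    """Assign a new observation to the nearest centroid by squared
    Euclidean distance, which is the k-means prediction rule."""
    def sqdist(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c))
    return min(range(len(centers)), key=lambda j: sqdist(point, centers[j]))

centers = [(0.0, 0.0), (5.0, 5.0)]    # e.g. the fitted cluster centers
new_point = (4.2, 4.8)
label = predict_cluster(new_point, centers)   # 0-based; R would report 2
```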