33,601 | Why does a zero-intercept linear regression model predict better than a model with an intercept?

This would be understandable if the intercept you obtained was merely noise -- not significantly different from zero. (Am I right that the standardized regression coefficients were nearly the same in both models?) If so, I don't think you should generalize from this example. When intercepts are significant and substantial, they add something meaningful to predictive accuracy.
33,602 | Why does a zero-intercept linear regression model predict better than a model with an intercept?

In linear regression, you are fitting:

$y = f(\beta, X) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots$

You fit $\beta$ given training data $(X, Y)$.

Suppose you drop $\beta_0$ and fit the model. Will the error in the fit,

$\sum_i (y_i - f(\beta, X_i))^2,$

be larger than if you included it? In all (non-degenerate) cases you can prove the error will be the same or lower (on the training data) when you include $\beta_0$, since the model is free to use this parameter to reduce the error if it is present and helps, and will set it to zero if it doesn't help. Further, suppose you added a large constant to $y$ (say your output needed to be $10000$ larger than in your original training data) and refit the model: $\beta_0$ clearly becomes very important.

Perhaps you're referring to regularized models when you say "suppressed". L1 and L2 regularization prefer to keep coefficients close to zero (and you should have already mean- and variance-normalized your $X$ beforehand to make this step a sensible one). In regularization, you then have a choice whether to include the intercept term (should we prefer also to have a small $\beta_0$?). Again, in most cases (all cases?), you're better off not regularizing $\beta_0$, since it's unlikely to reduce overfitting and it shrinks the space of representable functions (by excluding those with high $\beta_0$), leading to higher error.

Side note: scikit-learn's logistic regression regularizes the intercept by default (http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html). Anyone know why? I don't think it's a good idea.
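The claim that including $\beta_0$ can only lower the training error is easy to check numerically. A small sketch (the data and variable names here are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 5.0 + 2.0 * x + rng.normal(size=n)  # true intercept is 5

X_no = x.reshape(-1, 1)                   # design matrix without an intercept
X_yes = np.column_stack([np.ones(n), x])  # design matrix with an intercept column

# np.linalg.lstsq returns the least-squares coefficients for each design
b_no, *_ = np.linalg.lstsq(X_no, y, rcond=None)
b_yes, *_ = np.linalg.lstsq(X_yes, y, rcond=None)

sse_no = np.sum((y - X_no @ b_no) ** 2)
sse_yes = np.sum((y - X_yes @ b_yes) ** 2)

# The richer model can always set beta_0 = 0 to recover the restricted one,
# so its training error is never larger:
assert sse_yes <= sse_no
```

When the true intercept is far from zero, as here, the gap between the two training errors is large, matching the "$+10000$" thought experiment above.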
33,603 | How to diagonalize a large sparse symmetric matrix, to get the eigenvalues and eigenvectors?

There is a survey of decomposition algorithms in the first table of this NIPS paper. It lists modern algorithms (with links to known implementations), including the stochastic decomposition of Halko et al., arguably the state-of-the-art method today.

You ask for convenient programming packages but don't state your platform or language of choice. Assuming it's:

Python:
- use scipy for in-core decompositions (input must fit in RAM)
- use gensim for both in-core and out-of-core sparse decompositions (also supports incremental decomposition updates)

Java:
- Mahout has several scalable decomposition algorithms
- LingPipe (in-core) supports missing input values

C++:
- redsvd (in-core): a very clean, elegant, and efficient implementation

MATLAB:
- pca.m by Mark Tygert, one of the co-authors of the stochastic method

Your problem isn't particularly big, though, so I guess any existing package (using the iterative Lanczos algorithm) will do fine; eigendecompositions have been around for a while.
33,604 | How to diagonalize a large sparse symmetric matrix, to get the eigenvalues and eigenvectors?

Take a look at A Survey of Software for Sparse Eigenvalue Problems by Hernández et al.
33,605 | How to diagonalize a large sparse symmetric matrix, to get the eigenvalues and eigenvectors?

I don't know much about eigenvalues or what they are applicable to, but R seems to have a built-in function for this purpose named eigen(). Calculating the eigenvalues and eigenvectors for a 2500 x 2500 matrix took about a minute on my machine.

> sampData <- runif(6250000, 0, 2)
> x <- matrix(sampData, ncol = 2500, byrow = TRUE)
> system.time(eigen(x))
   user  system elapsed
  79.74    2.90   65.69

This question has also come up on Stack Overflow.
33,606 | How to diagonalize a large sparse symmetric matrix, to get the eigenvalues and eigenvectors?

2500x2500 is not such a large problem. Even without leveraging the sparsity, the SVD implementation in scipy.linalg is able to decompose it in less than a minute. See my answer to a related question for more details.

For larger problems you will need to exploit the sparsity explicitly. The gensim project may help you for middle-sized problems that fit on a single computer but not in RAM, and the Mahout implementation is able to deal with sparse matrices that don't even fit on a single hard drive.
33,607 | Kolmogorov-Smirnov instability depending on whether values are small or big

The way you've coded it, you're asking the KS test about a null hypothesis that the distribution is $N(0,1)$. In the first set of numbers, that looks plausible; consequently, the p-value is high. In the second set of numbers, that does not seem to be the case -- numbers like those don't typically come from a $N(0,1)$ distribution -- so the p-value is low.

By multiplying by a factor, you've changed the variance. Since the KS test considers all aspects of the distribution, variance included, the test correctly regards the two data sets as different.

The reason Shapiro-Wilk is more stable is that it evaluates normality. Multiplying by a positive factor does not change normality, so Shapiro-Wilk will not have the same kind of sensitivity to a variance change that KS has.
33,608 | Kolmogorov-Smirnov instability depending on whether values are small or big

Adding to the existing response, it's worth noting that the two ks.test calls below produce the same output.

x = c(0.5379796, 1.1230795, -0.4047321, -0.8150001, 0.9706860)
ks.test(x, pnorm)
#>
#>  Exact one-sample Kolmogorov-Smirnov test
#>
#> data:  x
#> D = 0.3047, p-value = 0.6454
#> alternative hypothesis: two-sided
ks.test(x*100, pnorm, sd = 100)
#>
#>  Exact one-sample Kolmogorov-Smirnov test
#>
#> data:  x * 100
#> D = 0.3047, p-value = 0.6454
#> alternative hypothesis: two-sided

R syntax note: the default arguments to pnorm() are mean = 0, sd = 1. Anything after the second argument in ks.test() gets passed as an argument to pnorm() in this case.
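The same invariance can be reproduced in Python with `scipy.stats.kstest`, whose `args` tuple is forwarded to the reference distribution as `(loc, scale)`. A sketch mirroring the two R calls:

```python
import numpy as np
from scipy import stats

x = np.array([0.5379796, 1.1230795, -0.4047321, -0.8150001, 0.9706860])

d1, p1 = stats.kstest(x, "norm")                       # H0: N(0, 1)
d2, p2 = stats.kstest(x * 100, "norm", args=(0, 100))  # H0: N(0, 100^2)

# Rescaling the data and the reference distribution together leaves the
# KS statistic and p-value unchanged.
```

This matches the R output above (D = 0.3047 in both calls), because the empirical CDF and the reference CDF were rescaled by the same factor.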
33,609 | If I prove the estimator of $\theta^2$ is unbiased, does that prove that the estimator of parameter $\theta$ is unbiased?

Say $Q$ is unbiased for $\theta^2$, i.e. $E(Q) = \theta^2$. Then, because square root is concave, Jensen's inequality gives

$$\sqrt{E(Q)} = \theta > E\left(\sqrt{Q}\right)$$

So $\sqrt{Q}$ is biased low, i.e. it will underestimate $\theta$ on average.

Note: This is a strict inequality (i.e. $>$ not $\geq$) because $Q$ is not a degenerate random variable and square root is not an affine transformation.
33,610 | If I prove the estimator of $\theta^2$ is unbiased, does that prove that the estimator of parameter $\theta$ is unbiased?

Note that for any estimator (with finite second moment), $E(\widehat{\theta^2}) - E(\hat\theta)^2 = \text{Var}(\hat\theta) \geq 0$, with equality only when $\text{Var}(\hat\theta) = 0$ (which is easy to check doesn't hold).

Replace the first term on the LHS of that inequality using your result on the unbiasedness of $\widehat{\theta^2}$, and then, using the fact that $\theta$ and $\hat\theta$ are both positive, show that $\hat\theta$ is biased, not unbiased as you supposed. (More generally, you could apply Jensen's inequality, but it's not needed here.)

Note that this proof doesn't rely on the particulars of your problem -- for a non-negative estimator of a non-negative parameter, if its square is unbiased for the square of the parameter, then the estimator must itself be biased unless the variance of the estimator is $0$.
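The conclusion -- a non-negative estimator whose square is unbiased for $\theta^2$ must underestimate $\theta$ -- can be checked with a quick simulation. The distribution of $Q$ below is made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0

# A toy unbiased estimator Q of theta^2 = 4: theta^2 plus mean-zero uniform
# noise, kept strictly positive so the square root is always defined.
Q = theta**2 + rng.uniform(-1.0, 1.0, size=1_000_000)

mean_Q = Q.mean()                # approx. 4: Q is unbiased for theta^2
mean_sqrt_Q = np.sqrt(Q).mean()  # strictly below 2: sqrt(Q) underestimates theta
```

The gap between `mean_sqrt_Q` and $\theta$ shrinks as the variance of $Q$ shrinks, consistent with the variance identity above.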
33,611 | What is the intuition on fixed and random effects models? [duplicate]

One way to think about fixed effects vs. random effects is by examining how the fixed-effects estimator works in comparison to the random-effects estimator.

Let's say I have panel data on firms. Let $y_{i,t}$ be dividends for firm $i$ at time $t$. Let $x_{i,t}$ be something we're looking at, like free cash flow. Imagine our model is:

$$ y_{i,t} = \beta x_{i,t} + u_i + \epsilon_{i,t} $$

So dividends for firm $i$ at time $t$ are the sum of $\beta$ times free cash flow plus a firm-specific effect $u_i$ and a firm-and-time-specific error term $\epsilon_{i,t}$. Now let's imagine two different estimators:

- The within estimator: $\beta$ is estimated using only the time-series variation within each firm.
- The between estimator: $\beta$ is estimated using only the variation between different firms. (The between estimator is $\beta$ from the cross-sectional regression $\bar{y}_i = \beta \bar{x}_i + v_i$.)

The within estimator is the fixed-effects estimator. It takes off the mean from each group, and the only variation left over to estimate $\beta$ is the time-series variation within each firm. If the fixed effects can be anything, this is what you have to do.

The random-effects estimator is a weighted average of the within estimator and the between estimator. If the effects $u_i$ are random and mean zero, then variation between firms also contains information about $\beta$, and the between estimator is also a consistent estimator. Rather than tossing out the between-firm variation (as occurs in the fixed-effects estimator), the between-firm variation is given some weight in the random-effects estimator of $\beta$.
33,612 | What is the intuition on fixed and random effects models? [duplicate]

You can start with this thread. As already noted in comments by fcop, one example of using random effects is when you have many levels of your variable (classrooms), so that estimating so many parameters would require large amounts of data and huge computational power. In such cases you often aren't interested in the classroom effects themselves, but in their influence in general: you assume that they vary but can be summarized using a common distribution. It could also be the case that you have just a sample of classrooms and the particular classrooms are not interesting by themselves, but are used to learn something about the general variability connected with classrooms. So you use random effects when you are not interested in estimating the parameters for your variable precisely, yet you want to account for the influence of such a variable by estimating the distribution of possible influences of its levels.
33,613 | What is the intuition on fixed and random effects models? [duplicate]

About the dummy variable: that works if the variable has a limited number of values (like classrooms in your case), but not when there is a huge number of values, and that is the trick. If you have a huge number of values, then you get a huge number of intercepts (or slopes), thus a lot of dummies, and then you cannot estimate the model well (you lose many degrees of freedom because you have a lot of explanatory variables).

In that case you can use random effects; i.e., you assume that the intercepts are normally distributed, and then your huge number of dummies is "summarised" in a normal distribution. The latter has only two parameters (mean and standard deviation), so instead of estimating a huge number of coefficients (namely one for each of your dummies), you only have to estimate two parameters (mean and standard deviation) and you know the distribution of the intercepts. This saves a lot of degrees of freedom.
33,614 | Residuals in a linear model are independent but sum to zero; isn't it a contradiction? [duplicate] | The question appears to confuse two meanings of "residual."
The first bullet refers to the differences between the data and their fitted values.
The second bullet refers to a collection of random variables that are used to model the differences between the data and their expectations.
This might become clearer upon examining the simplest possible example: estimating the mean of a population, $\mu$, by taking two independent observations from it (with replacement). The data can be modeled by an ordered pair of random variables $(X_1, X_2)$. The "fitted values" are the estimated mean,
$$\bar X = (X_1 + X_2)/2.$$
This number is the fit for each of the two observations.
The residuals are the differences between the data and the fit. They consist of the ordered pair $$(e_1, e_2) = (X_1 - \bar X, X_2 - \bar X) = ((X_1-X_2)/2, -(X_1-X_2)/2).$$ Consequently $e_2 = -e_1$, showing the residuals are dependent.
An alternative model of these data uses the random variables $$(\epsilon_1, \epsilon_2) = (X_1 - \mu, X_2 - \mu).$$ Often these random variables are called "errors" but sometimes they are also called "residuals." Since the $X_i$ are independent, and $\mu$ is just some constant, the $\epsilon_i$ are also independent.
It might be of interest to note that $e_1 + e_2 = 0$ whereas $\mathbb{E}(\epsilon_1) = \mathbb{E}(\epsilon_2) = 0$. The former is a true dependence among random variables whereas the latter is merely a constraint concerning the underlying model. | Residuals in a linear model are independent but sum to zero; isn't it a contradiction? [duplicate] | The question appears to confuse two meanings of "residual."
33,615 | Residuals in a linear model are independent but sum to zero; isn't it a contradiction? [duplicate] | First, let's clarify the terminology, which can be different in different fields. For instance, in econometrics we differentiate between errors and residuals. Let's look at a simple model:
$$y_i=\beta_0+\beta_1 x_i+\varepsilon_i$$
Here, $\varepsilon_i$ is called errors. They are not observable, i.e. unknowns. The parameters (betas, coefficients) of the model are also unknown.
We can try to fit and estimate the model, and obtain the parameter estimates $\hat\beta_0,\hat\beta_1$, then we can obtain the residuals:
$$\hat\varepsilon_i=y-\hat\beta_1 x -\hat\beta_0$$
So, residuals $\hat\varepsilon_i$ are estimates of unobserved errors $\varepsilon_i$.
Why is this so important? Because you mixed and matched several concepts from different places in your question.
The first question alludes to the technical property of linear models. When you estimate the model, $\hat\beta_0$ will absorb the mean of errors making the residual mean zero.
The second question sounds like one of the assumptions of the Gauss-Markov theorem, but it confuses residuals with errors. The theorem's assumption is about errors. It may or may not hold true. Both residuals and errors may show autocorrelation, for instance.
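The intercept-absorbs-the-error-mean point can be demonstrated with a small sketch (Python, with made-up parameters and errors deliberately given a nonzero mean of 3) fitting simple OLS by the closed-form formulas: the fitted intercept soaks up the error mean, and the residuals average exactly zero.

```python
import random

random.seed(1)
n = 200
beta0, beta1 = 2.0, 0.5                           # assumed true parameters
x = [random.uniform(0, 10) for _ in range(n)]
eps = [random.gauss(3.0, 1.0) for _ in range(n)]  # errors with nonzero mean 3
y = [beta0 + beta1 * xi + ei for xi, ei in zip(x, eps)]

# closed-form OLS estimates for slope and intercept
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b1 = sxy / sxx
b0 = ybar - b1 * xbar        # absorbs the error mean: roughly beta0 + 3 = 5

resid = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]
print(b0, sum(resid) / n)    # residual mean is zero to machine precision
```

Note that $\hat\beta_0 \approx 5$, not the true $\beta_0 = 2$: the nonzero error mean is unidentifiable from the intercept, which is exactly why the residuals are forced to average zero.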
33,616 | Residuals in a linear model are independent but sum to zero; isn't it a contradiction? [duplicate] | The residuals are certainly not independent. Assume that the true errors $u_i, i=1,...,n$ are fully independent. In a linear model
$$y_i = \mathbb x_i'\beta + u_i \implies u_i = y_i - \mathbb x_i'\beta$$
the residuals equal, under OLS estimation
$$\hat u_i = y_i - \mathbb x_i'\hat \beta(\mathbf y, \mathbf X) = y_i - \mathbb x_i' \left(\mathbf X' \mathbf X\right)^{-1}\mathbf X'\mathbf y $$
$$= y_i - \mathbb x_i' \left(\mathbf X' \mathbf X\right)^{-1}\mathbf X'\left (\mathbf X \beta + \mathbf u\right) $$
$$\implies \hat u_i = u_i -\mathbb x_i' \left(\mathbf X' \mathbf X\right)^{-1}\mathbf X'\mathbf u$$
Considering
$$E(\hat u_i \hat u_j) = E(u_iu_j) - E\left[u_i\mathbb x_j' \left(\mathbf X' \mathbf X\right)^{-1}\mathbf X'\mathbf u\right]\\-E\left[u_j\mathbb x_i' \left(\mathbf X' \mathbf X\right)^{-1}\mathbf X'\mathbf u\right] + E\left[\mathbb x_i' \left(\mathbf X' \mathbf X\right)^{-1}\mathbf X'\mathbf u\mathbf u'\mathbf X \left(\mathbf X' \mathbf X\right)^{-1}\mathbb x_j'\right]$$
we can see that it is not equal to zero, because for example,
$$ E\left[u_i\mathbb x_j' \left(\mathbf X' \mathbf X\right)^{-1}\mathbf X'\mathbf u\right] \neq 0$$
since $u_i$ appears in $\mathbf u$ also, and so we get $u_i^2$ whose conditional or unconditional expected value cannot be equal to zero (why?).
So the residuals are not independent, even if the true errors are, and even if the regressors are independent from the true errors.
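A Monte Carlo sketch of this dependence (Python, using the simplest case: an intercept-only regression, where the fit is just the sample mean). There the covariance matrix of the residuals is $\sigma^2(I - \frac{1}{n}J)$, so any two residuals have covariance $-\sigma^2/n$, which the simulation reproduces even though the true errors are i.i.d.

```python
import random

random.seed(2)
n, sigma, reps = 5, 1.0, 40000

acc = 0.0
for _ in range(reps):
    u = [random.gauss(0.0, sigma) for _ in range(n)]  # independent true errors
    ubar = sum(u) / n
    e = [ui - ubar for ui in u]   # residuals from the intercept-only fit
    acc += e[0] * e[1]

cov01 = acc / reps
print(cov01)  # theory predicts -sigma**2 / n = -0.2
```

The estimated covariance comes out clearly negative, confirming that estimating even a single parameter ties the residuals together.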
33,617 | Sensitivity of the mean to outliers | Consider what would happen if you wanted to take the mean of some numbers, but you dragged one of them off toward infinity. Sure, at first it wouldn't have a huge impact on the mean, but the farther you drag it off, the more your mean changes.
Every number has a (proportionally) small contribution to the mean, but they do all contribute. So if one number is really different than the others, it can still have a big influence.
This idea of dragging values off toward infinity and seeing how the estimator behaves is formalized by the breakdown point: the proportion of data that can get arbitrarily large before the estimator also becomes arbitrarily large.
The mean has a breakdown point of 0, because it only takes 1 bad data point to make the whole estimator bad (this is actually the asymptotic breakdown point, the finite sample breakdown point is 1/N).
On the other hand, the median has breakdown point 0.5 because it doesn't care about how strange data gets, as long as the middle point doesn't change. You can take half of the data and make it arbitrarily large and the median shrugs it off.
You can even construct an estimator with whatever breakdown point you want (between 0 and 0.5) by 'trimming' the mean by that percentage--throwing away some of the data before computing the mean.
So, what does this mean for actually doing work? Is the mean just a terrible idea? Well, like everything else in life, it depends. If you desperately need to protect yourself against outliers, yeah, the mean probably isn't for you. But the median pays a price of losing a lot of potentially helpful information to get that high breakdown point.
If you're interested in reading more about it, here's a set of lecture notes that really helped me when I was learning about robust statistics.
http://www.stat.umn.edu/geyer/5601/notes/break.pdf
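The breakdown behaviour is easy to see numerically. The sketch below (Python's `statistics` module, with a made-up sample) drags one point off toward infinity and watches the mean chase it while the median stays put; a crude trimmed mean that drops the single smallest and largest value also shrugs off one outlier.

```python
from statistics import mean, median

data = [2.0, 3.0, 5.0, 7.0, 8.0, 9.0, 11.0]  # made-up sample

for outlier in (10.0, 1_000.0, 1_000_000.0):
    contaminated = data + [outlier]
    # the mean chases the single bad point; the median stays at 7.5
    print(outlier, mean(contaminated), median(contaminated))

def trimmed_mean(xs):
    """A crude trimmed mean: drop the single smallest and largest value."""
    s = sorted(xs)
    return mean(s[1:-1])

print(trimmed_mean(data + [1_000_000.0]))  # unaffected by the one outlier
```

One outlier out of eight points is enough to wreck the mean (breakdown point 0), but not the median or the trimmed mean.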
33,618 | Sensitivity of the mean to outliers | In some sense, the mean depends equally on all the items of data — it is perfectly democratic. We can see this by considering the arithmetic mean as a special case of a weighted mean,
$$\bar x = \sum_{i=1}^n \alpha_i x_i \tag{1}$$
where the weights $\alpha_i$ are all set equal at $\frac{1}{n}$.
However, when we talk about sensitivity to inputs (of a function, of a model, etc), we are generally interested in "how much of an effect would it have if our inputs were different." And on that count, outliers can be very influential in our result for the arithmetic mean.
One way to see this is to take a data set containing an outlier, and consider, for each piece of data, what would be the effect on the mean if this data point were deleted (or more counterfactually, imagine that we had never sampled and recorded it).
Take the data set $\{1,3,4,5,7,100\}$ for instance. This has a mean of $120/6 = 20$. If the $1$ were deleted, the mean rises to $119/5 = 23.8$, a change of $+3.8$. Deleting the $3$, $4$, $5$ or $7$ would have had even smaller effects, producing changes of $+3.4$, $+3.2$, $+3$ and $+2.6$ respectively. But deleting the $100$ would reduce the mean to $20/5=4$, a change of $-16$. In this sense, the mean is very sensitive to the inclusion of the $100$ in the data set: its value would have been very different without it. The impact of removing the outlier is noticeably larger than for any of the other data points. Here's a visual representation: the dot plot at the top shows the full distribution and its mean (the black bar); the following plots show the effect on the mean of deleting successive points.
In contrast, the median is a less sensitive measure of central tendency. The median of our entire data set is $4.5$, and deletions give the following changes:
\begin{array}{lll}
\text{Delete} &\text{New median} &\text{Change} \\
\hline
1 &5 &+0.5 \\
3 &5 &+0.5 \\
4 &5 &+0.5 \\
5 &4 &-0.5 \\
7 &4 &-0.5 \\
100 &4 &-0.5 \\
\end{array}
Not only is the median less sensitive to changes overall, but removing the outlier had no more effect than deleting any of the other data points. It could even cope with several outliers. In fact, many outliers are okay, so long as they constitute less than half of the data set — see the answer by @Sullysarus.
Note that the sensitivity of the mean is not necessarily a bad thing; it's often the case that we use the mean because we like its sensitivity, i.e. the way it makes full use of all the data. See the thread "If mean is so sensitive, why use it in the first place?"
Note that the same principles apply to outliers on the left tail, i.e. values that are "unusually small" for the data set: consider e.g. $\{-1, -3, -4, -5, -7, -100 \}$ where it is clear that the results will come out similarly to before, but with the sign (i.e. direction) of changes reversed.
Also, to head off a possible misunderstanding, looking at $\frac{1}{n} x_i$ as the "contribution" of $x_i$ to the mean may not be the most helpful approach to considering "sensitivity", even though it's true that $\bar x$ is the sum of these contributions (as written in equation $(1)$). Consider the translation of our original data set to $\{10001, 10003, 10004, 10005, 10007, 10100 \}$. Now each contribution looks fairly similar: proportionately there's not much difference between $1666\frac{5}{6}$ (the smallest, from the $10001$) and $1683\frac{1}{3}$ (the largest, from the $10100$). Each contribution is very close to one sixth of the mean, so it doesn't look like the outlier is making much more difference than the other numbers did.
But this is misleading, since the "sensitivity to deletion" argument will give the same results as before. It's the relative position of the number from the mean that matters, not the relative proportion it contributes towards the sum. (Another issue with the "relative proportion" or "contribution" approach is that it doesn't make much sense when you have a mixture of positive and negative data.)
Consider the new mean after the deletion of $x_j$: we would divide the new total, which is the previous total, $\sum_{i=1}^n x_i = n \bar x$, minus $x_j$, by the number of items of data remaining, $n-1$. This can be manipulated to give
$$\frac{n \bar x - x_j}{n-1} = \frac{n \bar x - \bar x + \bar x - x_j}{n-1} = \frac{(n - 1) \bar x + (\bar x - x_j)}{n-1} = \bar x + \frac{\bar x - x_j}{n-1}$$
This shows that deleting a data point $x_j$ causes the mean to change by $\frac{\bar x - x_j}{n-1}$. It should now be clear why deleting data that lies far from the mean, in either direction, should have a greater effect than removing data that lies close to the mean.
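The identity can be verified directly in code. This Python sketch uses the answer's own data set and checks, for every point, that the leave-one-out mean recomputed from scratch matches $\bar x + \frac{\bar x - x_j}{n-1}$.

```python
data = [1, 3, 4, 5, 7, 100]
n = len(data)
xbar = sum(data) / n  # 20.0

for j, xj in enumerate(data):
    direct = sum(data[:j] + data[j + 1:]) / (n - 1)  # mean with x_j deleted
    formula = xbar + (xbar - xj) / (n - 1)
    assert abs(direct - formula) < 1e-12
    print(xj, direct, direct - xbar)
```

The printed changes reproduce the figures quoted earlier: $+3.8$, $+3.4$, $+3.2$, $+3$, $+2.6$ and $-16$.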
R code for plot
library(grDevices)  # provides rainbow() for the colour palette
fulldata <- c(1,3,4,5,7,100)
n <- length(fulldata)
par(bg="grey98")
plot(NULL, xlim=c(min(fulldata)-5, max(fulldata)+5), ylim=c(0,n+2), axes=FALSE,
ylab="", xlab="")
cols <- rainbow(n)
# one row per deletion: plot all points, then mark the deleted one with a cross
points(rep(fulldata,n+1), rep(1:(n+1),each=n), col="black", bg=cols, pch=21, cex=1)
points(fulldata[1:n], n:1, pch=4, cex=2)
# full-sample mean: dotted reference line, plus a solid tick on the top row
abline(v=mean(fulldata), col="grey", lty=3)
segments(mean(fulldata),n+1-0.2,
mean(fulldata),n+1+0.2, col="black", lwd=2)
# leave-one-out means: a tick per row, joined to the full mean to show the shift
segments((sum(fulldata)-fulldata[1:n])/(n-1),n:1-0.2,
(sum(fulldata)-fulldata[1:n])/(n-1),n:1+0.2,col=cols, lwd=2)
segments((sum(fulldata)-fulldata[1:n])/(n-1),n:1,rep(mean(fulldata),n),n:1,col=cols,lwd=2)
33,619 | Sensitivity of the mean to outliers | The arithmetic mean is the sum of all the values divided by their count
$$ \frac{x_1 + x_2 + \dots + x_n}{N} $$
so each of the values has the same impact on the final estimate. Let me say it once again: each of the values has an impact on the estimate. The mean's "sensitivity" to data is actually one of the reasons why we choose it as an estimator of location. Obviously, if one or more of the values deviate from the others, they influence the mean. If they deviate by a lot, their influence is larger; if the deviation is smaller, then their influence is smaller. It is true that "a small number of observations shouldn't have much impact" on the result, but that doesn't mean that they have no impact.
As correctly suggested by whuber, such robustness can be quantified by measures such as the breakdown point. The breakdown point is basically the smallest proportion of observations that must be corrupted to make the estimate arbitrarily bad. In the case of the mean it is zero, because changing a single value is enough to influence the final result. More robust measures, like the median, are less sensitive, and a greater fraction of outlying cases may be needed to influence their estimates.
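The "every value has impact" point is the simplest possible arithmetic. A Python sketch with made-up numbers: perturbing a single observation by $\Delta$ moves the mean by exactly $\Delta/n$, so a large enough $\Delta$ moves it arbitrarily far.

```python
data = [4.0, 8.0, 15.0, 16.0, 23.0, 42.0]  # made-up numbers
n = len(data)
base = sum(data) / n

for delta in (10.0, 1e3, 1e9):
    bumped = data[:]
    bumped[0] += delta                 # corrupt a single observation
    shift = sum(bumped) / n - base
    print(delta, shift)                # shift equals delta / n (up to floating point)
```

This is the zero breakdown point in miniature: no matter how large the sample, one bad value contributes its full $\Delta/n$ to the estimate.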
33,620 | Why do I get this p-value doing the Jarque-Bera test in R? | p-value = 0.5329
Does it mean that the probability to discard the normality hypothesis
A p-value is not "the probability to discard the hypothesis". You should review the meaning of p-values. The first sentence of the relevant wikipedia page should help:
the p-value is the probability of obtaining the observed sample results (or a more extreme result) when the null hypothesis is actually true.
(NB: I have modified the above link to the version that was current at the time I wrote the answer, as the opening paragraph of the article has been edited badly and it's presently - June 2018 - effectively wrong.)
It goes on to say:
If this p-value is very small, usually less than or equal to a threshold value previously chosen called the significance level (traditionally 5% or 1% [1]), it suggests that the observed data is inconsistent with the assumption that the null hypothesis is true
This is quite different from "probability to discard the hypothesis".
is 53.29%?
A p-value around 53% is quite consistent with the null hypothesis.
(However, this does not imply that the distribution that the data were supposedly a random sample from is normal; it would be consistent with an infinite number of non-normal distributions as well.)
Does it mean that the probability to discard the normality hypothesis
A p-value is not "the probability to discard the hypothesis". You should review the meaning of p-values. The fir | Why do I get this p-value doing the Jarque-Bera test in R?
p-value = 0.5329
Does it mean that the probability to discard the normality hypothesis
A p-value is not "the probability to discard the hypothesis". You should review the meaning of p-values. The first sentence of the relevant wikipedia page should help:
the p-value is the probability of obtaining the observed sample results (or a more extreme result) when the null hypothesis is actually true.
(NB: I have modified the above link to the version that was current at the time I wrote the answer, as the opening paragraph of the article has been edited badly and it's presently - June 2018 - effectively wrong.)
It goes on to say:
If this p-value is very small, usually less than or equal to a threshold value previously chosen called the significance level (traditionally 5% or 1% [1]), it suggests that the observed data is inconsistent with the assumption that the null hypothesis is true
This is quite different from "probability to discard the hypothesis".
is 53.29%?
A p-value around 53% is quite consistent with the null hypothesis.
(However, this does not imply that the distribution that the data were supposedly a random sample from is normal; it would be consistent with an infinite number of non-normal distributions as well.) | Why do I get this p-value doing the Jarque-Bera test in R?
p-value = 0.5329
Does it mean that the probability to discard the normality hypothesis
A p-value is not "the probability to discard the hypothesis". You should review the meaning of p-values. The fir |
33,621 | Why do I get this p-value doing the Jarque-Bera test in R? | Your data have come from a normal distribution so the null hypothesis for the Jarque-Bera test (that the population the sample are drawn from has zero skew and zero excess kurtosis) is actually true. Although we usually call Jarque-Bera a "test for normality", there are other distributions which also have zero skew and zero excess kurtosis (see this answer for an example), so a Jarque-Bera test can't distinguish them from a normal distribution.
A p-value is
the probability of getting a result as or more extreme than the observed result, assuming the null hypothesis is true. It is not the probability of rejecting the null hypothesis.
I hope this deals with the "Does it mean that..." aspect of your question. If we see a very small p-value, like 0.001, this means that our observed results would be very improbable if $H_0$ were true (indeed, highly surprising - something as or more extreme than this we'd only expect to happen 1 time in 1000). This leads us to suspect that $H_0$ is incorrect. On the contrary, a high p-value is not at all surprising, and although it is not evidence actively in favour of $H_0$ it certainly does not put $H_0$ into doubt. In general we consider low p-values as evidence against $H_0$, and lower p-values constitute stronger evidence. What would lead us to reject $H_0$? It's common to set a level of significance, often 5%, and reject $H_0$ if we observe a p-value lower than the significance level. In your case we would not reject $H_0$ at any sensible level of significance.
When $H_0$ is true, the p-value will have a continuous uniform distribution between 0 and 1, also known as the rectangular distribution because of the shape of the pdf. This isn't just true for the Jarque-Bera test, and while it isn't quite true for all hypothesis tests (consider tests on discrete distributions such as a binomial proportion test or Poisson mean test) "the p-value is equally likely to be anywhere from 0 to 1" is usually a good way of thinking about the p-value under the null.
NB to address a common misconception: just because the null is true does not mean we should expect the p value to be high! There is a 50% chance of it being above 0.5, 50% chance of it being below. If you set your significance level to 5% - that is, you will reject $H_0$ if you obtain a p value below 0.05 - then be aware this will happen 5% of the time even if the null is true (this is why your significance level will be the same as your probability of a Type I error). But there's also a 5% chance of it being between 0.95 and 1, or between 0.32 and 0.37, or between 0.64 and 0.69. I hope this covers the "why do I get this p-value" aspect of your query.
Caution: I have been describing here the ideal situation where the Jarque-Bera test is working well. The test relies on the sample skewness and sample kurtosis being normally distributed - the Central Limit Theorem guarantees this will be asymptotically true in large sample sizes, but this approximation is not very good in smaller sample sizes. In fact your $n=85$ is too small - and so the reported p-values under the null aren't quite uniformly distributed. But if you'd used rnorm(1000) instead, my description would have been accurate.
When you refer to the "probability to discard the normality hypothesis (it being true)" you seem to be thinking about the Type I error rate. But you can't see that from just one sample, you need to think about the chances of making an incorrect decision across many samples. A good way to understand how error rates work is by simulation. Keep running the same R code and you'll keep getting different p values. Make a histogram of those p values and you'll find them approximately equally likely to be drawn anywhere between 0 and 1, so long as you've chosen a large enough $n$ for the Jarque-Bera test to work nicely. If you set your significance level at 5% you'll find that, in the long run, you'll make the Type I error of rejecting the null hypothesis even though it's true (which happens in your simulation when p < 0.05) about 5% of the time. If you want to reduce your Type I error rate to 1% then set your significance level to 1%. You might even set it lower. The problem with doing so is that you make it much harder to reject the null hypothesis when it is false, so you are increasing the Type II error rate.
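The simulation described above can be sketched in pure Python rather than R (a hypothetical stand-in for the R loop: the asymptotic Jarque-Bera p-value uses the closed-form chi-square(2) tail probability $\exp(-t/2)$, and the sample size and replication count are arbitrary choices):

```python
import math
import random

def jarque_bera_p(x):
    """Asymptotic Jarque-Bera p-value; chi-square(2) survival is exp(-t/2)."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    t = n / 6.0 * ((m3 / m2 ** 1.5) ** 2 + (m4 / m2 ** 2 - 3.0) ** 2 / 4.0)
    return math.exp(-t / 2.0)

random.seed(1)
# n = 1000 rather than 85, so the asymptotic approximation behaves well.
pvals = [jarque_bera_p([random.gauss(0, 1) for _ in range(1000)])
         for _ in range(500)]
mean_p = sum(pvals) / len(pvals)
type1 = sum(p < 0.05 for p in pvals) / len(pvals)
print(round(mean_p, 2), type1)  # mean near 0.5 and rejection rate near 5%
```

A histogram of `pvals` would look roughly flat between 0 and 1, as described above.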
Also, if you do want to apply a Jarque-Bera test on a sample size as low as 85, my earlier caution about small sample sizes applies. Since the reported p-values based on the asymptotic distribution will not be uniformly distributed under the null, p < 0.05 doesn't occur 5% of the time. So you can't achieve a Type I error rate of 5% simply by rejecting $H_0$ if the reported p < 0.05! Instead, you have to adjust critical values e.g. based on simulation results, as is done in Section 4.1 of Thadewald, T, and H. Buning, 2004, Jarque-Bera test and its competitors for testing normality - A power comparison, Discussion Paper Economics 2004/9, School of Business and Economics, Free University of Berlin.
In your simulation you only considered normally distributed data; what if you simulate data that isn't normal instead? In this case we should reject the null hypothesis but you will find you don't always get a p value below 0.05 (or whatever significance level you set) so the Jarque-Bera test results do not give you sufficient evidence to reject. The more powerful the test, the better it is at telling you to reject $H_0$ in this situation. You will find that you can improve the power of the test by increasing the sample size (whereas when the null was true, changing the sample size makes no difference to the rectangular distribution of the p values - try it! - when the data isn't drawn from a normal population, you'll find low p values become increasingly likely as you increase the sample size). The power of the test is also higher if your data are more blatantly departing from normality - see what happens as you sample from distributions with more extreme skew and kurtosis. There are alternative normality tests available, and they will have different powers against different types of departure from normality.
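The power experiment just described can also be sketched in Python, substituting exponential draws (skewness 2, excess kurtosis 6) for the non-normal population; the helper recomputes the asymptotic Jarque-Bera p-value, and the two sample sizes are arbitrary choices:

```python
import math
import random

def jb_pvalue(x):
    """Asymptotic Jarque-Bera p-value; chi-square(2) survival is exp(-t/2)."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    t = n / 6 * ((m3 / m2 ** 1.5) ** 2 + (m4 / m2 ** 2 - 3) ** 2 / 4)
    return math.exp(-t / 2)

def power(n, reps=300, alpha=0.05):
    """Rejection rate for exponential samples (skewness 2, excess kurtosis 6)."""
    return sum(jb_pvalue([random.expovariate(1) for _ in range(n)]) < alpha
               for _ in range(reps)) / reps

random.seed(7)
p_small, p_large = power(25), power(200)
print(p_small, p_large)  # the larger sample size should reject at least as often
```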
A final word of warning. Be aware that in many practical situations, we do not really want to run a normality test at all. Sometimes normality tests can be useful, though - for instance, if you are of a skeptical disposition and want to check whether the "random normal deviates" generated by your statistical software are genuinely normal. You should find that the rnorm function in R is fine, however!
33,622 | Why do I get this p-value doing the Jarque-Bera test in R? | The other answers are detailed but are concept-heavy. Since the p-value has a mathematical derivation, laying out the math behind it might tighten up the understanding.
Definitions
A statistic is any function of data. Typically we assume that observed data is the realization of a random variable or sequence of random variables. Therefore, a statistic is also a random variable. Call the data $X$, and use $x$ to denote a particular observation of $X$ (in this case, a data set). Call the statistic $T$ and use $t$ to denote the value of the statistic that is computed from a particular $x$.
This way we can refer to "the event that statistic $T$ takes value $t$" and attach a probability to it. Many statistics are interesting because they describe complex aspects of the data but follow well-known probability distributions.
The Jarque-Bera test
The Jarque-Bera test is built on a statistic $T$ that has two special properties:
$t$ is large if and only if the skewness $s$ is large in magnitude or the kurtosis $k$ is far from its normal value of 3, or both.
If $X$ follows the normal distribution and the sample size is large, $T$ approximately follows the chi-square distribution with two degrees of freedom.
We know that, in the chi-square distribution with two degrees of freedom, larger values are less probable: $ \lim_{t \rightarrow \infty} \operatorname{Pr}{\left( T > t\, |\, X \sim N \, \right)} = 0 $. So if we observe a large $s$ or a large $k$, we also observe a large $t$, and that means that, if the data is normally distributed, we have observed a very improbable event. So if we observe a large $t$, either we have observed a very rare event, or $X \not\sim N$.
The $p$ value is defined as $p \equiv \operatorname{Pr}{\left( T > t\, |\, X \sim N \, \right)} $.
If $t$ is large, $p$ is small because large $t$s are unlikely if $X \sim N$. We conduct a hypothesis test by choosing a small value like 0.05 below which $p$ is "too small" for us to believe that it was truly a draw from the chi-square distribution, and therefore that $X$ must not be normally distributed.
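The mechanics above can be made concrete with a small numeric sketch (the helper and its example inputs are mine, not from the original answer): it applies $T = \frac{n}{6}\left(s^2 + \frac{(k-3)^2}{4}\right)$ and the closed-form chi-square(2) tail probability $\Pr(T > t) = e^{-t/2}$.

```python
import math

def jb_decision(n, s, k, alpha=0.05):
    """t is large iff s or (k - 3) is large; p = Pr(T > t | X ~ N) for T ~ chi2(2)."""
    t = n / 6 * (s ** 2 + (k - 3) ** 2 / 4)
    p = math.exp(-t / 2)  # closed-form chi-square(2) tail probability
    return t, p, "reject H0" if p < alpha else "fail to reject H0"

print(jb_decision(85, 0.1, 3.2))  # mild skew/kurtosis: large p, fail to reject
print(jb_decision(85, 1.5, 6.0))  # pronounced skew/kurtosis: tiny p, reject
```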
But then what does this say about a large p value? Absolutely nothing. It could be that $X \sim N$ so that $T \sim \chi^2_2$. But it could also be that $X \not\sim N$ and $T$ follows some other distribution. There just isn't any way to tell. This is why it is never correct to "accept" the hypothesis that $X \sim N$. We can only fail to reject it.
33,623 | Why do I get this p-value doing the Jarque-Bera test in R? | You have to read about hypothesis testing and Jarque-Bera test. It seems that you don't understand either or both of the concepts.
The JB test's null hypothesis is that your sample is from a normal distribution. The test's p-value is the probability, assuming the null is true, of a test statistic at least as extreme as the one observed; if it's too low then you reject the null. You must set the significance level, for instance $\alpha=5\%$, then reject the null if the p-value is below this $\alpha$. In your case the p-value is over 50%, which is too high to reject the null. Note that hypothesis testing will never tell you to accept the null; it may only tell you to reject it or not reject it.
So, your test tells you that it can't reject the hypothesis that your sample is from a normal distribution. Which is an expected result, I suppose.
33,624 | Regression - How do I know if my residuals are normally distributed? | In practice you simply don't know (but they probably aren't). Not that non-normal residuals are necessarily a problem; it depends on how non-normal and how big your sample size is and how much you care about the impact on your inference.
You can see if the residuals are reasonably close to normal via a Q-Q plot.
A Q-Q plot isn't hard to generate in Excel.
If you take $r$ to be the ranks of the residuals (1 for smallest, 2 for second smallest, etc), then
$\Phi^{-1}(\frac{r-3/8}{n+1/4})$ is a good approximation for the expected normal order statistics. Plot the residuals against that transformation of their ranks, and it should look roughly like a straight line.
(where $\Phi^{-1}$ is the inverse cdf of a standard normal)
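The Excel recipe above can be sketched in Python (a hypothetical stand-in: `statistics.NormalDist().inv_cdf` plays the role of $\Phi^{-1}$) - pair the sorted residuals with Blom's approximate expected normal order statistics and check straightness via their correlation:

```python
import random
import statistics

def qq_points(resids):
    """Sorted residuals paired with Blom's approximate expected normal order stats."""
    nd = statistics.NormalDist()
    n = len(resids)
    theo = [nd.inv_cdf((r - 0.375) / (n + 0.25)) for r in range(1, n + 1)]
    return theo, sorted(resids)

random.seed(3)
theo, obs = qq_points([random.gauss(0, 1) for _ in range(200)])

# For normal data the Q-Q points hug a straight line, so their Pearson
# correlation should be very close to 1.
n = len(theo)
mt, mo = sum(theo) / n, sum(obs) / n
r = (sum((a - mt) * (b - mo) for a, b in zip(theo, obs))
     / (sum((a - mt) ** 2 for a in theo) * sum((b - mo) ** 2 for b in obs)) ** 0.5)
print(round(r, 3))
```

Plotting `theo` against `obs` would give the Q-Q plot itself.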
If you haven't used Q-Q plots before, I'd suggest generating a bunch of sets of random normal data (at several samples sizes) and seeing what the plots look like. (Roughly like points close to a straight line with some tendency to be a bit more noisy - wiggle a bit - at the ends)
Then generate skewed data, heavy tailed data, uniform data, bimodal data etc and see what the plots look like when data isn't normal. (Various kinds of curves and kinks, basically)
These plots are standard in most stats packages.
Here's one done in R:
Here's one I just generated in Excel via the above method:
(not the same set of data both times)
You can see the points form a straightish line ... that's because the data was actually normal.
Here's one that's not normal (it's quite right skew):
If you ever happen to be using something that has neither Q-Q plots nor inverse normal cdf functions, proceed as above up to the ranking stage, then find $p=\frac{r-3/8}{n+1/4}$ but use the Tukey lambda approximation to the inverse normal cdf.
Actually, there are two such approximations that have been in popular use:
$\Phi^{-1}(p) \approx 5.05 (p^{0.135} - (1-p)^{0.135})$
$\Phi^{-1}(p) \approx 4.91 (p^{0.14} - (1-p)^{0.14})$
(Either is quite adequate, but my recollection is that the second seemed to work slightly better. I believe Tukey used 1/0.1975 = 5.063 in the first one instead of 5.05)
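A quick sanity check of the second approximation against a real inverse normal cdf (here Python's `statistics.NormalDist`; the probability grid is an arbitrary choice):

```python
import statistics

inv = statistics.NormalDist().inv_cdf

def tukey_approx(p):
    # Second approximation above: Phi^{-1}(p) ~ 4.91 (p^0.14 - (1-p)^0.14)
    return 4.91 * (p ** 0.14 - (1 - p) ** 0.14)

# Worst absolute error on a grid from p = 0.010 to p = 0.990.
worst = max(abs(tukey_approx(k / 1000) - inv(k / 1000)) for k in range(10, 991))
print(worst)  # small over this central range
```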
33,625 | Regression - How do I know if my residuals are normally distributed? | Using plots of simulated data to get an impression of how to interpret a Q-Q plot as proposed by @Glen_b is an excellent idea.
You can also use such simulated curves as a background in your final graph. That way it is easier to compare the deviations from the diagonal in your observed residuals with the kind of variation from the diagonal line one could expect when the residuals were draws from a real normal distribution. See for example the graph below:
The details:
I made this graph in Stata. For convenience I used for the plotting position $p=\frac{r-.5}{n}$, the default for qplot. For those who have Stata and wish to play with it, here is the code (it requires the user-written components qplot and the lean1 scheme, both can be found using findit):
// make sure the random draws can be replicated
set seed 12345
// load some data
sysuse auto, clear
// do a regression
reg price mpg foreign i.rep78
// predict the residuals
predict resid, resid
// create 19 random draws
forvalues i = 1/19 {
gen resid`i' = rnormal(0,e(rmse))
}
//create the Q-Q plot
qplot resid? resid?? resid, trscale(invnorm(@)*e(rmse)) ///
lcolor( `: display _dup(19) "gs12 "' black) ///
msymbol(`: display _dup(19) "none "' oh ) ///
connect(`: display _dup(19) "l "' . ) ///
lpattern(solid...) ///
legend(order(20 "observed" 19 "simulated" ) ///
subtitle(residuals)) ///
    aspect(1) scheme(lean1)
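The same simulated-envelope idea can be sketched in Python rather than Stata (no plotting; `n` and `sd` are made-up placeholders standing in for the sample size and `e(rmse)`): draw 19 normal pseudo-residual sets, form a pointwise envelope of their order statistics, and see how much of an observed sorted sample falls inside.

```python
import random

random.seed(42)
n, sd = 74, 2000.0  # placeholders standing in for the sample size and e(rmse)

# 19 simulated normal "residual" sets, each sorted, give a pointwise envelope.
sims = [sorted(random.gauss(0, sd) for _ in range(n)) for _ in range(19)]
lo = [min(s[i] for s in sims) for i in range(n)]
hi = [max(s[i] for s in sims) for i in range(n)]

# An observed residual vector that really is normal should sit mostly inside.
observed = sorted(random.gauss(0, sd) for _ in range(n))
inside = sum(lo[i] <= observed[i] <= hi[i] for i in range(n)) / n
print(round(inside, 2))  # fraction of points inside the envelope
```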
33,626 | Regression - How do I know if my residuals are normally distributed? | There are normality tests, such as the Jarque-Bera test; you can find them in any statistical package.
There were a bunch of comments recommending you not to test for normality. I agree that you don't have to test for normality if you don't use this assumption. However, if you use the normality assumption then you had better demonstrate that it holds. In some regulated industries you must produce evidence that your assumptions, such as normality, hold.
33,627 | Convention for symbols indicating statistical significance? | Following is the convention:
ns P > 0.05
* P ≤ 0.05
** P ≤ 0.01
*** P ≤ 0.001
**** P ≤ 0.0001
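The table above translates directly into code; a minimal sketch (the function name is mine):

```python
def stars(p):
    """Map a p-value to the significance symbols in the table above."""
    if p > 0.05:
        return "ns"
    for cutoff, symbol in [(0.0001, "****"), (0.001, "***"), (0.01, "**"), (0.05, "*")]:
        if p <= cutoff:
            return symbol

print(stars(0.2), stars(0.03), stars(0.004), stars(0.0005), stars(0.00005))
# -> ns * ** *** ****
```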
33,628 | Convention for symbols indicating statistical significance? | The convention is...
* yay, I can publish
** yay, I can publish and not get refuted
*** I have no idea what alpha means
**** my power is unfathomable
***** graphing the data and noting r = 0.98 wasn't good enough
Also, see the references in chi's answer.
33,629 | Convention for symbols indicating statistical significance? | This is correct, but please don't fall in the trap of the star system: The Earth Is Round, p < .05 :-)
33,630 | Why ANOVA/Regression results change when controlling for another variable | Linear regression can be illustrated geometrically in terms of an orthogonal projection of the predicted variable vector $\boldsymbol{y}$ onto the space defined by the predictor vectors $\boldsymbol{x}_{i}$. This approach is nicely explained in Wickens' book "The Geometry of Multivariate Statistics" (1994). Without loss of generality, assume centered variables. In the following diagrams, the length of a vector equals its standard deviation, and the cosine of the angle between two vectors equals their correlation (see here). The simple linear regression from $\boldsymbol{y}$ onto $\boldsymbol{x}$ then looks like this:
$\hat{\boldsymbol{y}} = b \cdot \boldsymbol{x}$ is the prediction that results from the orthogonal projection of $\boldsymbol{y}$ onto the subspace defined by $\boldsymbol{x}$. $b$ is the projection of $\boldsymbol{y}$ in subspace coordinates (basis vector $\boldsymbol{x}$). This prediction minimizes the error $\boldsymbol{e} = \boldsymbol{y} - \hat{\boldsymbol{y}}$, i.e., it finds the closest point to $\boldsymbol{y}$ in the subspace defined by $\boldsymbol{x}$ (recall that minimizing the error sum of squares means minimizing the variance of the error, i.e., its squared length). With two correlated predictors $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$, the situation looks like this:
$\boldsymbol{y}$ is projected orthogonally onto $U$, the subspace (plane) spanned by $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$. The prediction $\hat{\boldsymbol{y}} = b_{1} \cdot \boldsymbol{x}_{1} + b_{2} \cdot \boldsymbol{x}_{2}$ is this projection. $b_{1}$ and $b_{2}$ are thus the ends of the dotted lines, i.e. the coordinates of $\hat{\boldsymbol{y}}$ in subspace coordinates (basis vectors $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$).
The next thing to realize is that the orthogonal projections of $\hat{\boldsymbol{y}}$ onto $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ are the same as the orthogonal projections of $\boldsymbol{y}$ itself onto $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$.
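This can be checked numerically: by the normal equations, the least-squares residual $\boldsymbol{y} - \hat{\boldsymbol{y}}$ is orthogonal to every predictor, so projecting $\hat{\boldsymbol{y}}$ onto an $\boldsymbol{x}_{i}$ gives the same result as projecting $\boldsymbol{y}$ itself. A tiny sketch (the centered data vectors below are made up purely for illustration):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# made-up centered data vectors, purely illustrative
x1 = [1.0, -2.0, 0.5, 0.5]
x2 = [0.5, 1.0, -1.5, 0.0]
y  = [2.0, -1.0, -0.5, -0.5]

# solve the 2x2 normal equations for the multiple-regression weights b1, b2
s11, s22, s12 = dot(x1, x1), dot(x2, x2), dot(x1, x2)
det = s11 * s22 - s12 ** 2
b1 = (s22 * dot(x1, y) - s12 * dot(x2, y)) / det
b2 = (s11 * dot(x2, y) - s12 * dot(x1, y)) / det
yhat = [b1 * a + b2 * b for a, b in zip(x1, x2)]

# projecting yhat onto each predictor gives the same result as projecting y,
# because the residual y - yhat is orthogonal to both x1 and x2
print(abs(dot(x1, yhat) - dot(x1, y)) < 1e-12,
      abs(dot(x2, yhat) - dot(x2, y)) < 1e-12)  # True True
```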
This allows us to directly compare the regression weights from each simple regression with the regression weights from the multiple regression:
$\hat{\boldsymbol{y}}_{1}$ and $\hat{\boldsymbol{y}}_{2}$ are the predictions from the simple regressions $\boldsymbol{y}$ onto $\boldsymbol{x}_{1}$, and $\boldsymbol{y}$ onto $\boldsymbol{x}_{2}$. Their endpoints give the individual regression weights $b^{1} = \rho_{x_{1} y} \cdot \sigma_{y}$ and $b^{2} = \rho_{x_{2} y} \cdot \sigma_{y}$, where $\rho_{x_{1} y}$ is the correlation between $\boldsymbol{x}_{1}$ and $\boldsymbol{y}$, and $\sigma_{y}$ is the standard deviation of $\boldsymbol{y}$. In contrast, the endpoints of the dotted lines give the regression weights from the multiple regression of $\boldsymbol{y}$ onto $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$: $b_{1} = \beta_{1} \sigma_{y}$, where $\beta_{1}$ is the standardized regression coefficient.
Now it is easy to see that $b^{1}$ and $b^{2}$ will coincide exactly with $b_{1}$ and $b_{2}$ only if $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ are orthogonal (or if $\boldsymbol{y}$ is orthogonal to the plane spanned by $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$). It is also easy to geometrically construct cases that sometimes seem puzzling, e.g., when the regression weight has the opposite sign as the bivariate correlation between a predictor and the predicted variable:
Here, $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ are highly correlated. Now the sign of the correlation between $\boldsymbol{y}$ and $\boldsymbol{x}_{1}$ is positive (red line: orthogonal projection of $\boldsymbol{y}$ onto $\boldsymbol{x}_{1}$), but the regression weight from the multiple regression is negative (end of the green line projected onto the subspace defined by $\boldsymbol{x}_{1}$).
33,631 | Why ANOVA/Regression results change when controlling for another variable | Because multiple regression works, in economic jargon, "ceteris paribus", i.e., holding the other elements unchanged.
The difference between
$GDP=\beta_{0}+\beta_{1}Income$ (1)
and $GDP=\gamma_{0}+\gamma_{1}Income+\gamma_{2}Investment$ (2)
is that $\beta_{1}$ in (1) captures the correlation (or effect) between GDP and Income together with other elements (of course including Investment). That is, $\beta_{1}$ absorbs the effects of all variables other than Income. But in (2), once Investment is added, $\gamma_{1}$ excludes the effect of Investment.
You can do a three-step regression. In the first step, regress GDP on Investment; this gives you the part of GDP left unexplained by Investment.
In the second step, regress Income on Investment; this gives you the part of Income left unexplained by Investment.
In the third step, regress the unexplained part of GDP on the unexplained part of Income. This gives you $\hat{\gamma_{1}}$.
I mean that $\gamma_{1}$ is the direct effect of Income on GDP, excluding the indirect effect of Investment on GDP that operates through Income.
Hope I said it clearly.
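The three-step procedure described in this answer is the Frisch–Waugh–Lovell theorem, and it can be verified numerically. A minimal sketch in Python (the data-generating process linking Investment, Income and GDP is invented purely for illustration):

```python
import random

def mean(u):
    return sum(u) / len(u)

def slope(y, x):
    """Slope from the simple regression of y on x (with intercept)."""
    mx, my = mean(x), mean(y)
    xc = [a - mx for a in x]
    return sum(a * (b - my) for a, b in zip(xc, y)) / sum(a * a for a in xc)

def resid(y, x):
    """Residuals from the simple regression of y on x."""
    b, mx, my = slope(y, x), mean(x), mean(y)
    return [bv - my - b * (av - mx) for av, bv in zip(x, y)]

random.seed(0)
n = 500
inv = [random.gauss(0, 1) for _ in range(n)]                        # Investment
inc = [0.8 * v + random.gauss(0, 1) for v in inv]                   # Income
gdp = [2.0 * a + 1.0 * v + random.gauss(0, 0.5) for a, v in zip(inc, inv)]

# gamma_1 from the multiple regression GDP ~ Income + Investment,
# via the 2x2 normal equations on centered variables
ic = [a - mean(inc) for a in inc]
vc = [a - mean(inv) for a in inv]
gc = [a - mean(gdp) for a in gdp]
s11 = sum(a * a for a in ic)
s22 = sum(a * a for a in vc)
s12 = sum(a * b for a, b in zip(ic, vc))
det = s11 * s22 - s12 ** 2
gamma1 = (s22 * sum(a * b for a, b in zip(ic, gc))
          - s12 * sum(a * b for a, b in zip(vc, gc))) / det

# the same coefficient via the three steps: residualize GDP and Income
# on Investment, then regress residuals on residuals
gamma1_fwl = slope(resid(gdp, inv), resid(inc, inv))
print(abs(gamma1 - gamma1_fwl) < 1e-9)  # True
```

Both routes return the same estimate (close to the true coefficient of 2.0 used in the simulation), which is exactly the point of the three-step description.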
33,632 | Why ANOVA/Regression results change when controlling for another variable | The important thing to understand about regression is that you are finding estimated parameter values that minimize the sum of squared residuals. Adding (or subtracting!) covariates to your model will always change the parameter estimates unless the new covariate is perfectly orthogonal to those already in the model, or perfectly orthogonal to the response variable, or both. Furthermore, the issue isn't whether these variables are related in the population, but rather in your sample; it is quite reasonable to imagine that some variables aren't actually related to each other, but when you gather a sample, they will pretty much never be perfectly orthogonal in your sample.
33,633 | Why ANOVA/Regression results change when controlling for another variable | This is also called the Regression Anatomy. Regressing GDP on Income and Investment gives you the same coefficient on Income as this two-step procedure:
first regress Income on Investment and predict the residuals; then regress GDP on these residuals. The residuals have the property that they contain only the part of Income that is uncorrelated with Investment. Unless the two explanatory variables are uncorrelated, the multiple-regression coefficient is different from the bivariate coefficient. As stated above, the multiple regression makes sure that only the direct effect of Income is captured.
Hope this helps
Michael
33,634 | Selecting best model based on linear, quadratic and cubic fit of data | The general term for what you are asking about is model selection. You have a set of possible models, in this case something like
$$
\begin{aligned}
y&=\beta_0 + \beta_1x &\textrm{(linear)}\\
y&=\beta_0 + \beta_1x + \beta_2x^2 &\textrm{(quadratic)}\\
y&=\beta_0 + \beta_1x + \beta_2x^2 + \beta_3x^3 &\textrm{(cubic)}\\
\end{aligned}$$
and you want to determine which of these models is most parsimonious with your data. We generally worry about parsimony rather than best fit (i.e., the highest $R^2$) since a complex model could "over-fit" the data. For example, imagine your timing data is generated by a quadratic algorithm, but there's a little bit of noise in the timing (random paging by the OS, clock inaccuracy, cosmic rays, whatever). The quadratic model might still fit reasonably well, but it won't be perfect. However, we can find a (very high order) polynomial that goes through each and every data point. This model fits perfectly but will be terrible at making future predictions and, obviously, doesn't match the underlying phenomenon either. We want to balance model complexity with the model's explanatory power. How does one do this?
There are many options. I recently stumbled upon this review by Zucchini, which might be a good overview. One approach is to calculate something like the AIC (Akaike Information Criterion), which adjusts each model's likelihood to take the number of parameters into account. These are often relatively easy to compute. For example, AIC is:
$$ AIC = 2k - 2\ln(L) $$
where $L$ is the likelihood of the data given the model and $k$ is the number of parameters (e.g., 2 for linear, 3 for quadratic, etc.). You compute this criterion for each model, then choose the model with the smallest AIC. Another approach is to use cross-validation (or something like that) to show that none of your models are over-fit. You could then select the best-fitting model.
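As a concrete sketch of the AIC approach, here is plain-Python code that fits the three polynomial models by least squares and compares them (the data are simulated from an invented quadratic truth; for Gaussian errors, $AIC = 2k + n\ln(RSS/n)$ up to an additive constant):

```python
import math
import random

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations,
    solved with naive Gaussian elimination (fine for tiny systems)."""
    m = deg + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                       # forward elimination with pivoting
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):             # back substitution
        s = b[r] - sum(a[r][c] * coef[c] for c in range(r + 1, m))
        coef[r] = s / a[r][r]
    return coef

def aic(xs, ys, deg):
    """Gaussian-likelihood AIC: 2k + n*ln(RSS/n), with k counting the
    deg + 1 coefficients plus the error variance."""
    coef = polyfit(xs, ys, deg)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    n = len(xs)
    return 2 * (deg + 2) + n * math.log(rss / n)

random.seed(1)
xs = [i / 10 for i in range(1, 51)]
ys = [3 + 2 * x + 0.5 * x ** 2 + random.gauss(0, 0.3) for x in xs]  # quadratic truth
scores = {d: aic(xs, ys, d) for d in (1, 2, 3)}
best = min(scores, key=scores.get)
print({d: round(s, 1) for d, s in scores.items()}, best)
# the linear model scores far worse than the quadratic one
```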
That's sort of the general case. However, as @Michelle noted above, you probably don't want to be doing model selection at all if you know something about the underlying phenomenon. In this case, if you have the code or know the underlying algorithm, you should just trace through it to determine the algorithm's order. Gaussian elimination is an interesting case, as the "pat" answer, $O(N^3)$, isn't technically right.
Also, keep in mind that the Big-O order of the algorithm isn't based on empirical observation (i.e., the best fit to the observed run time). It's more of a limiting property, taken as $n \rightarrow \infty$. Suppose your run-time were given by
$$t(n) = 0.0000001n^2 + 999999999n$$
I would bet that a runtime-vs-input-size plot for that would look pretty linear over the ranges you're likely to test, but the algorithm would technically be considered $O(n^2)$.
33,635 | Selecting best model based on linear, quadratic and cubic fit of data | Model selection will result in an estimate of the residual variance that is biased low. This will bias all other aspects of inference and inflate $R^2$. The unbiased estimate of $\sigma^2$ has in the denominator $n - p - 1$ where $p$ is the number of pre-specified parameters excluding the intercept. It is not clear that model selection will help. One can just use a regression spline with default knot locations (usually based on quantiles of the predictor distribution), choosing the number of knots based on what complexity the effective sample size will support. The R rms package makes this easy, and course notes at http://biostat.mc.vanderbilt.edu/rms will help. Removal of "non-significant" terms will hurt inference. The use of AIC can sometimes backfire if the modeling is not highly structured. I use AIC to tell me the optimum number $k$ of knots for a large number of predictors if I restrict each predictor to have $k$ knots. This structure minimizes the damage caused by model uncertainty.
33,636 | Selecting best model based on linear, quadratic and cubic fit of data | I'm not 100% sure, but if the model is time to sort data you should probably include an $x\log(x)$ term or something like it (perhaps it's $x^2\log(x)$), as I think such a term appears in theoretical time-complexity calculations for sorting data.
33,637 | Data mining conferences? [closed] | KDD (ACM Special Interest Group on Knowledge Discovery and Data Mining)
KDD 2010
33,638 | Data mining conferences? [closed] | NIPS: http://nips.cc/
33,639 | Data mining conferences? [closed] | Strata Conference:
Strata Conference is for developers, data scientists, data analysts,
and other data professionals.
Strata Conference covers the latest and best tools and technologies
for this new discipline, along the entire data supply chain—from
gathering, cleaning, analyzing, and storing data to communicating data
intelligence effectively. With hardcore technical sessions, case
studies, and provocative reports from the leading edge, Strata
Conference showcases the people, tools, and technologies that make big
data work.
Not sure I'd quite classify Strata as "data mining" (perhaps "how to use data in the industry", instead), but data mining is certainly a part of it.
33,640 | Data mining conferences? [closed] | SIAM's Data Mining Conference, SDM11.
33,641 | Data mining conferences? [closed] | IEEE International Conference on Data Mining (ICDM)
33,642 | Data mining conferences? [closed] | Salford Analytics and Data Mining Conference 2012.
33,643 | Data mining conferences? [closed] | M2010 - 13th Annual Data Mining Conference http://www.sas.com/m2010
33,644 | Data mining conferences? [closed] | Predictive Analytics World: pawcon.com.
33,645 | Data mining conferences? [closed] | Check this useful site: AIStats
33,646 | Data mining conferences? [closed] | Industrial Conference on Data Mining IndustrialDM
33,647 | How to calculate the "exact confidence interval" for relative risk? | Check out the R Epi and epitools packages, which include many functions for computing exact and approximate CIs/p-values for various measures of association found in epidemiological studies, including relative risk (RR). I know there is also PropCIs, but I never tried it. Bootstrapping is also an option, but generally these are exact or approximate CIs that are provided in epidemiological papers, although most of the explanatory studies rely on GLM, and thus make use of the odds-ratio (OR) instead of the RR (although, wrongly, it is often the RR that is interpreted because it is easier to understand, but this is another story).
You can also check your results with online calculator, like on statpages.org, or Relative Risk and Risk Difference Confidence Intervals. The latter explains how computations are done.
By "exact" tests, we generally mean tests/CIs not relying on an asymptotic distribution, like the chi-square or standard normal; e.g. in the case of an RR, an 95% CI may be approximated as
$\exp\left[ \log(\text{rr}) - 1.96\sqrt{\text{Var}\big(\log(\text{rr})\big)} \right], \exp\left[ \log(\text{rr}) + 1.96\sqrt{\text{Var}\big(\log(\text{rr})\big)} \right]$,
where $\text{Var}\big(\log(\text{rr})\big)=1/a - 1/(a+b) + 1/c - 1/(c+d)$ (assuming a 2-way cross-classification table, with $a$, $b$, $c$, and $d$ denoting cell frequencies). The explanations given by @Keith are, however, very insightful.
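This approximation is straightforward to code directly; a sketch in Python (the 2×2 cell counts in the example are invented for illustration):

```python
import math

def rr_wald_ci(a, b, c, d, z=1.96):
    """Approximate 95% CI for the relative risk from a 2x2 table:
    exposed row (a events, b non-events), unexposed row (c events, d non-events)."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # sqrt of Var(log rr)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# invented example: 15/100 exposed vs 5/100 unexposed develop the disease
rr, lo, hi = rr_wald_ci(15, 85, 5, 95)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 3.0 1.13 7.94
```

Note how wide the interval is with these small counts; the exact methods in Epi/epitools address exactly this kind of situation.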
For more details on the calculation of CIs in epidemiology, I would suggest looking at Rothman and Greenland's textbook, Modern Epidemiology (now in its 3rd edition), Statistical Methods for Rates and Proportions, by Fleiss et al., or Statistical analyses of the relative risk, by J.J. Gart (1979).
You will generally get similar results with fisher.test(), as pointed by @gd047, although in this case this function will provide you with a 95% CI for the odds-ratio (which in the case of a disease with low prevalence will be very close to the RR).
Notes:
I didn't check your Excel file, for the reason advocated by @csgillespie.
Michael E Dewey provides an interesting summary of confidence intervals for risk ratios, from a digest of posts on the R mailing-list.
33,648 | How to calculate the "exact confidence interval" for relative risk? | There is no single exact confidence interval for the ratio of two proportions. Generally speaking, an exact 95% confidence interval is any interval-generating procedure that guarantees at least 95% coverage of the true ratio, irrespective of the values of the underlying proportions.
An interval formed by the Fisher Exact Test is probably overly conservative -- in that it has MORE than 95% coverage for most values of the parameters. It's not wrong but it's also wider than it has to be.
The interval used by the StatXact software with the default settings would be a better choice here -- I believe it uses some variety of Chan interval (i.e. an extremum-searching interval using the Berger-Boos procedure and a standardized statistic), but would need to check the manual to be sure.
When you ask for the "how and why" -- does this answer your question? I think we could certainly expound further about the definition of confidence intervals and how to construct one from scratch if that's what you were looking for. Or does it do the trick just to say that this is a Fisher Exact Test-based interval, one (but not the only and not the most powerful) of the confidence intervals that guarantees its coverage unconditionally?
(Footnote: Some authors reserve the word "exact" to apply only to intervals and tests where false-positives are controlled at exactly alpha, instead of merely bounded by alpha. Taken in this sense, there simply isn't a deterministic exact confidence interval for the ratio of two proportions, period. All of the deterministic intervals are necessarily approximate. Of course, even so some intervals and tests do unconditionally control Type I error and some don't.)
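To make the coverage idea concrete, here is a small seeded Python simulation (my own sketch, not from the original answer; the sample sizes and proportions are invented). It estimates the coverage of the usual Wald interval for the log risk ratio at one particular parameter setting; an exact procedure would have to guarantee at least 95% coverage at every setting, which is the hard part.

```python
import math
import random

random.seed(1)

def binom_draw(n, p):
    # stdlib-only binomial draw as a sum of Bernoulli trials
    return sum(random.random() < p for _ in range(n))

n1 = n0 = 150
p1, p0 = 0.30, 0.15
true_rr = p1 / p0

sims, covered = 400, 0
for _ in range(sims):
    a, c = binom_draw(n1, p1), binom_draw(n0, p0)
    rr = (a / n1) / (c / n0)
    half = 1.96 * math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
    covered += math.exp(math.log(rr) - half) <= true_rr <= math.exp(math.log(rr) + half)

coverage = covered / sims   # close to, but not guaranteed to be at least, 0.95
```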
33,649 | How to calculate the "exact confidence interval" for relative risk? | This seems to be Fisher's Exact Test for Count Data.
You can reproduce the results in R by giving:
data <- matrix(c(678,4450547,63,2509451),2,2)
fisher.test(data)
data: data
p-value < 2.2e-16
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
4.682723 7.986867
sample estimates:
odds ratio
6.068817
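For intuition about what fisher.test() computes, here is a stdlib-only Python sketch of the two-sided Fisher exact p-value for the same table (my own illustration, not from the original answer). It conditions on the margins and sums the hypergeometric probabilities of every table that is no more probable than the observed one; it reproduces the p-value, not the conditional-MLE odds ratio or its exact CI.

```python
import math

def log_choose(n, k):
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    r1, c1, N = a + b, a + c, a + b + c + d
    def logp(k):  # log hypergeometric probability of k in the top-left cell
        return log_choose(r1, k) + log_choose(N - r1, c1 - k) - log_choose(N, c1)
    lp_obs = logp(a)
    lo, hi = max(0, c1 - (N - r1)), min(r1, c1)
    total = 0.0
    for k in range(lo, hi + 1):
        lp = logp(k)
        if lp <= lp_obs + 1e-7:   # small tolerance for float ties, as in fisher.test()
            total += math.exp(lp)
    return total

# rows of the R matrix: 678 and 63 cases, 4450547 and 2509451 non-cases
p = fisher_two_sided(678, 63, 4450547, 2509451)
sample_or = (678 * 2509451) / (63 * 4450547)   # about 6.07, near the reported estimate
```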
33,650 | How can we have multiple "exact" tests? | Not really; all an exact test needs to do is accept or reject the null hypothesis based on the actual distribution of the test statistic. Consider the following two tests of whether the mean of a Normal distribution with known standard deviation equals zero.
A z-test for whether the mean $= 0$. No explanation needed!
A binomial test for whether the mean (= the median) $= 0$. Here we assume the probability of an observation being above the median $= 0.5$; this is an exact test, as the binomial distribution with probability $0.5$ is the true distribution of the number of observations above the median.
These two tests will (almost) certainly give different results when calculating the probability of the observed data under the null hypothesis, but they are both exact.
Expanding on this example to address a comment:
The key here is that, although the underlying distribution is the same in both cases, the test statistics $T_N(x)$ and $T_B(x)$ are not (regardless of whether the null hypothesis is true.) There is no 1-1 map between the two (this is important.) Consequently, $p(x|T_N(x))$ and $p(x|T_B(x))$ aren't guaranteed to be the same either. The underlying distributions are the same; the distributions conditional upon the test statistic are not.
However, conditional upon the test results failing to reject the null hypothesis, in both cases we have the "estimated" mean $= 0$ and standard deviation $= 1$. Consequently, $p(x|H_0, T_N(x)) = p(x|H_0, T_B(x))$; the test statistic is irrelevant as the distribution is fully determined (in this case) by $H_0$. In the more general case where we have parameters $\theta$ that are not specified by the hypotheses and therefore need to be estimated, as long as a) the estimation procedure is the same regardless of the test chosen, and b) neither test rejects the null hypothesis, we have $p(x|H_0, \hat{\theta}, T_1(x)) = p(x|H_0, \hat{\theta}, T_2(x)) = p(x|H_0, \hat{\theta})$.
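The two tests can be run side by side in Python (my own sketch; the sample values are invented, and $\sigma = 1$ is taken as known):

```python
import math

x = [0.8, -0.4, 1.6, 0.3, 2.1, -0.9, 1.2, 0.5, 1.9, -0.2]
n = len(x)

# 1) z-test: under H0 the sample mean of N(0, 1) data is N(0, 1/n)
z = (sum(x) / n) * math.sqrt(n)
p_z = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 2) sign (binomial) test: under H0 the number of positive values is Binomial(n, 1/2)
k = sum(v > 0 for v in x)
tail = sum(math.comb(n, j) for j in range(max(k, n - k), n + 1)) / 2 ** n
p_sign = min(1.0, 2 * tail)

# both are exact under their assumptions, yet p_z != p_sign
```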
33,651 | How can we have multiple "exact" tests? | What makes an exact test exact is that it uses the actual distribution of the test statistic under the null hypothesis (e.g. a binomial, hypergeometric, or multinomial distribution) rather than an approximation (e.g. a normal or chi-square distribution) to calculate the $p$-value, the probability of seeing the observed result or another result as or more extreme.
Since they often involve discrete distributions, they usually do not have a critical region with probability exactly equal to some pre-specified significance level $\alpha$, causing some further confusion over the word exact.
Different tests have different distributions underlying the null hypothesis: the Fisher test assumes a hypergeometric distribution while Barnard's test assumes two binomial distributions. This means that they usually calculate different $p$-values. Some tests may also have different views of what counts as more extreme. So different tests should not be expected to give the same results, and this is a reason why you should decide which test to use before you see the data.
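A quick Python illustration of the difference (my own sketch, reusing the 12-vs-8 counts that appear in the chi-square answer later in this file): the exact binomial p-value comes from the true Binomial(20, 1/2) null distribution, while the chi-square version uses its asymptotic approximation.

```python
import math

n, k = 20, 12   # e.g. 12 of one category, 8 of the other, testing p = 1/2

# exact two-sided binomial p-value (symmetric null, so double the upper tail)
p_exact = 2 * sum(math.comb(n, j) for j in range(k, n + 1)) / 2 ** n

# chi-square goodness-of-fit approximation with 1 df
expected = n / 2
x2 = (k - expected) ** 2 / expected + ((n - k) - expected) ** 2 / expected
p_approx = math.erfc(math.sqrt(x2 / 2))   # survival function of a 1-df chi-square
```

At this small sample size the exact p-value (about 0.50) and the approximation (about 0.37) differ noticeably.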
33,652 | How can we have multiple "exact" tests? | The answer mostly comes down to definition. An exact test is one that, when the assumptions it makes under $H_0$ hold, will not exceed the selected type I error rate (anywhere under the null) at any given sample size [1].
For a point null (almost all the tests you're likely to be doing in practice) it should equal the desired type I error rate if that is chosen from the available rates (with a discrete test statistic there's a finite set of choices unless you use randomized tests).
Unsurprisingly given such a broad definition, there's an infinite number of exact tests. For example, most common rank based tests are exact (given the earlier caveats).
While the usual chi-squared test of independence is not small-sample exact, it is possible to construct an exact test using that statistic (indeed it's an option in R's chisq.test to do that exact test).
In the same fashion, one could construct an exact test based on the G statistic or the Neyman statistic or the Freeman-Tukey statistic, or any of the other Cressie-Reed power-divergence statistics, or indeed any number of other possibilities, all for the same contingency table and under the same assumptions and the same conditioning on the margins - an infinite array of possible tests just for independence in contingency tables.
Note that Barnard's test [2] does not condition on both margins, so it would be part of another group of tests again.
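The construction sketched above (an exact conditional test that orders tables by the chi-squared statistic rather than by likelihood) can be written in a few lines of stdlib Python for a 2x2 table; this is my own sketch with invented counts.

```python
import math

def hypergeom_pmf(k, r1, c1, N):
    # probability of k in the top-left cell when both margins are fixed
    return math.comb(r1, k) * math.comb(N - r1, c1 - k) / math.comb(N, c1)

def chi2_stat(k, r1, c1, N):
    # Pearson X^2 of the 2x2 table determined by k and the margins
    cells = [(k, r1 * c1 / N),
             (r1 - k, r1 * (N - c1) / N),
             (c1 - k, (N - r1) * c1 / N),
             (N - r1 - c1 + k, (N - r1) * (N - c1) / N)]
    return sum((obs - exp) ** 2 / exp for obs, exp in cells)

def exact_chi2_test(a, b, c, d):
    """Exact conditional p-value: P(X^2 >= observed X^2 | margins)."""
    r1, c1, N = a + b, a + c, a + b + c + d
    obs = chi2_stat(a, r1, c1, N)
    lo, hi = max(0, c1 - (N - r1)), min(r1, c1)
    return sum(hypergeom_pmf(k, r1, c1, N) for k in range(lo, hi + 1)
               if chi2_stat(k, r1, c1, N) >= obs - 1e-9)

p = exact_chi2_test(3, 7, 6, 4)   # invented 2x2 counts
```

Ordering the conditional sample space by a different statistic (G, Freeman-Tukey, etc.) generally changes which tables count as "as or more extreme", and hence the p-value, while every such test remains exact.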
A small clarification is needed (I presume you understood this already but the phrasing in the question may mislead some readers).
Hypothesis tests don't evaluate $P(\text{data}|H_0)$, since significance levels are based on the probability that the test statistic will fall into the rejection region (and thence the 'as, or more extreme' phrasing for p values). Generally the test will have an explicit test statistic whose cdf can be computed exactly (at least in principle, and to any required degree of accuracy in practice) and hence explicit rejection rules obtained. From this an exact p value will be implied. E.g. for tests where you would reject for small values of the test statistic (say $T$), $P(T\leq t_\text{obs})$ is the p value, and different tests will not order the possible samples the same way (e.g. an exact test based on a chi squared statistic and a Fisher exact test will not always perfectly correspond). While a test statistic is not normally given for the Fisher exact test, it is possible to define one, even in $r\times c$ tables.
For that test, since that effective statistic is based on likelihood under the null, you could indeed consider $\text{data}|H_0$, but 'as or more extreme' includes all other tables [3] with equal or lower probability, not just that specific data table.
[1]: Further, since we don't know that the exact form of population distributions hold, typically people dont use the phrase 'exact' when there's a specific distributional form assumed (that doesn't follow from the other assumptions).
[2]: If I recall correctly, many decades later Barnard came to the conclusion that Fisher had been correct to condition on both margins in this situation.
[3]: ... with the same margins, since the test conditions on the margins. However, the test does not require one to have fixed margins (as many texts incorrectly state).
33,653 | Linearity of maximum function in expectation | It is not true. A simple counterexample is letting $X \sim N(0, 1)$. Then $\max(E(X), 0) = 0$, whereas
\begin{align}
& E(\max(X, 0)) = \int_{-\infty}^\infty \max(x, 0)\phi(x)dx = \int_0^\infty x\frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx = \frac{1}{\sqrt{2\pi}} > 0.
\end{align}
In fact, using $\max(a, b) = \frac{1}{2}((a + b) + |a - b|)$, it can be seen that
\begin{align}
& E(\max(X, 0)) = \frac{1}{2}(E(X) + E(|X|)), \\
& \max(E(X), 0) = \frac{1}{2}(E(X) + |E(X)|).
\end{align}
Because $|E(X)| \leq E(|X|)$ for any integrable random variable $X$, it always holds that (it's also a consequence of Jensen's inequality $f(E(X)) \leq E(f(X))$ with $f(x) = \max(x, 0)$):
\begin{align}
E(\max(X, 0)) \geq \max(E(X), 0),
\end{align}
and the strict inequality holds for any random variable such that $E(|X|) > |E(X)|$. Needless to say, there are numerous such random variables.
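A seeded Monte Carlo check of these two quantities in Python (my own sketch, not part of the answer):

```python
import math
import random

random.seed(42)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

e_max = sum(max(x, 0.0) for x in xs) / len(xs)   # estimates E(max(X, 0))
max_e = max(sum(xs) / len(xs), 0.0)              # estimates max(E(X), 0)

exact = 1 / math.sqrt(2 * math.pi)               # the 1/sqrt(2*pi) derived above
```

Here e_max lands near 0.3989 while max_e is essentially 0, matching the strict inequality.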
Assuming $E[|X|] < \infty$, as @Henry commented, a necessary and sufficient condition for $E[|X|] > |E[X]|$ is $P(X > 0) > 0$ and $P(X < 0) > 0$. To prove it, let $X^+ = \max(X, 0)$ and $X^- = \max(-X, 0)$, then $E[|X|] = E[X^+] + E[X^-]$, $|E[X]| = |E[X^+] - E[X^-]|$.
If $P(X > 0) > 0$ and $P(X < 0) > 0$, then (by, say, Theorem 15.2(ii) of Probability and Measure) $E[X^+] > 0, E[X^-] > 0$, hence $|E[X]| = |E[X^+] - E[X^-]| < E[X^+] + E[X^-] = E[|X|]$.
Conversely, if $|E[X^+] - E[X^-]| < E[X^+] + E[X^-]$, since $|a - b| < a + b$ holds for non-negative $a, b$ if and only if $a \neq 0$ and $b \neq 0$, it follows that $E[X^+] > 0$ and $E[X^-] > 0$, which in turn requires $P(X > 0) > 0$ and $P(X < 0) > 0$.
33,654 | Linearity of maximum function in expectation | No, it's not. If $E[X] > 0$ then
$$
max(E[X], 0) = E[X]
$$
so you would need
$$
E[X] = E[max(X, 0)]
$$
That can easily be disproved with simple counterexamples.
If $E[X]\le0$ then
$$
max(E[X], 0) = 0
$$
so you would need to have
$$
E[max(X, 0)]=0
$$
That would hold only for constant $X$ equal to $0$.
TL;DR Both ways, it can be easily disproved.
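A minimal numeric counterexample in Python (my addition, not from the answer): a fair coin paying $\pm 1$.

```python
# X takes -1 and +1, each with probability 1/2
outcomes = [(-1, 0.5), (1, 0.5)]

e_x   = sum(x * p for x, p in outcomes)            # E[X] = 0
e_max = sum(max(x, 0) * p for x, p in outcomes)    # E[max(X, 0)] = 1/2
max_e = max(e_x, 0)                                # max(E[X], 0) = 0
```

Here $E[X] \le 0$ yet $E[\max(X, 0)] = 1/2 \neq 0$, which settles the second case above.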
33,655 | Linearity of maximum function in expectation | Some calculations give, for $X$ a random variable with density $f(x)$ and cdf $ \DeclareMathOperator{\E}{\mathbb{E}} F(x)$
\begin{align}
\E \max(X,0) &= \int_{-\infty}^\infty \max(x,0) f(x) \; dx \\
&= \int_{-\infty}^0 0 \cdot f(x)\; dx + \int_0^\infty x \cdot f(x)\; dx \\
&= \int_0^\infty x \cdot f(x)\; dx
\end{align}
Now, the conditional distribution of $X$ given $X \ge 0$ has density
$\frac{f(x)}{1-F(0)}$ (for $x\ge 0$, zero elsewhere), so we find that
\begin{align}
\E \max(X,0) &= \int_0^\infty x \cdot f(x)\; dx \\
&= \left[ 1-F(0) \right] \cdot \int_0^\infty x \cdot \frac{f(x)}{1-F(0)} \; dx \\
&= \left[ 1-F(0) \right] \cdot \E\left[ X \mid X \ge 0 \right]
\end{align}
while $\max( \E X, 0)$ will be zero for any random variable with negative expectation.
So any random variable which can take both negative and positive values with positive probability, and which has a negative expectation, gives you a counterexample.
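A seeded Python check of the decomposition for a concrete negative-mean example, $X \sim N(-1, 1)$ (my own sketch; the closed-form value $\varphi(1) - \Phi(-1) \approx 0.0833$ is used only for comparison):

```python
import math
import random

random.seed(0)
xs = [random.gauss(-1.0, 1.0) for _ in range(200_000)]
n = len(xs)

pos = [x for x in xs if x >= 0]
lhs = sum(max(x, 0.0) for x in xs) / n           # E[max(X, 0)], estimated
rhs = (len(pos) / n) * (sum(pos) / len(pos))     # [1 - F(0)] * E[X | X >= 0], estimated
max_e = max(sum(xs) / n, 0.0)                    # = 0 here, since E[X] = -1 < 0

Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))
phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
exact = phi(1.0) - Phi(-1.0)                     # about 0.0833 for N(-1, 1)
```

The two sample versions agree (they are the same sum rearranged), both sit near the closed-form value, and both exceed $\max(E[X], 0) = 0$.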
33,656 | Linearity of maximum function in expectation | This is false for any random variable $X$ taking both positive and negative values, and true whenever $X$ is either strictly non-negative or strictly non-positive.
It is clear that if $X$ is strictly non-negative, then both the left and right hand side of your equation equal $\mathbb{E}[X] \geq 0$. Similarly if $X$ is strictly non-positive, then both sides equal zero. Therefore assume otherwise.
In general we have the following:
$$ \mathbb{E}[X] = P(X > 0)\mathbb{E}[X | X > 0] + P(X < 0)\mathbb{E}[X | X < 0] $$
$$ \mathbb{E}[\max(X,0)] = P(X>0)\mathbb{E}[X|X>0].$$
Since $P(X < 0) \not = 0$, we have that $P(X < 0)\mathbb{E}[X|X<0]$ is negative. Thus
$$ \mathbb{E}[\max(X,0)] > \mathbb{E}[X]$$
and also
$$ \mathbb{E}[\max(X,0)] > 0 $$
because $P(X > 0) > 0$ and $\mathbb{E}[X|X>0] > 0$.
Therefore we get the general statement that
$$ \mathbb{E}[\max(X,0)] > \max(\mathbb{E}[X],0)$$
whenever $P(X > 0) \not = 0$ and $P(X < 0) \not = 0$.
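A small deterministic Python check of the decomposition and the strict inequality (my own addition; the three-point distribution is invented):

```python
# X takes -2, 0, 3 with probabilities 0.3, 0.4, 0.3
pmf = {-2: 0.3, 0: 0.4, 3: 0.3}

e_x   = sum(x * p for x, p in pmf.items())                      # E[X] = 0.3
e_max = sum(max(x, 0) * p for x, p in pmf.items())              # E[max(X, 0)] = 0.9

p_pos = sum(p for x, p in pmf.items() if x > 0)
p_neg = sum(p for x, p in pmf.items() if x < 0)
e_pos = sum(x * p for x, p in pmf.items() if x > 0) / p_pos     # E[X | X > 0]
e_neg = sum(x * p for x, p in pmf.items() if x < 0) / p_neg     # E[X | X < 0]

decomposed = p_pos * e_pos + p_neg * e_neg                      # equals E[X]
```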
33,657 | Very simple chi-square question | At the time of writing, the two answers suggest a binomial test. This is a good approach to assess a binomial set of counts, like in your question.
But there is also a chi-square goodness-of-fit test that can be used in these cases.
R has this built in to the chisq.test() function.
And it can be used when there are more than two categories. Or when the theoretical proportions aren't equal across categories.
That is,
Gender = c("Female", "Male")
Count = c(12, 8)
chisq.test(Count)
Gender = c("Female", "Male", "Other")
Count = c(12, 8, 6)
chisq.test(Count)
Race = c("American Indian", "Asian", "Black", "Pacific Islander", "White")
Count = c(10, 8, 16, 1, 24)
Theoretical = c(0.10, 0.15, 0.16, 0.01, 0.58)  # probabilities must be positive and sum to 1
chisq.test(Count, p = Theoretical)
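For readers outside R, the same goodness-of-fit statistic is simple to compute by hand. The Python sketch below is my addition (standard library only); it reproduces the two-category gender example, using the identity that the chi-square survival function with 1 df is erfc(√(x/2)).

```python
import math

def gof_chisq(observed, probs=None):
    """Pearson goodness-of-fit statistic: sum((O - E)^2 / E)."""
    n = sum(observed)
    k = len(observed)
    if probs is None:
        probs = [1.0 / k] * k  # equal expected proportions, as in chisq.test(Count)
    expected = [n * p for p in probs]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = gof_chisq([12, 8])            # 0.8, matching the X-squared value above
p = math.erfc(math.sqrt(stat / 2))   # chi-square tail area with 1 df
print(round(stat, 1), round(p, 4))   # 0.8 0.3711
```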
Like a chi-square test of association, the chi-square goodness-of-fit test has a suggested minimum for expected values:
Gender = c("Female", "Male")
Count = c(12, 8)
chisq.test(Count)$expected
And you can extract the standardized residuals:
Gender = c("Female", "Male", "Other")
Count = c(12, 8, 6)
chisq.test(Count)$stdres
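The standardized residuals can also be reproduced directly; in the goodness-of-fit case R computes (O − E)/√(E(1 − p)). A standard-library Python sketch of that formula for the three-category example above (my addition):

```python
import math

observed = [12, 8, 6]
n = sum(observed)
probs = [1 / 3] * 3
expected = [n * p for p in probs]        # each 26/3, about 8.667

# Standardized residuals: (O - E) / sqrt(E * (1 - p))
stdres = [(o - e) / math.sqrt(e * (1 - p))
          for o, e, p in zip(observed, expected, probs)]
print([round(r, 3) for r in stdres])     # [1.387, -0.277, -1.109]
```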
There are exact and Monte Carlo approaches. You might look at the multinomial.test() function in the EMT package.
There are also multinomial confidence intervals. You might look at the MultinomCI() function in the DescTools package.
Often the best way to express effect size is to compare the expected proportions to the observed proportions (e.g. rcompanion.org/handbook/images/image301.png ).
Addendum 1:
Because there's some suggestion in the answers about which test may be better, below are the results for this example from a few different tests.
Without attempting justification as to which is more correct, in this case the p-values from the exact tests and Monte Carlo simulations are similar.
The uncorrected chi-square test is probably too liberal in this case. Though a Yates correction could be applied here too. (Though I don't know of an easy implementation in R for the goodness-of-fit chi-square test with Yates correction).
A = c(12, 8)
N = sum(A)
theoretical =c(0.5, 0.5)
#####################
binom.test(A, N)
### Exact binomial test
###
### number of successes = 12, number of trials = 20, p-value = 0.5034
chisq.test(A)
### Chi-squared test for given probabilities
###
### X-squared = 0.8, df = 1, p-value = 0.3711
chisq.test(A, simulate.p.value=TRUE, B=10000)
### Chi-squared test for given probabilities with simulated p-value (based on 10000 replicates)
###
### X-squared = 0.8, df = NA, p-value = 0.506
library(DescTools)
GTest(A, correct="yates")
### Log likelihood ratio (G-test) goodness of fit test
###
### G = 0.4517, X-squared df = 1, p-value = 0.5015
library(EMT)
multinomial.test(A, theoretical)
### Exact Multinomial Test
###
### Events pObs p.value
### 21 0.1201 0.5034
library(EMT)
multinomial.test(A, theoretical, MonteCarlo = TRUE, ntrial=10000)
### Monte Carlo Multinomial Test
###
### Events pObs p.value
### 21 0.1201 0.5026
Addendum 2:
I couldn't resist seeing how the Yates correction on the chi-square goodness of fit test would work out. With this correction, it's in line with the exact and Monte Carlo tests.
A = c(12, 8)
DF = length(A)-1
Exp = theoretical * N
YatesChisq = sum((abs(A-Exp)-0.5)^2/Exp)
pValue = pchisq(YatesChisq, DF, lower.tail=FALSE)
(data.frame(YatesChisq=round(YatesChisq, 2), pValue=round(pValue,4)))
### YatesChisq pValue
### 0.45 0.5023
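The same Yates-corrected statistic in standard-library Python (my sketch, not from the original answer), again using erfc(√(x/2)) for the 1-df tail area:

```python
import math

observed = [12, 8]
expected = [10, 10]

# Yates-corrected statistic: sum((|O - E| - 0.5)^2 / E)
stat = sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(observed, expected))
p = math.erfc(math.sqrt(stat / 2))   # chi-square tail area, 1 df
print(round(stat, 2), round(p, 4))   # 0.45 0.5023
```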
Thanks to @utobi for pointing out that the Yates correction can be applied for a chi-square goodness-of-fit test when there are two categories:
x = 12
n = 20
prop.test(x, n, correct=TRUE)
### 1-sample proportions test with continuity correction
### X-squared = 0.45, df = 1, p-value = 0.5023
Addendum 3
Comparing the p-values from the chi-square goodness-of-fit test and the binomial test, where the sum of counts for two categories is 20.
G1 = 1:10
G2 = 20-G1
pChiSq = rep(NA, 10)
pBinom = rep(NA, 10)
for(i in 1:10){
pChiSq[i] = chisq.test(c(G1[i],G2[i]))$p.value
pBinom[i] = binom.test(G1[i],(G1[i]+G2[i]))$p.value
}
(data.frame(Count1=G1, Count2=G2, pChiSq=round(pChiSq,5), pBinom=round(pBinom,5)))
### Count1 Count2 pChiSq pBinom
### 1 19 0.00006 0.00004
### 2 18 0.00035 0.00040
### 3 17 0.00175 0.00258
### 4 16 0.00729 0.01182
### 5 15 0.02535 0.04139
### 6 14 0.07364 0.11532
### 7 13 0.17971 0.26318
### 8 12 0.37109 0.50344
### 9 11 0.65472 0.82380
###     10     10 1.00000 1.00000
33,658 | Very simple chi-square question | Your problem can be seen as having an iid sample $X_1,\ldots,X_n$ with $X_i\sim \text{Bernoulli}(\theta)$, with say $X_i=1$ if the sample is female and $X_i=0$ otherwise. In this notation, $\theta$ is the probability of observing a female.
The aim is to test $H_0:\theta=1/2$ vs $H_1:\theta\neq 1/2$.
There are many ways to test $H_0$, e.g. through a hypothesis testing procedure or a confidence interval. Both can be obtained by noting that if $\hat\theta = \bar X$ is the sample proportion of females, then, under $H_0$
$$
n\bar X \sim \text{Bin}(n,\theta_0),
$$
which can be used to perform a test statistic or it may be inverted to obtain a confidence interval. There are many ways to perform such inversion and an exact R implementation is
> x <- 8
> n <- 20
> binom.test(x, n, p=0.50)
Exact binomial test
data: x and n
number of successes = 8, number of trials = 20, p-value = 0.5034
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
0.1911901 0.6394574
sample estimates:
probability of success
0.4
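The exact two-sided p-value can be reproduced with standard-library Python (my sketch, not part of the original answer); for the symmetric null $\theta_0 = 1/2$ it is simply twice the smaller tail probability.

```python
from math import comb

n, x = 20, 8
lower = sum(comb(n, k) for k in range(0, x + 1)) / 2 ** n  # P(X <= 8)
upper = 1 - lower + comb(n, x) / 2 ** n                    # P(X >= 8)
# Two-sided p-value = 2 * smaller tail (valid here because the null is symmetric)
p = min(1.0, 2 * min(lower, upper))
print(round(p, 4))  # 0.5034, matching binom.test above
```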
You might want to have a look also at the exactci package for other related approaches.
Another path may be to invoke the Central Limit Theorem, i.e.
$$
T_n = \frac{\sqrt{n}(\bar X - \theta)}{\sqrt{\theta(1-\theta)}}\overset{d}{\to} N(0,1).\quad\quad(*)
$$
Under $H_0$, $(*)$ is a pivotal quantity and can be used to build the approximate $\alpha$-level test:
Reject $H_0$ if the observed value of $T_n$ is in absolute value
greater than $z_{1-\alpha/2}$.
Now for large $n$, $T_n^2$ will be approximately $\chi_1^2$, thus another equivalent $\alpha$-level test would be
Reject $H_0$ if the observed value of $T_n^2$ is
greater than $\chi_{1-\alpha}^2$.
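Note that squaring the observed $T_n$ recovers exactly the chi-square goodness-of-fit statistic from the other answers. A quick standard-library check (my addition):

```python
import math

n, x = 20, 8
xbar = x / n                        # sample proportion, 0.4
theta0 = 0.5
# The CLT-based statistic T_n evaluated at the observed data
t = math.sqrt(n) * (xbar - theta0) / math.sqrt(theta0 * (1 - theta0))
print(round(t, 4), round(t * t, 4))  # -0.8944 0.8  (t^2 equals the X-squared of 0.8)
```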
In R this can be performed (using a continuity correction for better accuracy) by
> x <- 8
> n <- 20
> prop.test(x, n, p=0.5)
1-sample proportions test with continuity correction
data: x out of n, null probability 0.5
X-squared = 0.45, df = 1, p-value = 0.5023
alternative hypothesis: true p is not equal to 0.5
95 percent confidence interval:
0.1997709 0.6358833
sample estimates:
p
0.4
33,659 | Very simple chi-square question | As with any table, the chi-squared statistic is the sum of $(O-E)^2/E$ where $O$ is the observed count in each cell and $E$ is the expected count.
In your case, there are two cells with observations $12$ and $8.$ The null hypothesis asserts that the expected counts are each $1/2$ times the sample size, $E = 1/2\times 20 = 10.$ Thus
$$\chi^2 = \frac{(12 - 10)^2}{10} + \frac{(8 - 10)^2}{10} = \frac{8}{10}.$$
The p-value (for the two-sided alternative to the null; namely, that the population proportion of males differs from $1/2$) is approximated by the right tail area of a chi-squared distribution. The one to use has $2-1 = 1$ degrees of freedom, because (a) you have two cells and (b) the null hypothesis specifies one parameter, leaving one left over. The p-value, computed with the R function pchisq(8/10, 1, lower.tail = FALSE), is 37%.
Some have contended this is the "wrong" test. Far from it, this is an excellent test. Part of the proof is to consider the distribution of the possible p-values under the null hypothesis. (The other part is to examine its power, but that would take us far afield.) Ideally, the p-values will be uniformly distributed between $0$ and $1.$ Because there are only $11$ distinct outcomes ($0$ and $20$ lead to the same decision, $1$ and $19$ to the same, and so on) the ideal is impossible to attain for any (non-randomized) decision procedure whatsoever. But how close can we come? Look at the actual distribution function:
The reference uniform distribution appears as the dashed red line. The plot at left shows that the null distribution of p-values comes as close as reasonably possible to the uniform reference distribution, at least insofar as we can see. (In the very rare cases where one of the cells is zero, I have set the p-value to zero.)
The plot at the right is the same, shown on log-log axes to reveal details for small p-values. From left to right are the outcomes $0,20; 1,19; 2,18;$ and so on. Except at the extreme left, where one of the cell counts is only $0$ or $1,$ the p-values are approximately uniformly distributed. Moreover, this failure at the left is minor: in almost any case you would correctly reject the null even though the p-value is a little larger than it ought to be.
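This can be quantified exactly: with only 21 possible outcomes, the true rejection probability at the nominal 5% level can be computed by enumeration. A standard-library Python sketch (my addition, using erfc(√(x/2)) for the 1-df chi-square tail area):

```python
from math import comb, erfc, sqrt

n, alpha = 20, 0.05
e = n / 2                             # expected count per cell under the null
size = 0.0
for k in range(n + 1):                # k = number of females out of 20
    stat = 2 * (k - e) ** 2 / e       # (O1 - E)^2/E + (O2 - E)^2/E
    p = erfc(sqrt(stat / 2))          # 1-df chi-square tail area
    if p <= alpha:
        size += comb(n, k) / 2 ** n   # Binomial(20, 1/2) probability of outcome k
print(round(size, 4))  # 0.0414: the actual size is slightly below the nominal 0.05
```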
A standard rule of thumb, by the way, is that you can trust the chi-squared test when all expected counts are $5$ or larger. In this case, both expected counts are $10:$ the rule gives good advice.
33,660 | Very simple chi-square question | You are using the wrong test. While you could run chi squared test (with expected values of 10 and 10) and 1 degree of freedom, there is a much better way. You haven't found examples like this because you should use a binomial test instead.
Given the null hypothesis (and quite reasonable assumptions on the large size of the population and the sampling method) the number of females would have a binomial distribution B(20,0.5).
So you use the binomial distribution to find a (say) 95% acceptance region. The exact region depends on if you want a two-tail or one-tail test. But 6≤n≤14 works for me.
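That region is easy to verify by summing binomial probabilities; a standard-library Python check (my addition, not part of the original answer):

```python
from math import comb

n = 20
# P(6 <= X <= 14) for X ~ Binomial(20, 0.5)
coverage = sum(comb(n, k) for k in range(6, 15)) / 2 ** n
print(round(coverage, 4))  # 0.9586, so {6, ..., 14} is indeed a >=95% acceptance region
```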
And then reject your null hypothesis if the observed value of female lies outside that region.
33,661 | Kolmogorov-Smirnov Test in Python weird result and interpretation | You got a couple of things wrong while reading the documentation of the Kolmogorov-Smirnov test.
First you need to use the cumulative distribution function (CDF), not the probability density function (PDF). Second you have to pass the CDF as a callable function, not evaluate it at an equally spaced grid of points. [This doesn't work because the kstest function assumes you are passing along a second sample for a two-sample KS test.]
from functools import partial
import numpy as np
import scipy.stats as stats
# Weibull distribution parameters
c, loc, scale = 2.34, 0, 1
# sample size
n = 10_000
x = stats.weibull_min.rvs(c, loc=loc, scale=scale, size=n)
# One-sample KS test compares x to a CDF (given as a callable function)
stats.kstest(
x,
partial(stats.weibull_min.cdf, c=c, loc=loc, scale=scale)
)
#> KstestResult(statistic=0.0054, pvalue=0.9352)
# Two-sample KS test compares x to another sample (here from the same distribution)
stats.kstest(
x,
stats.weibull_min.rvs(c, loc=loc, scale=scale, size=n)
)
#> KstestResult(statistic=0.0094, pvalue=0.9291)
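Under the hood, the one-sample KS statistic is just the largest vertical gap between the empirical CDF and the reference CDF. A standard-library illustration on a tiny hand-picked sample (my addition; with loc = 0 the Weibull CDF is $F(x) = 1 - e^{-(x/\text{scale})^c}$):

```python
import math

def weibull_cdf(x, c=2.34, scale=1.0):
    return 1.0 - math.exp(-((x / scale) ** c))

def ks_statistic(sample, cdf):
    """Sup-distance between the empirical CDF of `sample` and `cdf`."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        # gap just after and just before each jump of the empirical CDF
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

d = ks_statistic([0.5, 1.0, 1.5], weibull_cdf)
print(round(d, 4))  # 0.2988
```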
@Dave is correct that with hypothesis testing we don't accept the null hypothesis, we can only reject it or not reject it. The point is that "not reject" is not the same as "accept".
On the other hand, it sounds a bit awkward to say "we have a sample of 10,000 but we simply have insufficient evidence to conclude anything". At this sample size we expect that estimates are precise (have small variance).
Note that this situation is a bit hypothetical. In practice we rarely know the true distribution or that two large samples come from the same distribution as in the simulation. So in the real world, at sample sizes on the order of 10k, it's more likely that the p-value is small, not large.
So do we learn anything if the sample size is large and the p-value is large?
We learn that the significance level α = 0.05 doesn't make sense for large data. Keeping α fixed while n grows implies we are looking for smaller and smaller effects.
And we learn that — while we cannot accept the null hypothesis as true — the evidence is consistent both with "no effect" and with "trivial effect". If we have chosen the sample size so that we have enough power to detect differences of interest to us, then we also have a good idea what "trivial" means.
You can read more on the topic Are large data sets inappropriate for hypothesis testing?.
33,662 | Kolmogorov-Smirnov Test in Python weird result and interpretation | In addition to the coding mistakes addressed in the other answer, there are two statistics mistakes in the post that I want to address.
If the p-Value is higher than my chosen alpha (5%) my samples are from the distribution.
This is a common misinterpretation of the p-value. We do not accept null hypotheses. When the p-value is larger than $\alpha$, we simply have insufficient evidence to conclude anything. Otherwise, you could just collect two points, conduct your test, pretty much never reject, and keep claiming that you're proving null hypothesis after null hypothesis. Further, this logic applies to all hypothesis testing, not just KS.
I read that the KS-test might not be great for large Data.
There is some truth to this that is discussed extensively in another Cross Validated post. While that question addresses the normal distribution, the logic applies. Summarizing the link, large sample sizes give hypothesis tests (not just KS) great power to detect small differences that are not of practical importance or of interest to clients/customers/reviewers/bosses. However, that only happens when the null hypothesis is slightly incorrect, say a null hypothesis of $\mu = 0$ when the real $\mu = 0.1$. If the null hypothesis is true, the KS test does exactly what it is supposed to, as I will demonstrate in a simulation.
library(ggplot2)
set.seed(2022)
B <- 5000
N <- 25000
ps <- rep(NA, B)
for (i in 1:B){
# Simulate some Weibull data
#
x <- rweibull(N, 2.34, 1)
# KS-test the data for having the specified Weibull distribution
#
ps[i] <- ks.test(x, pweibull, shape = 2.34, scale = 1)$p.value
if (i %% 25 == 0 | i < 5 | B - i < 5){
print(paste(i/B*100, "% complete", sep = ""))
}
}
d <- data.frame(ps = ps, CDF = ecdf(ps)(ps), Distribution = "Weibull")
ggplot(d, aes(x = ps, y = CDF, col = Distribution)) +
geom_line() +
geom_abline(slope = 1, intercept = 0) +
theme_bw()
Since the null hypothesis is true, the KS test rejects approximately the correct number of times (for any $\alpha$-level, not just $0.05$), as the $U(0,1)$-looking CDF of the p-values shows. I even supercharged the KS test by having a sample size of $25000$, as opposed to your $10000$, yet KS was not overpowered.
Now let's tweak the simulation ever so slightly. A plot above the $y=x$ diagonal line indicates power to detect the difference.
library(ggplot2)
set.seed(2022)
B <- 5000
N <- 25000
ps <- rep(NA, B)
for (i in 1:B){
# Simulate some Weibull data
#
x <- rweibull(N, 2.34, 1)
# KS-test the data for having the specified Weibull distribution
#
ps[i] <- ks.test(x, pweibull, shape = 2.3, scale = 1)$p.value
if (i %% 25 == 0 | i < 5 | B - i < 5){
print(paste(i/B*100, "% complete", sep = ""))
}
}
d <- data.frame(ps = ps, CDF = ecdf(ps)(ps), Distribution = "Weibull 2.3")
ggplot(d, aes(x = ps, y = CDF, col = Distribution)) +
geom_line() +
geom_abline(slope = 1, intercept = 0) +
theme_bw()
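To see how small the 2.3-vs-2.34 discrepancy actually is, one can compute the largest vertical gap between the two Weibull CDFs — the population quantity the KS statistic converges to. A standard-library Python sketch (my addition; grid-based, so only approximate):

```python
import math

def weibull_cdf(x, shape, scale=1.0):
    return 1.0 - math.exp(-((x / scale) ** shape))

# Largest vertical gap between the Weibull(2.34) and Weibull(2.3) CDFs,
# approximated on a fine grid over (0, 5).
gap = max(abs(weibull_cdf(x / 1000, 2.34) - weibull_cdf(x / 1000, 2.3))
          for x in range(1, 5000))
print(round(gap, 4))  # roughly 0.005 -- a tiny but fixed difference
```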
I won't tell you if you should care about $2.3$ vs $2.34$, but even if you don't, the KS test sure does!
33,663 | Confidence band for simple linear regression - why the curve? [duplicate] | Computing the sample variance of the estimate $\hat{y}$
The estimate of the mean of $y$ (as a function of $x$) is the following function of the estimated coefficients $\hat{\alpha}$ and $\hat{\beta}$:
$$\hat {y} = \hat{\alpha} + \hat{\beta} x$$
The variance of $\hat{y}$ can be computed with the formula for the variance of a linear combination
$$Var(\hat\alpha + \hat\beta x) = Var(\hat\alpha) + x^2 Var(\hat\beta) + 2x Cov(\hat\alpha,\hat\beta)$$
So this is a quadratic function of $x$ with a minimum at $x = -\frac{Cov(\hat\alpha,\hat\beta)}{Var(\hat\beta)} = \bar{x}$ (the second equality holds because $Cov(\hat\alpha,\hat\beta) = -\bar{x}\,Var(\hat\beta)$), and this creates the funnel shape with its minimum at the mean of the datapoints $x_i$.
Intuition
Let's try out several fits. We use the following data:
$X_i$ is normally distributed, and $Y_i$ is $0.8$ times $X_i$ plus some added noise.
$$\begin{array}{}
X_i &\sim& N(0,1) \\ \epsilon_i &\sim& N(0,1) \\
Y_i &=& 0.8 X_i + 0.6 \epsilon_i
\end{array}$$
Results of $25$ simulations, each with $15$ data points:
When we combine all those different fitted lines in a single plot, we get:
So here we might see intuitively why the confidence band becomes 'fatter' at the ends. The confidence is due to errors in the height of the line (parameter $\alpha$) and the slope of the line (parameter $\beta$). It is this latter one that makes the error larger towards the ends.
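To see the quadratic variance in action, here is a small Python sketch (Python and the simulated numbers are illustrative choices, not part of the original answer) that fits one such simulation by least squares and locates where the band is narrowest:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 15
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)

# OLS fit with design matrix [1, x]
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - 2)              # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)         # Var/Cov matrix of (alpha-hat, beta-hat)

# variance of y-hat on a grid of x values
grid = np.linspace(x.min() - 2, x.max() + 2, 401)
var_yhat = cov[0, 0] + grid**2 * cov[1, 1] + 2 * grid * cov[0, 1]
x_narrowest = grid[np.argmin(var_yhat)]   # should sit at the mean of the x_i
```

The grid minimum lands at $\bar{x}$ and the variance grows quadratically toward the ends, which is exactly the funnel shape.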
Note: $-\frac{Cov(\hat\alpha,\hat\beta)}{Var(\hat\beta)} = \bar{x}$ follows from the covariance matrix for $\hat\alpha$ and $\hat\beta$, which is $\sigma^2 (X^TX)^{-1}$. You could work this out further by filling in all the terms, but you could also argue that the minimum should be at $\bar{x}$ by transforming the data matrix $X$ such that the column vectors are perpendicular, in which case the estimates $\hat\alpha$ and $\hat\beta$ have zero covariance.
33,664 | Confidence band for simple linear regression - why the curve? [duplicate] | As you get farther from $\bar x,\bar y$, uncertainty increases. There are fewer and fewer observations when you reach out to distant regions of the domain of your function.
The main source of uncertainty is the one about the slope of the line. Take a look at the drawing here. With the given sample of observations you can say that the best fit line should be somewhere between these two grey lines. The uncertainty around $(\bar x,\bar y)$ is the smallest, but once you step away from where the observations are located, uncertainty increases.
Here's how we can intuitively "derive" the asymptotic confidence interval, i.e. where $x^*$ is very far away from your observations. The confidence given by model MSE will proportionally expand as $|x^*-\bar x|\to\infty$. Think of it as approximate equality of ratios $\frac{MSE}{\sqrt n\sigma_x}\approx \frac{CI(x^*)}{|x^*-\bar x|}$. That's the asymptotic of your formula:
$$\lim_{x^*\to\infty} \frac{MSE \sqrt{\frac{1}{n} + \frac{(x^*-\bar x)^2}{\sum_{i=1}^n (x_i-\bar x)^2}}}{|x^*-\bar x|} = \frac{MSE}{\sqrt n\,\sigma_x}$$
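To sanity-check the asymptotic numerically, here is a small Python sketch (the data are arbitrary simulated values; $\sigma_x$ is taken with the $1/n$ denominator so that $\sum_i (x_i-\bar x)^2 = n\sigma_x^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
n = len(x)
xbar = x.mean()
Sxx = ((x - xbar) ** 2).sum()
sigma_x = x.std()                 # 1/n denominator, so Sxx = n * sigma_x**2

x_star = 1e6                      # a point very far from the data
exact = np.sqrt(1 / n + (x_star - xbar) ** 2 / Sxx)
asymptote = abs(x_star - xbar) / (np.sqrt(n) * sigma_x)
# far from the data the two expressions agree to high relative accuracy
```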
33,665 | Confidence band for simple linear regression - why the curve? [duplicate] | There are two uncertainties here. As you mentioned, there is the uncertainty in the slope, hence the spreading curve at the ends, but there is also an uncertainty at the mean. Yes, the curve is thinnest at the mean, but it is not zero there. The uncertainty of the slope passing through the mean's distribution makes the estimate non-linear and generates the examples above.
33,666 | Is it possible for a distribution to have known variance but unknown mean? | A practical example: suppose I have a thermometer and want to build a picture of how accurate it is. I test it at a wide variety of different known temperatures, and empirically discover that if the true temperature is $T$ then the temperature that the thermometer displays is approximately normally distributed with mean $T$ and standard deviation 1 kelvin (and that different readings at the same true temperature are independent). I then take a reading from the thermometer in a room in which the true temperature is unknown. It would be reasonable to model the distribution of the reading as normal with known standard deviation (1) and unknown mean.
Similar examples could arise with other measuring tools.
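A quick Python sketch of the resulting inference (the room temperature and sample size are made-up numbers): with $\sigma$ known from calibration, a 95% interval for the unknown mean uses $\sigma$ directly rather than an estimated standard deviation:

```python
import numpy as np

rng = np.random.default_rng(42)
true_temp = 21.3                  # the unknown room temperature (hypothetical)
sigma = 1.0                       # measurement sd, known from calibration
readings = rng.normal(true_temp, sigma, size=25)

# with sigma known, a 95% z-interval for the mean needs no variance estimate
mu_hat = readings.mean()
half_width = 1.96 * sigma / np.sqrt(len(readings))
ci = (mu_hat - half_width, mu_hat + half_width)
```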
33,667 | Is it possible for a distribution to have known variance but unknown mean? | In the real world, these types of problems primarily happen in manufacturing. You usually see them when there are strong constraints on the behavior of a variable. For example, the normal distribution assumes that a value can take on any value over $(-\infty,\infty)$ but if you are building cars, it is never going to happen that a tire will be larger than the factory it is being constructed in, let alone of nearly infinite size. Indeed, barring a monumental equipment failure, the diameter will never be much outside some easily and well-defined maximum or minimum. There also will never be a penny-sized tire either. I am not sure what a negative diameter could mean.
A simple example would be a submarine hiding in the ocean. Because it is of fixed size and shape its variance is fixed. Its location is not fixed. Indeed, it is hiding.
You might have some way to collect data about the submarine's location. The data could be a location somewhere on the ship, for example, a point near the fantail. Maybe it could be from a variety of points around the vessel. If the data collected depends on the geometry of the ship, then the data generation function will have a fixed variance. However, as the mean is somewhere in the ocean we do not know what it is.
One other note, not all formulations of the variance assume that the mean is known or that a point estimate exists for it. Consider the posterior probability $\Pr(\mu;\sigma^2|X)$ where $X$ is the observed data, $\mu$ is the population mean, and $\sigma^2$ is the population variance. A point estimate of the variance can be obtained by first marginalizing out $\mu$ so that $$\Pr(\sigma^2|X)=\int_{-\infty}^{\infty}\Pr(\mu;\sigma^2|X)\mathrm{d}\mu.$$
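Here is a brute-force numeric sketch of that marginalization in Python (the flat prior, the grid, and the simulated data are illustrative choices only, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(2.0, 1.5, size=100)     # data; both mu and sigma^2 unknown

# joint posterior over (mu, sigma^2) on a grid, with a flat prior
mus = np.linspace(-2.0, 6.0, 121)
s2s = np.linspace(0.5, 6.0, 121)
M, S2 = np.meshgrid(mus, s2s, indexing="ij")
loglik = (-0.5 * len(x) * np.log(2 * np.pi * S2)
          - ((x[:, None, None] - M) ** 2).sum(axis=0) / (2 * S2))
post = np.exp(loglik - loglik.max())
post /= post.sum()

# marginalize out mu: Pr(sigma^2 | X) = sum of the joint over the mu axis
post_s2 = post.sum(axis=0)
s2_hat = s2s[np.argmax(post_s2)]       # posterior mode; should land near the sample variance
```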
From that point, a utility function could be imposed upon the distribution and a point estimate found. While there is information about $\mu$, it is a probability distribution instead of a known point or an estimator. There is more than one way to estimate the variance, depending on the goals and the circumstances.
33,668 | Is it possible for a distribution to have known variance but unknown mean? | Suppose we have a random variable $X\sim N(\mu, \sigma^2)$ where the mean $\mu$ is unknown but the variance is known, $\sigma^2=1$. Now, to answer the question of how this could be possible, we can consider one way to relate the mean to the variance and see if we can use this method to back out what the mean should be. I'll use the fact that $Var(X)=\mathbb{E}[(X-\mu)^2]$. So,
$$1=Var(X)=\mathbb{E}[(X-\mu)^2]=\mathbb{E}[X^2]-\mathbb{E}[X]^2=(\mu^2 + 1)-\mu^2 = 1$$
Here I use the fact that the second moment of a normal is $\mathbb{E}[X^2]=\mu^2 + \sigma^2$. Notice how we just get back $1$ again, which means we have two unknowns and only one equation, so the mean $\mu$ is free to vary.
This is just one example that we can know the variance but not the mean. Intuitively, the mean is like the "location" of a random variable, and the variance is how much that random variable is "spaced out" around that location. But we could always move that location around and keep the same variance. Hope that helps!
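A one-line way to see this numerically (a Python sketch; the shifts are arbitrary): adding a constant to data moves the mean but leaves the variance untouched, so the same known variance is compatible with any mean:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10_000)

# shifting the data moves the mean but not the variance, so many
# different means are consistent with the same known variance
for mu in (-3.0, 0.0, 7.5):
    shifted = x + mu
    assert np.isclose(np.var(shifted), np.var(x))
    assert np.isclose(shifted.mean(), x.mean() + mu)
```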
Edit:
I am interpreting this question as asking how we could have a random variable whose mean we do not know but whose variance is known (as in the example above). That is, given a random variable and its variance, can we somehow always back out the mean? Another interpretation could be how it is possible for $\mathbb{E}[X]$ to be undefined. For this I would give the example of a Cauchy distribution.
33,669 | Is it possible for a distribution to have known variance but unknown mean? | Say that you want to measure a voltage $V(t)$ that is varying in time and has a certain additive noise $e(t)$ (a random variable), for example,
$$
V(t) = \sin(ft) + e(t)
$$
Often one assumes that $e(t)$ has a constant zero mean and constant positive variance. One is interested in $\sin(ft)$, but the variance of $e(t)$ is unknown; how should one proceed? One way is to jointly handle $\sin(ft)$ and the variance of $e(t)$ simultaneously, but that may turn out to be complicated. Another way is to measure the voltage with no signal, i.e.,
$$
V(t) = e(t)
$$
from which you can estimate the variance of $e(t)$. Now you can go back to the original problem with the estimated variance of $e(t)$, where you now assume that you know it. This often simplifies the analysis of the original problem.
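A small Python sketch of the two-step procedure (signal shape, frequency, and noise level are made-up numbers for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
true_var = 0.25                         # unknown in practice; used only to simulate

# step 1: record with the signal off, V(t) = e(t), and estimate Var(e)
noise_only = rng.normal(0.0, np.sqrt(true_var), size=100_000)
var_hat = noise_only.var(ddof=1)

# step 2: analyze the real measurement, now treating var_hat as known
t = np.linspace(0.0, 1.0, 1_000)
f = 2.0 * np.pi * 5.0                   # an assumed signal frequency
v = np.sin(f * t) + rng.normal(0.0, np.sqrt(true_var), size=t.size)
```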
33,670 | What is the "opposite" of a random variable? | A random variable which is not actually random, and doesn't change by chance, is by definition a constant. But it is still a random variable. Since the definition of a random variable is a superset of the definition of a constant random variable, I believe there is no conceptual opposite.
33,671 | What is the "opposite" of a random variable? | One thing that might be worth noting is that in the formal definition, a random variable is a function -- in particular, a measurable function $X: \Omega \to E$ from a set of possible outcomes $\Omega$ (which is in fact a probability space) to a measurable space $E$.
Along the same lines of @gunes answer (+1), it doesn't quite make sense to discuss the opposite of a function -- you could say it's a constant, but how would you consider a function such as $f(x) = 0$? Is it "more" constant than other functions? It's a bit like comparing apples and oranges, since functions and scalars are very different types of objects.
I think your question is more around the use of the word "variable", which can be a bit confusing. For instance, in algebra you might encounter a problem such as "Find the roots of the equation $x^2-9=0$". Here, $x$ is a "variable", but it takes on a deterministic value (namely, $x = \pm 3$) and can really be considered a scalar since $x \in {\Bbb R}$. There's no presumption of it representing a relationship between some event and an associated probability, so it's not considered a random variable.
33,672 | What is the "opposite" of a random variable? | A non-random variable is generally called a Constant. But constants are not really the opposite of random variables, in the same way integers are not the opposite of real numbers - they're a subset.
A constant is just a random variable with all its probability mass concentrated at one point (i.e. it has a Dirac delta function as its probability distribution).
33,673 | What is the "opposite" of a random variable? | I would say a deterministic variable.
Examples:
Random variable - the number of heads when a coin is tossed 100 times.
Deterministic variable - the age of the Eiffel Tower exactly 12 years from now.
33,674 | How do I test if regression slopes are statistically different? | Assuming you have the original data and not just the summary of the fits, the general solution to this problem is to fit a model with an interaction, i.e. to go back to the data and fit the model
$$
Y = \beta_0 + \beta_1 I(t>t_I) + \beta_2 (t-t_I) + \beta_3 I(t>t_I) (t-t_I)
$$
where $I(t>t_I)$ is an indicator variable, i.e. =1 if $t>t_I$ and 0 otherwise. In this formulation,
$\beta_0$ represents the pre-intervention mean at $t=t_I$
$\beta_1$ represents a discontinuous jump in the mean at $t=t_I$ (depending on your problem, you may choose to leave this out of the model)
$\beta_2$ represents the slope before the intervention
$\beta_3$ represents the change in slope before vs. after: that is, $\beta_2 + \beta_3$ is the slope after the intervention. A standard t-test against the null hypothesis $\beta_3=0$ is a test of the slope difference.
You might look for a deeper treatment of this under the rubrics of regression discontinuity designs (usually when the predictor is not time), or changepoint analysis/interrupted time series analysis (when the predictor is time).
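Here is a minimal Python sketch of fitting this interaction model by least squares on simulated data (the effect sizes and noise level are made up); the t statistic for $\beta_3$ is the slope-change test, with the p-value coming from a $t$ distribution with $n-4$ degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
t_I = 5.0                                   # intervention time (made up)
ind = (t > t_I).astype(float)               # the indicator I(t > t_I)

# simulate: slope 1.0 before, 2.5 after (beta_3 = 1.5), no jump, noise sd 0.5
y = 3.0 + 1.0 * (t - t_I) + 1.5 * ind * (t - t_I) + rng.normal(0.0, 0.5, t.size)

# design matrix [1, I, (t - t_I), I * (t - t_I)]
X = np.column_stack([np.ones_like(t), ind, t - t_I, ind * (t - t_I)])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (len(t) - X.shape[1])  # residual variance
cov = s2 * np.linalg.inv(X.T @ X)           # covariance of the coefficient estimates
t_stat = beta[3] / np.sqrt(cov[3, 3])       # t statistic for H0: beta_3 = 0
```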
33,675 | How do I test if regression slopes are statistically different? | If you have two regressions of $Y$ onto $X$, one for group $A$ and another for group $B$, you can test for a difference in regression slopes thus:
Positivist null hypothesis:
$H_{0}^{+}: \beta_{A} - \beta_{B} = 0,$ with $H_{\text{A}}^{+}: \beta_{A} - \beta_{B} \ne 0$
Test statistic for the positivist null hypothesis:
$$t = \frac{\beta_{A}-\beta_{B}}{s_{\hat{\beta}_{A}-\hat{\beta}_{B}}}$$
Where $t$ has $n_{A} + n_{B} - 4$ degrees of freedom, and $s_{\hat{\beta}_{A}-\hat{\beta}_{B}} = \sqrt{s_{\hat{\beta}_{A}}^{2}+s_{\hat{\beta}_{B}}^{2}}$ if $n_{A} = n_{B}$ as your design suggests. (And $s_{\hat{\beta}_{A}}$ and $s_{\hat{\beta}_{B}}$ are the standard errors of the slopes for $A$ and $B$.)
Obtain the p-value for $t$ thus:
$$p = P\left(|T_{\text{df}}|\ge |t| \right)$$
Reject $H^{+}_{0}$ if $p \le \alpha$.
You can (and should) also test for equivalence of regression slopes within $\delta$ (the smallest relevant difference in slopes between $A$ and $B$ which you care about) thus:
Negativist null hypothesis (general form):
$H_{0}^{-}: |\beta_{A} - \beta_{B}| \ge \delta,$ with $H_{\text{A}}^{-}: |\beta_{A} - \beta_{B}| < \delta$
Negativist null hypothesis (two one-sided tests):
$H_{01}^{-}: \beta_{A} - \beta_{B} \ge \delta,$ with $H_{\text{A}}^{-}: \beta_{A} - \beta_{B} < \delta$
$H_{02}^{-}: \beta_{A} - \beta_{B} \le -\delta,$ with $H_{\text{A}}^{-}: \beta_{A} - \beta_{B} > -\delta$
Test statistics for the negativist null hypothesis:
$$t_{1} = \frac{\delta- \left(\beta_{A}-\beta_{B}\right)}{s_{\hat{\beta}_{A}-\hat{\beta}_{B}}}\\
t_{2} = \frac{(\beta_{A}-\beta_{B})+\delta}{s_{\hat{\beta}_{A}-\hat{\beta}_{B}}}$$
Where both $t$s have $n_{A} + n_{B} - 4$ degrees of freedom, and $s_{\hat{\beta}_{A}-\hat{\beta}_{B}} = \sqrt{s_{\hat{\beta}_{A}}^{2}+s_{\hat{\beta}_{B}}^{2}}$ if $n_{A} = n_{B}$ as your design suggests.
Obtain the p-value for both $t$s thus (both test statistics are constructed to be one-sided tests with upper-tail p-values):
$$p_{1} = P\left(T_{\text{df}} \ge t_{1} \right)$$
$$p_{2} = P\left(T_{\text{df}} \ge t_{2} \right)$$
Reject $H^{-}_{01}$ if $p_{1} \le \alpha$, and reject $H^{-}_{02}$ if $p_{2} \le \alpha$. You can only reject $H^{-}_{0}$ if you reject both $H_{01}^{-}$ and $H_{02}^{-}$.
Combining the results from both tests gives you four possibilities (for $\alpha$ level of significance, and $\delta$ relevance threshold):
Reject $H_{0}^{+}$ and fail to reject $H_{0}^{-}$, so conclude: relevant difference in slopes.
Fail to reject $H_{0}^{+}$ and reject $H_{0}^{-}$, so conclude: equivalent slopes.
Reject $H_{0}^{+}$ and reject $H_{0}^{-}$, so conclude: trivial difference in slopes (i.e. there is a significant difference in slopes, but a priori you do not care about differences this small).
Fail to reject $H_{0}^{+}$ and fail to reject $H_{0}^{-}$, so conclude: indeterminate results (i.e. your data are under-powered to say anything about the slopes' difference for a given $\alpha$ and $\delta$).
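The positivist test is easy to carry out by hand. Here is a numpy sketch on two simulated groups (group labels, slopes, and noise level are all made-up assumptions): each group's slope and standard error come from the usual simple-regression formulas, and the difference is scaled by $\sqrt{s_{\hat\beta_A}^2 + s_{\hat\beta_B}^2}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_and_se(x, y):
    """OLS slope of y on x and its standard error."""
    b1, b0 = np.polyfit(x, y, 1)                 # slope first, then intercept
    resid = y - (b0 + b1 * x)
    s2 = resid @ resid / (len(x) - 2)            # residual variance
    se = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))
    return b1, se

n = 50
x = np.linspace(0.0, 1.0, n)
yA = 2.0 * x + rng.normal(0.0, 0.1, n)           # group A: true slope 2.0
yB = 1.0 * x + rng.normal(0.0, 0.1, n)           # group B: true slope 1.0

bA, seA = slope_and_se(x, yA)
bB, seB = slope_and_se(x, yB)
t = (bA - bB) / np.sqrt(seA**2 + seB**2)         # refer to t with n_A + n_B - 4 df
print(bA, bB, t)
```

Comparing $t$ to the $t_{n_A+n_B-4}$ distribution gives the p-value; here the slopes differ by 1.0, so $|t|$ comes out far beyond any usual critical value.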
33,676 | Need intuition about independence of events | Independence means the Venn diagram can be drawn in a simpler way.
After presenting a simple analysis, which is trivial but enlightening, I offer a way of visualizing and generalizing independence and then discuss some of its uses and implications.
Analysis
Two events $A$ and $B$ in the same probability space $\Omega$ determine four events altogether by means of their complements ${A}^\prime = \Omega\setminus A$ and ${B}^\prime = \Omega\setminus B$; namely, the four possible nontrivial intersections $A\cap B$, $A\cap B^\prime$, $A^\prime \cap B$, and $A^\prime \cap B^\prime$. These four events are mutually exclusive--any two have null intersection--and their union is all of $\Omega$.
In general, the probabilities associated with these four intersections could be any values consistent with the axioms: they must be non-negative and sum to unity. (This implies three parameters are needed to describe all such probabilities; the fourth probability is determined by the sum-to-unity constraint.) But when $A$ and $B$ are independent, this simplifies.
Recall that $A$ and $B$ are independent when $\Pr(A\cap B)=\Pr(A)\Pr(B)$. Notice this implies that $A$ and $B^\prime$ are independent, because
$$\eqalign{\Pr(A)&=\Pr(A\cap \Omega) = \Pr(A\cap(B\cup B^\prime))=\Pr((A\cap B)\cup(A\cap B^\prime)) \\&= \Pr(A\cap B)+\Pr(A\cap B^\prime)}$$
implies
$$\eqalign{\Pr(A\cap B^\prime) &= \Pr(A) - \Pr(A\cap B) = \Pr(A) - \Pr(A)\Pr(B) = \Pr(A)\left(1 - \Pr(B)\right) \\&= \Pr(A)\Pr(B^\prime).}$$
Exchanging the roles of $A$ and $B$ in this argument shows $A^\prime$ and $B$ are independent and, finally, replacing $B$ with $B^\prime$ (whence $B^{\prime\prime}=B$) shows $A^\prime$ and $B^\prime$ are independent.
Visualization
This analysis can be depicted by representing $\Omega$ (abstractly) as an interval of points on an axis. $A$ is a subset of this interval and $A^\prime$ is the remainder of the interval. I will make the lengths of these subintervals proportional to their probabilities.
Let's erect another vertical axis, again representing $\Omega$, on which we may draw $B$. We are free to re-order the elements of $\Omega$ on this axis so that $B$ also appears as a subinterval and $B^\prime$ is the remainder, again drawn with lengths proportional to their probabilities.
These intervals determine rectangles in the figure, as shown. Independence of $A$ and $B$ means the relative areas of the rectangles are their probabilities.
Discussion
Now only two parameters, instead of three, are needed to describe all possible probability distributions: $\Pr(A)$ and $\Pr(B)$ completely determine all the rectangle areas.
This idea generalizes. Let $A_1, A_2, \ldots, A_m$ be events that partition $\Omega$: that is, the intersection of any distinct pair of them is empty and their union is $\Omega$. Let $B_1, B_2, \ldots, B_n$ be another partition. These two partitions are independent when $\Pr(A_i\cap B_j) = \Pr(A_i)\Pr(B_j)$ for all $i,j$. We may draw a similar figure in which the $A_i$ are a sequence of non-overlapping line segments on the x axis and the $B_j$ are a sequence of non-overlapping line segments on the y axis, each with a length proportional to the probability. This generalized idea of independence simply means the probabilities of all $m\times n$ rectangles formed by these segments are determined by the $m$ probabilities for the $A_i$ and the $n$ probabilities for the $B_j$. That replaces $mn$ numbers (subject to a single sum-to-unity constraint) by $m+n$ numbers (subject to two separate sum-to-unity constraints). The reduction in parameter counts from $mn-1$ to $m+n-2$ quantifies how much simplification has occurred. It's substantial.
This kind of diagram can help your intuition in various ways. When you think of independence, think of two one-dimensional axes filling out a two-dimensional region and think of areas of rectangles determined by the lengths of their sides. If you progress in your study of probability far enough theoretically, eventually you will encounter generalizations in which the concept of independence extends to "sub sigma algebras." (A sub sigma algebra is a collection of events having some additional properties that don't matter here. It's a way to generalize the finite partitions, as previously described, into infinite partitions.) If you visualize a "sub sigma algebra" as a collection of intervals on a line (although this time they may overlap each other), you will not need to enlarge or modify your intuition one bit: this hugely general and abstract definition of independence merely says that any rectangle formed by a set on the x-axis and a set on the y-axis has a probability proportional to its area.
Yet another generalization extends to independence of three or more sets (or sub sigma algebras). Visualize these by adding more axes to the picture: a third axis in a third dimension for the third set (now the relevant probabilities are volumes of cuboids), and so on. In effect, independence lets us break down a potentially complicated probability space into simpler "one-dimensional" components, almost in the same way we analyze vectors (in vector spaces) in terms of their components.
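The rectangle picture can be verified numerically: under independence, the whole $m\times n$ table of intersection probabilities is just the outer product of the two marginal vectors. A small numpy sketch (the particular marginals are made up for illustration):

```python
import numpy as np

pA = np.array([0.2, 0.5, 0.3])           # P(A_1..A_3), one partition of Omega
pB = np.array([0.1, 0.4, 0.25, 0.25])    # P(B_1..B_4), another partition

joint = np.outer(pA, pB)                 # P(A_i and B_j) under independence
assert np.isclose(joint.sum(), 1.0)      # still a valid probability distribution

# m*n - 1 = 11 free joint probabilities collapse to (m-1) + (n-1) = 5 parameters.
print(joint.shape, joint[1, 2], pA[1] * pB[2])
```

Every cell is the product of its row and column marginals, which is exactly the "area of a rectangle determined by the lengths of its sides" picture.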
33,677 | Need intuition about independence of events | There are a few ways to show that two events are independent, but I'll work through one of them to address your two scenarios.
Scenario 1 - The Figure
As you point out, events $A$ and $B$ are independent in the situation you describe. This is because:
$$P(A \cap B) = P(A)P(B) \\
P(x_2) = [P(x_1)+P(x_2)] \times [P(x_2)+P(x_3)]\\
0.25 = [0.25+0.25] \times [0.25+0.25] \\
0.25 = 0.25$$
But why does the fact that $P(A \cap B) = P(A)P(B)$ mean that events $A$ and $B$ are independent? We can show this using the definition of conditional probability:
$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)P(B)}{P(B)} = P(A)$$
Or, in English, the probability that event $A$ occurs given that event $B$ occurs is equal to the probability of event $A$ occurring with no knowledge regarding $B$. This means that event $B$ provides us with no information about $A$ (the opposite can be shown the exact same way) and this means that the two events occur independently of each other.
Scenario 2 - Outcome $x_5$
You haven't given us quite enough information to work through this fully, but if we assume that adding $x_5$ as you described does not affect $P(A)$ or $P(B)$ then $A$ and $B$ are still independent by the calculation shown above.
If we assume that $P(x_i) = 0.2$ now, then we can show:
$$P(A | B) = \frac{P(A \cap B)}{P(B)} \\
P(A | B) = \frac{P(x_2)}{P(x_2)+P(x_3)} \\
P(A | B) = \frac{0.2}{0.4} \\
P(A | B) = 0.5 \ne P(A) = 0.4$$
The intuitive explanation of these numbers is that in this new scenario, whether or not $B$ has occurred affects the likelihood that $A$ has occurred:
Without any information about the outcome, we know that $A$ has a $40\%$ chance of occurring;
if we are told that $B$ has occurred, however, then we know that $A$ has a $50\%$ chance of occurring; and
if we are told that $B$ has NOT occurred, then we know that $A$ has a $33\%$ chance of occurring.
Event $A$ is more likely or less likely to occur based on whether or not event $B$ has occurred, which means that the events are not independent and, equivalently, that they are dependent.
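Both scenarios can be checked by brute force over the atoms. A short Python sketch, using the event definitions from the figure ($A=\{x_1,x_2\}$, $B=\{x_2,x_3\}$) and the uniform probabilities assumed in this answer:

```python
def cond(p, event, given):
    """P(event | given) from a dict of atom probabilities."""
    inter = sum(p[x] for x in event & given)
    return inter / sum(p[x] for x in given)

A, B = {"x1", "x2"}, {"x2", "x3"}

# Scenario 1: four equally likely atoms -> P(A | B) = P(A), independent.
p4 = {x: 0.25 for x in ("x1", "x2", "x3", "x4")}
assert cond(p4, A, B) == sum(p4[x] for x in A) == 0.5

# Scenario 2: add x5 with uniform 0.2 each -> no longer independent.
p5 = {x: 0.2 for x in ("x1", "x2", "x3", "x4", "x5")}
print(cond(p5, A, B), sum(p5[x] for x in A))   # 0.5 versus 0.4
```

The second print reproduces the $P(A\mid B)=0.5 \ne P(A)=0.4$ calculation above.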
Scenario 1 - The Figure
As you point out, events $A$ and $B$ are independ | Need intuition about independence of events
There are a few ways to show that two events are independent, but I'll work through one of them to address your two scenarios.
Scenario 1 - The Figure
As you point out, events $A$ and $B$ are independent in the situation you describe. This is because:
$$P(A \cap B) = P(A)P(B) \\
P(x_2) = [P(x_1)+P(x_2)] \times [P(x_2)+P(x_3)]\\
0.25 = [0.25+0.25] \times [0.25+0.25] \\
0.25 = 0.25$$
But why does the fact that $P(A \cap B) = P(A)P(B)$ mean that events $A$ and $B$ are independent? We can show this using Bayes's Rule:
$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)P(B)}{P(B)} = P(A)$$
Or, in English, the probability that event $A$ occurs given that event $B$ occurs is equal to the probability of event $A$ occurring with no knowledge regarding $B$. This means that event $B$ provides us with no information about $A$ (the opposite can be shown the exact same way) and this means that the two events occur independently of each other.
Scenario 2 - Outcome $x_5$
You haven't given us quite enough information to work through this fully, but if we assume that adding $x_5$ as you described does not affect $P(A)$ or $P(B)$ then $A$ and $B$ are still independent by the calculation shown above.
If we assume that $P(x_i) = 0.2$ now, then we can show:
$$P(A | B) = \frac{P(A \cap B)}{P(B)} \\
P(A | B) = \frac{P(x_2)}{P(x_2)+P(x_3)} \\
P(A | B) = \frac{0.2}{0.4} \\
P(A | B) = 0.5 \ne P(A) = 0.4$$
The intuitive explanation of these numbers is that in this new scenario, whether or not $B$ has occurred affects the likelihood that $A$ has occurred:
Without any information about the outcome, we know that $A$ has a $40\%$ chance of occurring;
if we are told that $B$ has occurred, however, then we know that $A$ has a $50\%$ chance of occurring; and
if we are told that $B$ has NOT occurred, then we know that $A$ has a $33\%$ chance of occurring.
Event $A$ is more likely or less likely to occur based on whether or not event $B$ has occurred, which means that the events are not independent and, equivalently, that they are dependent. | Need intuition about independence of events
There are a few ways to show that two events are independent, but I'll work through one of them to address your two scenarios.
Scenario 1 - The Figure
As you point out, events $A$ and $B$ are independ |
33,678 | Need intuition about independence of events | The intuition of independence is clearer if you think about conditional probability. Let us define the conditional probability $P(B \mid A) := P(A \cap B) / P(A)$; intuitively, this is the probability that $B$ is true given that you know $A$ is true. In terms of the Venn diagram, this is the proportion of $A$ that the intersection $A \cap B$ takes up.
Then, independence would imply $$P(B \mid A) = \frac{P(A \cap B)}{P(A)} = \frac{P(A) P(B)}{P(A)} = P(B),$$
which in English means that the probability of $B$ did not "change" even after gaining the extra knowledge that $A$ is true. This is the sense of "independence."
In terms of actual numbers in your example, we have $P(B) = 1/2$, but also $P(B \mid A) = \frac{1/4}{1/2} = \frac{1}{2}$, so gaining the knowledge that $A$ is true did not "affect" the probability of $B$, hence "independence."
Edit: the reason why, if you add $x_5$ and again assuming uniformity over the atoms ($P(x_i) = 0.2$ for all $i$), independence breaks is that $P(B) = 2/5$, but if you now know that $A$ is true, then the probability of $B$ being true is $P(B \mid A) = P(A \cap B) / P(A) = \frac{1/5}{2/5} = 1/2$, because the proportion of $A$ that is occupied by the event $B$ is larger than the proportion of the entire sample space that is occupied by the event $B$.
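The same conclusion shows up in simulation: among uniform draws where $A$ occurred, the empirical frequency of $B$ matches $P(B)=1/2$ in the four-atom case, but exceeds $P(B)=2/5$ once $x_5$ is added. A quick sketch (sample size and seed are arbitrary choices):

```python
import random

random.seed(42)
A, B = {"x1", "x2"}, {"x2", "x3"}

def cond_freq(atoms, n=100_000):
    """Empirical P(B | A) from n uniform draws over the given atoms."""
    draws = [random.choice(atoms) for _ in range(n)]
    in_A = [x for x in draws if x in A]
    return sum(x in B for x in in_A) / len(in_A)

f4 = cond_freq(["x1", "x2", "x3", "x4"])        # about 0.5, equal to P(B)
f5 = cond_freq(["x1", "x2", "x3", "x4", "x5"])  # about 0.5, but P(B) = 0.4
print(f4, f5)
```

Conditioning on $A$ leaves the frequency of $B$ unchanged only in the four-atom case, which is exactly the independence statement $P(B\mid A)=P(B)$.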
33,679 | Need intuition about independence of events | Let's say that you have $n$ elements in your finite sample space, $x_1, x_2, \cdots x_n$, each with equal probability $P(x_i)=1/n$.
Let $i, j, k$ all be different and between $1$ and $n$. For the ease of notation I write $x_i$ in stead of $\{x_i\}$ in the formulas below.
Then $P(x_i \cup x_j)=2/n$ while $P(x_i \cup x_j|_{x_j \cup x_k})=0.5$.
So both are equal iff $n=4$. This means that, if $n>4$ then the fact that you already observed $x_j \cup x_k$ gives you additional information on the occurrence of $x_i \cup x_j$ because knowing that you already observed $x_j \cup x_k$ makes it more probable ($P(x_i \cup x_j|_{x_j \cup x_k})=0.5$) to observe $x_i \cup x_j$ compared to lacking the knowledge that $x_j \cup x_k$ was observed ($P(x_i \cup x_j)=2/n<0.5$ when $n>4$).
If $n=4$ then the fact that you observed $x_j \cup x_k$ gives you no additional observation on the probability of observing $x_i \cup x_j$.
So the fact that your sample space is small (and the equality only holds for $n=4$ in this setting) creates cases where observing one event brings no knowledge about the probability of observing another, but that may change when you have larger ($n>4$) sample spaces.
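The dependence on $n$ is easy to tabulate with exact rational arithmetic; this sketch compares the unconditional $P(x_i\cup x_j)=2/n$ with the conditional $P(x_i\cup x_j \mid x_j\cup x_k) = (1/n)/(2/n) = 1/2$ for a few values of $n$:

```python
from fractions import Fraction

results = {}
for n in range(4, 9):
    p_atom = Fraction(1, n)          # uniform atoms x_1 .. x_n
    p_E1 = 2 * p_atom                # P(x_i or x_j)
    p_E2 = 2 * p_atom                # P(x_j or x_k)
    p_both = p_atom                  # the overlap is just {x_j}
    results[n] = (p_E1, p_both / p_E2)
    print(n, results[n])             # unconditional vs conditional probability
```

The two columns agree only at $n=4$, confirming that the independence in the original question is special to a four-atom sample space.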
33,680 | Need intuition about independence of events | Let's go through your two coins example provided in the comment. The data presented in the table looks like below.
$$
\begin{array}{ccccc}
X_1 & X_2 & & & \text{Prob} \\
t & t & & B & 1/4\\
t & h & & & 1/4\\
h & t & A & & 1/4\\
h & h & A & B & 1/4
\end{array}
$$
You can observe that $P(A) = 0.5$ and $P(B) = 0.5$. The events are independent since $P(A \cap B) = P(A)\,P(B) = 0.25$. You say that the result is "unintuitive" because
... knowing that $B$ has occurred eliminates the possibility of a HT
outcome which is one of the outcomes in event $A$. So, intuitively
occurrence of $B$ has an effect on $A$ ...
But look at the table once again. If $B$ has occurred (rows 1 & 4), then $A$ occurs in only half of the cases (row 4), so $P(A) = P(A|B) = 0.5$. Same if $A$ has occurred (rows 3 & 4), then $P(B) = P(B|A) = 0.5$. They are independent. So knowing that one of the events occurred tells you nothing about the other. This makes perfect sense.
Regarding your comment,
... For example I don't agree with your sentence "knowing that one of the
events occurred tells you nothing about the other". Because knowing $B$
tells me something about $A$, namely, the HT outcome in $A$ cannot occur.
I guess that this is where your confusion has its roots. The event $A$ is defined as $X_1 = h$, so event $A$ is observed, or not, no matter what $X_2$ is. For $A$ to happen it doesn't matter that $ht$ cannot occur. Obviously the event $X_1 = h \land X_2 = t$ depends on $B$ since $P(X_1 = h \land X_2 = t) = 0.25$ while $P(X_1 = h \land X_2 = t \mid B) = 0$, but the event $A$ itself does not.
Imagine that the two coin tosses happen behind a curtain and you don't see the outcomes of the tosses, but you are only informed about occurrence of $A$ or $B$. Information about happening of one of the events does not help you to guess if the second one has happened.
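The table can be reproduced by enumerating the four equally likely outcomes, with $A$: first coin is heads, and $B$: the two coins match (the rows marked $B$ in the table above). A quick check:

```python
from itertools import product

outcomes = list(product("ht", repeat=2))     # (X1, X2), each with probability 1/4
A = {o for o in outcomes if o[0] == "h"}     # first coin heads
B = {o for o in outcomes if o[0] == o[1]}    # tt and hh

pA = len(A) / 4
pA_given_B = len(A & B) / len(B)
print(pA, pA_given_B)                        # both 0.5; B tells us nothing about A
```

Knowing $B$ removes the outcome $ht$ from consideration, yet the fraction of remaining outcomes in which $A$ holds is unchanged, which is all that independence asserts.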
33,681 | Need intuition about independence of events | Just change the probabilities, say $$P_1=1/2,\quad P_2=1/4,\quad P_3=1/8,$$ so that now
$$P_A=3/4,\qquad P_B=3/8.$$
Hence, $$P(A\cap B)=1/4,\qquad P(A)\times P(B)=9/32,$$
and $$P(A\cap B)\ne P(A)\times P(B).$$
So, you constructed (albeit accidentally) a case where it just happens that $P(A\cap B)= P(A)\times P(B)$ for a special set of probabilities $P_i$. Generally, for another set of probabilities $P_i$ this will not hold.
33,682 | Need intuition about independence of events | I believe the idea of independence is, using your Venn diagram, that the proportion of A in B is equal to the proportion of A in Ω. That is, AB takes up half the space of B, exactly as much as A takes out of Ω.
If we now interpret the Venn diagram in terms of probabilities, this means that given that B has occurred, A has a ½ chance of occurring (= 0.25/0.5), which is exactly the probability of A occurring anyway (= 0.5/1); therefore knowledge/occurrence of B does not change the probability of A.
If you add a fifth outcome, so that each outcome is now assigned a probability of 0.2, then those proportions are not equal anymore.
We can generalize from your illustrative example and this geometric intuition to define independence as equal proportions, P(A|B) = P(A); that is, the probability of A happening (in general) equals the probability of it happening (specifically) when B has happened, just as in your Venn diagram.
Now P(A|B) = P(AB) / P(B), which is the proportion that A occupies in B.
Substituting P(A) for P(A|B), we get the formal definition of independence for two events: P(AB) = P(A)P(B).
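The proportion argument can be checked mechanically for the two-fair-coins setup (a small sketch; the particular events, A = first coin is heads and B = both coins show the same face, are an assumption taken from the question's table):

```python
from itertools import product
from fractions import Fraction

outcomes = list(product("ht", repeat=2))   # 4 equally likely outcomes
A = {o for o in outcomes if o[0] == "h"}   # assumed: first coin is heads
B = {o for o in outcomes if o[0] == o[1]}  # assumed: both coins match

p = Fraction(1, 4)                         # probability of each outcome
P_A, P_B = len(A) * p, len(B) * p
P_A_given_B = Fraction(len(A & B), len(B)) # proportion of A inside B
print(P_A, P_A_given_B, len(A & B) * p == P_A * P_B)  # 1/2 1/2 True
```

The proportion of A inside B equals the proportion of A inside the whole space, which is exactly the equal-proportions picture above.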
33,683 | Why are the Degrees of Freedom for multiple regression n - k - 1? For linear regression, why is it n - 2? [duplicate] | In linear regression, the degrees of freedom of the residuals is:
$$ \mathit{df} = n - k^*$$
Where $k^*$ is the number of parameters you're estimating, INCLUDING an intercept. (The residual vector will exist in an $n - k^*$ dimensional linear space.)
If you include an intercept term in a regression and $k$ refers to the number of regressors not including the intercept then $k^* = k + 1$.
Notes:
It varies across statistics texts how $k$ is defined, i.e. whether or not it includes the intercept term.
My notation of $k^*$ isn't standard.
Examples:
Simple linear regression:
In the simplest model of linear regression you are estimating two parameters:
$$ y_i = b_0 + b_1 x_i + \epsilon_i$$
People often refer to this as $k=1$. Hence we're estimating $k^* = k + 1 = 2$ parameters. The residual degrees of freedom is $n-2$.
Your textbook example:
You have 3 regressors (bp, type, age) and an intercept term. You're estimating 4 parameters and the residual degrees of freedom is $n - 4$.
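As a quick numerical illustration of the counting (a sketch with made-up data; when the columns of the design matrix are not collinear, its rank equals $k^*$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                         # observations
X = np.column_stack([np.ones(n),               # intercept column
                     rng.normal(size=(n, 3))]) # 3 regressors (e.g. bp, type, age)

k_star = np.linalg.matrix_rank(X)              # parameters estimated: k* = 4
df_resid = n - k_star                          # residual degrees of freedom
print(k_star, df_resid)                        # 4 16
```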
33,684 | Why are the Degrees of Freedom for multiple regression n - k - 1? For linear regression, why is it n - 2? [duplicate] | If I have $n$ observations, the data could have gone $n$ ways, but say I am estimating for 3 variables (including intercept), then really it could have only gone $(n-3)$ ways as I already have estimates of 3 things which control the data. That's my way of looking at it.
33,685 | What's wrong with this proposed resolution to the St Petersburg Paradox? | Let $K$ be some random variable.
In your problem, $K$ is the number of times you flip before getting heads.
Let $f(k)$ be some payoff function.
In your problem, $f(k) = 2^k$.
Let $f(K)$ be the payoff.
You're saying that a reasonable valuation of the gamble $f(K)$ is given by $f(\mathrm{E}[K])$. This is an entirely ad-hoc, rather unprincipled heuristic. Perhaps fine in some situations (e.g. where $K$ is small and $f$ is near linear), but it's easy to construct an example where it suggests something nonsensical.
Example where your system makes absolutely no sense
Let $K$ be a draw from the normal distribution $\mathcal{N}(0,10^{13})$ and let the payoff function be $f(K) = K^2$. Your system says I shouldn't pay more than $0$ for this gamble because $f(\mathrm{E}[K]) = 0^2 = 0$. But shouldn't you assign some positive value to this gamble?! There is a 100% probability the payoff is greater than zero!
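A quick Monte Carlo illustration of the gap between $f(\mathrm{E}[K])$ and $\mathrm{E}[f(K)]$ (a sketch; the variance is scaled down to $100^2$ just to keep the numbers readable, and the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.normal(loc=0.0, scale=100.0, size=1_000_000)

f_of_mean = np.mean(K) ** 2   # f(E[K]): essentially 0
mean_of_f = np.mean(K ** 2)   # E[f(K)]: close to the variance, 10^4
print(f_of_mean, mean_of_f)
```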
A more classic resolution of the St. Petersburg Paradox
One approach is to add risk aversion. If you're sufficiently risk averse, what you're willing to pay to play this infinite-expectation gamble will be finite. If you accept the von Neumann-Morgenstern axioms, then the certainty equivalent of playing the game is given by $z$ where:
$$u(w + z) = \mathrm{E}[ u(w + f(K)) ] $$
and where $w$ is your wealth and $u$ is a concave function (in jargon, a Bernoulli utility function) which captures your level of risk aversion. If $u$ is sufficiently concave, the valuation of $2^K$ will be finite.
A Bernoulli utility function with some nice properties turns out to be $u(x) = \log(x)$. Maximizing expected utility where the Bernoulli utility function is the log of your wealth is equivalent to maximizing the expected growth rate of your wealth. For simple binary bets, this gives you Kelly Criterion betting.
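As a rough numerical sketch of the certainty equivalent under $u = \log$ (the wealth level $w = 1000$ is an arbitrary choice, the payoff is the classic $2^k$ won with probability $2^{-k}$, and the series is simply truncated where the terms become negligible):

```python
import math

def certainty_equivalent(w, depth=200):
    # z solving log(w + z) = E[log(w + 2^K)], where P(K = k) = 2^-k
    expected_utility = sum(2.0 ** -k * math.log(w + 2.0 ** k)
                           for k in range(1, depth + 1))
    return math.exp(expected_utility) - w

print(certainty_equivalent(1000.0))  # about 11: a finite fair price
```

So even though the gamble has infinite expected value, a log-utility agent with $1000 of wealth values it at only around eleven dollars.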
An important other point is that the risk aversion approach leads to different certainty equivalents depending on what side of the gamble you are on.
33,686 | What's wrong with this proposed resolution to the St Petersburg Paradox? | There's nothing wrong with that proposed resolution.
In the original paradox we look at the expected value (mean) of the profit, which is infinite, and therefore you should stake an infinite amount. However, after the first flip of the coin there's a 50% chance that you've lost money, and that is why people don't like it. Your resolution just formalizes this: instead of looking at the mean profit, you are looking at the median profit. Unlike the mean profit, the median profit is finite and the paradox goes away.
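A small simulation of this mean-versus-median distinction (a sketch; the payoff is $2^k$ after $k$ flips, and the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.geometric(0.5, size=100_000)  # flips until the first head
payoff = 2.0 ** k

# The sample mean keeps growing as you add samples; the median does not.
print(np.median(payoff), np.mean(payoff))
```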
33,687 | What's wrong with this proposed resolution to the St Petersburg Paradox? | If I understand correctly, your analysis is:
Calculate the expected number of coin-flips required to get a head.
Calculate the payout for the outcome where you get exactly the expected number.
Value the game equal to that payout.
...OK, let's modify that game a little bit. Just like the original version, I will flip a coin and keep flipping until I throw heads. Only the payouts have changed:
If I flip heads on the second throw, you get four dollars.
On any other outcome, you lose everything you own and have to come work for me forever, for free.
How many flips do we expect before we get a head? 2, exactly the same as before.
What is the payout for the outcome where it takes exactly two flips to get a head? $4.00, exactly the same as before.
How much would you be willing to pay for the 'privilege' of playing this game that has a 75% chance of bankrupting you and a 25% chance of returning $4.00?
I suspect the answer is not "up to four dollars, exactly the same as before". Which means there's a hole in your logic.
Taking a broader perspective, expected winnings are not necessarily enough information to answer this sort of question; usually it depends on some additional context. Is this a one-off opportunity or are you expecting to be offered this gamble many times? How much money do you have on hand? And how much money do you need to be happy?
For example, if my total wealth is $100 but I urgently need a million dollars for a life-saving operation, I would be willing to pay all my money for a single shot at the St. Petersburg gamble. It only gives me a 1/2^19 chance of winning the money I need, but if I don't play I have no chance at all.
On the other hand, if my total wealth is $1,000,000 and I need exactly a million dollars for that operation, the most I'd be willing to pay for a single game is two dollars (which I'm guaranteed to win back). Anything more, and I have a 1/2 chance of ending up short of the million bucks I need to save my life.
If I'm expecting to have many chances to play such games, then I probably want to choose a strategy that gives me a high probability of having lots of money at the end of all those games. For example:
Game A is guaranteed to increase my wealth by 10% every time I play it. (Expected winning: +10% of my current wealth.)
Game B has a 90% chance of doubling my wealth, and a 10% chance of bankrupting me. (Expected winning: +80% of my current wealth.)
If I play 100 iterations of Game A, I'm certain to multiply my wealth by 13,780 times.
If I play 100 iterations of Game B, I have a 0.0027% chance of becoming unimaginably wealthy (about 10^30 x what I started with)... and a 99.73% chance of being bankrupted. Even though the average is better than for Game A, it's not a good option.
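The figures quoted for the two games can be checked directly:

```python
growth_a = 1.1 ** 100         # Game A multiplier after 100 plays
p_survive_b = 0.9 ** 100      # Game B: chance of never going bankrupt
wealth_b = 2.0 ** 100         # Game B multiplier if you survive every round
print(round(growth_a), p_survive_b, wealth_b)  # 13781, ~2.66e-05, ~1.27e+30
```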
For this sort of heavily-iterated game, rather than trying to maximise my expected winnings in each game, I'm better off trying to maximise expected value of ln(total wealth after game/total wealth before game). This ensures long-term growth without getting wiped out.
If the stakes for every game are small relative to my total wealth, then this is approximately equivalent to maximising expected winnings in each game.
So, if you're playing lots of games and never risking a large portion of your current wealth, then the expected value of the gamble tells you all you need to know. In just about any other situation, you need to think about other things too.
33,688 | Prove that $(A^{-1} + B^{-1})^{-1}=A(A+B)^{-1}B$ | Assuming $A$, $B$, $A+B$, and $A^{-1}+B^{-1}$ are all invertible, note that
$$A^{-1} + B^{-1} = B^{-1} + A^{-1} = B^{-1}(A+B)A^{-1}$$
and then invert both sides (using $(XYZ)^{-1} = Z^{-1}Y^{-1}X^{-1}$), QED.
Symmetry is unnecessary for this to hold.
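The identity is also easy to spot-check numerically (a sketch; random Gaussian matrices, and their sum, are almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

lhs = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
rhs = A @ np.linalg.inv(A + B) @ B
print(np.allclose(lhs, rhs))  # True
```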
33,689 | Prove that $(A^{-1} + B^{-1})^{-1}=A(A+B)^{-1}B$ | Note that
$$ \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B}$$
is the inverse of
$$\left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) $$
if and only if
$$ \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B} \left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) = \mathbf{I} $$
and
$$ \left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B} = \mathbf{I} $$
so that the left and right inverses coincide. Let's prove the first statement. We can see that
$$\begin{align} \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B} \left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) & = \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \left(\mathbf{B} \mathbf{A}^{-1} + \mathbf{I} \right) \\ &= \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \left( \mathbf{A} + \mathbf{B} \right) \mathbf{A}^{-1} \\ & = \mathbf{I} \end{align} $$
as desired. A similar trick will prove the second statement as well. Thus $ \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B}$ is indeed the inverse of $\left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right)$.
33,690 | What does capital letter I mean in this formulas? | It's the indicator function! It takes value 1 if the condition inside the brackets is met, 0 otherwise.
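In code, the indicator is just a condition mapped to 0 or 1, e.g.:

```python
def indicator(condition: bool) -> int:
    """I(condition): 1 if the condition holds, 0 otherwise."""
    return 1 if condition else 0

print(indicator(3 > 2), indicator(3 < 2))  # 1 0
```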
33,691 | What does capital letter I mean in this formulas? | To add to stochazesthai's correct answer:
In your examples, the first usage indicates 1 in the case where the prediction of the $m$th tree does not equal the actual class. If it is not equal, then 1 is "returned" and so will affect the error term via the weight.
33,692 | Coefficient changes sign when adding a variable in logistic regression | In addition to the links to Simpson's paradox in the comments, here is another way to think about it.
Imagine a dataset that is collected by counting the numbers and types of coins that various people have with them (I will use US Currency for the example, but it could be translated to other currencies as well).
Now we create 3 variables: the y variable is an indicator for whether the change totals to more than 1 dollar (\$1.00), x1 is the total number of coins, and x2 is the total number of pennies (\$0.01) and nickels (\$0.05) (this will be a subset of x1). Now if regressed individually we would expect that x1 and x2 would have positive coefficients: the more coins, the more likely the total is over \$1. But if put into a regression model together then it makes sense for the coefficient on x2 to become negative; remember, the definition of the individual coefficient is the change in y (or, in the logistic case, the change in the log odds of y) for a 1 unit change in x while holding the other variables constant. So if we have the same number of total coins (x1) but increase the number of small value coins (x2), then we have fewer of the large value coins and so a smaller chance of totaling over \$1.
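A simulation in the spirit of this example (a sketch with made-up sizes: the small coins are all taken to be nickels and the rest quarters, and a linear probability model is fit instead of a logistic one; the sign-flip logic is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
small = rng.poisson(8, n)                 # x2: low-value coins (nickels, assumed)
other = rng.poisson(3, n)                 # higher-value coins (quarters, assumed)
x1 = small + other                        # total number of coins
y = (0.05 * small + 0.25 * other > 1.0).astype(float)  # totals over $1?

def last_coef(cols, y):
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][-1]

alone = last_coef([small], y)             # x2 by itself: positive
together = last_coef([x1, small], y)      # x2 holding total coins fixed: negative
print(alone > 0, together < 0)            # True True
```

Holding x1 fixed, an extra nickel displaces a quarter, so the conditional coefficient on x2 flips negative even though its marginal association with y is positive.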
33,693 | Coefficient changes sign when adding a variable in logistic regression | Predictors do change their signs in the presence of others in a model. I think you are seeing a special case of "suppression". Let me explain using correlations (this principle should be applicable to logistic regression). Say you are trying to predict the extent of fire damage done to a house ($Y$) from the severity of the fire ($X_1$) and the number of fire fighters sent to put out the fire ($X_2$). Assume $r_{YX_1}=0.65, \: r_{YX_2}=0.25, \: r_{X_1X_2}=0.70$. Then, if you compute semi-partial correlations,
$r_{Y(X_1X_2)} = \displaystyle\frac{0.65-0.25*0.70}{\sqrt{1-0.70^2}} = 0.67, \:
r_{Y(X_2X_1)} = \displaystyle\frac{0.25-0.65*0.70}{\sqrt{1-0.70^2}} = -0.29$
This is a case of suppression (albeit very slight) because $X_2$ suppressed the variance unaccounted for by $X_1$, resulting in $r_{Y(X_1X_2)} > r_{YX_1}$. Also, $X_2$'s semi-partial correlation ($r_{Y(X_2X_1)}$) switched its sign because its positive correlation with $Y$ was mainly through its large positive correlation with $X_1$. Conceptually this makes sense: if fire severity is held constant, sending more firefighters should result in less damage to a house (Messick & Van de Geer, 1981).
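The arithmetic above can be verified in a couple of lines (Python here, purely to check the numbers in the example):

```python
from math import sqrt

r_y1, r_y2, r_12 = 0.65, 0.25, 0.70  # the correlations assumed above

# semi-partial (part) correlations of X1 and X2 with Y
sr1 = (r_y1 - r_y2 * r_12) / sqrt(1 - r_12**2)
sr2 = (r_y2 - r_y1 * r_12) / sqrt(1 - r_12**2)
print(round(sr1, 2), round(sr2, 2))  # 0.67 -0.29
```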
In your case, you need to think about whether it makes sense that, while holding the time variable constant, the location distance of an amenity should be negatively related to the dependent variable. There are also some great posts on this issue on Cross Validated.
Answering your other questions, I do not believe your data are suffering from multicollinearity; otherwise, all predictors should show inflated standard errors and correspondingly higher p-values. Finally, of course you can add the travel-distance variable to the model, since it seems its true relationship was masked by irrelevant variance (which was 'suppressed' by other predictors).
It is really up to the original questions you were trying to answer by designing your study.
Reference
Messick, D.M. & Van de Geer, J.P. "A reversal paradox." Psychological Bulletin 90.3 (1981): 582.
33,694 | Coefficient changes sign when adding a variable in logistic regression | In my logistic regression the sign of coefficients of a variable (location distance of an amenity) changes based on other variables (with time -ve, with travel distance +ve) in the model. When the location distance is the only variable in the model, it has +ve sign.
This isn't surprising. It happens in ordinary regression as well. See the example in the image here
Should the variable need to maintain the +ve sign no matter what other variables are added in the model?
I don't see why this would be expected to be the case.
Does changing sign signify a multicollinearity issue?
Not necessarily multicollinearity; it can occur with quite ordinary non-orthogonality.
Some IVs are gaining significance while in a bivariate model, they didn't show significance and vice versa.
Sure, also common.
Is it okay to add variables that don't have much significance (ex: travel distance has a significance of 0.33 individually, but 0.05 when added with other variables) but becomes significant in the model?
Sure. It's also okay to add variables that aren't significant in either case (though if you throw in a large number of them it can cause problems). However, it sounds like you're doing variable selection; be very cautious about interpreting p-values/test statistics when you do that.
33,695 | Coefficient changes sign when adding a variable in logistic regression | I think this may be a case of ceteris paribus confusion. When travel distance is the only variable, the effect on the outcome is positive. If the outcome is a purchase, this might be explained by the fact that when an agent lives far away, a trip to the store is more expensive, so he is more likely to stock up if he's already there. People who live far away fill their carts all the way, but make fewer trips, compared to people who live closer. I would bet dollars to donuts this is also what you would find if you used only travel time in the model as your measure of cost.
When you have both travel distance and travel time in the model, the sign of the distance coefficient gives you sign of the effect holding travel time fixed. When distance gets longer, but the travel time stays constant, the effect becomes negative. How might distance get longer, but travel time remain the same? If the speed of travel on the road became faster, perhaps because it was a highway with a higher speed limit. The comparison you are now making when both variables are in the model is between two identical people who both live $X$ minutes from a store, but one lives further away and takes a highway to get there. That agent is less likely to make a purchase, perhaps because traveling on the highway is easier than taking the local roads on gas usage, or perhaps this is the road he uses to commute to work and he passes the store on the way home (a kind of omitted variable in your model).
To sum up, when the regressors are different, the coefficients correspond to different thought-experiment comparisons, and the interpretation changes accordingly. The changing signs do not necessarily indicate multicollinearity. Variable selection should be driven by theory, careful thought, and your ultimate goals.
33,696 | Coefficient changes sign when adding a variable in logistic regression | Nothing you said indicates to me that there is a problem with your models: they are all good answers to different questions. It is up to you to decide which question you want to answer, and thus which model you want to report.
33,697 | Dungeons & Dragons Attack hit probability success percentage | What you really want to know is how to do this calculation quickly--preferably in your head--during game play. To that end, consider using Normal approximations to the distributions.
Using Normal approximations, we can easily determine that two rolls for damage with a 2d6+2 have about a $30\%$ chance of equalling or exceeding $20$ and three rolls for damage have about a $95\%$ chance.
Using a Binomial distribution, we can estimate there is about a $108/343$ chance of rolling twice for damage and $27/343$ chance of rolling three times. Therefore, the net chance of equaling or exceeding $20$ is approximately
$$(0.30 \times 108 + 0.95 \times 27) / 343 \approx (32 + 25)/343 = 57/343 \approx 17\%.$$
(Careful consideration of errors of approximation suggested, when I first carried this out and did not know the answer exactly, that this number was likely within $2\%$ of the correct value. In fact it is astonishingly close to the exact answer, which is around $16.9997\%$, but that closeness is purely accidental.)
These calculations are relatively simple and easily carried out in a short amount of time. This approach really comes to the fore when you just want to know whether the chances of something are small, medium, or large, because then you can make approximations that greatly simplify the arithmetic.
Details
Normal approximations come to the fore when many activities are independently conducted and their results are added up--exactly as in this situation. Because the restriction to nonnegative health (which is not any kind of a summation operation) is a nuisance, ignore it and compute the chance that the opponent's health will decline to zero or less.
There will be three rolls of the 1d20 and, contingent upon how many of them exceed the opponent's armor, from zero to three rolls of the 2d6+2. This calls for two sets of calculations.
Approximating the damage distribution. We need to know two things: its mean and variance. An elementary calculation, easily memorized, shows that the mean of a d6 is $7/2$ and its variance is $35/12 \approx 3$. (I would use the value of $3$ for crude approximations.) Thus the mean of a 2d6 is $2\times 7/2 = 7$ and its variance is $2\times 35/12 = 35/6$. The mean of a 2d6+2 is increased to $7+2=9$ without changing its variance.
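These moments are easy to confirm by direct enumeration. A short check (Python here, used only to verify the arithmetic; the answer's own code below is in R):

```python
from fractions import Fraction
from itertools import product

# all 36 equally likely outcomes of 2d6, shifted by +2
vals = [a + b + 2 for a, b in product(range(1, 7), repeat=2)]
mean = Fraction(sum(vals), len(vals))
var = Fraction(sum(v * v for v in vals), len(vals)) - mean ** 2
print(mean, var)  # 9 35/6
```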
Therefore,
One roll for damages has a mean of $9$ and a variance of $35/6$. Because the largest possible damage is $14$, this will not reduce a health of $20$ to $0$.
Two rolls for damages have a mean of $2\times 9=18$ and a variance of $2\times 35/6=35/3\approx 12$. The square root of this variance must be around $3.5$ or so, indicating the health is approximately $(20-18)/3.5\approx 0.6$ standard deviations above the mean. I might use $0.5=1/2$ for a crude approximation.
Three rolls for damages have a mean of $27$ and variance of $35/2\approx 18$ whose square root is a little larger than $4$. Thus the health is around $1.5$ to $2$ standard deviations lower than the mean.
The 68-95-99.7 rule says that about $68\%$ of the results lie within one SD of the mean, $95\%$ within two SDs, and $99.7\%$ within three SDs. This information (which everyone memorizes) is on top of the obvious fact that no results are less than zero SDs from the mean. It applies beautifully to sums of dice.
Crudely interpolating, we may estimate that somewhere around $40\%$ or so will be within $0.6$ SDs of the mean and therefore the remaining $60\%$ are further than $0.6$ SDs from the mean. Half of those--about $30\%$--will be below the mean (and the other half above). Thus, we estimate that two rolls for damage have about a $30\%$ chance of destroying the enemy.
Similarly, it should be clear that when the mean damage is between $1.5$ and $2$ standard deviations above the health, destruction is almost certain. The 68-95-99.7 rule suggests that chance is around $95\%$.
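These rough figures can be reproduced directly from the Normal CDF. The sketch below (Python, with a continuity correction of $1/2$) lands close to the exact values of $33.6\%$ and $96.4\%$ quoted for the figure:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# P(total damage >= 20), approximated as P(Normal > 19.5)
p2 = 1 - norm_cdf(19.5, 18, sqrt(35 / 3))   # two damage rolls, ~0.33
p3 = 1 - norm_cdf(19.5, 27, sqrt(35 / 2))   # three damage rolls, ~0.96
print(p2, p3)
```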
This figure plots the true cumulative distributions of the final health (in black), their Normal approximations (in red), and the true chances of reducing the health to zero or less (as horizontal blue lines). These lines are at $0\%$, $33.6\%$, and $96.4\%$, respectively. As expected, the Normal approximations are excellent and so our approximately calculated chances are pretty accurate.
Estimating the number of rolls for damages. The comparison of a 1d20 to the armor class has three outcomes: doing nothing with a chance of $11/20$, rolling for half damages with a chance of $1/20$, and rolling for full damages with a chance of $8/20$. Tracking three outcomes over three rolls is too complicated: there will be $3\times 3\times 3=27$ possibilities falling into $10$ distinct categories. Instead of halving the damages upon equalling the armor, let's just flip a coin then to determine whether there will be full or no damages. That reduces the outcomes to an $11/20 + (1/2)\times 1/20 = 23/40$ chance of doing nothing and a $40/40 - 23/40 = 17/40$ chance of rolling for damages.
Since this is intended to be done mentally, note that the $17/40 = 8/20 + (1/2)\times 1/20 = 0.425$ is easily calculated and this is extremely close to a simple fraction $3/7 = 0.42857\ldots.$ We have placed ourselves in a situation equivalent to rolling an unfair coin with $3/7$ chance of success. This has a Binomial distribution:
We will roll for damages exactly twice with a chance of $3\times (4/7)\times (3/7)^2= 108/343.$
We will roll for damages three times with a chance of $(3/7)^3 = 27/343.$
(These calculations are very easily learned; all introductory statistics courses cover the theory and offer lots of practice with them.)
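The binomial probabilities can be confirmed exactly (a Python check, using the approximate $p = 3/7$ from above):

```python
from fractions import Fraction
from math import comb

p = Fraction(3, 7)  # approximate per-attack chance of rolling for damages
probs = [comb(3, k) * p**k * (1 - p)**(3 - k) for k in range(4)]
print(probs[2], probs[3])  # 108/343 27/343
```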
Code
To verify this result (which was obtained before many of the other answers appeared), I wrote some R code to carry out such calculations in very general ways. Because they can involve nonlinear operations, such as comparisons and truncation, they do not capitalize on the efficiency of convolutions, but just do the work with brute force (using outer products). The efficiency is more than adequate for smallish distributions (having only a few hundred possible outcomes, more or less). I found it more important for the code to be expressive so that we, its users, could have some confidence that it correctly carries out what we want. Here for your consideration is the full set of calculations to solve this (somewhat complex) problem:
round <- conditional(sign(hit-armor), list(nothing, half(damage), damage))
x <- health - rep(round, n.rounds) # The battle
x <= nothing # Outcome distribution
The output is
FALSE TRUE
0.8300265 0.1699735
showing a 16.99735% chance of success (and 83.00265% chance of failure).
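As an independent cross-check of this figure, the same battle can be enumerated exactly with rational arithmetic (a Python sketch of the rules as specified above: d20+4 against armor 16, 2d6+2 damage, half damage rounded down on a tie, three attacks against 20 health):

```python
from fractions import Fraction
from collections import defaultdict

# distribution of 2d6+2 damage
damage = defaultdict(Fraction)
for a in range(1, 7):
    for b in range(1, 7):
        damage[a + b + 2] += Fraction(1, 36)

# damage dealt by a single attack: d20+4 vs. armor class 16
attack = defaultdict(Fraction)
for roll in range(1, 21):
    hit = roll + 4
    if hit > 16:                      # full damage
        for v, p in damage.items():
            attack[v] += Fraction(1, 20) * p
    elif hit == 16:                   # tie: half damage, rounded down
        for v, p in damage.items():
            attack[v // 2] += Fraction(1, 20) * p
    else:                             # miss
        attack[0] += Fraction(1, 20)

# convolve three attacks and total the damage
total = {0: Fraction(1)}
for _ in range(3):
    nxt = defaultdict(Fraction)
    for t, pt in total.items():
        for v, pv in attack.items():
            nxt[t + v] += pt * pv
    total = nxt

p_win = sum(p for t, p in total.items() if t >= 20)  # health reduced to <= 0
print(float(p_win))  # matches the 16.99735% computed above
```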
Of course, the data for this question had to be specified beforehand:
hit <- d(1, 20, 4) # Distribution of hit points
damage <- d(2, 6, 1) # Distribution of damage points
n.rounds <- 3 # Number of attacks
health <- as.die(20) # Opponent's health
armor <- as.die(16) # Opponent's armor
nothing <- as.die(0) # Result of no hit
This code reveals that the calculations are lurking in a class I have named die. This class maintains information about outcomes ("value") and their chances ("prob"). The class needs some basic support for creating dice and displaying their values:
as.die <- function(value, prob) {
if(missing(prob)) x <- list(value=value, prob=1)
else x <- list(value=value, prob=prob)
class(x) <- "die"
return(x)
}
print.die <- function(d, ...) {
names(d$prob) <- d$value
print(d$prob, ...)
}
plot.die <- function(d, ...) {
i <- order(d$value)
plot(d$value[i], cumsum(d$prob[i]), ylim=c(0,1), ylab="Probability", ...)
}
rep.die <- function(d, n) {
x <- d
while(n > 1) {n <- n-1; x <- d + x}
return(x)
}
die.normalize <- function(value, prob) {
i <- prob > 0
p <- aggregate(prob[i], by=list(value[i]), FUN=sum)
as.die(p[[1]], p[[2]])
}
die.uniform <- function(faces, offset=0)
as.die(1:faces + offset, rep(1/faces, faces))
d <- function(n=2, k, ...) rep(die.uniform(k, ...), n)
This is straightforward stuff, quickly written. The only subtlety is die.normalize, which adds the probabilities associated with values appearing more than once in the data structure, keeping the encoding as economical as possible.
The last function is noteworthy: d(n,k,a) represents the sum of n independent dice with values $1+a, 2+a, \ldots, k+a$. For instance, a 2d6+2 can be considered the sum of two d6+1 distributions and is created via the call d(2,6,1).
The heart of the code is the overloading of arithmetic operations. I implemented only those needed for this calculation, but did so in a way that is easy to extend, as should be evident by all the one-line definitions. The conditional function (a variant of switch) is especially useful.
op.die <- function(op, d1, d2) {
if(missing(d2)) {
values <- op(d1$value)
probs <- d1$prob
} else {
values <- c(outer(d1$value, d2$value, FUN=op))
probs <- c(outer(d1$prob, d2$prob, FUN='*'))
}
die.normalize(values, probs)
}
"[.die" <- function(d1, i) sum(d1$prob[d1$value %in% i])
"==.die" <- function(d1, d2) op.die('==', d1, d2)
">.die" <- function(d1, d2) op.die('>', d1, d2)
"<=.die" <- function(d1, d2) op.die('<=', d1, d2)
"!.die" <- function(d) op.die(function(x) 1-x, d)
"+.die" <- function(d1, d2) op.die('+', d1, d2)
"-.die" <- function(d1, d2) op.die('-', d1, d2)
"*.die" <- function(d1, d2) op.die('*', d1, d2)
"/.die" <- function(d1, d2) op.die('/', d1, d2)
sign.die <- function(d) op.die(sign, d)
half <- function(d) op.die(function(x) floor(x/2), d)
conditional <- function(cond, dice) {
values <- unlist(sapply(dice, function(x) x$value))
probs <- unlist(sapply(1:length(cond$prob),
function(i) cond$prob[i] * dice[[i]]$prob))
die.normalize(values, probs)
}
(If one wanted to be efficient, which might be useful when working with large distributions, rep.die, +.die, and -.die could be specially rewritten to use convolutions. This is unlikely to be helpful in most applications, though, because the other operations would still need brute-force calculation.)
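As a sketch of that remark (in Python rather than R, and not part of the die class above), the distribution of a sum of independent dice is just the convolution of their probability tables:

```python
# convolution of two discrete distributions, each a {value: probability} dict
def convolve(d1, d2):
    out = {}
    for v1, p1 in d1.items():
        for v2, p2 in d2.items():
            out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return out

d6 = {v: 1 / 6 for v in range(1, 7)}
two_d6 = convolve(d6, d6)
print(two_d6[7])  # = 6/36, the most likely total of 2d6
```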
To enable study of the properties of distributions, here are some statistical summaries:
moment <- function(d, k) sum(d$value^k * d$prob)
mean.die <- function(d) moment(d, 1)
var.die <- function(d) moment(d, 2) - moment(d, 1)^2
sd.die <- function(d) sqrt(var.die(d))
min.die <- function(d) min(d$value)
max.die <- function(d) max(d$value)
As an example of their use, here is the health distribution for three damage rolls (the right hand plot in the first figure). The calculation of the total damage distribution is performed by x.3 <- health - rep(damage, 3) (pretty simple, right?) and the Normal approximation is computed via pnorm(x, mean.die(x.3), sd.die(x.3)).
plot(x.3 <- health - rep(damage, 3), type="b", xlim=l, lwd=2, xlab="Health",
main="After Three Hits")
curve(pnorm(x, mean.die(x.3), sd.die(x.3)), lwd=2, col="Red", add=TRUE)
abline(v=0, col="Gray")
abline(h = (x.3 <= nothing)[TRUE], col="Blue")
All this ought to port easily to C++.
Using Normal app | Dungeons & Dragons Attack hit probability success percentage
What you really want to know is how to do this calculation quickly--preferably in your head--during game play. To that end, consider using Normal approximations to the distributions.
Using Normal approximations, we can easily determine that two rolls for damage with a 2d6+2 have about a $30\%$ chance of equalling or exceeding $20$ and three rolls for damage have about a $95\%$ chance.
Using a Binomial distribution, we can estimate there is about a $108/343$ chance of rolling twice for damage and $27/343$ chance of rolling three times. Therefore, the net chance of equaling or exceeding $20$ is approximately
$$(0.30 \times 108 + 0.95 \times 27) / 343 \approx (32 + 25)/343 = 57/343 \approx 17\%.$$
(Careful consideration of errors of approximation suggested, when I first carried this out and did not know the answer exactly, that this number was likely within $2\%$ of the correct value. In fact it is astonishingly close to the exact answer, which is around $16.9997\%$, but that closeness is purely accidental.)
These calculations are relatively simple and easily carried out in a short amount of time. This approach really comes to the fore when you just want to know whether the chances of something are small, medium, or large, because then you can make approximations that greatly simplify the arithmetic.
Details
Normal approximations come to the fore when many activities are independently conducted and their results are added up--exactly as in this situation. Because the restriction to nonnegative health (which is not any kind of a summation operation) is a nuisance, ignore it and compute the chance that the opponent's health will decline to zero or less.
There will be three rolls of the 1d20 and, contingent upon how many of them exceed the opponent's armor, from zero to three rolls of the 2d6+2. This calls for two sets of calculations.
Approximating the damage distribution. We need to know two things: its mean and variance. An elementary calculation, easily memorized, shows that the mean of a d6 is $7/2$ and its variance is $35/12 \approx 3$. (I would use the value of $3$ for crude approximations.) Thus the mean of a 2d6 is $2\times 7/2 = 7$ and its variance is $2\times 35/12 = 35/6$. The mean of a 2d6+2 is increased to $7+2=9$ without changing its variance.
Therefore,
One roll for damages has a mean of $9$ and a variance of $35/6$. Because the largest possible damage is $14$, this will not reduce a health of $20$ to $0$.
Two rolls for damages have a mean of $2\times 9=18$ and a variance of $2\times 35/6=35/3\approx 12$. The square root of this variance must be around $3.5$ or so, indicating the health is approximately $(20-18)/3.5\approx 0.6$ standard deviations above the mean. I might use $0.5=1/2$ for a crude approximation.
Three rolls for damages have a mean of $27$ and variance of $35/2\approx 18$ whose square root is a little larger than $4$. Thus the health is around $1.5$ to $2$ standard deviations lower than the mean.
The 68-95-99.7 rule says that about $68\%$ of the results lie within one SD of the mean, $95\%$ within two SDs, and $99.7\%$ within three SDs. This information (which everyone memorizes) is on top of the obvious fact that no results are less than zero SDs from the mean. It applies beautifully to sums of dice.
Crudely interpolating, we may estimate that somewhere around $40\%$ or so will be within $0.6$ SDs of the mean and therefore the remaing $60\%$ are further than $0.6$ SDs from the mean. Half of those--about $30\%$--will be below the mean (and the other half above). Thus, we estimate that two rolls for damage has about a $30\%$ chance of destroying the enemy.
Similarly, it should be clear that when the mean damage is between $1.5$ and $2$ standard deviations above the health, destruction is almost certain. The 68-95-99.7 rule suggests that chance is around $95\%$.
This figure plots the true cumulative distributions of the final health (in black), their Normal approximations (in red), and the true chances of reducing the health to zero or less (as horizontal blue lines). These lines are at $0\%$, $33.6\%$, and $96.4\%$, respectively. As expected, the Normal approximations are excellent and so our approximately calculated chances are pretty accurate.
Estimating the number of rolls for damages. The comparison of a 1d20 to the armor class has three outcomes: doing nothing with a chance of $11/20$, rolling for half damages with a chance of $1/20$, and rolling for full damages with a chance of $8/20$. Tracking three outcomes over three rolls is too complicated: there will be $3\times 3\times 3=27$ possibilities falling into $10$ distinct categories. Instead of halving the damages upon equalling the armor, let's just flip a coin then to determine whether there will be full or no damages. That reduces the outcomes to an $11/20 + (1/2)\times 1/20 = 23/40$ chance of doing nothing and a $40/40 - 23/40 = 17/40$ chance of rolling for damages.
Since this is intended to be done mentally, note that the $23/40 = 8/20 + (1/2)\times 1/20 = 0.425$ is easily calculated and this is extremely close to a simple fraction $3/7 = 0.42857\ldots.$ We have placed ourselves in a situation equivalent to rolling an unfair coin with $3/7$ chance of success. This has a Binomial distribution:
We can roll for damages twice with a chance of $3\times (4/7)\times (3/7)^2= 108/343.$
We will roll for damages three times with a chance of $(3/7)^3 = 27/343.$
(These calculations are very easily learned; all introductory statistics courses cover the theory and offer lots of practice with them.)
Code
To verify this result (which was obtained before many of the other answers appeared), I wrote some R code to carry out such calculations in very general ways. Because they can involve nonlinear operations, such as comparisons and truncation, they do not capitalize on the efficiency of convolutions, but just do the work with brute force (using outer products). The efficiency is more than adequate for smallish distributions (having only a few hundred possible outcomes, more or less). I found it more important for the code to be expressive so that we, its users, could have some confidence that it correctly carries out what we want. Here for your consideration is the full set of calculations to solve this (somewhat complex) problem:
round <- conditional(sign(hit-armor), list(nothing, half(damage), damage))
x <- health - rep(round, n.rounds) # The battle
x <= nothing # Outcome distribution
The output is
FALSE TRUE
0.8300265 0.1699735
showing a 16.99735% chance of success (and 83.00265% chance of failure).
Of course, the data for this question had to be specified beforehand:
hit <- d(1, 20, 4) # Distribution of hit points
damage <- d(2, 6, 1) # Distribution of damage points
n.rounds <- 3 # Number of attacks
health <- as.die(20) # Opponent's health
armor <- as.die(16) # Opponent's armor
nothing <- as.die(0) # Result of no hit
This code reveals that the calculations are lurking in a class I have named die. This class maintains information about outcomes ("value") and their chances ("prob"). The class needs some basic support for creating dice and displaying their values:
as.die <- function(value, prob) {
if(missing(prob)) x <- list(value=value, prob=1)
else x <- list(value=value, prob=prob)
class(x) <- "die"
return(x)
}
print.die <- function(d, ...) {
names(d$prob) <- d$value
print(d$prob, ...)
}
plot.die <- function(d, ...) {
i <- order(d$value)
plot(d$value[i], cumsum(d$prob[i]), ylim=c(0,1), ylab="Probability", ...)
}
rep.die <- function(d, n) {
x <- d
while(n > 1) {n <- n-1; x <- d + x}
return(x)
}
die.normalize <- function(value, prob) {
i <- prob > 0
p <- aggregate(prob[i], by=list(value[i]), FUN=sum)
as.die(p[[1]], p[[2]])
}
die.uniform <- function(faces, offset=0)
as.die(1:faces + offset, rep(1/faces, faces))
d <- function(n=2, k, ...) rep(die.uniform(k, ...), n)
This is straightforward stuff, quickly written. The only subtlety is die.normalize, which adds the probabilities associated with values appearing more than once in the data structure, keeping the encoding as economical as possible.
The last function is noteworthy: d(n,k,a) represents the sum of n independent dice with values $1+a, 2+a, \ldots, k+a$. For instance, a 2d6+2 can be considered the sum of two d6+1 distributions and is created via the call d(2,6,1).
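The same construction can be cross-checked outside R. Here is a hypothetical Python sketch (not part of the answer's code) that builds the 2d6+2 pmf exactly as d(2,6,1) does, by summing two independent d6+1 dice:

```python
from collections import Counter
from itertools import product

# 2d6+2 built as the sum of two independent d6+1 dice, mirroring d(2, 6, 1)
counts = Counter((a + 1) + (b + 1) for a, b in product(range(1, 7), repeat=2))
pmf = {v: c / 36 for v, c in sorted(counts.items())}
print(pmf)  # values 4..14, peaking at 9 with probability 6/36
```

The resulting support runs from 4 to 14, with the familiar triangular shape of a two-dice sum.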
The heart of the code is the overloading of arithmetic operations. I implemented only those needed for this calculation, but did so in a way that is easy to extend, as should be evident by all the one-line definitions. The conditional function (a variant of switch) is especially useful.
op.die <- function(op, d1, d2) {
if(missing(d2)) {
values <- op(d1$value)
probs <- d1$prob
} else {
values <- c(outer(d1$value, d2$value, FUN=op))
probs <- c(outer(d1$prob, d2$prob, FUN='*'))
}
die.normalize(values, probs)
}
"[.die" <- function(d1, i) sum(d1$prob[d1$value %in% i])
"==.die" <- function(d1, d2) op.die('==', d1, d2)
">.die" <- function(d1, d2) op.die('>', d1, d2)
"<=.die" <- function(d1, d2) op.die('<=', d1, d2)
"!.die" <- function(d) op.die(function(x) 1-x, d)
"+.die" <- function(d1, d2) op.die('+', d1, d2)
"-.die" <- function(d1, d2) op.die('-', d1, d2)
"*.die" <- function(d1, d2) op.die('*', d1, d2)
"/.die" <- function(d1, d2) op.die('/', d1, d2)
sign.die <- function(d) op.die(sign, d)
half <- function(d) op.die(function(x) floor(x/2), d)
conditional <- function(cond, dice) {
values <- unlist(sapply(dice, function(x) x$value))
probs <- unlist(sapply(1:length(cond$prob),
function(i) cond$prob[i] * dice[[i]]$prob))
die.normalize(values, probs)
}
(If one wanted to be efficient, which might be useful when working with large distributions, rep.die, +.die, and -.die could be specially rewritten to use convolutions. This is unlikely to be helpful in most applications, though, because the other operations would still need brute-force calculation.)
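For intuition about that convolution shortcut, here is a hypothetical NumPy sketch (not part of the answer's R code): summing independent dice corresponds to convolving their pmf vectors, with the value offsets tracked by hand.

```python
import numpy as np

d6 = np.full(6, 1/6)                  # pmf of one d6 over values 1..6
two_d6 = np.convolve(d6, d6)          # pmf of 2d6 over values 2..12
three_d6 = np.convolve(two_d6, d6)    # pmf of 3d6 over values 3..18

print(two_d6[7 - 2])     # P(2d6 = 7) = 6/36
print(three_d6[10 - 3])  # P(3d6 = 10) = 27/216
```

This is exactly what a convolution-based rep.die or +.die would do internally; the index arithmetic (value minus the minimum value) replaces the explicit outer products.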
To enable study of the properties of distributions, here are some statistical summaries:
moment <- function(d, k) sum(d$value^k * d$prob)
mean.die <- function(d) moment(d, 1)
var.die <- function(d) moment(d, 2) - moment(d, 1)^2
sd.die <- function(d) sqrt(var.die(d))
min.die <- function(d) min(d$value)
max.die <- function(d) max(d$value)
As an example of their use, here is the health distribution for three damage rolls (the right hand plot in the first figure). The calculation of the total damage distribution is performed by x.3 <- health - rep(damage, 3) (pretty simple, right?) and the Normal approximation is computed via pnorm(x, mean.die(x.3), sd.die(x.3)).
x.3 <- health - rep(damage, 3)   # health after three damage rolls
l <- c(min(x.3), max(x.3))       # plotting range
plot(x.3, type="b", xlim=l, lwd=2, xlab="Health",
     main="After Three Hits")
curve(pnorm(x, mean.die(x.3), sd.die(x.3)), lwd=2, col="Red", add=TRUE)
abline(v=0, col="Gray")
abline(h = (x.3 <= nothing)[TRUE], col="Blue")
All this ought to port easily to C++.
33,698 | Dungeons & Dragons Attack hit probability success percentage
So if you roll 12 you exactly equal his armour class, and if you roll higher, you beat it.
That's an 11/20 chance of 0 damage, a 1/20 chance of $\lfloor\frac{1}{2}(2d6+2)\rfloor$, and an 8/20 chance of 2d6+2.
Damage distribution (prob x 36)
Event      Prob   damage:count
Hit        0.40   4:1  5:2  6:3  7:4  8:5  9:6  10:5  11:4  12:3  13:2  14:1
Just hit   0.05   2:3  3:7  4:11  5:9  6:5  7:1
Miss       0.55   0:36
So the unconditional distribution of per-attack damage is:
Damage distribution (prob x 36 x 20)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
(Prob x 36 x 20) 396 0 3 7 19 25 29 33 40 48 40 32 24 16 8
Prob % 55.0 0.0 0.42 0.97 2.64 3.47 4.03 4.58 5.56 6.67 5.56 4.44 3.33 2.22 1.11
The convolution of damage from three such attacks is:
Dam 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Prob% 16.64 0.00 0.38 0.88 2.40 3.16 3.71 4.29 5.32 6.55 5.82 5.17 4.61 4.11 3.58
Dam 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
Prob% 2.96 3.26 3.44 3.46 3.28 2.98 2.62 2.21 1.79 1.42 1.14 0.94 0.79 0.68 0.59
Dam 30 31 32 33 34 35 36 37 38 39 40 41 42
Prob% 0.50 0.41 0.31 0.23 0.16 0.10 0.06 0.03 0.02 0.01 0.00 0.00 0.00
(While the convolution calculation is straightforward, this convolution was performed using the convolve function in R.)
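The same convolution can be sketched in Python with NumPy (a hypothetical cross-check, not the answer's code), feeding in the per-attack counts out of 720 tabulated above:

```python
import numpy as np

# Per-attack damage pmf over values 0..14, taken from the table above (counts / 720)
one = np.array([396, 0, 3, 7, 19, 25, 29, 33, 40, 48, 40, 32, 24, 16, 8]) / 720
# Convolve twice for three independent attacks; damage values run 0..42
three = np.convolve(np.convolve(one, one), one)
print(three[20:].sum())  # ≈ 0.1699735, the quoted 16.99735%
```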
The probability of doing 20 or more damage = 16.99735%
That is, the desired probability is essentially 17%.
(Interestingly, this is about the same chance as the chance of doing no damage at all.)
Average damage over three attacks is 11.44, median damage is 11.
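The 11.44 average can be cross-checked with a short Python sketch (not in the original answer), using the per-attack mixture of miss, exact-tie half damage, and full damage:

```python
from itertools import product

rolls = [a + b + 2 for a, b in product(range(1, 7), repeat=2)]  # all 36 outcomes of 2d6+2
e_full = sum(rolls) / 36                   # mean damage on a full hit: 9.0
e_half = sum(r // 2 for r in rolls) / 36   # mean of half damage, rounded down: 4.25
e_attack = 0.40 * e_full + 0.05 * e_half   # hit 8/20, exact tie 1/20, miss 11/20
print(3 * e_attack)  # 11.4375, matching the 11.44 above
```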
Incorporating crits:
Damage distribution (prob x 36)
Event      Prob   damage:count
Crit       0.05   8:1  10:2  12:3  14:4  16:5  18:6  20:5  22:4  24:3  26:2  28:1
Hit        0.35   4:1  5:2  6:3  7:4  8:5  9:6  10:5  11:4  12:3  13:2  14:1
Just hit   0.05   2:3  3:7  4:11  5:9  6:5  7:1
Miss       0.55   0:36
The unconditional distribution of per-attack damage is:
Damage distribution (prob x 36 x 20)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
(Px720) 396 0 3 7 18 23 26 29 36 42 37 28 24 14 11 0 5 0 6 0 5 0 4 0 3 0 2 0 1
The convolution for three attacks is:
Dam 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Pr% 16.64 0.00 0.38 0.88 2.27 2.91 3.33 3.78 4.79 5.73 5.33 4.49 4.35 3.49 3.50
15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
2.42 3.29 2.80 3.60 2.73 3.18 2.30 2.54 1.74 1.88 1.31 1.44 1.07 1.14 0.91
30 31 32 33 34 35 36 37 38 39 40 >40
0.86 0.74 0.68 0.55 0.51 0.39 0.37 0.26 0.26 0.18 0.19 0.78
So we get Prob(damage $\geq$ 20) = 23.28396 %
That's 11/20 chance of 0 damage, 1/20 chance of $\lfloor\frac{_1}{^2}$(2d6+2)$\rfloor$ and 8/20 chance of 2d6 | Dungeons & Dragons Attack hit probability success percentage
So if you roll 12 you exactly equal his armour class, and if you roll higher, you beat it.
That's 11/20 chance of 0 damage, 1/20 chance of $\lfloor\frac{_1}{^2}$(2d6+2)$\rfloor$ and 8/20 chance of 2d6+2.
Damage distribution (prob x 36)
Event Prob 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Hit 0.40 1 2 3 4 5 6 5 4 3 2 1
Just hit 0.05 3 7 11 9 5 1
Miss 0.55 36
So the unconditional distribution of per-attack damage is:
Damage distribution (prob x 36 x 20)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
(Prob x 36 x 20) 396 0 3 7 19 25 29 33 40 48 40 32 24 16 8
Prob % 55.0 0.0 0.42 0.97 2.64 3.47 4.03 4.58 5.56 6.67 5.56 4.44 3.33 2.22 1.11
The convolution of damage from three such attacks is:
Dam 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Prob% 16.64 0.00 0.38 0.88 2.40 3.16 3.71 4.29 5.32 6.55 5.82 5.17 4.61 4.11 3.58
Dam 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
Prob% 2.96 3.26 3.44 3.46 3.28 2.98 2.62 2.21 1.79 1.42 1.14 0.94 0.79 0.68 0.59
Dam 30 31 32 33 34 35 36 37 38 39 40 41 42
Prob% 0.50 0.41 0.31 0.23 0.16 0.10 0.06 0.03 0.02 0.01 0.00 0.00 0.00
(While the convolution calculation is straightforward, this convolution was performed using the convolve function in R.)
The probability of doing 20 or more damage = 16.99735%
That is, the desired probability is essentially 17%.
(Interestingly, this is about the same chance as the chance of doing no damage at all.)
Average damage over three attacks is 11.44, median damage is 11.
Incorporating crits:
Damage distribution (prob x 36)
Event Prob 8 10 12 14 16 18 20 22 24 26 28
Crit 0.05 1 2 3 4 5 6 5 4 3 2 1
Event Prob 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Hit 0.35 1 2 3 4 5 6 5 4 3 2 1
Just hit 0.05 3 7 11 9 5 1
Miss 0.55 36
The unconditional distribution of per-attack damage is:
Damage distribution (prob x 36 x 20)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
(Px720) 396 0 3 7 18 23 26 29 36 42 37 28 24 14 11 0 5 0 6 0 5 0 4 0 3 0 2 0 1
The convolution for three attacks is:
Dam 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Pr% 16.64 0.00 0.38 0.88 2.27 2.91 3.33 3.78 4.79 5.73 5.33 4.49 4.35 3.49 3.50
15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
2.42 3.29 2.80 3.60 2.73 3.18 2.30 2.54 1.74 1.88 1.31 1.44 1.07 1.14 0.91
30 31 32 33 34 35 36 37 38 39 40 >40
0.86 0.74 0.68 0.55 0.51 0.39 0.37 0.26 0.26 0.18 0.19 0.78
So we get Prob(damage $\geq$ 20) = 23.28396 % | Dungeons & Dragons Attack hit probability success percentage
So if you roll 12 you exactly equal his armour class, and if you roll higher, you beat it.
That's 11/20 chance of 0 damage, 1/20 chance of $\lfloor\frac{_1}{^2}$(2d6+2)$\rfloor$ and 8/20 chance of 2d6 |
33,699 | Dungeons & Dragons Attack hit probability success percentage
One way to get at this fairly simply is just through simulation - you won't get the exact percentage to the second decimal, but you can nail it down very closely. I've input some R code below that will simulate the rolls you're making and spit out the probability that your ally dies.
# Creating a hundred thousand sets of your three rolls to hit
roll.1 <- sample(1:20, replace = TRUE, 100000)
roll.2 <- sample(1:20, replace = TRUE, 100000)
roll.3 <- sample(1:20, replace = TRUE, 100000)
# Creating a hundred thousand sets of three damage rolls
damage.1 <- replicate(100000, (sample(1:6, 1) + sample(1:6, 1) + 2))
damage.2 <- replicate(100000, (sample(1:6, 1) + sample(1:6, 1) + 2))
damage.3 <- replicate(100000, (sample(1:6, 1) + sample(1:6, 1) + 2))
# Here we calculate the damage of each roll. Essentially this line is saying
# "Apply the full damage if the hit roll was 13 or more (13 + 4 = 17), and
# apply half the damage if the roll was 12." Applying zero damage when the roll
# was less than 12 is implicit here.
hurt.1 <- ((roll.1 >= 13) * damage.1 + floor((roll.1 == 12) * damage.1 * .5))
hurt.2 <- ((roll.2 >= 13) * damage.2 + floor((roll.2 == 12) * damage.2 * .5))
hurt.3 <- ((roll.3 >= 13) * damage.3 + floor((roll.3 == 12) * damage.3 * .5))
# Now we just subtract the total damage from the health
health <- 20 - (hurt.1 + hurt.2 + hurt.3)
# And this gives the percentage of the time you'd kill your ally.
sum(health <= 0)/100000
When I run this, I consistently get between 16.8% and 17.2%. So you had about a 17% chance of killing your ally with this spell.
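The same simulation can be mirrored in vectorized Python (a hypothetical NumPy sketch, not the answer's code), with the same hit rules: full damage on a roll of 13+, half damage rounded down on an exact 12:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rolls = rng.integers(1, 21, size=(n, 3))   # three d20 attack rolls per trial
dmg = rng.integers(1, 7, size=(n, 3)) + rng.integers(1, 7, size=(n, 3)) + 2  # 2d6+2
# Full damage on 13+, floored half damage on exactly 12, nothing otherwise
hurt = np.where(rolls >= 13, dmg, 0) + np.where(rolls == 12, dmg // 2, 0)
p_kill = (hurt.sum(axis=1) >= 20).mean()
print(p_kill)  # ≈ 0.17
```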
If you're interested, the below code also computes the exact probability using the method outlined in Micah's answer. It turns out the exact probability is 16.99735%
# Get a vector of the probability to hit 0, 1, 2, and 3 times. Since you can
# only kill him if you get 2 hits or more, we only need the latter 2 probabilities
hit.times <- (dbinom(0:3, 3, 9/20))
# We'll be making extensive use of R's `outer` function, which gives us all
# combinations of adding or multiplying various numbers - useful for dice
# rolling
damage.prob <- table(outer(1:6, 1:6, FUN = "+") + 2)/36
damage.prob <- data.frame(damage.prob)
colnames(damage.prob) <- c("Damage", "Prob")
damage.prob$Damage <- as.numeric(as.character(damage.prob$Damage))
# Since we'll be multiplying by the probability to hit each number of times
# later, we just use 8/9 as the probability to get full damage, and 1/9 as
# the probability of half damage.
damage.prob.full <- data.frame("Damage" = damage.prob$Damage, "Prob" = damage.prob$Prob * 8/9)
damage.prob.half <- data.frame("Damage" = damage.prob$Damage * .5, "Prob" = damage.prob$Prob * 1/9)
# Rounding down the half damage
damage.prob.half$Damage <- floor(damage.prob.half$Damage)
damage.prob.half <- aggregate(damage.prob.half$Prob, by = list(damage.prob.half$Damage), FUN = sum)
colnames(damage.prob.half) <- c("Damage", "Prob")
damage.prob.total <- merge(damage.prob.full, damage.prob.half, by = "Damage", all.x = TRUE, all.y = TRUE)
damage.prob.total$Prob <- rowSums(cbind(damage.prob.total$Prob.x, damage.prob.total$Prob.y), na.rm=TRUE)
# Below I'm multiplying out all the damage probabilities for 2 and 3 hits, then
# summing the probabilities of getting each damage total that equals 20 or more.
damage.2 <- outer(damage.prob.total$Damage, damage.prob.total$Damage, FUN = '+')
prob.kill.2 <- sum(outer(damage.prob.total$Prob, damage.prob.total$Prob)[damage.2 >= 20])
damage.3 <- outer(outer(damage.prob.total$Damage, damage.prob.total$Damage, FUN = "+"), damage.prob.total$Damage, FUN = "+")
prob.kill.3 <- outer(outer(damage.prob.total$Prob, damage.prob.total$Prob), damage.prob.total$Prob)[damage.3 >= 20]
# Now we just multiply the probability of killing with 2 hits by the probability
# of hitting twice, and the same for 3 hits. Adding that together we get the
# answer.
sum(prob.kill.2)*hit.times[3] + sum(prob.kill.3)*hit.times[4]
33,700 | Dungeons & Dragons Attack hit probability success percentage
You need to break it down. Start with one attempt. You have a 1/20 chance of rolling exactly at his armor class and an 8/20 chance of beating it [13-20].
So then you have a probability distribution of damage which is a mixture of three cases: 0 with probability 11/20, 2d6+2 with probability 8/20, and $\lfloor(2d6+2)/2\rfloor$ with probability 1/20.
Suppose you beat his AC, then the distribution of 2d6+2 can be broken down to
p dmg total
1/36 : 2+2 = 4
2/36 : 2+3 = 5
3/36 : 2+4 = 6
4/36 : 2+5 = 7
5/36 : 2+6 = 8
6/36 : 2+7 = 9
5/36 : 2+8 = 10
4/36 : 2+9 = 11
3/36 : 2+10 = 12
2/36 : 2+11 = 13
1/36 : 2+12 = 14
You then have to adjust those probabilities for the fact that you have an 8/20 chance of it happening:
dmg prob
4 0.011111
5 0.022222
6 0.033333
7 0.044444
8 0.055556
9 0.066667
10 0.055556
11 0.044444
12 0.033333
13 0.022222
14 0.011111
If you hit for half, then you have to again multiply by 1/20
dmg prob
2 0.0013889
2 0.0027778
3 0.0041667
3 0.0055556
4 0.0069444
4 0.0083333
5 0.0069444
5 0.0055556
6 0.0041667
6 0.0027778
7 0.0013889
So now you have several ways to get 4,5,6,7 dmg, since this is an OR relationship (4 OR 5 OR 6 OR 7) you sum up the probabilities of those things happening:
dmg prob
0 = 0.55
2 0.0013889 + 0.0027778 = 0.0041667
3 0.0041667 + 0.0055556 = 0.0097223
4 0.0069444 + 0.0083333 + 0.011111 = 0.0263887
5 0.0069444 + 0.0055556 + 0.022222 = 0.0347220
6 0.0041667 + 0.0027778 + 0.033333 = 0.0402775
7 0.0013889 + 0.044444 = 0.0458329
8 0.055556 = 0.055556
9 0.066667 = 0.066667
10 0.055556 = 0.055556
11 0.044444 = 0.044444
12 0.033333 = 0.033333
13 0.022222 = 0.022222
14 0.011111 = 0.011111
SUM 1
To do this three times, you then need to figure out all possible ways to reach the number you want (20) and sum up the probabilities of each of them happening. The probability of A AND B happening is the product of the two (or three) probabilities. So for instance one possibility is a 14 AND a 6; the probability of that happening is 0.011111*0.04. You can make it easier by fixing the damage of the first roll and then summing up the probabilities of the outcomes that bring the total to 20 or more. So you would assume you get a 14 on the first roll and then multiply the probability of that happening AND the probability of getting 6 or more on the next roll (6 OR 7 OR ...). So then it would be P_14*(P_6+P_7+...+P_14), i.e. the probability of rolling a 14 AND (6 OR 7 OR ...). You would then need to say what is the probability of (2 AND 4 AND 14) OR (2 AND 5 AND (13 OR 14)) OR (2 AND 6 AND (12 OR 13 OR 14)) ... and so on. I'll leave that last part to you; you would probably want to do some programming.
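That programming step can be sketched in Python (a hypothetical example, not part of the answer), building the single-attack mixture described above and combining three independent attacks with exact rational arithmetic:

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product

# Single-attack damage pmf: miss 11/20, full damage 8/20, floored half damage 1/20
one = defaultdict(Fraction)
one[0] += Fraction(11, 20)
for a, b in product(range(1, 7), repeat=2):
    dmg = a + b + 2                                      # one 2d6+2 outcome
    one[dmg] += Fraction(8, 20) * Fraction(1, 36)        # beat the armor class
    one[dmg // 2] += Fraction(1, 20) * Fraction(1, 36)   # exact tie, half rounded down

# Combine three independent attacks and sum the probability mass at 20+ damage
three = defaultdict(Fraction)
for (x, px), (y, py), (z, pz) in product(one.items(), repeat=3):
    three[x + y + z] += px * py * pz
p20 = sum(p for d, p in three.items() if d >= 20)
print(float(p20))  # ≈ 0.1699735
```

With Fractions the result is exact, and it agrees with the 16.99735% figure computed in the other answers.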
Coming up with a general formula is also possible, but it will be big.
So then you have a probability distr | Dungeons & Dragons Attack hit probability success percentage
You need to break it down. Start with one attempt. You have a 1/20 chance of rolling exactly at his armor class and you have an 8/20 change of beating it [13-20].
So then you have a probability distribution of damage which is a mixture of three cases: 0 with probability 11/20, 2d6+2 with probability 8/20 and 2d6+2/2 with probability 1/20.
Suppose you beat his AC, then the distribution of 2d6+2 can be broken down to
p dmg total
1/36 : 2+2 = 4
2/36 : 2+3 = 5
3/36 : 2+4 = 6
4/36 : 2+5 = 7
5/36 : 2+6 = 8
6/36 : 2+7 = 9
5/36 : 2+8 = 10
4/36 : 2+9 = 11
3/36 : 2+10 = 12
2/36 : 2+11 = 13
1/36 : 2+12 = 14
You then have to adjust those probalities for the fact that you have an 8/20 chance of it happening:
dmg prob
4 0.011111
5 0.022222
6 0.033333
7 0.044444
8 0.055556
9 0.066667
10 0.055556
11 0.044444
12 0.033333
13 0.022222
14 0.011111
If you hit for half, then you have to again multiply by 1/20
dmg prob
2 0.0013889
2 0.0027778
3 0.0041667
3 0.0055556
4 0.0069444
4 0.0083333
5 0.0069444
5 0.0055556
6 0.0041667
6 0.0027778
7 0.0013889
So now you have several ways to get 4,5,6,7 dmg, since this is an OR relationship (4 OR 5 OR 6 OR 7) you sum up the probabilities of those things happening:
dmg prob
0 = 0.55
2 0.0013889 + 0.0027778 = 0.0041667
3 0.0041667 + 0.0055556 = 0.0097223
4 0.0069444 + 0.0083333 + 0.011111 = 0.0263887
5 0.0069444 + 0.0055556 + 0.022222 = 0.0347220
6 0.0041667 + 0.0027778 + 0.033333 = 0.0402775
7 0.0013889 + 0.044444 = 0.0458329
8 0.055556 = 0.055556
9 0.066667 = 0.066667
10 0.055556 = 0.055556
11 0.044444 = 0.044444
12 0.033333 = 0.033333
13 0.022222 = 0.022222
14 0.011111 = 0.011111
SUM 1
To do this three times, you then need to figure out all possible ways to break the number you want (20) and sum up the probabilities of each of them happening. The probability of A AND B happening is the product of the (or 3) probabilities. So for instance one possibility is a 14 AND a 6, the probability of that happening is 0.11111*0.04. You can make it easier by assuming you get one number for the first roll then summing up the probabilities that will result in > 20). So you would assume you will get a 14 on the first roll and then multiply the probability of that happening AND the probability of getting greater than 6 on the next roll (6 OR 7 OR ...). So then it would be P_14*(P_6+P_7+P_8+...P_20), ie the probability of rolling a 14 AND (6 OR 7 OR ...). You would then need to say what is the probability of (2 AND 4 AND 14) OR (2 AND 5 AND (13 OR 14)) OR (2 AND 6 AND (12 OR 13 OR 14)) ... and so on. I'll leave that last part to you. I would guess you would want to do some programming.
Coming with a general formula is also possible, but it will be big. | Dungeons & Dragons Attack hit probability success percentage
You need to break it down. Start with one attempt. You have a 1/20 chance of rolling exactly at his armor class and you have an 8/20 change of beating it [13-20].
So then you have a probability distr |