Visualizing Likert responses using R or SPSS
@RJ's code produces a plot like this, which is really a table with shaded cells. It's rather busy and a bit tricky to decipher. A plain table without shading might be more effective (and you could also put the data in a more meaningful order).
Of course it depends on what main message you're trying to communicate, but I think this is simpler and a bit easier to make sense of. It also has the questions and responses in a (mostly!) logical order.
library(stringr)
library(ggplot2)
# Extract the question number from the question labels
LikertResponseSummary$Var1num <-
  as.numeric(str_extract(LikertResponseSummary$Var1, "[0-9]+"))
# Order the responses from most negative to most positive
LikertResponseSummary$Var2 <-
  factor(LikertResponseSummary$Var2,
         levels = c("Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"))
# The counts are pre-computed in `value`, so use stat = "identity"
ggplot(LikertResponseSummary,
       aes(factor(Var1num), value, fill = Var2)) +
  geom_bar(stat = "identity", position = "fill") +
  scale_x_discrete(name = 'Question', breaks = LikertResponseSummary$Var1num,
                   labels = LikertResponseSummary$Var1) +
  scale_y_continuous(name = 'Proportion') +
  scale_fill_discrete(name = 'Response') +
  coord_flip()
Interpreting exp(B) in multinomial logistic regression
It will take us a while to get there, but in summary, a one-unit change in the variable corresponding to B will multiply the relative risk of the outcome (compared to the base outcome) by 6.012.
One might express this as a "501.2%" increase in relative risk, but that's a confusing and potentially misleading way to do it, because it suggests we should be thinking of the changes additively, when in fact the multinomial logistic model strongly encourages us to think multiplicatively. The modifier "relative" is essential, because a change in a variable is simultaneously changing the predicted probabilities of all outcomes, not just the one in question, so we have to compare probabilities (by means of ratios, not differences).
The rest of this reply develops the terminology and intuition needed to interpret these statements correctly.
Background
Let's start with ordinary logistic regression before moving on to the multinomial case.
For dependent (binary) variable $Y$ and independent variables $X_i$, the model is
$$\Pr[Y=1] = \frac{\exp(\beta_1 X_1 + \cdots + \beta_m X_m)}{1+\exp(\beta_1 X_1 + \cdots + \beta_m X_m)};$$
equivalently, assuming $0 \ne \Pr[Y=1] \ne 1$,
$$\log(\rho(X_1, \cdots, X_m)) = \log\frac{\Pr[Y=1]}{\Pr[Y=0]} = \beta_1 X_1 + \cdots + \beta_m X_m.$$
(This simply defines $\rho$, which is the odds as a function of the $X_i$.)
Without any loss of generality, index the $X_i$ so that $X_m$ is the variable and $\beta_m$ is the "B" in the question (so that $\exp(\beta_m)=6.012$). Fixing the values of $X_i, 1\le i\lt m$, and varying $X_m$ by a small amount $\delta$ yields
$$\log(\rho(\cdots, X_m+\delta)) - \log(\rho(\cdots, X_m)) = \beta_m \delta.$$
Thus, $\beta_m$ is the marginal change in log odds with respect to $X_m$.
To recover $\exp(\beta_m)$, evidently we must set $\delta=1$ and exponentiate the left hand side:
$$\eqalign{
\exp(\beta_m) &= \exp(\beta_m \times 1) \\
& = \exp( \log(\rho(\cdots, X_m+1)) - \log(\rho(\cdots, X_m))) \\
& = \frac{\rho(\cdots, X_m+1)}{\rho(\cdots, X_m)}.
}$$
This exhibits $\exp(\beta_m)$ as the odds ratio for a one-unit increase in $X_m$. To develop an intuition for what this might mean, tabulate some values for a range of starting odds, rounding heavily to make the patterns stand out:
Starting odds Ending odds Starting Pr[Y=1] Ending Pr[Y=1]
0.0001 0.0006 0.0001 0.0006
0.001 0.006 0.001 0.006
0.01 0.06 0.01 0.057
0.1 0.6 0.091 0.38
1. 6. 0.5 0.9
10. 60. 0.91 1.
100. 600. 0.99 1.
For really small odds, which correspond to really small probabilities, the effect of a one unit increase in $X_m$ is to multiply the odds or the probability by about 6.012. The multiplicative factor decreases as the odds (and probability) get larger, and has essentially vanished once the odds exceed 10 (the probability exceeds 0.9).
As an additive change, there's not much of a difference between a probability of 0.0001 and 0.0006 (it's only 0.05%), nor is there much of a difference between 0.99 and 1. (only 1%). The largest additive effect occurs when the odds equal $1/\sqrt{6.012} \sim 0.408$, where the probability changes from 29% to 71%: a change of +42%.
We see, then, that if we express "risk" as an odds ratio, "B" has a simple interpretation--the odds ratio equals $\exp(\beta_m)$ for a unit increase in $X_m$--but when we express risk in some other fashion, such as a change in probabilities, the interpretation requires care to specify the starting probability.
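The arithmetic behind the table is easy to check. Here is a short sketch (in Python rather than the R used elsewhere in the thread, purely for illustration) that multiplies a range of starting odds by $\exp(\beta_m) = 6.012$ and converts both sets of odds to probabilities:

```python
def odds_to_prob(odds):
    """Convert odds p / (1 - p) back to the probability p."""
    return odds / (1 + odds)

OR = 6.012  # exp(beta_m), the exponentiated coefficient from the question

for start_odds in [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0]:
    end_odds = start_odds * OR  # one-unit increase in X_m multiplies the odds by OR
    p0 = odds_to_prob(start_odds)
    p1 = odds_to_prob(end_odds)
    print(f"odds {start_odds:>8} -> {end_odds:<8.4g}  Pr[Y=1] {p0:.4g} -> {p1:.4g}")
```

For tiny odds the probability is multiplied by nearly the full factor of 6.012; once the odds exceed about 10, both probabilities are close to 1 and barely change.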
Multinomial logistic regression
(This has been added as a later edit.)
Having recognized the value of using log odds to express chances, let's move on to the multinomial case. Now the dependent variable $Y$ can equal one of $k \ge 2$ categories, indexed by $i=1, 2, \ldots, k$. The relative probability that it is in category $i$ is
$$\Pr[Y_i] \sim \exp\left(\beta_1^{(i)} X_1 + \cdots + \beta_m^{(i)} X_m\right)$$
with parameters $\beta_j^{(i)}$ to be determined and writing $Y_i$ for $\Pr[Y=\text{category }i]$. As an abbreviation, let's write the right-hand expression as $p_i(X,\beta)$ or, where $X$ and $\beta$ are clear from the context, simply $p_i$. Normalizing to make all these relative probabilities sum to unity gives
$$\Pr[Y_i] =\frac{p_i(X,\beta)}{p_1(X,\beta) + \cdots + p_k(X,\beta)}.$$
(There is an ambiguity in the parameters: there are too many of them. Conventionally, one chooses a "base" category for comparison and forces all its coefficients to be zero. However, although this is necessary to report unique estimates of the betas, it is not needed to interpret the coefficients. To maintain the symmetry--that is, to avoid any artificial distinctions among the categories--let's not enforce any such constraint unless we have to.)
One way to interpret this model is to ask for the marginal rate of change of the log odds for any category (say category $i$) with respect to any one of the independent variables (say $X_j$). That is, when we change $X_j$ by a little bit, that induces a change in the log odds of $Y_i$. We are interested in the constant of proportionality relating these two changes. The Chain Rule of Calculus, together with a little algebra, tells us this rate of change is
$$\frac{\partial\ \text{log odds}(Y_i)}{\partial\ X_j} = \beta_j^{(i)} - \frac{\beta_j^{(1)}p_1 + \cdots + \beta_j^{(i-1)}p_{i-1} + \beta_j^{(i+1)}p_{i+1} +\cdots + \beta_j^{(k)}p_k}{p_1 + \cdots + p_{i-1} + p_{i+1} + \cdots + p_k}.$$
This has a relatively simple interpretation as the coefficient $\beta_j^{(i)}$ of $X_j$ in the formula for the chance that $Y$ is in category $i$ minus an "adjustment." The adjustment is the probability-weighted average of the coefficients of $X_j$ in all the other categories. The weights are computed using probabilities associated with the current values of the independent variables $X$. Thus, the marginal change in logs is not necessarily constant: it depends on the probabilities of all the other categories, not just the probability of the category in question (category $i$).
When there are just $k=2$ categories, this ought to reduce to ordinary logistic regression. Indeed, the probability weighting does nothing and (choosing $i=2$) gives simply the difference $\beta_j^{(2)} - \beta_j^{(1)}$. Letting category $1$ be the base case reduces this further to $\beta_j^{(2)}$, because we force $\beta_j^{(1)}=0$. Thus the new interpretation generalizes the old.
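The rate-of-change formula above can also be verified numerically. The sketch below (Python with made-up coefficients; nothing here comes from any actual output) compares the closed-form expression against a finite-difference derivative of the log odds under the multinomial model:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 3, 2                        # 3 categories, 2 predictors (invented sizes)
beta = rng.normal(size=(k, m))     # beta[i, j] = coefficient of X_j for category i
X = rng.normal(size=m)

def probs(X):
    p = np.exp(beta @ X)           # the unnormalized p_i(X, beta)
    return p / p.sum()

def log_odds(X, i):
    P = probs(X)
    return np.log(P[i] / (1 - P[i]))

i, j = 0, 1
P = probs(X)
others = [c for c in range(k) if c != i]
# The probability-weighted average of the other categories' coefficients:
adjust = sum(beta[c, j] * P[c] for c in others) / sum(P[c] for c in others)
analytic = beta[i, j] - adjust

# Compare against a small finite-difference derivative of the log odds:
eps = 1e-6
e = np.zeros(m)
e[j] = eps
numeric = (log_odds(X + e, i) - log_odds(X, i)) / eps
print(np.isclose(analytic, numeric, atol=1e-4))  # True
```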
To interpret $\beta_j^{(i)}$ directly, then, we will isolate it on one side of the preceding formula, leading to:
The coefficient of $X_j$ for category $i$ equals the marginal change in the log odds of category $i$ with respect to the variable $X_j$, plus the probability-weighted average of the coefficients of $X_j$ for all the other categories.
Another interpretation, albeit a little less direct, is afforded by (temporarily) setting category $i$ as the base case, thereby making $\beta_j^{(i)}=0$ for all the independent variables $X_j$:
The marginal rate of change in the log odds of the base case for variable $X_j$ is the negative of the probability-weighted average of its coefficients for all the other cases.
Actually using these interpretations typically requires extracting the betas and the probabilities from software output and performing the calculations as shown.
Finally, for the exponentiated coefficients, note that the ratio of probabilities among two outcomes (sometimes called the "relative risk" of $i$ compared to $i'$) is
$$\frac{Y_{i}}{Y_{i'}} = \frac{p_{i}(X,\beta)}{p_{i'}(X,\beta)}.$$
Let's increase $X_j$ by one unit to $X_j+1$. This multiplies $p_{i}$ by $\exp(\beta_j^{(i)})$ and $p_{i'}$ by $\exp(\beta_j^{(i')})$, whence the relative risk is multiplied by $\exp(\beta_j^{(i)}) / \exp(\beta_j^{(i')})$ = $\exp(\beta_j^{(i)}-\beta_j^{(i')})$. Taking category $i'$ to be the base case reduces this to $\exp(\beta_j^{(i)})$, leading us to say,
The exponentiated coefficient $\exp(\beta_j^{(i)})$ is the amount by which the relative risk $\Pr[Y = \text{category }i]/\Pr[Y = \text{base category}]$ is multiplied when variable $X_j$ is increased by one unit.
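As a final check, this multiplicative effect on the relative risk can be confirmed numerically (again a Python sketch with made-up coefficients):

```python
import numpy as np

rng = np.random.default_rng(2)
k, m = 4, 3                        # invented sizes, as before
beta = rng.normal(size=(k, m))
X = rng.normal(size=m)

def probs(X):
    p = np.exp(beta @ X)
    return p / p.sum()

i, i_base, j = 2, 0, 1
e = np.zeros(m)
e[j] = 1.0                         # a one-unit increase in X_j

rr_before = probs(X)[i] / probs(X)[i_base]
rr_after = probs(X + e)[i] / probs(X + e)[i_base]

# The relative risk is multiplied by exp(beta_j^(i) - beta_j^(base)):
print(np.isclose(rr_after / rr_before, np.exp(beta[i, j] - beta[i_base, j])))  # True
```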
Interpreting exp(B) in multinomial logistic regression
Try considering this bit of explanation in addition to what @whuber has already written so well. If exp(B) = 6, then the odds ratio associated with an increase of 1 on the predictor in question is 6. In a multinomial context, by "odds ratio" we mean the ratio of these two quantities: a) the odds (not probability, but rather p/[1-p]) of a case taking the value of the dependent variable indicated in the output table in question, and b) the odds of a case taking the reference value of the dependent variable.
You seem to be looking to quantify the probability--rather than the odds--of a case being in one or the other category. To do this you would need to know what probabilities the case "started with", i.e., before we assumed the increase of 1 on the predictor in question. Ratios of probabilities will vary case by case, while the ratio of odds connected with an increase of 1 on the predictor stays the same.
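That point can be made concrete with a small sketch (Python, using a hypothetical odds ratio of 6): the odds ratio is the same in every row, but the resulting probability change depends entirely on where the case started.

```python
def prob_after(p_start, odds_ratio):
    """New probability after the odds p/(1-p) are multiplied by a fixed odds ratio."""
    odds = p_start / (1 - p_start) * odds_ratio
    return odds / (1 + odds)

OR = 6.0  # hypothetical exp(B)
for p in [0.01, 0.10, 0.50, 0.90]:
    print(f"Pr {p:.2f} -> {prob_after(p, OR):.3f}")
```

Starting at 0.01 the probability rises only to about 0.057, while starting at 0.10 it jumps to 0.40, even though the odds ratio is identical in both rows.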
Interpreting exp(B) in multinomial logistic regression
I was also looking for the same answer, but the ones above were not satisfying to me. They seemed too complex for what it really is. So I will give my interpretation; please correct me if I am wrong.
Do, however, read to the end, since it is important.
First of all, the values B and Exp(B) are the ones you are looking for. If B is negative, Exp(B) will be lower than one, which means the odds decrease; if B is positive, Exp(B) will be higher than one, meaning the odds increase. This is because you are multiplying by the factor Exp(B).
Unfortunately you are not there yet, because in a multinomial regression your dependent variable has multiple categories. Let's call these categories D1, D2 and D3, of which the last is the reference category, and let's assume your first independent variable is sex (males vs. females).
Let's say the output for D1 -> males is exp(B) = 1.21. This means that for males the odds of being in category D1 rather than D3 (the reference category) increase by a factor of 1.21, compared to females (the reference category).
So you are always comparing against the reference category of the dependent variable, but also of the independent variables. This is not true if you have a covariate. In that case it would mean: a one-unit increase in X multiplies the odds of being in category D1 rather than D3 by a factor of 1.21.
For those with an ordinal dependent variable:
If you have an ordinal dependent variable and did not run an ordinal regression (because the proportional-odds assumption was violated, for instance), keep in mind that your highest category is the reference category. The results above are still valid to report, but an increase in odds then in fact means an increase in the odds of being in the lower category rather than the higher one. That is only the case with an ordinal dependent variable.
If you want to know the increase as a percentage, take a fictive odds number, say 100, and multiply it by 1.21, which gives 121. Compared to 100, that is a 21% increase.
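That last bit of percentage bookkeeping can be written down once and for all (a Python sketch, for illustration):

```python
def pct_change(exp_b):
    """Percentage change in the odds implied by a multiplicative factor exp(B)."""
    return (exp_b - 1) * 100

print(pct_change(1.21))  # about 21: the odds increase by 21%
print(pct_change(0.80))  # about -20: the odds decrease by 20%
```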
Interpreting exp(B) in multinomial logistic regression
Say that exp(b) in an mlogit is 1.04. If you multiply a number by 1.04, it increases by 4%. That is the effect on the relative risk of being in category a instead of b. I suspect that part of the confusion here might have to do with the difference between "by 4%" (multiplicative meaning) and "by 4 percentage points" (additive meaning). The percentage interpretation is correct if we talk about a percentage change, not a percentage-point change. (The latter would not make sense anyhow, as relative risks aren't expressed in terms of percentages.)
What does "permutation invariant" mean in the context of neural networks doing image recognition?
In this context it refers to the fact that the model does not assume any spatial relationships between the features. For a multilayer perceptron, for example, you can permute the pixels (the same way for every image) and the performance would be the same. This is not the case for convolutional networks, which assume neighbourhood relations between pixels.
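One way to see this concretely: for a one-hidden-layer network, shuffling the pixels of the input while applying the same shuffle to the rows of the first weight matrix leaves the output unchanged, so the architecture attaches no meaning to pixel order. A minimal sketch (Python/NumPy, with randomly invented weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny one-hidden-layer MLP with randomly invented weights.
W1 = rng.normal(size=(16, 8))      # input (16 "pixels") -> hidden
b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 3))       # hidden -> 3 class scores
b2 = rng.normal(size=3)

def mlp(x, W1):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

x = rng.normal(size=16)            # a flattened 4x4 "image"
perm = rng.permutation(16)         # one fixed shuffling of the pixels

# Shuffling the pixels and the matching rows of W1 gives the identical output:
print(np.allclose(mlp(x, W1), mlp(x[perm], W1[perm, :])))  # True
```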
What does "permutation invariant" mean in the context of neural networks doing image recognition?
A function $f$ of a vector argument $x=(x_1, \dots,x_n)$ is permutation invariant if the value of $f$ does not change when we permute the components of $x$; that is, for instance, when $n=3$:
$$
f((x_1, x_2, x_3))=f((x_2, x_1,x_3))=f((x_3,x_1,x_2))
$$
and so on.
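For instance (a Python sketch), the sum and the maximum are permutation invariant, while a weighted sum with unequal weights is not:

```python
import numpy as np
from itertools import permutations

x = np.array([3.0, 1.0, 2.0])

# Permutation invariant: sum and max give the same value for every ordering.
assert all(np.isclose(sum(p), sum(x)) for p in permutations(x))
assert all(max(p) == max(x) for p in permutations(x))

# Not permutation invariant: a weighted sum with unequal weights depends
# on the ordering of the components.
w = np.array([1.0, 10.0, 100.0])
values = {round(float(np.dot(p, w)), 6) for p in permutations(x)}
print(len(values) > 1)  # True
```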
Difference between histogram and pdf?
To clarify Dirk's point:
Say your data is a sample from a normal distribution. You could construct the following plot:
The red line is the empirical density estimate, the blue line is the theoretical pdf of the underlying normal distribution. Note that the histogram is expressed in densities and not in frequencies here. This is done for plotting purposes; in general, frequencies are used in histograms.
So, to answer your question: use the empirical distribution (i.e. the histogram) if you want to describe your sample, and the pdf if you want to describe the hypothesized underlying distribution.
The plot is generated by the following code in R:
x <- rnorm(100)                            # sample of size 100 from N(0, 1)
y <- seq(-4, 4, length.out = 200)
hist(x, freq = FALSE, ylim = c(0, 0.5))    # histogram on the density scale
lines(density(x), col = "red", lwd = 2)    # empirical density estimate
lines(y, dnorm(y), col = "blue", lwd = 2)  # theoretical N(0, 1) pdf
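The same comparison can be checked numerically (a Python sketch, where `density=True` plays the role of `freq=F`): a density-scaled histogram has total area 1, which is exactly what makes it overlayable on a pdf.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                # sample from N(0, 1)

# density=True rescales bar heights so the total area of the histogram is 1,
# making the histogram directly comparable to a pdf.
heights, edges = np.histogram(x, bins=50, density=True)
widths = np.diff(edges)
print(np.isclose((heights * widths).sum(), 1.0))  # True

# The bar nearest x = 0 should sit close to the N(0, 1) pdf peak, 1/sqrt(2*pi).
centers = 0.5 * (edges[:-1] + edges[1:])
peak = heights[np.argmin(np.abs(centers))]
print(peak, 1 / np.sqrt(2 * np.pi))
```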
|
Difference between histogram and pdf?
|
To clarify Dirk's point:
Say your data is a sample from a normal distribution. You could construct the following plot:
The red line is the empirical density estimate, the blue line is the theoretical p
|
Difference between histogram and pdf?
To clarify Dirk's point:
Say your data is a sample from a normal distribution. You could construct the following plot:
The red line is the empirical density estimate, the blue line is the theoretical pdf of the underlying normal distribution. Note that the histogram is expressed in densities and not in frequencies here. This is done for plotting purposes; in general, frequencies are used in histograms.
So to answer your question : you use the empirical distribution (i.e. the histogram) if you want to describe your sample, and the pdf if you want to describe the hypothesized underlying distribution.
Plot is generated by following code in R :
x <- rnorm(100)
y <- seq(-4,4,length.out=200)
hist(x,freq=F,ylim=c(0,0.5))
lines(density(x),col="red",lwd=2)
lines(y,dnorm(y),col="blue",lwd=2)
|
Difference between histogram and pdf?
To clarify Dirk's point:
Say your data is a sample from a normal distribution. You could construct the following plot:
The red line is the empirical density estimate, the blue line is the theoretical p
|
13,709
|
Difference between histogram and pdf?
|
A histogram is a pre-computer-age estimate of a density. A density estimate is an alternative.
These days we use both, and there is a rich literature about which defaults one should use.
A pdf, on the other hand, is a closed-form expression for a given distribution. That is different from describing your dataset with an estimated density or histogram.
|
Difference between histogram and pdf?
|
A histogram is a pre-computer-age estimate of a density. A density estimate is an alternative.
These days we use both, and there is a rich literature about which defaults one should use.
A pdf, on the
|
Difference between histogram and pdf?
A histogram is a pre-computer-age estimate of a density. A density estimate is an alternative.
These days we use both, and there is a rich literature about which defaults one should use.
A pdf, on the other hand, is a closed-form expression for a given distribution. That is different from describing your dataset with an estimated density or histogram.
|
Difference between histogram and pdf?
A histogram is a pre-computer-age estimate of a density. A density estimate is an alternative.
These days we use both, and there is a rich literature about which defaults one should use.
A pdf, on the
|
13,710
|
Difference between histogram and pdf?
|
There's no hard and fast rule here. If you know the density of your population, then a PDF is better. On the other hand, often we deal with samples and a histogram might convey some information that an estimated density covers up. For example, Andrew Gelman makes this point:
Variations on the histogram
A key benefit of a histogram is that, as a plot of raw data, it contains the seeds of its own error assessment. Or, to put it another way, the jaggedness of a slightly undersmoothed histogram performs a useful service by visually indicating sampling variability. That's why, if you look at the histograms in my books and published articles, I just about always use lots of bins. I also almost never like those kernel density estimates that people sometimes use to display one-dimensional distributions. I'd rather see the histogram and know where the data are.
|
Difference between histogram and pdf?
|
There's no hard and fast rule here. If you know the density of your population, then a PDF is better. On the other hand, often we deal with samples and a histogram might convey some information that
|
Difference between histogram and pdf?
There's no hard and fast rule here. If you know the density of your population, then a PDF is better. On the other hand, often we deal with samples and a histogram might convey some information that an estimated density covers up. For example, Andrew Gelman makes this point:
Variations on the histogram
A key benefit of a histogram is that, as a plot of raw data, it contains the seeds of its own error assessment. Or, to put it another way, the jaggedness of a slightly undersmoothed histogram performs a useful service by visually indicating sampling variability. That's why, if you look at the histograms in my books and published articles, I just about always use lots of bins. I also almost never like those kernel density estimates that people sometimes use to display one-dimensional distributions. I'd rather see the histogram and know where the data are.
|
Difference between histogram and pdf?
There's no hard and fast rule here. If you know the density of your population, then a PDF is better. On the other hand, often we deal with samples and a histogram might convey some information that
|
13,711
|
Difference between histogram and pdf?
|
Relative frequency histogram (discrete)
'y' axis is Normalized count
'y' axis is discrete probability for that particular bin/range
Normalized counts sum up to 1
Density Histogram (discrete)
'y' axis is density value ( 'Normalized count' divided by 'bin width')
Bar areas sum to 1
Probability Density Function PDF (continuous)
PDF is a continuous version of a histogram since histogram bins are discrete
total area under Curve integrates to 1
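These normalization properties can be verified numerically. A short sketch using NumPy (the sample size and bin count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# Relative frequency histogram: normalized counts sum to 1.
counts, edges = np.histogram(x, bins=20)
rel_freq = counts / counts.sum()

# Density histogram: bar areas (density * bin width) sum to 1.
density, edges = np.histogram(x, bins=20, density=True)
bar_areas = density * np.diff(edges)
```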
These references were helpful :)
http://stattrek.com/statistics/dictionary.aspx?definition=Probability_density_function
Continuous_probability_distribution from the above site
http://www.geog.ucsb.edu/~joel/g210_w07/lecture_notes/lect04/oh07_04_1.html
|
Difference between histogram and pdf?
|
Relative frequency histogram (discrete)
'y' axis is Normalized count
'y' axis is discrete probability for that particular bin/range
Normalized counts sum up to 1
Density Histogram (discrete)
'y' ax
|
Difference between histogram and pdf?
Relative frequency histogram (discrete)
'y' axis is Normalized count
'y' axis is discrete probability for that particular bin/range
Normalized counts sum up to 1
Density Histogram (discrete)
'y' axis is density value ( 'Normalized count' divided by 'bin width')
Bar areas sum to 1
Probability Density Function PDF (continuous)
PDF is a continuous version of a histogram since histogram bins are discrete
total area under Curve integrates to 1
These references were helpful :)
http://stattrek.com/statistics/dictionary.aspx?definition=Probability_density_function
Continuous_probability_distribution from the above site
http://www.geog.ucsb.edu/~joel/g210_w07/lecture_notes/lect04/oh07_04_1.html
|
Difference between histogram and pdf?
Relative frequency histogram (discrete)
'y' axis is Normalized count
'y' axis is discrete probability for that particular bin/range
Normalized counts sum up to 1
Density Histogram (discrete)
'y' ax
|
13,712
|
logit - interpreting coefficients as probabilities
|
These odds ratios are the exponential of the corresponding regression coefficient:
$$\text{odds ratio} = e^{\hat\beta}$$
For example, if the logistic regression coefficient is $\hat\beta=0.25$ the odds ratio is $e^{0.25} = 1.28$.
The odds ratio is the multiplier that shows how the odds change for a one-unit increase in the value of X: the odds increase by a factor of 1.28. So if the initial odds were, say, 0.25, the odds after a one-unit increase in the covariate become $0.25 \times 1.28$.
Another way to try to interpret the odds ratio is to look at the fractional part and interpret it as a percentage change. For example, an odds ratio of 1.28 corresponds to a 28% increase in the odds for a 1-unit increase in the corresponding X.
If we are dealing with a decreasing effect (OR < 1), for example an odds ratio of 0.94, then there is a 6% decrease in the odds for a 1-unit increase in the corresponding X.
The formula is:
$$ \text{Percent Change in the Odds} = \left( \text{Odds Ratio} - 1 \right) \times 100 $$
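The arithmetic above can be written out directly (a minimal sketch; $\hat\beta=0.25$ and the starting odds of 0.25 are the example values from the text):

```python
import math

beta = 0.25                     # example logistic regression coefficient
odds_ratio = math.exp(beta)     # about 1.28

initial_odds = 0.25
new_odds = initial_odds * odds_ratio     # odds after a one-unit increase in X

percent_change = (odds_ratio - 1) * 100  # about a 28% increase in the odds
```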
|
logit - interpreting coefficients as probabilities
|
These odds ratios are the exponential of the corresponding regression coefficient:
$$\text{odds ratio} = e^{\hat\beta}$$
For example, if the logistic regression coefficient is $\hat\beta=0.25$ the odd
|
logit - interpreting coefficients as probabilities
These odds ratios are the exponential of the corresponding regression coefficient:
$$\text{odds ratio} = e^{\hat\beta}$$
For example, if the logistic regression coefficient is $\hat\beta=0.25$ the odds ratio is $e^{0.25} = 1.28$.
The odds ratio is the multiplier that shows how the odds change for a one-unit increase in the value of X: the odds increase by a factor of 1.28. So if the initial odds were, say, 0.25, the odds after a one-unit increase in the covariate become $0.25 \times 1.28$.
Another way to try to interpret the odds ratio is to look at the fractional part and interpret it as a percentage change. For example, an odds ratio of 1.28 corresponds to a 28% increase in the odds for a 1-unit increase in the corresponding X.
If we are dealing with a decreasing effect (OR < 1), for example an odds ratio of 0.94, then there is a 6% decrease in the odds for a 1-unit increase in the corresponding X.
The formula is:
$$ \text{Percent Change in the Odds} = \left( \text{Odds Ratio} - 1 \right) \times 100 $$
|
logit - interpreting coefficients as probabilities
These odds ratios are the exponential of the corresponding regression coefficient:
$$\text{odds ratio} = e^{\hat\beta}$$
For example, if the logistic regression coefficient is $\hat\beta=0.25$ the odd
|
13,713
|
logit - interpreting coefficients as probabilities
|
Part of the problem is that you're taking a sentence from Gelman and Hill out of context. Here's a Google books screenshot:
Note that the heading says "Interpreting Poisson regression coefficients" (emphasis added). Poisson regression uses a logarithmic link, in contrast to logistic regression, which uses a logit (log-odds) link. The interpretation of exponentiated coefficients as multiplicative effects only works for log-scale coefficients (or, at the risk of muddying the waters slightly, for logit-scale coefficients if the baseline risk is very low ...)
Everyone would like to be able to quote effects of treatments on probabilities in a simple, universal scale-independent way, but this is basically impossible: this is why there are so many tutorials on interpreting odds and log-odds circulating in the wild, and why epidemiologists spend so much time arguing about relative risk vs. odds ratios vs ...
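The contrast can be illustrated numerically (a sketch; the coefficient and baseline values are made up). With a log link, $e^\beta$ multiplies the rate exactly; with a logit link, $e^\beta$ multiplies the odds, and only approximates the risk ratio when the baseline risk is small:

```python
import math

beta = 0.3

# Poisson regression (log link): exp(beta) is an exact rate multiplier.
rate0 = 2.0
rate1 = rate0 * math.exp(beta)

# Logistic regression (logit link): exp(beta) multiplies the odds.
p0 = 0.01                                 # low baseline risk
odds1 = (p0 / (1 - p0)) * math.exp(beta)
p1 = odds1 / (1 + odds1)

risk_ratio = p1 / p0   # close to exp(beta) only because p0 is small
```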
|
logit - interpreting coefficients as probabilities
|
Part of the problem is that you're taking a sentence from Gelman and Hill out of context. Here's a Google books screenshot:
Note that the heading says "Interpreting Poisson regression coefficients" (
|
logit - interpreting coefficients as probabilities
Part of the problem is that you're taking a sentence from Gelman and Hill out of context. Here's a Google books screenshot:
Note that the heading says "Interpreting Poisson regression coefficients" (emphasis added). Poisson regression uses a logarithmic link, in contrast to logistic regression, which uses a logit (log-odds) link. The interpretation of exponentiated coefficients as multiplicative effects only works for log-scale coefficients (or, at the risk of muddying the waters slightly, for logit-scale coefficients if the baseline risk is very low ...)
Everyone would like to be able to quote effects of treatments on probabilities in a simple, universal scale-independent way, but this is basically impossible: this is why there are so many tutorials on interpreting odds and log-odds circulating in the wild, and why epidemiologists spend so much time arguing about relative risk vs. odds ratios vs ...
|
logit - interpreting coefficients as probabilities
Part of the problem is that you're taking a sentence from Gelman and Hill out of context. Here's a Google books screenshot:
Note that the heading says "Interpreting Poisson regression coefficients" (
|
13,714
|
logit - interpreting coefficients as probabilities
|
If you want to interpret in terms of the percentages, then you need the y-intercept ($\beta_0$). Taking the exponential of the intercept gives the odds when all the covariates are 0, then you can multiply by the odds-ratio of a given term to determine what the odds would be when that covariate is 1 instead of 0.
The inverse logit transform above can be applied to the odds to give the percent chance of $Y=1$.
So when all $x=0$:
$p(Y=1) = \frac{e^{\beta_0}}{1+e^{\beta_0}}$
and if $x_1=1$ (and any other covariates are 0) then:
$p(Y=1) = \frac{ e^{(\beta_0 + \beta_1)}}{ 1+ e^{(\beta_0 + \beta_1)}}$
and those can be compared. But notice that the effect of $x_1$ is different depending on $\beta_0$, it is not a constant effect like in linear regression, only constant on the log-odds scale.
Also notice that your estimate of $\beta_0$ will depend on how the data was collected. A case-control study, where equal numbers of subjects with $Y=0$ and $Y=1$ are selected and then their values of $x$ are observed, can give a very different $\beta_0$ estimate than a simple random sample, and the interpretation of the percentage(s) from the first could be meaningless as an interpretation of what would happen in the second case.
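A small sketch of this calculation in Python (the intercept and slope values are hypothetical):

```python
import math

def inv_logit(z):
    """Map log-odds z to a probability."""
    return math.exp(z) / (1.0 + math.exp(z))

b0, b1 = -2.0, 0.25          # hypothetical intercept and coefficient

p_base = inv_logit(b0)       # P(Y=1) when all covariates are 0
p_x1 = inv_logit(b0 + b1)    # P(Y=1) when x1 = 1 and the rest are 0

# The odds ratio recovered from the two probabilities equals exp(b1),
# but the change in probability itself depends on b0.
odds_ratio = (p_x1 / (1 - p_x1)) / (p_base / (1 - p_base))
prob_change = p_x1 - p_base
```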
|
logit - interpreting coefficients as probabilities
|
If you want to interpret in terms of the percentages, then you need the y-intercept ($\beta_0$). Taking the exponential of the intercept gives the odds when all the covariates are 0, then you can mul
|
logit - interpreting coefficients as probabilities
If you want to interpret in terms of the percentages, then you need the y-intercept ($\beta_0$). Taking the exponential of the intercept gives the odds when all the covariates are 0, then you can multiply by the odds-ratio of a given term to determine what the odds would be when that covariate is 1 instead of 0.
The inverse logit transform above can be applied to the odds to give the percent chance of $Y=1$.
So when all $x=0$:
$p(Y=1) = \frac{e^{\beta_0}}{1+e^{\beta_0}}$
and if $x_1=1$ (and any other covariates are 0) then:
$p(Y=1) = \frac{ e^{(\beta_0 + \beta_1)}}{ 1+ e^{(\beta_0 + \beta_1)}}$
and those can be compared. But notice that the effect of $x_1$ is different depending on $\beta_0$, it is not a constant effect like in linear regression, only constant on the log-odds scale.
Also notice that your estimate of $\beta_0$ will depend on how the data was collected. A case-control study, where equal numbers of subjects with $Y=0$ and $Y=1$ are selected and then their values of $x$ are observed, can give a very different $\beta_0$ estimate than a simple random sample, and the interpretation of the percentage(s) from the first could be meaningless as an interpretation of what would happen in the second case.
|
logit - interpreting coefficients as probabilities
If you want to interpret in terms of the percentages, then you need the y-intercept ($\beta_0$). Taking the exponential of the intercept gives the odds when all the covariates are 0, then you can mul
|
13,715
|
Expected value of x in a normal distribution, GIVEN that it is below a certain value
|
A normally distributed variable $X$ with mean $\mu$ and variance $\sigma^2$ has the same distribution as $\sigma Z + \mu$ where $Z$ is a standard normal variable. All you need to know about $Z$ is that
its cumulative distribution function is called $\Phi$,
it has a probability density function $\phi(z) = \Phi^\prime(z)$, and that
$\phi^\prime(z) = -z \phi(z)$.
The first two bullets are just notation and definitions: the third is the only special property of normal distributions we will need.
Let the "certain value" be $T$. Anticipating the change from $X$ to $Z$, define
$$t = (T-\mu)/\sigma,$$
so that
$$\Pr(X \le T) = \Pr(Z \le t) = \Phi(t).$$
Then, starting with the definition of the conditional expectation we may exploit its linearity to obtain
$$\eqalign{
\mathbb{E}(X\,|\, X \le T) &= \mathbb{E}(\sigma Z + \mu \,|\, Z \le t) = \sigma \mathbb{E}(Z \,|\, Z \le t) + \mu \mathbb{E}(1 \,|\, Z \le t) \\
&= \left(\sigma \int_{-\infty}^t z \phi(z) dz + \mu \int_{-\infty}^t \phi(z) dz \right) / \Pr(Z \le t)\\
&=\left(-\sigma \int_{-\infty}^t \phi^\prime(z) dz + \mu \int_{-\infty}^t \Phi^\prime(z) dz\right) / \Phi(t).
}$$
The Fundamental Theorem of Calculus asserts that any integral of a derivative is found by evaluating the function at the endpoints: $\int_a^b F^\prime(z) dz = F(b) - F(a)$. This applies to both integrals. Since both $\Phi$ and $\phi$ must vanish at $-\infty$, we obtain
$$\mathbb{E}(X\,|\, X \le T) = \mu - \sigma \frac{\phi\left(t\right)}{\Phi\left(t\right)} = \mu - \sigma \frac{\phi\left((T-\mu)/\sigma\right)}{\Phi\left((T-\mu)/\sigma\right)}.$$
It's the original mean minus a correction term proportional to the Inverse Mills Ratio.
As we would expect, the inverse Mills ratio for $t$ must be positive and exceed $-t$ (whose graph is shown with a dotted red line). It has to dwindle down to $0$ as $t$ grows large, for then the truncation at $Z=t$ (or $X=T$) changes almost nothing. As $t$ grows very negative, the inverse Mills ratio must approach $-t$ because the tails of the normal distribution decrease so rapidly that almost all the probability in the left tail is concentrated near its right-hand side (at $t$).
Finally, when $T = \mu$ is at the mean, $t=0$ where the inverse Mills Ratio equals $\sqrt{2/\pi} \approx 0.797885$. This implies the expected value of $X$, truncated at its mean (which is the negative of a half-normal distribution), is $-\sqrt{2/\pi}$ times its standard deviation below the original mean.
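The closed-form result is straightforward to evaluate with just the standard library ($\Phi$ computed via the error function); a sketch:

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def truncated_mean(mu, sigma, T):
    """E[X | X <= T] for X ~ N(mu, sigma^2): mu - sigma * phi(t) / Phi(t)."""
    t = (T - mu) / sigma
    return mu - sigma * phi(t) / Phi(t)

# Truncating at the mean gives mu - sigma * sqrt(2/pi).
val = truncated_mean(0.0, 1.0, 0.0)
```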
|
Expected value of x in a normal distribution, GIVEN that it is below a certain value
|
A normally distributed variable $X$ with mean $\mu$ and variance $\sigma^2$ has the same distribution as $\sigma Z + \mu$ where $Z$ is a standard normal variable. All you need to know about $Z$ is th
|
Expected value of x in a normal distribution, GIVEN that it is below a certain value
A normally distributed variable $X$ with mean $\mu$ and variance $\sigma^2$ has the same distribution as $\sigma Z + \mu$ where $Z$ is a standard normal variable. All you need to know about $Z$ is that
its cumulative distribution function is called $\Phi$,
it has a probability density function $\phi(z) = \Phi^\prime(z)$, and that
$\phi^\prime(z) = -z \phi(z)$.
The first two bullets are just notation and definitions: the third is the only special property of normal distributions we will need.
Let the "certain value" be $T$. Anticipating the change from $X$ to $Z$, define
$$t = (T-\mu)/\sigma,$$
so that
$$\Pr(X \le T) = \Pr(Z \le t) = \Phi(t).$$
Then, starting with the definition of the conditional expectation we may exploit its linearity to obtain
$$\eqalign{
\mathbb{E}(X\,|\, X \le T) &= \mathbb{E}(\sigma Z + \mu \,|\, Z \le t) = \sigma \mathbb{E}(Z \,|\, Z \le t) + \mu \mathbb{E}(1 \,|\, Z \le t) \\
&= \left(\sigma \int_{-\infty}^t z \phi(z) dz + \mu \int_{-\infty}^t \phi(z) dz \right) / \Pr(Z \le t)\\
&=\left(-\sigma \int_{-\infty}^t \phi^\prime(z) dz + \mu \int_{-\infty}^t \Phi^\prime(z) dz\right) / \Phi(t).
}$$
The Fundamental Theorem of Calculus asserts that any integral of a derivative is found by evaluating the function at the endpoints: $\int_a^b F^\prime(z) dz = F(b) - F(a)$. This applies to both integrals. Since both $\Phi$ and $\phi$ must vanish at $-\infty$, we obtain
$$\mathbb{E}(X\,|\, X \le T) = \mu - \sigma \frac{\phi\left(t\right)}{\Phi\left(t\right)} = \mu - \sigma \frac{\phi\left((T-\mu)/\sigma\right)}{\Phi\left((T-\mu)/\sigma\right)}.$$
It's the original mean minus a correction term proportional to the Inverse Mills Ratio.
As we would expect, the inverse Mills ratio for $t$ must be positive and exceed $-t$ (whose graph is shown with a dotted red line). It has to dwindle down to $0$ as $t$ grows large, for then the truncation at $Z=t$ (or $X=T$) changes almost nothing. As $t$ grows very negative, the inverse Mills ratio must approach $-t$ because the tails of the normal distribution decrease so rapidly that almost all the probability in the left tail is concentrated near its right-hand side (at $t$).
Finally, when $T = \mu$ is at the mean, $t=0$ where the inverse Mills Ratio equals $\sqrt{2/\pi} \approx 0.797885$. This implies the expected value of $X$, truncated at its mean (which is the negative of a half-normal distribution), is $-\sqrt{2/\pi}$ times its standard deviation below the original mean.
|
Expected value of x in a normal distribution, GIVEN that it is below a certain value
A normally distributed variable $X$ with mean $\mu$ and variance $\sigma^2$ has the same distribution as $\sigma Z + \mu$ where $Z$ is a standard normal variable. All you need to know about $Z$ is th
|
13,716
|
Expected value of x in a normal distribution, GIVEN that it is below a certain value
|
In general, let $X$ have distribution function $F(X)$.
We have, for $x\in[c_1,c_2]$,
\begin{eqnarray*}
P(X\leq x|c_1\leq X \leq c_2)&=&\frac{P(X\leq x\cap c_1\leq X \leq c_2)}{P(c_1\leq X \leq c_2)}=\frac{P(c_1\leq X \leq x)}{P(c_1\leq X \leq c_2)}\\&=&\frac{F(x)-F(c_1)}{F(c_2)-F(c_1)}
\end{eqnarray*}
You may obtain special cases by taking, for example $c_1=-\infty$, which yields $F(c_1)=0$.
Using conditional cdfs, you may get conditional densities (e.g., $f(x|X<0)=2\phi(x)$ for $X\sim N(0,1)$), which can be used for conditional expectations.
In your example, integration by parts gives
$$
E(X|X<0)=2\int_{-\infty}^0 x\phi(x)\,dx=-2\phi(0),
$$
like in @whuber's answer.
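The general formula translates directly into code (a sketch; $\Phi$ implemented with the error function):

```python
import math

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def cond_cdf(x, c1, c2, F):
    """P(X <= x | c1 <= X <= c2) = (F(x) - F(c1)) / (F(c2) - F(c1))."""
    return (F(x) - F(c1)) / (F(c2) - F(c1))

# A standard normal truncated to the symmetric interval [-1, 1]
# has its median at 0.
p = cond_cdf(0.0, -1.0, 1.0, Phi)
```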
|
Expected value of x in a normal distribution, GIVEN that it is below a certain value
|
In general, let $X$ have distribution function $F(X)$.
We have, for $x\in[c_1,c_2]$,
\begin{eqnarray*}
P(X\leq x|c_1\leq X \leq c_2)&=&\frac{P(X\leq x\cap c_1\leq X \leq c_2)}{P(c_1\leq X \leq c_2)}=
|
Expected value of x in a normal distribution, GIVEN that it is below a certain value
In general, let $X$ have distribution function $F(X)$.
We have, for $x\in[c_1,c_2]$,
\begin{eqnarray*}
P(X\leq x|c_1\leq X \leq c_2)&=&\frac{P(X\leq x\cap c_1\leq X \leq c_2)}{P(c_1\leq X \leq c_2)}=\frac{P(c_1\leq X \leq x)}{P(c_1\leq X \leq c_2)}\\&=&\frac{F(x)-F(c_1)}{F(c_2)-F(c_1)}
\end{eqnarray*}
You may obtain special cases by taking, for example $c_1=-\infty$, which yields $F(c_1)=0$.
Using conditional cdfs, you may get conditional densities (e.g., $f(x|X<0)=2\phi(x)$ for $X\sim N(0,1)$), which can be used for conditional expectations.
In your example, integration by parts gives
$$
E(X|X<0)=2\int_{-\infty}^0 x\phi(x)\,dx=-2\phi(0),
$$
like in @whuber's answer.
|
Expected value of x in a normal distribution, GIVEN that it is below a certain value
In general, let $X$ have distribution function $F(X)$.
We have, for $x\in[c_1,c_2]$,
\begin{eqnarray*}
P(X\leq x|c_1\leq X \leq c_2)&=&\frac{P(X\leq x\cap c_1\leq X \leq c_2)}{P(c_1\leq X \leq c_2)}=
|
13,717
|
Error bars on error bars?
|
You are interested in standard errors, which describe the variability in a parameter estimate, and are related to your sampling approach. This is distinct from the parameters themselves (e.g. mean and standard deviation), which are functions of the underlying population only, and are not dependent on how large your sample is.
Your current plot shows two values per group, the sample mean and sample standard deviation, about which there is no uncertainty (they are whatever you observe them to be). Assuming appropriate random sampling, you can use these values to make inference about the unobservable quantities of the population mean and population standard deviation for each group. You can use common tools like standard error or 95% confidence intervals to estimate the precision of your parameter estimates.
It would be odd to try to represent this as error bars on error bars, but it would be perfectly reasonable to list the mean and standard deviation for each group, along with the 95% CI of each parameter estimate. This can help you to decide if the means/standard deviations observed in Groups C and D, for example, represent true differences in the underlying population parameters, or if the apparent differences represent normal variation that would be expected with a small sample size.
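For concreteness, the usual per-group summaries look like this (made-up measurements; the CI uses the normal approximation for simplicity):

```python
import math
import statistics

group = [4.8, 5.1, 5.6, 4.9, 5.3]   # hypothetical measurements for one group
n = len(group)

mean = statistics.mean(group)       # sample mean (a statistic, no uncertainty)
sd = statistics.stdev(group)        # sample standard deviation
se = sd / math.sqrt(n)              # standard error of the mean

ci95 = (mean - 1.96 * se, mean + 1.96 * se)  # approximate 95% CI for the mean
```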
|
Error bars on error bars?
|
You are interested in standard errors, which describe the variability in a parameter estimate, and are related to your sampling approach. This is distinct from the parameters themselves (e.g. mean and
|
Error bars on error bars?
You are interested in standard errors, which describe the variability in a parameter estimate, and are related to your sampling approach. This is distinct from the parameters themselves (e.g. mean and standard deviation), which are functions of the underlying population only, and are not dependent on how large your sample is.
Your current plot shows two values per group, the sample mean and sample standard deviation, about which there is no uncertainty (they are whatever you observe them to be). Assuming appropriate random sampling, you can use these values to make inference about the unobservable quantities of the population mean and population standard deviation for each group. You can use common tools like standard error or 95% confidence intervals to estimate the precision of your parameter estimates.
It would be odd to try to represent this as error bars on error bars, but it would be perfectly reasonable to list the mean and standard deviation for each group, along with the 95% CI of each parameter estimate. This can help you to decide if the means/standard deviations observed in Groups C and D, for example, represent true differences in the underlying population parameters, or if the apparent differences represent normal variation that would be expected with a small sample size.
|
Error bars on error bars?
You are interested in standard errors, which describe the variability in a parameter estimate, and are related to your sampling approach. This is distinct from the parameters themselves (e.g. mean and
|
13,718
|
Error bars on error bars?
|
The objects we use to make inferences (e.g., estimates, confidence intervals, error bars, test statistics, p-values, etc.) are statistics, meaning that they are functions of the observed data. Since they are already functions of the observed data, these objects do not have any uncertainty in them --- they represent inferences about uncertain values, but there is no uncertainty in the statistics themselves. We do not form error bars on error bars because there is no uncertainty in the error bars to begin with, because they are formed as a function of the observed data.
As a minor point, it is generally suboptimal practice to use error bars to show a deviation of plus/minus one (estimated) standard deviation. Usually you are better off using these values and other statistics to form an appropriate confidence interval for the uncertain value of interest, and using the error bars to show the confidence interval. In either case you should label your plot appropriately so that the reader understands what the error bars represent.
|
Error bars on error bars?
|
The objects we use to make inferences (e.g., estimates, confidence intervals, error bars, test statistics, p-values, etc.) are statistics, meaning that they are functions of the observed data. Since
|
Error bars on error bars?
The objects we use to make inferences (e.g., estimates, confidence intervals, error bars, test statistics, p-values, etc.) are statistics, meaning that they are functions of the observed data. Since they are already functions of the observed data, these objects do not have any uncertainty in them --- they represent inferences about uncertain values, but there is no uncertainty in the statistics themselves. We do not form error bars on error bars because there is no uncertainty in the error bars to begin with, because they are formed as a function of the observed data.
As a minor point, it is generally suboptimal practice to use error bars to show a deviation of plus/minus one (estimated) standard deviation. Usually you are better off using these values and other statistics to form an appropriate confidence interval for the uncertain value of interest, and using the error bars to show the confidence interval. In either case you should label your plot appropriately so that the reader understands what the error bars represent.
|
Error bars on error bars?
The objects we use to make inferences (e.g., estimates, confidence intervals, error bars, test statistics, p-values, etc.) are statistics, meaning that they are functions of the observed data. Since
|
13,719
|
Error bars on error bars?
|
The short answer is "no."
However you construct your error bars, they are a rule. You cannot be unsure of them. Let us imagine that they are confidence intervals. There are multiple standard ways to create confidence intervals. They are different rules with slightly different properties. However, they are a chosen rule.
Other ways to construct error bars exist as well, such as adding plus or minus one standard deviation. It is still a rule.
You know the answer exactly. They are not uncertain.
What they are reflecting is the random elements of the samples seen. If they are a $100(1-\alpha)\%$ confidence interval, there is a guarantee that the confidence intervals cover the parameter at least $100(1-\alpha)\%$ of the time. There is no guarantee that it covers it for this sample. Even with a set of five samples, none of them may cover the parameter; the guarantee is over infinite repetition.
Each way you could construct an error bar has some form of optimality principle behind it. So, error bars satisfy some optimality condition that is good on average.
All of them are a statement of the best estimator of the range in which a parameter sits, given a model and a loss function.
Your error bars are a statement of uncertainty.
|
Error bars on error bars?
|
The short answer is "no."
However you construct your error bars, they are a rule. You cannot be unsure of them. Let us imagine that they are confidence intervals. There are multiple standard ways t
|
Error bars on error bars?
The short answer is "no."
However you construct your error bars, they are a rule. You cannot be unsure of them. Let us imagine that they are confidence intervals. There are multiple standard ways to create confidence intervals. They are different rules with slightly different properties. However, they are a chosen rule.
Other ways to construct error bars exist as well, such as adding plus or minus one standard deviation. It is still a rule.
You know the answer exactly. They are not uncertain.
What they are reflecting is the random elements of the samples seen. If they are a $100(1-\alpha)\%$ confidence interval, there is a guarantee that the confidence intervals cover the parameter at least $100(1-\alpha)\%$ of the time. There is no guarantee that it covers it for this sample. Even with a set of five samples, none of them may cover the parameter; the guarantee is over infinite repetition.
Each way you could construct an error bar has some form of optimality principle behind it. So, error bars satisfy some optimality condition that is good on average.
All of them are a statement of the best estimator of the range in which a parameter sits, given a model and a loss function.
Your error bars are a statement of uncertainty.
|
Error bars on error bars?
The short answer is "no."
However you construct your error bars, they are a rule. You cannot be unsure of them. Let us imagine that they are confidence intervals. There are multiple standard ways t
|
13,720
|
Error bars on error bars?
|
The traditional design of error bars gives an unfortunate impression of some linear distribution of uncertainty, and places a lot of visual emphasis on the end of the bar, which is where the location of your estimate is least likely to be. Claus Wilke (in his book Fundamentals of Data Visualization, in the chapter Visualizing uncertainty) shows some graphical alternatives to traditional error bars that convey something of the distribution of uncertainty in an estimate:
Image by Claus Wilke, used under an Attribution-NonCommercial-NoDerivatives 4.0 International licence. Original available at https://clauswilke.com/dataviz/visualizing-uncertainty.html
The "graded error bars" in (a) and (b) are formed by plotting the 90%, 95% and 99% CIs simultaneously. Thom Baguley discusses a similar approach he terms "tiered error bars" and provides example R code here: https://seriousstats.wordpress.com/2012/06/21/confidence-intervals-with-tiers/ , although I first saw such an approach being used by Andrew Gelman in his textbook Data Analysis Using Regression and Multilevel/Hierarchical Models.
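A minimal matplotlib sketch of such graded error bars (this is my own illustration, not Wilke's or Baguley's code; the data, labels, and styling are made up): the 99%, 95% and 90% t-based CIs are drawn on top of each other, with the tighter intervals rendered as thicker strokes.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
samples = {"Group A": rng.normal(0.0, 1.0, 30),
           "Group B": rng.normal(0.5, 1.2, 30)}

fig, ax = plt.subplots()
for y, (label, x) in enumerate(samples.items()):
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(len(x))
    # draw the 99%, 95% and 90% CIs on top of each other,
    # with the tighter intervals rendered as thicker strokes
    for level, lw in [(0.99, 1.5), (0.95, 4), (0.90, 8)]:
        half = stats.t.ppf(0.5 + level / 2, df=len(x) - 1) * se
        ax.plot([m - half, m + half], [y, y], lw=lw,
                color="C0", solid_capstyle="butt")
    ax.plot(m, y, "o", color="black")  # point estimate
ax.set_yticks(range(len(samples)))
ax.set_yticklabels(samples.keys())
fig.savefig("graded_error_bars.png")
```

The thick-to-thin grading puts the visual weight near the point estimate rather than at the ends of the bar, which is the point Wilke makes.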
|
13,721
|
Error bars on error bars?
|
Review of confidence intervals
Let $\theta \in \mathbb{R}$ be a parameter of interest which we study based on a random variable $X$. An exact $1-\alpha$ confidence interval $(L(X),U(X))$ is defined by the property that
\begin{equation*}
\mathbb{P}\left[ L(X) < \theta < U(X) \right] = 1-\alpha,
\end{equation*}
where $L$ is the lower endpoint and $U$ is the upper endpoint of the confidence interval.
The plot shown in the question illustrates that $L$ and $U$ are random variables. This is certainly the case, as they depend on the random variable $X$. However, a fraction of the confidence intervals $(L(X),U(X))$ contain $\theta$. By construction, the fraction is exactly $1-\alpha$. When $\alpha=0.05$, this is $95\%$ of the confidence intervals.
Error bars on error bars
This procedure makes perfect sense if the target of inference is $\theta$ - which is what we stated above. However, you may also be interested in the endpoints $L(X)$ and $U(X)$ themselves. Then you can construct "confidence intervals" $(L^L(X), U^L(X))$ and $(L^U(X), U^U(X))$ such that
\begin{equation*}
\mathbb{P} \left[L^L(X) < L(X) < U^L(X) \right] = 1-\alpha
\end{equation*}
and
\begin{equation*}
\mathbb{P} \left[L^U(X) < U(X) < U^U(X) \right] = 1-\alpha.
\end{equation*}
For example, the "confidence interval" $(L^L(X), U^L(X))$ contains the random variable $L(X)$ a fraction $1-\alpha$ of the time.
Based on all these confidence intervals, we could extend the original confidence interval to $(L^L(X), U^U(X))$. I'm not sure what the utility of this is, though.
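To make the randomness of the endpoints $L(X)$ and $U(X)$ concrete, one can simulate their sampling distributions directly. This is an illustrative sketch I am adding (NumPy and SciPy assumed), not code from the answer above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 20, 5_000
tcrit = stats.t.ppf(0.975, df=n - 1)

lowers, uppers = [], []
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)  # true theta = 0
    half = tcrit * x.std(ddof=1) / np.sqrt(n)
    lowers.append(x.mean() - half)   # realisation of L(X)
    uppers.append(x.mean() + half)   # realisation of U(X)

# Empirical 95% intervals for the endpoints L(X) and U(X) themselves
print(np.percentile(lowers, [2.5, 97.5]))
print(np.percentile(uppers, [2.5, 97.5]))
```

The printed percentile ranges play the role of $(L^L, U^L)$ and $(L^U, U^U)$ in the notation above, estimated by simulation rather than derived analytically.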
|
13,722
|
Error bars on error bars?
|
TLDR;
Below is a simulation where we repeated an experiment of estimating the mean of a normal distribution with $\mu = 0$ and $\sigma = 1$. We did 200 repetitions with samples of size 10.
We can indeed see that the estimate of the standard deviation is different each experiment. We are not certain about the exact value of the standard deviation.
But one thing is more or less constant: the probability that the true mean is inside the interval depicted by the error bar.
In this example the interval missed the true mean 61 times out of 200 (30.5%), coloured red/blue when we underestimate or overestimate the mean. For large samples the error rate approaches roughly 32% (see https://en.m.wikipedia.org/wiki/68–95–99.7_rule)
When we interpret the error bars more in this way, as an interval that contains the parameter some amount of time, then the error on error bars is sort of baked into it and is in this expression of error of containing the parameter.
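The experiment described above can be sketched in a few lines. This is my reconstruction in Python, assuming the error bars are the mean plus or minus one estimated standard error; the exact miss count varies with the random seed:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.0, 1.0, 10, 200

misses = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    se = x.std(ddof=1) / np.sqrt(n)  # estimated standard error
    # the error bar is mean +/- 1 SE; count how often it misses mu
    misses += not (x.mean() - se < mu < x.mean() + se)

print(misses, misses / reps)  # roughly a third of the bars miss the true mean
```

Each repetition produces a different bar length (the SE estimate varies), yet the long-run fraction of bars containing $\mu$ stays near the nominal level.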
Interval estimation
Error bars can be seen as a graphical representation of interval estimation, so this discussion about uncertainty in the uncertainty estimates themselves can be seen as part of a more general discussion about intervals.
Standard deviation, standard error, are simple indicators of intervals
When the error bars represent the estimated standard deviation then indeed the error bars themselves have some uncertainty as well. Standard deviations are just a simple way of expressing the uncertainty.
You can have all kinds of intervals like, credible intervals or confidence intervals, in which case the uncertainty is tackled in some way or another.
Alternative example: Confidence intervals
For instance, a confidence interval will contain the true parameter value $\alpha\%$ of the time, for a confidence level of $\alpha\%$. This is one way to represent the certainty and precision in the data: the more precise the data, the smaller we can make the intervals.
But note that confidence intervals represent the uncertainty in a peculiar way. See Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? The confidence interval contains the true parameter value with $\alpha\%$ probability when we condition on the model parameters, and not when we condition on the observation.
For particular observations the intervals will be more often wrong than for other observations and the intervals may differ in size (like in your graph). So there is still uncertainty about the intervals. But this uncertainty is already expressed by stating that it is an interval with $\alpha\%$ probability.
The error bars based on the 'simple' standard deviation are often very close to a 68% confidence interval (see https://en.m.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule).
How the 'problem' is solved.
In the case of the confidence interval the problem can be solved by computing a statistic that is a pivotal quantity.
For instance, a t statistic is the ratio of the mean to its observed standard error; because both the numerator and denominator in this ratio depend on the variance of the original distribution, the ratio becomes independent of this variance. In this way the uncertainty about the variance of the distribution has been 'eliminated'.
In the case of the credible interval we use a prior distribution to express the uncertainty about the entire system. In the final computation of the interval based on the posterior distribution, the uncertainty about the interval is included.
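The pivotal-quantity idea can be checked numerically: the distribution of the t statistic does not depend on the unknown $\sigma$. An illustrative sketch I am adding (NumPy assumed; the specific sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 100_000

def t_quantile(sigma):
    # t statistic: sample mean over its estimated standard error
    x = rng.normal(0.0, sigma, size=(reps, n))
    t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))
    return np.percentile(t, 97.5)

print(t_quantile(1.0), t_quantile(10.0))
# both quantiles are close to each other regardless of sigma
```

Because the statistic is pivotal, the same critical values apply whatever the true variance is, which is exactly how the uncertainty about $\sigma$ gets 'eliminated'.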
|
13,723
|
How does one find the mean of a sum of dependent variables?
|
Expectation (taking the mean) is a linear operator.
This means that, amongst other things, $\mathbb{E}(X + Y) = \mathbb{E}(X) + \mathbb{E}(Y)$ for any two random variables $X$ and $Y$ (for which the expectations exist), regardless of whether they are independent or not.
We can generalise (e.g. by induction) so that $\mathbb{E}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \mathbb{E}(X_i)$ so long as each expectation $\mathbb{E}(X_i)$ exists.
So yes, the mean of the sum is the same as the sum of the mean even if the variables are dependent. But note that this does not apply for the variance! So while $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$ for independent variables, or even variables which are dependent but uncorrelated, the general formula is $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\mathrm{Cov}(X, Y)$ where $\mathrm{Cov}$ is the covariance of the variables.
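A quick numerical illustration of both facts with strongly dependent variables (a sketch I am adding in Python; the specific distributions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, 100_000)
y = 3.0 * x + rng.normal(0.0, 0.5, x.size)  # Y depends strongly on X

# E(X + Y) = E(X) + E(Y) holds regardless of dependence
print(np.mean(x + y), np.mean(x) + np.mean(y))  # identical up to rounding

# Var(X + Y) needs the covariance term when X and Y are correlated
lhs = np.var(x + y, ddof=1)
rhs = np.var(x, ddof=1) + np.var(y, ddof=1) + 2 * np.cov(x, y, ddof=1)[0, 1]
print(lhs, rhs)  # these two agree; Var(X) + Var(Y) alone would not
```

The mean identity here is exact (the sample mean is a linear function of the data), while dropping the covariance term from the variance formula would undercount the spread badly for variables this correlated.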
|
13,724
|
How does one find the mean of a sum of dependent variables?
|
TL; DR:
Assuming it exists, the mean is an expected value, and the expected value is an integral, and the integrals have the linearity property with respect to sums.
TS; DR:
Since we are dealing with the sum of random variables $Y_n = \sum_{i=1}^n X_i$, i.e. a function of many of them, the mean of the sum $E(Y_n)$ is taken with respect to their joint distribution (we assume that all means exist and are finite). Denoting by $\mathbf X$ the multivariate vector of the $n$ r.v.'s, their joint density can be written as $f_{\mathbf X}(\mathbf x)= f_{X_1,...,X_n}(x_1,...,x_n)$ and their joint support
$D = S_{X_1} \times ...\times S_{X_n}$.
Using the Law of the Unconscious Statistician we have the multiple integral
$$E[Y_n] = \int_D \Big(\sum_{i=1}^n x_i\Big) f_{\mathbf X}(\mathbf x)\,d\mathbf x.$$
Under some regularity conditions we can decompose the multiple integral into an $n$-iterative integral:
$$E[Y_n] = \int_{S_{X_n}}...\int_{S_{X_1}}\Big[\sum_{i=1}^n x_i\Big]f_{X_1,...,X_n}(x_1,...,x_n)dx_1...dx_n $$
and using the linearity of integrals we can decompose into
$$ = \int_{S_{X_n}}...\int_{S_{X_1}}x_1f_{X_1,...,X_n}(x_1,...,x_n)dx_1...dx_n \; + ...\\ ...+\int_{S_{X_n}}...\int_{S_{X_1}}x_nf_{X_1,...,X_n}(x_1,...,x_n)dx_1...dx_n $$
For each $n$-iterative integral we can re-arrange the order of integration so that, in each, the outer integration is with respect to the variable that is outside the joint density. Namely,
$$\int_{S_{X_n}}...\int_{S_{X_1}}x_1f_{X_1,...,X_n}(x_1,...,x_n)dx_1...dx_n = \\\int_{S_{X_1}}x_1\int_{S_{X_n}}...\int_{S_{X_2}}f_{X_1,...,X_n}(x_1,...,x_n)dx_2...dx_ndx_1$$
and in general
$$\int_{S_{X_n}}...\int_{S_{X_j}}...\int_{S_{X_1}}x_jf_{X_1,...,X_n}(x_1,...,x_n)dx_1...dx_j...dx_n =$$
$$=\int_{S_{X_j}}x_j\int_{S_{X_n}}...\int_{S_{X_{j-1}}}\int_{S_{X_{j+1}}}...\int_{S_{X_1}}f_{X_1,...,X_n}(x_1,...,x_n)dx_1...dx_{j-1}dx_{j+1}......dx_ndx_j$$
As we calculate one-by-one the integral in each $n$-iterative integral (starting from the inside), we "integrate out" a variable and we obtain in each step the "joint-marginal" distribution of the other variables. Each $n$-iterative integral therefore will end up as $\int_{S_{X_j}}x_jf_{X_j}(x_j)dx_j$.
Bringing it all together we arrive at
$$E[Y_n ] = E[\sum_{i=1}^n X_i] = \int_{S_{X_1}}x_1f_{X_1}(x_1)dx_1 +...+\int_{S_{X_n}}x_nf_{X_n}(x_n)dx_n $$
But now each simple integral is the expected value of each random variable separately, so
$$ E[\sum_{i=1}^n X_i] = E(X_1) + ...+E(X_n) $$
$$= \sum_{i=1}^nE(X_i) $$
Note that we never invoked independence or non-independence of the random variables involved, but we worked solely with their joint distribution.
|
13,725
|
How to name the ticks in a python matplotlib boxplot
|
Use the second argument of xticks to set the labels:
import numpy as np
import matplotlib.pyplot as plt
data = [np.random.rand(100) for i in range(3)]
plt.boxplot(data)
plt.xticks([1, 2, 3], ['mon', 'tue', 'wed'])
edited to remove pylab, because pylab is a convenience module that bulk-imports matplotlib.pyplot (for plotting) and numpy (for mathematics and working with arrays) into a single namespace. Although many examples use pylab, it is no longer recommended.
|
13,726
|
How to name the ticks in a python matplotlib boxplot
|
ars has the right, and succinct answer. I'll add that when learning how to use matplotlib, I found the thumbnail gallery to be really useful for finding relevant code and examples.
For your case, I submitted this boxplot example that shows you other functionality that could be useful (like rotating the tick mark text, adding upper Y-axis tick marks and labels, adding color to the boxes, etc.)
|
13,727
|
Why is best subset selection not favored in comparison to lasso?
|
In subset selection, the nonzero parameters will only be unbiased if you have chosen a superset of the correct model, i.e., if you have removed only predictors whose true coefficient values are zero. If your selection procedure led you to exclude a predictor with a true nonzero coefficient, all coefficient estimates will be biased. This defeats your argument if you will agree that selection is typically not perfect.
Thus, to make "sure" of an unbiased model estimate, you should err on the side of including more, or even all potentially relevant predictors. That is, you should not select at all.
Why is this a bad idea? Because of the bias-variance tradeoff. Yes, your large model will be unbiased, but it will have a large variance, and the variance will dominate the prediction (or other) error.
Therefore, it is better to accept that parameter estimates will be biased but have lower variance (regularization), rather than hope that our subset selection has only removed true zero parameters so we have an unbiased model with larger variance.
Since you write that you assess both approaches using cross-validation, this mitigates some of the concerns above. One remaining issue for Best Subset remains: it constrains some parameters to be exactly zero and lets the others float freely. So there is a discontinuity in the estimate, which isn't there if we tweak the lasso $\lambda$ beyond a point $\lambda_0$ where a predictor $p$ is included or excluded. Suppose that cross-validation outputs an "optimal" $\lambda$ that is close to $\lambda_0$, so we are essentially unsure whether $p$ should be included or not. In this case, I would argue that it makes more sense to constrain the parameter estimate $\hat{\beta}_p$ via the lasso to a small (absolute) value, rather than either completely exclude it, $\hat{\beta}_p=0$, or let it float freely, $\hat{\beta}_p=\hat{\beta}_p^{\text{OLS}}$, as Best Subset does.
This may be helpful: Why does shrinkage work?
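A toy illustration of the bias-variance tradeoff described above (my own sketch with scikit-learn assumed; the sizes and penalty are arbitrary): the full, unbiased OLS fit has larger out-of-sample error than the biased but lower-variance lasso fit when the model is large relative to the data.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
p, n_train, n_test = 20, 30, 10_000
beta = np.r_[np.ones(5), np.zeros(15)]  # only 5 of 20 predictors matter

X_tr = rng.normal(size=(n_train, p))
y_tr = X_tr @ beta + rng.normal(size=n_train)
X_te = rng.normal(size=(n_test, p))
y_te = X_te @ beta + rng.normal(size=n_test)

ols = LinearRegression().fit(X_tr, y_tr)    # unbiased, high variance
lasso = Lasso(alpha=0.2).fit(X_tr, y_tr)    # biased, lower variance

mse_ols = np.mean((y_te - ols.predict(X_te)) ** 2)
mse_lasso = np.mean((y_te - lasso.predict(X_te)) ** 2)
print(mse_ols, mse_lasso)  # the shrunken fit typically predicts better here
```

With only 30 observations for 20 coefficients, the variance term dominates the OLS prediction error, which is precisely why accepting some shrinkage bias pays off.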
|
13,728
|
Why is best subset selection not favored in comparison to lasso?
|
In principle, if the best subset can be found, it is indeed better than the LASSO, in terms of (1) selecting the variables that actually contribute to the fit, (2) not selecting the variables that do not contribute to the fit, (3) prediction accuracy and (4) producing essentially unbiased estimates for the selected variables. One recent paper that argued for the superior quality of best subset over LASSO is that by Bertsimas et al (2016) "Best subset selection via a modern optimization lens". Another older one giving a concrete example (on the deconvolution of spike trains) where best subset was better than LASSO or ridge is that by de Rooi & Eilers (2011).
The reason that the LASSO is still preferred in practice is mostly due to it being computationally much easier to calculate. Best subset selection, i.e. using an $L_0$ pseudonorm penalty, is essentially a combinatorial problem, and is NP hard, whereas the LASSO solution is easy to calculate over a regularization path using pathwise coordinate descent. In addition, the LASSO ($L_1$ norm penalized regression) is the tightest convex relaxation of $L_0$ pseudonorm penalized regression / best subset selection (bridge regression, i.e. $L_q$ norm penalized regression with q close to 0 would in principle be closer to best subset selection than LASSO, but this is no longer a convex optimization problem, and so is quite tricky to fit).
To reduce the bias of the LASSO one can use derived multistep approaches, such as the adaptive LASSO (where coefficients are differentially penalized based on a prior estimate from a least squares or ridge regression fit) or relaxed LASSO (a simple solution being to do a least squares fit of the variables selected by the LASSO). In comparison to best subset, LASSO tends to select slightly too many variables though. Best subset selection is better, but harder to fit.
That being said, there are also efficient computational methods now to do best subset selection / $L_0$ penalized regression, e.g. using the adaptive ridge approach described in the paper
"An Adaptive Ridge Procedure for L0 Regularization" by Frommlet & Nuel (2016).
Note that also under best subset selection you'll still have to use either cross validation or some information criterion (adjusted R2, AIC, BIC, mBIC...) to determine what number of predictors gives you the best prediction performance / explanatory power for the number of variables in your model, which is essential to avoid overfitting.
The paper "Extended Comparisons of Best Subset Selection, Forward Stepwise Selection, and the Lasso" by Hastie et al (2017) provides an extensive comparison of best subset, LASSO and some LASSO variants like the relaxed LASSO, and they claim that the relaxed LASSO was the one that produced the highest model prediction accuracy under the widest range of circumstances, i.e. they came to a different conclusion than Bertsimas. But the conclusion about which is best depends a lot on what you consider best (e.g. highest prediction accuracy, or best at picking out relevant variables and not including irrelevant ones; ridge regression e.g. typically selects way too many variables but the prediction accuracy for cases with highly collinear variables can nevertheless be really good).
For a very small problem with 3 variables like the one you describe, it is clear that best subset selection is the preferred option though.
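To illustrate how small the combinatorial problem is with 3 predictors, here is a self-contained sketch (my own toy example, not from any of the papers cited) that enumerates all $2^3$ candidate subsets with ordinary least squares and scores them by BIC:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
# true model uses only the first two predictors
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

def bic(subset):
    """OLS fit on the chosen columns (plus intercept), scored by BIC."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = float(np.sum((y - Z @ beta) ** 2))
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

# exhaustive enumeration: 2^3 = 8 candidate models
subsets = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]
best = min(subsets, key=bic)
print(best)  # the strong predictors 0 and 1 should be selected
```

With $p$ in the tens this enumeration explodes ($2^p$ models), which is exactly why the convex LASSO relaxation or the adaptive-ridge reformulation becomes attractive.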
|
13,729
|
Power analysis for ordinal logistic regression
|
I prefer to do power analyses beyond the basics by simulation. With precanned packages, I am never quite sure what assumptions are being made.
Simulating for power is quite straightforward (and affordable) using R.
1. Decide what you think your data should look like and how you will analyze it.
2. Write a function or set of expressions that will simulate the data for a given relationship and sample size and do the analysis (a function is preferable in that you can make the sample size and parameters into arguments to make it easier to try different values). The function or code should return the p-value or other test statistic.
3. Use the replicate function to run the code from above a bunch of times (I usually start at about 100 times to get a feel for how long it takes and to get the right general area, then up it to 1,000 and sometimes 10,000 or 100,000 for the final values that I will use). The proportion of times that you rejected the null hypothesis is the power.
4. Redo the above for another set of conditions.
Here is a simple example with ordinal regression:
library(rms)
tmpfun <- function(n, beta0, beta1, beta2) {
  x <- runif(n, 0, 10)                  # predictor
  eta1 <- beta0 + beta1*x               # linear predictor at the first cutpoint
  eta2 <- eta1 + beta2                  # second cutpoint, shifted by beta2
  p1 <- exp(eta1)/(1+exp(eta1))
  p2 <- exp(eta2)/(1+exp(eta2))
  tmp <- runif(n)                       # one uniform draw per observation
  y <- (tmp < p1) + (tmp < p2)          # 3-level ordinal outcome: 0, 1 or 2
  fit <- lrm(y ~ x)                     # proportional-odds model
  fit$stats[5]                          # "P": model likelihood-ratio p-value
}
out <- replicate(1000, tmpfun(100, -1/2, 1/4, 1/4))
mean(out < 0.05)                        # estimated power at alpha = 0.05
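For readers not using R, the data-generating trick in tmpfun — comparing one uniform draw against two cumulative logistic probabilities to get a 3-level ordinal outcome — looks like this in Python (just the simulation step, no model fit; numpy only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta0, beta1, beta2 = 100_000, -1/2, 1/4, 1/4

x = rng.uniform(0, 10, size=n)
eta1 = beta0 + beta1 * x            # first linear predictor
eta2 = eta1 + beta2                 # second, shifted by beta2
p1 = 1 / (1 + np.exp(-eta1))
p2 = 1 / (1 + np.exp(-eta2))
u = rng.uniform(size=n)
y = (u < p1).astype(int) + (u < p2).astype(int)   # ordinal outcome in {0, 1, 2}

# sanity check: E[y | x] = p1 + p2, so the sample mean of y should match
print(y.mean(), (p1 + p2).mean())
```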
|
13,730
|
Power analysis for ordinal logistic regression
|
Besides Snow's excellent example, I believe you can also do a power simulation by resampling from an existing dataset which has your effect. Not quite a bootstrap, since you're not sampling-with-replacement the same n, but the same idea.
So here's an example: I ran a little self-experiment which turned in a positive point-estimate but because it was little, was not nearly statistically-significant in the ordinal logistic regression. With that point-estimate, how big a n would I need? For various possible n, I many times generated a dataset & ran the ordinal logistic regression & saw how small the p-value was:
library(boot)
library(rms)
npt <- read.csv("http://www.gwern.net/docs/nootropics/2013-gwern-noopept.csv")
newNoopeptPower <- function(dt, indices) {
  # note: 'indices' is unused and 'n' comes from the enclosing loop,
  # so each replicate draws its own resample of size n (possibly > nrow(dt))
  d <- dt[sample(nrow(dt), n, replace=TRUE), ]
  lmodel <- lrm(MP ~ Noopept + Magtein, data = d)
  return(anova(lmodel)[7])              # p-value extracted from the anova table
}
alpha <- 0.05
for (n in seq(from = 300, to = 600, by = 30)) {
  bs <- boot(data=npt, statistic=newNoopeptPower, R=10000, parallel="multicore", ncpus=4)
  print(c(n, sum(bs$t<=alpha)/length(bs$t)))  # proportion of replicates with p <= alpha
}
With the output (for me):
[1] 300.0000 0.1823
[1] 330.0000 0.1925
[1] 360.0000 0.2083
[1] 390.0000 0.2143
[1] 420.0000 0.2318
[1] 450.0000 0.2462
[1] 480.000 0.258
[1] 510.0000 0.2825
[1] 540.0000 0.2855
[1] 570.0000 0.3184
[1] 600.0000 0.3175
In this case, at n=600 the power was 32%. Not very encouraging.
(If my simulation approach is wrong, please someone tell me. I'm going off a few medical papers discussing power simulation for planning clinical trials, but I'm not at all certain about my precise implementation.)
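The same resample-and-refit recipe can be written without boot or rms. The sketch below is hypothetical throughout — a made-up pilot dataset, a plain linear regression in place of the ordinal model, and a normal approximation for the slope's p-value — but the power-by-resampling loop has the same shape:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# hypothetical pilot data with a small positive effect
pilot_x = rng.normal(size=60)
pilot_y = 0.2 * pilot_x + rng.normal(size=60)

def slope_pvalue(x, y):
    """Two-sided p-value for the OLS slope, via a normal approximation."""
    x = x - x.mean()
    b = (x @ y) / (x @ x)
    resid = y - y.mean() - b * x
    se = math.sqrt(resid @ resid / (len(x) - 2) / (x @ x))
    z = b / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def power(n, reps=2000, alpha=0.05):
    """Fraction of size-n resamples of the pilot data that reject at alpha."""
    hits = 0
    for _ in range(reps):
        idx = rng.integers(0, len(pilot_x), size=n)  # resample n rows with replacement
        if slope_pvalue(pilot_x[idx], pilot_y[idx]) < alpha:
            hits += 1
    return hits / reps

for n in (100, 400):
    print(n, power(n))
```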
|
13,731
|
Power analysis for ordinal logistic regression
|
I would add one other thing to Snow's answer (and this applies to any power analysis via simulation) - pay attention to whether you are looking for a 1 or 2 tailed test. Popular programs like G*Power default to 1-tailed test, and if you are trying to see if your simulations match them (always a good idea when you are learning how to do this), you will want to check that first.
To make Snow's code run a 1-tailed test, I would add a parameter called "tail" to the function inputs, and put something like this in the function itself:
#two-tail test
if (tail==2) fit$stats[5]
#one-tail test
if (tail==1){
  if (fit$coefficients[5]>0) {
    fit$stats[5]/2
  } else 1
}
The 1-tailed version basically checks to see that the coefficient is positive, and then cuts the p-value in half.
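The same logic as a standalone helper (a generic sketch of my own, not tied to rms; note the R snippet above simply returns 1 for a wrong-signed coefficient, while the conventional conversion is 1 − p/2):

```python
def one_tailed_p(p_two_sided, coef, direction="greater"):
    """Convert a two-sided p-value to a one-sided one.

    If the estimated coefficient points in the hypothesized direction,
    halve the two-sided p-value; otherwise use 1 - p/2.
    """
    right_direction = coef > 0 if direction == "greater" else coef < 0
    return p_two_sided / 2 if right_direction else 1 - p_two_sided / 2

print(one_tailed_p(0.08, coef=1.3))   # halved: 0.04
print(one_tailed_p(0.08, coef=-1.3))  # wrong sign: 0.96
```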
|
13,732
|
100-sided dice roll problem
|
This question is ambiguous. Does it mean
You can play this game only once and you wish to maximize the expected difference between what you collect at the end and the cost of the rolls needed to get there? Or,
You can play this game an unlimited number of times and you wish to maximize your expected profit per roll in the long run?
The two interpretations lead to very different strategies, each of which would be exceptionally poor if applied in the other circumstance!
First interpretation.
Let $T\subset \{1,2,\ldots, 100\}$ be the set of values for which you intend to collect a reward and let $p=|T|/100$ be its size as a proportion of all outcomes. The expected number of rolls needed to observe an element of $T$ is (as is well known and intuitively obvious) equal to $1/p = 100/|T|.$ Moreover, the expected reward is the mean of $T$ (because, conditional on $T,$ the rolls are uniformly distributed among the values of $T$). Consequently,
For any given value of $p$ you want to make the mean of $T$ as large as possible. Thus, $T = \{t, t+1, \ldots, 100\}$ must consist of the $100p$ highest possible values on the die. Its mean is $(100+t)/2$ and $p = (101-t)/100.$
Thus, your expected net profit is $(100+t)/2 - 100/(101-t).$ As a function of a real variable this rises smoothly to a maximum at $t = 101 - \sqrt{200} \approx 86.9$ and then falls rapidly, implying that as a function of an integral value it must be maximized either at $86$ or $87.$ It's almost a toss-up, but $t=87$ wins out by a tiny amount.
Here is a plot of this function.
And a closer look near the region of interest (notice the scale on the vertical axis):
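In place of reading the optimum off a plot, the expected-profit function can be scanned over the hundred integer thresholds directly:

```python
def profit(t):
    """Expected net profit when stopping on any roll >= t (t in 1..100)."""
    return (100 + t) / 2 - 100 / (101 - t)

best_t = max(range(1, 101), key=profit)
print(best_t, round(profit(best_t), 4), round(profit(86), 4))
# → 87 86.3571 86.3333: t = 87 beats t = 86 by a whisker
```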
Second interpretation.
You might as well ask what is the best way to pick up cash lying in the street: take it all!
Imagine all future rolls laid out before you in order, like this randomly generated sequence:
86 91 100 8 100 66 87 9 71 44 24 94 57 2 68 62 59 93 97 15 ...
You will pay $\$1$ for each of these rolls no matter what. You will receive, however, only those rewards where you choose to stop.
I'm going to make your choice supremely easy: since you have committed to bet on each roll, I will let you peek at them all to decide which rewards to collect! Surely you cannot do better without peeking, so this provides an upper bound on what you might be able to achieve.
For instance, if--according to some--you select any reward where the roll exceeds 49, your list of net returns (rewards minus the bets) begins
85 90 99 -1 99 65 86 -1 70 -1 -1 93 56 -1 67 61 58 92 96 -1 ...
If instead--if you were relying on the results of the first interpretation of the question for guidance--you select only rewards where the roll exceeds 86, your list of net returns begins
-1 90 99 -1 99 -1 86 -1 -1 -1 -1 93 -1 -1 -1 -1 -1 92 96 -1
The more restrictive your stopping rule, the more times you will replace a positive number with a -1. In the long run, it just gets worse and worse for you as you hold back waiting for any set of special stopping numbers.
This argument covers not only a threshold stopping rule, but even an arbitrary sequence of stopping rules of any complexity. Any rule that causes you not to collect a reward immediately reduces your total return.
Wait, you might object: why can't I just decide not to bet on the next roll? Go right ahead. I will make the same offer as before, but you are not allowed to peek at the roll before deciding not to bet. Because that's the case, the list of the rolls that you do bet on will have exactly the same probabilistic characteristics as the list I began this answer with: it's a sequence of independent uniform outcomes.
I said that peeking gives an upper bound on the possibilities. However, since the greatest total rewards can be obtained without peeking,
the optimal strategy is to collect a reward on every turn regardless of the roll's outcome. Your expected value for each roll is $-1$ (for the cost of rolling) plus $101/2$ (the expected value on a d100 die), a net of $49.5.$
If you are a believer in any other strategy, understand that by waiting until a high-ish value is observed, you will tend to pay for several rolls before seeing that number. For instance, if you wait to see a value exceeding 50, it is easy to establish (and intuitively obvious) that you will pay for two rolls on average for that to happen. You will collect an expected value of $(51+52+\cdots+100)/50 = 75.5$ but you will have paid $\$2$ for that privilege. The average rate of return on your investment is only $(75.5-2)/2 = 36.75,$ noticeably less than the ROI of $49.5/1 = 49.5$ achieved with the optimal strategy.
Still unconvinced? For the 20 rolls shown at the outset, I will pay $20$ and collect $1233,$ leaving me up by $1213.$ You will pay the same $20$ and will collect only $1131,$ leaving you with $102$ less than me.
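The per-roll accounting in the last two paragraphs reduces to a couple of lines — expected reward conditional on stopping, divided by the expected number of paid rolls:

```python
def per_roll_return(t):
    """Average profit per paid roll when collecting only rolls >= t."""
    values = range(t, 101)
    expected_reward = sum(values) / len(values)   # mean of the accepted values
    expected_rolls = 100 / len(values)            # geometric waiting time, 1/p
    return (expected_reward - expected_rolls) / expected_rolls

print(per_roll_return(1))    # collect everything: 49.5 per roll
print(per_roll_return(51))   # wait for a roll above 50: 36.75 per roll
```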
|
13,733
|
100-sided dice roll problem
|
Let $t \in [0,99]$ be our rejection threshold value. In other words, if the value we rolled is $> t$, then we stop.
Then $p = 1 - \frac{t}{100}$ is the probability that we stop. This then means that on average it will take us $\frac{1}{p}$ rolls to finish. Note that when we stop, we received a value uniformly distributed over $[t+1,100]$, which is on average $\frac{t+1+100}{2}$. Thus, our expected profit is
$$
\frac{t+1+100}{2} - \frac{1}{p} = \frac{101 + t}{2} - \frac{100}{100 -t}
$$
Iterating over the values of $t$ gives us the maximum expected value at $t=86$ of $86.3571429$ (which is consistent with Lynn's simulation, which arrived at the same rule of >= 87).
The analysis below is wrong, since the expected payout is incorrect. See my new answer for a fully probabilistic treatment.
Now then let's consider the case where the player has access to a supplementary source of randomness in order to make decisions.
Now we define $t = i + r$ where $i$ is a whole number and $r \in [0,1)$ is the remainder, and establish the following rule for the roll value $v$:
When $v \leq i$, continue
When $v > i + 1$, stop
When $v = i + 1$, stop with probability $1-r$
Then the probability of stopping is $p = 1 - \frac{i+1}{100} + \frac{1-r}{100} = 1 - \frac{t}{100}$. Given that we have stopped, the expected payout is the same as before. So the expression for the expected profit remains the same. Only now we can optimize over non-integer $t$. Solving this gives $t= 100 - 10\sqrt 2 \approx 85.858$ resulting in a profit of $\frac{201}{2} - 10\sqrt 2 \approx 86.358$
|
13,734
|
100-sided dice roll problem
|
I coded this in Python and obtained the following results from 1,000,000 runs for each test:
Test 1: Stopping when throw >= 50:
Average winnings: \$73.07
Minimum winnings: \$35
Maximum throws: 20
Test 2: Stopping when throw >= 87:
Average winnings: \$86.36
Minimum winnings: \$-4
Maximum throws: 92
I tested a few stopping values, and stopping after rolling 87 or higher seemed to give the best results:
Here's my python code:
import random
import numpy as np
def roll_dice():
return random.randint(1, 100)
def stop(num, throw, limit=50):
return throw >= limit
def winnings(num, throw):
return throw - num
win_list = []
max_throws = 0
stop_at = 50  # set to 87 for Test 2
for run in range(1000000):
for i in range(1, 101):
throw = roll_dice()
if stop(i, throw, stop_at):
break
win_list.append(winnings(i, throw))
max_throws = max(max_throws, i)
print(f'Stopping when throw >= {stop_at}')
print(f'Average winnings: ${np.mean(win_list):.2f}')
print(f'Minimum winnings: ${np.min(win_list)}')
print(f'Maximum throws: {max_throws}')
|
13,735
|
100-sided dice roll problem
|
The past is past and doesn't matter for your strategy, so after roll $i$ you have the option of $\$X_i$ if $X_i$ is showing, or paying \$1 to get the random $\$X_{i+1}$, for a total of $\$X_{i+1}-1$. The expected value of the next roll, and every future roll, is \$50-1=\$49.
Thus, if you are currently getting \$50 or higher you should stop, if you are currently getting less than \$49 you should keep going. If you are currently getting exactly \$49 you are indifferent in expectation and you need some other criterion -- perhaps you should toss a coin to decide.
|
13,736
|
100-sided dice roll problem
|
First of all, the only thing that matters as far as deciding when to stop is the last roll. Others have mentioned this without proving it, so here's an argument for it: your winnings depend only on your last roll. Your previous rolls don't affect it at all. Furthermore, your marginal cost is not affected by the rolls. Your total cost depends on how many previous rolls you had, but optimality is based on marginal cost, not total cost. Since neither cost nor benefit are affected by previous rolls, we can ignore them.
Therefore, whether you should roll again is based solely on what the current roll is. So there are n different strategies: stop as soon as you see a $1$, stop as soon as you see a $2$, etc. If $f(n)$ is the expected winnings for following the nth strategy, then the problem is to find the $n$ that maximizes $f(n)$. So what is $f(n)$? Well, we have a $\frac1{100}$ chance of getting $100$. And if $n<100$, then we have a $\frac1{100}$ chance of getting $99$. And so on. In other words, the expected value after the next roll is $\frac{100+99+98+...n}{100}$. With some knowledge of arithmetic sequences, we can put that in closed form as $\frac 1 {100}*\left(5050-\frac{n(n+1)}2\right)= \frac{10100-n(n+1)}{200} $. There's also a $\frac{n-1}{100}$ chance of not stopping at the next roll, in which case we're right back where we started, except we're down a dollar. So $f(n) = \frac{10100-n(n+1)}{200}+ \frac{(f(n)-1)(n-1)}{100}$.
$$f(n) = \frac{10100-n(n+1)+ 2(f(n)-1)(n-1)}{200}$$
$$200f(n) = 10100-n(n+1)+ 2(f(n)-1)(n-1)$$
$$200f(n) = 10100-n(n+1)+ 2(f(n)(n-1))-n+1$$
$$(200-2n+2)f(n) = 10100-n(n+1)-n+1$$
$$(202-2n)f(n) = 10100-n^2-2n+1$$
$$f(n) = \frac{10101-n^2-2n}{202-2n}$$
This has the maximum value of 84.6176 at n = 85. This isn't the same as previous answers, so I wuite possibly made a mistake with my arithmetic somewhere.
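The recursion can also be checked numerically by solving it for each threshold directly, independent of the algebra above (a quick sketch; the function name is mine):

```python
def f(n):
    # Strategy "stop as soon as the roll is >= n":
    # f(n) = sum(i for i in n..100)/100 + (n-1)/100 * (f(n) - 1),
    # solved in closed form for f(n).
    stop_sum = sum(range(n, 101))
    return (stop_sum - (n - 1)) / (101 - n)

best_n = max(range(1, 101), key=f)
print(best_n, round(f(best_n), 4))  # -> 87 87.3571
```

This puts the optimum at a threshold of 87, in line with the other answers here.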
|
13,737
|
100-sided dice roll problem
|
Maybe I don't understand your question, in which case I apologise. The expected payoff after $n$ rolls is the value of the last roll. This is,
$$
\mathbb{E}[R_n]=\sum_{i=1}^{100} p_i i = 50.5
$$
where $R_n$ is the revenue from the last roll; $R_n$ takes values $i=1,\ldots,100$ with each
value having probability $p_i=1/100$. Therefore the expected payoff is not a function of $n$. The cost of $n$ rolls is $n$ dollars. So my expected payoff is $50.5-n$. The maximum expected payoff is for $n=1$.
|
13,738
|
100-sided dice roll problem
|
As a stats learner some of the answers here went far above my head, but with my intuition I came to a similar conclusion, so I thought it could be worth sharing my mental process in case it might help someone or to get it commented on by someone more expert.
With every new dice roll you are paying \$1, so you want it to increase the expected utility by at least 1\$.
Now let's say you already rolled a 99: you are going to make 99\$, and a new roll is "good" only if you get 100. The chances are $1/100$, so the expected utility is $0.01\$$. Rolling the die again is not worth it.
What if you already got a 98? You can make 1\$ more by rolling a 99 or 2\$ more by rolling a 100. The two options are mutually exclusive and equally likely, so we can just sum their expected utilities: $0.01\$ + 0.02\$ = 0.03\$$. Not worth it again.
So we just need to find the value $n$ for which the sum of expected utilities is more than 1\$, which means finding the $m = 100 - n$ for which $\dfrac{m(m+1)}{2 \cdot 100} > 1$, i.e. "the sum of the increases of the expected utilities for all the values greater than ours is greater than 1\$".
That gives us $n = 86$ with an increase in the expected utility of $1.05\$$.
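The search for that cutoff can be done mechanically (a small sketch; the helper name is mine):

```python
def marginal_gain(n):
    # Expected improvement from one more roll when currently holding n:
    # each value v > n comes up with probability 1/100 and gains (v - n).
    return sum(v - n for v in range(n + 1, 101)) / 100

# Largest n at which another roll is still worth more than its $1 cost:
cutoff = max(n for n in range(1, 101) if marginal_gain(n) > 1)
print(cutoff, marginal_gain(cutoff))  # -> 86 1.05
```

So at 86 another roll is still worth it, and at 87 or above it is not.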
|
13,739
|
100-sided dice roll problem
|
Here is a function to compute the expected best profit of the game recursively, in Python. This value is 86.35, and it is also the case that for all values of last_roll greater than or equal to 87, the most profitable option is to stop playing right away (best_profit(last_roll, rolls) == last_roll - rolls). I do not know how to prove that mathematically, however. For values strictly less than 87 there exist both situations where continuing is more profitable, and situations where stopping is more profitable.
#!/usr/bin/env python3
from functools import cache
@cache
def best_profit(last_roll, rolls):
# If the maximal possible profit in the next roll is less than or equal to zero, there is no profit in playing at all, stop immediately.
if 100 - (rolls + 1) <= 0:
return last_roll - rolls
# If the profit of stopping is greater than or equal to the maximal possible profit of continuing, stop.
if last_roll - rolls >= 100 - (rolls + 1):
return last_roll - rolls
return max(last_roll - rolls, 0.01 * sum(best_profit(next_roll, rolls + 1) for next_roll in range(1, 100+1)))
print(best_profit(0, 0))
|
13,740
|
100-sided dice roll problem
|
Here we generalize on the other approaches but realize the same solution. The difference is that here we do not presume a stopping rule of the form suggested, but rather prove it is optimal.
We note that the number of prior turns should not impact our current decision. It follows immediately that we should take the first roll (since we will not lose money even if we decide to stop after 1 roll). We thus take a fully probabilistic approach, whereby we stop after having seen a value $i$ with probability $p_i$, or $p=(p_1, p_2, \dots, p_{100})$. Then, defining $V(p) = \sum_i p_i i$, we have that our expected winnings, as a function of $p$, is:
$$
W(p) = -1 + \left(1 - \sum_i \frac{p_i}{100}\right)W(p) + \frac{V}{100}
$$
where the middle term captures the expected winnings when we do not stop.
Rearranging gives:
$$
W(p) = \frac{V - 100}{\sum_i p_i}
$$
Then,
$$
\frac{dW}{dp_j} = \frac{j \sum_i p_i - V + 100}{\left(\sum_i p_i\right)^2}
$$
Note that this is strictly monotonically increasing in the index $j$. Thus, there can be at most a single value $j_0$ for which the above value is 0. For all $j > j_0$, the gradient is positive, and thus to maximize $W(\cdot)$, $p_j = 1$. Similarly, for all $j < j_0$, the gradient is negative, and thus $p_j = 0$. If there is such a value $j_0$, we let $k = j_0$. Otherwise, we let $k$ be the smallest index for which the gradient is positive. Then,
$$
k \sum_i p_i - V = -(1 + 2 + \dots + (100 - k))
$$
We note that $\sum_{i=1}^{13} i = 91$ and $\sum_{i=1}^{14} i = 105$. Thus, there is no $j_0$ value. Therefore, $100 - k = 13$ or $k = 87$. This then gives the rule: stop if the value seen is $\geq 87$.
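The conclusion can be verified numerically by evaluating $W(p)$ for every threshold strategy, i.e. $p_j = 1$ for $j \ge k$ and $0$ otherwise (a sketch; the function name is mine):

```python
def W(k):
    # W(p) = (V - 100) / sum_i p_i for the threshold strategy p_j = 1[j >= k]
    V = sum(range(k, 101))   # V(p) = sum of values at which we stop
    return (V - 100) / (101 - k)

best_k = max(range(1, 101), key=W)
print(best_k, round(W(best_k), 4))  # -> 87 86.3571
```

The maximum sits at $k = 87$; the value 86.357 is net of the first roll's dollar, consistent with the recursive computation in another answer.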
|
13,741
|
Algorithm for sampling fixed number of samples from a finite population
|
Yes.
Collect the first $k$ items encountered into the cache. At steps $j=k+1, \ldots, n,$ place item $j$ in the cache with probability $k/j,$ in which case you will remove one of the existing items uniformly at random. After you have been through the entire population, the cache will be the desired random sample.
This algorithm is similar to a standard algorithm for creating a random permutation of $n$ items. It's essentially Durstenfeld's version of the Fisher-Yates shuffle.
Here is a diagram of how such a sample of size $k=20$ evolved for a population that eventually was size $n=300.$ The lines at each iteration indicate the indexes of the sample members.
At each iteration, the sample should be roughly uniformly distributed between $1$ and the iteration--conditional, of course, on how uniformly distributed it had previously been. Of crucial importance is to note how some of the earliest elements (shown in red) manage to persist in the sample to the end: these need to have the same chances of being in the sample as any of the later elements.
To prove the algorithm works, we may view it as a Markov chain.
The set of states after $n\ge k$ items have been processed can be identified with the set of $k$-subsets $\mathcal{I} = \{i_1, i_2, \ldots, i_k\}$ of the indexes $1,2,\ldots, n$ denoting which items are currently in the sample.
The algorithm makes a random transition from any subset $\mathcal I$ of $\{1,2,\ldots, n\}$ to $k+1$ distinct possible subsets of $\{1,2,\ldots, n, n+1\}.$ One of them is $\mathcal I$ itself, which occurs with probability $1 - k/(n+1).$ The others are the subsets in which $i_j$ is replaced by $n+1$ for $j=1,2,\ldots, k.$ Each of these transitions occurs with probability
$$\frac{1}{k}\left(\frac{k}{n+1}\right) = \frac{1}{n+1}.$$
We need to prove that after $n \ge k$ steps, every $k$-subset of $\{1,2,\ldots, n\}$ has the same chance of being the sample. We can do this inductively. To this end, suppose after step $n\ge k$ that all $k$-subsets have equal chances of being the sample. These chances therefore are all $1/\binom{n}{k}.$ After step $n+1,$ a given subset $\mathcal I$ of $\{1,2,\ldots, n+1\}$ can have arisen as a transition from $n-k+2$ subsets of $\{1,2,\ldots, n\}:$ namely,
If $\mathcal{I}$ does not contain $n+1,$ it arose as a transition of probability $1-k/(n+1)$ from itself, where it originally had a chance of $1/\binom{n}{k}$ of occurring. Such subsets therefore appear with individual chances of
$$\Pr(\mathcal{I}) = \frac{1}{\binom{n}{k}} \times \left(1 - \frac{k}{n+1}\right) = \frac{1}{\binom{n+1}{k}}.$$
If $\mathcal{I}$ does contain $n+1,$ it arose upon replacing one of the $n-(k-1)$ indexes in $\{1,2,\ldots, n\}$ that do not appear in $\mathcal I$ with the new index $n+1.$ Each such transition occurs with chance $1/(n+1),$ again giving a total chance of
$$\Pr(\mathcal I) = (n-(k-1)) \times \frac{1}{\binom{n}{k}} \times \frac{1}{n+1} = \frac{1}{\binom{n+1}{k}}.$$
Consequently, all possible $k$-subsets of the first $n+1$ indexes have a common chance of $1/\binom{n+1}{k}$ of occurring, proving the induction step.
To start the induction, notice that at step $n=k$ there is exactly one subset and it has the correct chance of $1$ to be the sample! This completes the proof.
This R code demonstrates the practicality of the algorithm. In the actual application you would not have a full vector population: instead of looping over seq_along(population), you would have a source of data from which you sequentially fetch the next element (as in population[j]) and increment j until it is exhausted.
sample.online <- function(k, population) {
cache <- rep(NA, k)
for (j in seq_along(population)) {
if (j <= k) {
cache[j] <- population[j]
} else {
if (runif(1, 0, j) <= k) cache[sample.int(k, 1)] <- population[j]
}
}
cache
}
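For comparison, here is the same procedure in Python (a sketch; the function name is mine). With distinct stream elements the cache always ends up holding exactly $k$ distinct members:

```python
import random

def reservoir_sample(k, stream):
    cache = []
    for j, item in enumerate(stream, start=1):
        if j <= k:
            cache.append(item)                 # fill the cache with the first k items
        elif random.random() < k / j:          # keep item j with probability k/j
            cache[random.randrange(k)] = item  # evict a uniformly chosen member
    return cache

random.seed(1)
sample = reservoir_sample(20, range(300))
print(len(sample), len(set(sample)))  # -> 20 20
```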
|
13,742
|
Algorithm for sampling fixed number of samples from a finite population
|
I think the most intuitive solution is that you have an ordered list, and every time you see a new item, you place the item into the list at a random location. Then you take the first $k$ elements of that list.
Since you're taking the first $k$ elements, you don't need to keep track of the elements after that, so you can instead maintain a list of length $k$. Each time you see a new item, if you had seen $m$ items previously, there are $m+1$ different places to put the new item, so for each slot in the partial list there's a $\frac 1 {m+1}$ probability of the new item going there, and a $\frac{m+1-k}{m+1}$ probability of it not going in the partial list at all. If it does go in the partial list, the item that was previously last in the list gets pushed out.
Since the order doesn't matter, you can simplify it even further: each new item has probability $\frac k {m+1}$ of being added, and if it is added, each old item has probability $ \frac 1 k$ of being dropped.
This is the algorithm in whuber's answer (my $m+1$ is their $j$), but I think the explanation for it is more intuitive.
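The full-list intuition can also be written out directly (an $O(n^2)$ sketch, purely for illustration; the function name is mine):

```python
import random

def sample_by_random_insertion(k, stream):
    order = []
    for item in stream:
        # Insert each new item at a uniformly random position in the list,
        # so the final list is a uniformly random permutation of the stream.
        order.insert(random.randrange(len(order) + 1), item)
    return order[:k]  # the first k positions form a uniform k-subset

random.seed(1)
picked = sample_by_random_insertion(3, range(10))
print(sorted(picked))
```

The streaming algorithm is what you get after observing that positions beyond $k$ never need to be stored.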
|
13,743
|
Algorithm for sampling fixed number of samples from a finite population
|
Complementary to @whuber's and @Accumulation's good answers (+1).
Sampling techniques used to address such tasks are usually categorised under the umbrella of reservoir sampling; these sampling methodologies have been strongly motivated by the need to sample streaming data where the overall sample size $n$ is unknown or by definition dynamic. The term "reservoir" refers to the size of the resulting sample. The original principle is discussed in "Random sampling with a reservoir" (1985) by Vitter but came to prominence with social media applications; "TeRec: a temporal recommender system over tweet stream" (2013) by Chen et al. is a short and straightforward implementation of how Vitter's original algorithm was adapted/extended to suit the needs of social media apps like Twitter and Weibo. I also came across an excellent blog post on the matter: Reservoir sampling by Startin, which takes a much more programmatic approach.
Please note that the original request from Tim asked for $O(n)$ complexity, but there are algorithms that can do even better than that. Similarly, there are implementations that offer the ability to do weighted reservoir sampling (i.e. the probability of each item being selected is determined by its relative weight - see "Weighted random sampling with a reservoir" (2005) by Efraimidis and Spirakis for an early work on that).
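As an illustration of the weighted variant, the A-Res scheme of Efraimidis and Spirakis assigns each item the key $u^{1/w}$ with $u \sim U(0,1)$ and keeps the $k$ items with the largest keys (a sketch; names are mine):

```python
import heapq
import random

def weighted_reservoir(k, weighted_stream):
    # A-Res: key_i = u_i ** (1 / w_i); retain the k largest keys seen so far.
    heap = []  # min-heap of (key, item); heap[0] is the smallest retained key
    for item, weight in weighted_stream:
        key = random.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

random.seed(1)
chosen = weighted_reservoir(2, [("a", 1.0), ("b", 5.0), ("c", 0.1)])
print(chosen)
```

Heavier items get systematically larger keys, so they are retained with proportionally higher probability, while the memory footprint stays at $O(k)$.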
|
13,744
|
Difference between Anomaly and Outlier
|
The two terms are synonyms according to:
Aggarwal, Charu C. Outlier Analysis. Springer New York, 2017, doi: http://dx.doi.org/10.1007/978-3-319-47578-3_1
Quotation from page 1:
Outliers are also referred to as abnormalities, discordants, deviants, or anomalies in the data mining and statistics literature.
Bold text is not part of the original text.
A free-to-download PDF of the book, made available by the author, is here.
|
13,745
|
Difference between Anomaly and Outlier
|
A tongue-in-cheek answer:
Outlier: a value that you predictably find in your data that indicates your model does not work properly
Anomaly: a value that against all odds you find in your data that indicates your model does work properly
A more serious, less cryptic answer:
The concept of outliers starts from the issue of building a model that makes assumptions about the data. Outliers are often indicators that the model does not describe the data properly and thus we should question the results of our model or quality of our data.
The concept of anomalies starts outside the theoretic world and inside the applied world: we want to look for unusual behavior in our data, sometimes motivated by the fact that we are interested in finding behavior that someone is trying to hide (like a virus in an email). The problem is that since people are trying to hide what they are doing, we don't really know what to look for. So we take a set of "good" data, and decide that whatever we find in our new dataset that doesn't look "good" is an anomaly and worth our time to check out in more detail. Often, looking for anomalies means looking for outliers in your new data set. But note that these values may be very common in your new dataset, despite being rare in your old dataset!
In summary, the two concepts are very similar in terms of the statistics behind them (i.e. unusual values given your fitted model) but come at the idea from different angles. In addition, when we talk about outliers, we typically mean an unusual data point in the data used to fit our model, whereas an anomaly usually means an unusual data point in a dataset outside of the data used to fit our model.
Note: this answer is based on how I've seen the two terms frequently used rather than formal definitions. User experiences may differ.
|
13,746
|
Difference between Anomaly and Outlier
|
An anomaly is a result that can't be explained given the base distribution (an impossibility if our assumptions are correct). An outlier is an unlikely event given the base distribution (an improbability).
|
13,747
|
Difference between Anomaly and Outlier
|
The terms are largely used in an interchangeable way.
"Outlier" refers to something lying outside the norm - so it is "anomalous".
But I have the impression that "outlier" is usually used for very rare observations. In statistics, on a normal distribution, you would consider observations beyond three sigma to be outliers. That is, 99.7% of your objects are expected to be "normal".
"Anomaly" is used much more liberally. If you suddenly have millions of visitors on your website, these are not rare visitors. The sudden increase in visitors however is still "anomalous", whereas each individual visitor is not an "outlier".
It may have been in this article where I saw these differences discussed, but I can't access it right now, unfortunately.
Statistical Analysis and Data Mining, Volume 5, Issue 5, October 2012, Pages 363–387
A survey on unsupervised outlier detection in high-dimensional numerical data
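The three-sigma convention mentioned above can be sketched as a simple z-score filter (a generic illustration, not taken from the cited survey):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=10_000)

# Flag observations more than three standard deviations from the mean.
z = (x - x.mean()) / x.std()
outliers = x[np.abs(z) > 3]

# For normally distributed data, roughly 0.3% of points fall outside
# three sigma, so only a handful of the 10,000 points are flagged.
print(len(outliers))
```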
|
13,748
|
Difference between Anomaly and Outlier
|
Just to muddy the waters further, in climatology an anomaly just means the difference between a value and a reference mean, i.e. a deviation:
The term temperature anomaly means a departure from a reference
value or long-term average. A positive anomaly indicates that the
observed temperature was warmer than the reference value, while a
negative anomaly indicates that the observed temperature was cooler
than the reference value.
see e.g.
That may well be regarded as outside machine learning, but people interested in the question may be interested in this.
|
13,749
|
Difference between Anomaly and Outlier
|
Good question. However, a Google search on "difference between outliers and anomalies site:.edu" shows that there is no theoretical difference between these two terms. They are used interchangeably in the literature.
|
13,750
|
Difference between Anomaly and Outlier
|
An outlier is a data point that makes it hard to fit a model. You face outliers, often unwillingly, when you are trying to fit a model on your dataset. Removing outliers enables building better (i.e. more generalizable) models. A point $(1,5)$ would be an outlier for the model $y=x$. You ignore it in light of the fact that all your other points $(1,1)$, $(5,5)$, $(3,3.1)$ more closely fit $y=x$.
An anomaly can be a single data point, or a general trend or behavior observed in the data after a model has already been built or an understanding of the data-generating process has been formed.
You face anomalies because the system starts behaving differently, or you seek out such data points, because you want to be informed when an event occurs during which your model is not valid. You may care about observing any anomalous behavior in amplitudes of ocean waves, not because you want to throw away those data points and build a better model, but because you want to be aware when a tsunami might be happening.
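The effect of that single point on the fitted slope can be seen with a quick least-squares fit (an illustrative sketch):

```python
import numpy as np

# Points that closely fit y = x, plus the outlier (1, 5).
x = np.array([1.0, 5.0, 3.0, 1.0])
y = np.array([1.0, 5.0, 3.1, 5.0])

slope_with_outlier = np.polyfit(x, y, 1)[0]
slope_without_outlier = np.polyfit(x[:3], y[:3], 1)[0]

# Without (1, 5) the slope is essentially 1; with it, the fitted line
# is dragged well away from y = x.
print(slope_without_outlier, slope_with_outlier)
```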
|
13,751
|
What is a good book about the philosophy behind Bayesian thinking?
|
Jay Kadane's Principles of uncertainty is a recent and highly coherent introduction to subjective Bayesian thinking. I reviewed it there and definitely recommend it.
|
13,752
|
What is a good book about the philosophy behind Bayesian thinking?
|
I'm a particular fan of Understanding Uncertainty by Dennis Lindley. I actually emailed Jay Kadane a while back to ask the same question you did, and he recommended me this book.
|
13,753
|
What is a good book about the philosophy behind Bayesian thinking?
|
Probability, The Logic of Science by E.T. Jaynes, provides excellent discussions around this subject. Jaynes is on the side of Objective Bayesianism.
Related books that influenced Jaynes' book are Jeffreys' Theory of Probability of 1939 (1948, 1961), Good's Probability & the Weighing of Evidence of 1950 and Savage's Foundations of Statistics of 1954.
|
13,754
|
What is a good book about the philosophy behind Bayesian thinking?
|
Here is a recent title with a focus on regression: Bayesian and Frequentist Regression Methods
|
13,755
|
What is a good book about the philosophy behind Bayesian thinking?
|
One of the most lucid expositions of Bayesian thinking can be found in "Bayes' Rule" by Jim Stone. The same book comes in several versions, with accompanying R, Python and MATLAB code.
http://jim-stone.staff.shef.ac.uk/BookBayes2012/BayesRuleBookMain.html
|
13,756
|
Influence functions and OLS
|
Influence functions are basically an analytical tool that can be used to assess the effect (or "influence") of removing an observation on the value of a statistic without having to re-calculate that statistic. They can also be used to create asymptotic variance estimates. If the influence function is $I$, the asymptotic variance is $\frac{E[I^2]}{n}$.
The way I understand influence functions is as follows. You have some sort of theoretical CDF, denoted by $F_{i}(y)=Pr(Y_{i}<y_{i})$. For simple OLS, you have
$$Pr(Y_{i}<y_{i})=Pr(\alpha+\beta x_{i} + \epsilon_{i} < y_{i})=\Phi\left(\frac{y_{i}-(\alpha+\beta x_{i})}{\sigma}\right)$$
Where $\Phi(z)$ is the standard normal CDF, and $\sigma^2$ is the error variance. Now you can show that any statistic will be a function of this CDF, hence the notation $S(F)$ (i.e. some function of $F$). Now suppose we change the function $F$ by a "little bit", to $F_{(i)}(z)=(1+\zeta)F(z)-\zeta \delta_{(i)}(z)$, where $\delta_{i}(z)=I(y_{i}<z)$, and $\zeta=\frac{1}{n-1}$. Thus $F_{(i)}$ represents the CDF of the data with the $i$th data point removed. We can do a Taylor series of $F_{(i)}(z)$ about $\zeta=0$. This gives:
$$S[F_{(i)}(z,\zeta)] \approx S[F_{(i)}(z,0)]+\zeta\left[\frac{\partial S[F_{(i)}(z,\zeta)]}{\partial \zeta}|_{\zeta=0}\right]$$
Note that $F_{(i)}(z,0)=F(z)$ so we get:
$$S[F_{(i)}(z,\zeta)] \approx S[F(z)]+\zeta\left[\frac{\partial S[F_{(i)}(z,\zeta)]}{\partial \zeta}|_{\zeta=0}\right]$$
The partial derivative here is called the influence function. So this represents an approximate "first order" correction to be made to a statistic due to deleting the $i$th observation. Note that in regression the remainder does not go to zero asymptotically, so that this is an approximation to the changes you may actually get. Now write $\beta$ as:
$$\beta=\frac{\frac{1}{n}\sum_{j=1}^{n}(y_{j}-\overline{y})(x_{j}-\overline{x})}{\frac{1}{n}\sum_{j=1}^{n}(x_{j}-\overline{x})^2}$$
Thus beta is a function of two statistics: the variance of X and covariance between X and Y. These two statistics have representations in terms of the CDF as:
$$cov(X,Y)=\int(X-\mu_x(F))(Y-\mu_y(F))dF$$
and
$$var(X)=\int(X-\mu_x(F))^{2}dF$$
where
$$\mu_x=\int xdF$$
To remove the ith observation we replace $F\rightarrow F_{(i)}=(1+\zeta)F-\zeta \delta_{(i)}$ in both integrals to give:
$$\mu_{x(i)}=\int xd[(1+\zeta)F-\zeta \delta_{(i)}]=\mu_x-\zeta(x_{i}-\mu_x)$$
$$Var(X)_{(i)}=\int(X-\mu_{x(i)})^{2}dF_{(i)}=\int(X-\mu_x+\zeta(x_{i}-\mu_x))^{2}d[(1+\zeta)F-\zeta \delta_{(i)}]$$
ignoring terms of $\zeta^{2}$ and simplifying we get:
$$Var(X)_{(i)}\approx Var(X)-\zeta\left[(x_{i}-\mu_x)^2-Var(X)\right]$$
Similarly for the covariance
$$Cov(X,Y)_{(i)}\approx Cov(X,Y)-\zeta\left[(x_{i}-\mu_x)(y_{i}-\mu_y)-Cov(X,Y)\right]$$
So we can now express $\beta_{(i)}$ as a function of $\zeta$. This is:
$$\beta_{(i)}(\zeta)\approx \frac{Cov(X,Y)-\zeta\left[(x_{i}-\mu_x)(y_{i}-\mu_y)-Cov(X,Y)\right]}{Var(X)-\zeta\left[(x_{i}-\mu_x)^2-Var(X)\right]}$$
We can now use the Taylor series:
$$\beta_{(i)}(\zeta)\approx \beta_{(i)}(0)+\zeta\left[\frac{\partial \beta_{(i)}(\zeta)}{\partial \zeta}\right]_{\zeta=0}$$
Simplifying this gives:
$$\beta_{(i)}(\zeta)\approx \beta-\zeta\left[\frac{(x_{i}-\mu_x)(y_{i}-\mu_y)}{Var(X)}-\beta\frac{(x_{i}-\mu_x)^2}{Var(X)}\right]$$
And plugging in the values of the statistics $\mu_y$, $\mu_x$, $var(X)$, and $\zeta=\frac{1}{n-1}$ we get:
$$\beta_{(i)}\approx \beta-\frac{x_{i}-\overline{x}}{n-1}\left[\frac{y_{i}-\overline{y}}{\frac{1}{n}\sum_{j=1}^{n}(x_{j}-\overline{x})^2}-\beta\frac{x_{i}-\overline{x}}{\frac{1}{n}\sum_{j=1}^{n}(x_{j}-\overline{x})^2}\right]
$$
And you can see how the effect of removing a single observation can be approximated without having to re-fit the model. You can also see how an x equal to the average has no influence on the slope of the line. Think about this and you will see how it makes sense. You can also write this more succinctly in terms of the standardised values $\tilde{x}=\frac{x-\overline{x}}{s_{x}}$ (similarly for y):
$$\beta_{(i)}\approx \beta-\frac{\tilde{x_{i}}}{n-1}\left[\tilde{y_{i}}\frac{s_y}{s_x}-\tilde{x_{i}}\beta\right]
$$
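As a sanity check, the final approximation can be compared numerically against an exact leave-one-out refit (a sketch on simulated data; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

xbar, ybar = x.mean(), y.mean()
sxx = np.mean((x - xbar) ** 2)  # (1/n) * sum_j (x_j - xbar)^2
beta = np.mean((x - xbar) * (y - ybar)) / sxx

# Influence approximation to the slope with observation i deleted.
i = 0
beta_i_approx = beta - (x[i] - xbar) / (n - 1) * (
    (y[i] - ybar) / sxx - beta * (x[i] - xbar) / sxx
)

# Exact slope after actually deleting observation i and refitting.
beta_i_exact = np.polyfit(np.delete(x, i), np.delete(y, i), 1)[0]

print(beta_i_approx, beta_i_exact)
```

The two values agree closely even though the approximation never refits the model.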
|
13,757
|
Influence functions and OLS
|
Here is a super general way to talk about influence functions of a regression. First I'm going to tackle one way of presenting influence functions:
Suppose $F$ is a distribution on $\Sigma$. The contaminated distribution function, $F_\epsilon(x)$ can be defined as:
$$
F_\epsilon(x)=(1-\epsilon)F+\epsilon\delta_x
$$
where $\delta_x$ is the probability measure on $\Sigma$ which assigns probability 1 to $\{x\}$ and 0 to all other elements of $\Sigma$.
From this we can define the influence function fairly easily:
The influence function of $\hat{\theta}$ at $F$, $\psi_{\hat{\theta},F}:\mathcal{X}\to\Gamma$, is defined as:
\begin{equation}
\psi_{\hat{\theta},F}(x)=\lim\limits_{\epsilon\to 0}\dfrac{\hat{\theta}(F_\epsilon(x))-\hat{\theta}(F)}{\epsilon}
\end{equation}
From here it's possible to see that an influence function is the Gateaux derivative of $\hat\theta$ at $F$ in the direction of $\delta_x$. This makes the interpretation of influence functions (for me) a little bit clearer: An influence function tells you the effect that a particular observation has on the estimator.
The OLS estimate is a solution to the problem:
$$
\hat\theta=\arg\min_\theta E[(Y-X\theta)^T(Y-X\theta)]
$$
Imagine a contaminated distribution which puts a little more weight on observation $(x,y)$:
$$
\hat\theta_\epsilon = \arg\min_\theta (1-\epsilon)E[(Y-X\theta)^T(Y-X\theta)]+\epsilon (y-x\theta)^T(y-x\theta)
$$
Taking first order conditions:
$$
\left\{(1-\epsilon)E[X^TX]+\epsilon x^Tx\right\}\hat\theta_\epsilon = (1-\epsilon)E[X^TY]+\epsilon x^Ty
$$
Since the influence function is just a Gateaux derivative we can now say:
$$
\left(x^Tx-E[X^TX]\right)\hat\theta_\epsilon + E[X^TX]\psi_{\theta}(x,y) = -E[X^TY] + x^Ty
$$
At $\epsilon=0$, $\hat\theta_\epsilon=\hat\theta=E[X^TX]^{-1}E[X^TY]$, so:
$$
\psi_{\theta}(x,y)=E[X^TX]^{-1}x^T(y-x\theta)
$$
The finite sample counterpart of this influence function is:
$$
\psi_{\theta}(x,y)=\left(\dfrac{1}{N}\sum_i X_i^TX_i\right)^{-1}x^T(y-x\theta)
$$
In general I find this framework (working with influence functions as Gateaux derivatives) easier to deal with.
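A minimal numeric sketch of the finite-sample version, assuming the standard first-order relation $\hat\theta_{(-i)} \approx \hat\theta - \psi_{\theta}(x_i,y_i)/n$ (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 2
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5]) + rng.normal(size=n)

theta = np.linalg.solve(X.T @ X, X.T @ y)

# Empirical influence function at observation i:
#   psi_i = (X'X / n)^{-1} x_i' (y_i - x_i theta)
i = 0
psi_i = np.linalg.solve(X.T @ X / n, X[i] * (y[i] - X[i] @ theta))

# First-order prediction of the leave-one-out estimate.
theta_loo_approx = theta - psi_i / n

# Exact leave-one-out estimate for comparison.
Xm, ym = np.delete(X, i, axis=0), np.delete(y, i)
theta_loo_exact = np.linalg.solve(Xm.T @ Xm, Xm.T @ ym)

print(np.max(np.abs(theta_loo_approx - theta_loo_exact)))
```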
|
13,758
|
Influence functions and OLS
|
Consider a simple linear model $$Y_i=X_i\beta +u_i$$
where for simplicity, we assume $\mu_X=\mu_Y=0$, and $X_i,Y_i$ are scalar random variables with independent and identical joint measure $P$ of $X,Y$.
Consider the mapping $$\phi:\mathbb{D} \mapsto \mathbb{E} $$
such that $\phi$ maps the joint distribution of $X,Y, {P}$ to $\beta=Cov(X,Y)/V(X)$.
Denote by $\delta_{i}$ the Dirac measure with mass on the $i$th observation $(X_i, Y_i)$, and note that based on the linearity mentioned here it is easy to see that
$$\phi((1-t)P+t\delta_i) = \frac{(1-t)\sigma_{XY}+t(1-t)X_iY_i}{(1-t)\sigma_X^2+ t(1-t)X_i^2 },$$
which gives the influence function of the $i$th observation via an ordinary derivative (applying the quotient rule at $t=0$ and using $\sigma_{XY}=\beta\sigma_X^2$; nothing fancy..., sorry)
$$\phi_{(i)}'(P)=\left.\frac{\partial \phi((1-t)P+t\delta_i)}{\partial t}\right|_{t=0} = \frac{X_iY_i-\beta X_i^2}{\sigma_X^2}=\frac{X_i(Y_i-\beta X_i)}{\sigma_X^2}.$$
In large samples and under a mean-zero and scalar-$\beta$ setting, this (infeasible) conclusion is the same approximation as the solution provided by @jayk.
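A finite-difference check of this Gateaux derivative, using the closed-form expression for $\phi$ above (a sketch with made-up population values; the analytic line is the scalar form of $E[X^TX]^{-1}x^T(y-x\theta)$ from the previous answer):

```python
# Population quantities for Y = beta*X + u with mu_X = mu_Y = 0.
beta, var_x = 2.0, 1.0
cov_xy = beta * var_x          # sigma_XY
xi, yi = 1.5, 3.5              # the contaminating observation (X_i, Y_i)

def phi(t):
    """beta under the contaminated measure (1 - t) P + t delta_i,
    using the closed form derived above."""
    num = (1 - t) * cov_xy + t * (1 - t) * xi * yi
    den = (1 - t) * var_x + t * (1 - t) * xi ** 2
    return num / den

# Finite-difference Gateaux derivative at t = 0 ...
t = 1e-6
numeric_if = (phi(t) - phi(0)) / t

# ... versus the scalar OLS influence x * (y - x*beta) / var(X).
analytic_if = xi * (yi - beta * xi) / var_x
print(numeric_if, analytic_if)
```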
|
Influence functions and OLS
|
Consider a simple linear model $$Y_i=X_i\beta +u$$
where for simplicity, we assume $\mu_X=\mu_Y=0$, and $X_i,Y_i$ are scalar random variables with independent and identical joint measure $P$ of $X,Y$.
|
Influence functions and OLS
Consider a simple linear model $$Y_i=X_i\beta +u$$
where for simplicity, we assume $\mu_X=\mu_Y=0$, and $X_i,Y_i$ are scalar random variables with independent and identical joint measure $P$ of $X,Y$.
Consider the mapping $$\phi:\mathbb{D} \mapsto \mathbb{E} $$
such that $\phi$ maps the joint distribution of $X,Y, {P}$ to $\beta=Cov(X,Y)/V(X)$.
Denote Dirac measure with a mass on the $i$th observation $(X_i, Y_i)$, $\delta_{i}$, and note that based on the linearity mentioned here it is easy to see that
$$\phi((1-t)P+t\delta_i) = \frac{(1-t)\sigma_{XY}+t(1-t)X_iY_i}{(1-t)\sigma_X^2+ t(1-t)X_i^2 },$$
which gives the influence function of the $i$th observation via an ordinary derivative (nothing fancy..., sorry)
$$\phi_{(i)}'(P)=\left.\frac{\partial \phi((1-t)P+t\delta_i)}{\partial t}\right|_{t=0} = \frac{X_i(Y_i-X_i\beta)}{\sigma_X^2}.$$
In large samples and under a mean-zero and scalar-$\beta$ setting, this (infeasible) conclusion is the same approximation as the solution provided by @jayk.
|
Influence functions and OLS
Consider a simple linear model $$Y_i=X_i\beta +u$$
where for simplicity, we assume $\mu_X=\mu_Y=0$, and $X_i,Y_i$ are scalar random variables with independent and identical joint measure $P$ of $X,Y$.
|
13,759
|
How will studying "stochastic processes" help me as a statistician?
|
Stochastic processes underlie many ideas in statistics such as time series, markov chains, markov processes, bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic processes will be useful in two ways:
Enable you to develop models for situations of interest to you.
Exposure to such a course may enable you to identify a standard stochastic process that works given your problem context. You can then modify the model as needed to accommodate the idiosyncrasies of your specific context.
Enable you to better understand the nuances of the statistical methodology that uses stochastic processes.
There are several key ideas in stochastic processes, such as convergence and stationarity, that play an important role when we want to analyze a stochastic process. It is my belief that a course in stochastic processes will let you better appreciate the need to care about these issues and why they are important.
Can you be a statistician without taking a course in stochastic processes? Sure. You can always use the software that is available to perform whatever statistical analysis you want. However, a basic understanding of stochastic processes is very helpful in order to make a correct choice of methodology and to understand what is really happening inside the black box. Obviously, you will not be able to contribute to the theory of stochastic processes with only a basic course, but in my opinion it will make you a better statistician. My general rule of thumb for coursework: the more advanced the courses you take, the better off you will be in the long run.
By way of analogy: You can perform a t-test without knowing any probability theory or statistics testing methodology. But, a knowledge of probability theory and statistical testing methodology is extremely useful in understanding the output correctly and in choosing the correct statistical test.
|
How will studying "stochastic processes" help me as a statistician?
|
Stochastic processes underlie many ideas in statistics such as time series, markov chains, markov processes, bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic
|
How will studying "stochastic processes" help me as a statistician?
Stochastic processes underlie many ideas in statistics such as time series, markov chains, markov processes, bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic processes will be useful in two ways:
Enable you to develop models for situations of interest to you.
Exposure to such a course may enable you to identify a standard stochastic process that works given your problem context. You can then modify the model as needed to accommodate the idiosyncrasies of your specific context.
Enable you to better understand the nuances of the statistical methodology that uses stochastic processes.
There are several key ideas in stochastic processes, such as convergence and stationarity, that play an important role when we want to analyze a stochastic process. It is my belief that a course in stochastic processes will let you better appreciate the need to care about these issues and why they are important.
Can you be a statistician without taking a course in stochastic processes? Sure. You can always use the software that is available to perform whatever statistical analysis you want. However, a basic understanding of stochastic processes is very helpful in order to make a correct choice of methodology and to understand what is really happening inside the black box. Obviously, you will not be able to contribute to the theory of stochastic processes with only a basic course, but in my opinion it will make you a better statistician. My general rule of thumb for coursework: the more advanced the courses you take, the better off you will be in the long run.
By way of analogy: You can perform a t-test without knowing any probability theory or statistics testing methodology. But, a knowledge of probability theory and statistical testing methodology is extremely useful in understanding the output correctly and in choosing the correct statistical test.
|
How will studying "stochastic processes" help me as a statistician?
Stochastic processes underlie many ideas in statistics such as time series, markov chains, markov processes, bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic
|
13,760
|
How will studying "stochastic processes" help me as a statistician?
|
You need to be careful how you ask this question, since you could substitute almost anything in place of stochastic processes and it would still be potentially useful. For example, a course in biology could help with biological statistical consultancy since you know more biology!
I presume that you have a choice of modules that you can take, and you need to pick $n$ of them. The real question is what modules should I pick (that question probably isn't appropriate for this site!)
To answer your question, you are still very early in your career, and at this moment you should try to get a wide selection of courses under your belt. Furthermore, if you are planning a career in academia, then some more mathematical courses, like stochastic processes, would be useful.
|
How will studying "stochastic processes" help me as a statistician?
|
You need to be careful how you ask this question. Since you could substitute almost anything in place of stochastic processes and it would still be potentially useful. For example, a course in biology
|
How will studying "stochastic processes" help me as a statistician?
You need to be careful how you ask this question, since you could substitute almost anything in place of stochastic processes and it would still be potentially useful. For example, a course in biology could help with biological statistical consultancy since you know more biology!
I presume that you have a choice of modules that you can take, and you need to pick $n$ of them. The real question is what modules should I pick (that question probably isn't appropriate for this site!)
To answer your question, you are still very early in your career, and at this moment you should try to get a wide selection of courses under your belt. Furthermore, if you are planning a career in academia, then some more mathematical courses, like stochastic processes, would be useful.
|
How will studying "stochastic processes" help me as a statistician?
You need to be careful how you ask this question. Since you could substitute almost anything in place of stochastic processes and it would still be potentially useful. For example, a course in biology
|
13,761
|
How will studying "stochastic processes" help me as a statistician?
|
A deep understanding of survival analysis requires knowledge of counting processes, martingales, Cox processes... See e.g. Odd O. Aalen, Ørnulf Borgan, Håkon K. Gjessing. Survival and event history analysis: a process point of view. Springer, 2008. ISBN 9780387202877
Having said that, many applied statisticians (including me) use survival analysis without any understanding of stochastic processes. I'm not likely to make any advances to the theory though.
|
How will studying "stochastic processes" help me as a statistician?
|
A deep understanding of survival analysis requires knowledge of counting processes, martingales, Cox processes... See e.g. Odd O. Aalen, Ørnulf Borgan, Håkon K. Gjessing. Survival and event history an
|
How will studying "stochastic processes" help me as a statistician?
A deep understanding of survival analysis requires knowledge of counting processes, martingales, Cox processes... See e.g. Odd O. Aalen, Ørnulf Borgan, Håkon K. Gjessing. Survival and event history analysis: a process point of view. Springer, 2008. ISBN 9780387202877
Having said that, many applied statisticians (including me) use survival analysis without any understanding of stochastic processes. I'm not likely to make any advances to the theory though.
|
How will studying "stochastic processes" help me as a statistician?
A deep understanding of survival analysis requires knowledge of counting processes, martingales, Cox processes... See e.g. Odd O. Aalen, Ørnulf Borgan, Håkon K. Gjessing. Survival and event history an
|
13,762
|
How will studying "stochastic processes" help me as a statistician?
|
The short answer probably is that all observable processes, which we may want to analyze with statistical tools, are stochastic processes; that is, they contain some element of randomness. The course will probably teach you the mathematics behind these stochastic processes, e.g., distribution functions, which will allow you to grasp how your statistical tools work.
I think you can compare it with an automobile: just as you can drive your car without understanding the engineering behind it, and without theoretical knowledge about the dynamics of your car on the road, you can apply statistical tools to your data without understanding how these tools work, as long as you understand the output. This will probably be good enough if you want to do basic statistics with well-behaved data. But if you really want to get the most out of your car, to see where its limits are, you need knowledge about the engineering, the dynamics of your car on roads and in curves, and so on. And if you want to get the most out of your data with the help of your statistical tools, you need to understand how data generation can be modeled, how tests are devised, and what the assumptions behind your tests are, to be able to see where those assumptions are not valid.
|
How will studying "stochastic processes" help me as a statistician?
|
The short answer probably is that all observable processes, which we may want to analyze with statistical tools, are stochastic processes, that is, they contain some element of randomness. The course
|
How will studying "stochastic processes" help me as a statistician?
The short answer probably is that all observable processes, which we may want to analyze with statistical tools, are stochastic processes; that is, they contain some element of randomness. The course will probably teach you the mathematics behind these stochastic processes, e.g., distribution functions, which will allow you to grasp how your statistical tools work.
I think you can compare it with an automobile: just as you can drive your car without understanding the engineering behind it, and without theoretical knowledge about the dynamics of your car on the road, you can apply statistical tools to your data without understanding how these tools work, as long as you understand the output. This will probably be good enough if you want to do basic statistics with well-behaved data. But if you really want to get the most out of your car, to see where its limits are, you need knowledge about the engineering, the dynamics of your car on roads and in curves, and so on. And if you want to get the most out of your data with the help of your statistical tools, you need to understand how data generation can be modeled, how tests are devised, and what the assumptions behind your tests are, to be able to see where those assumptions are not valid.
|
How will studying "stochastic processes" help me as a statistician?
The short answer probably is that all observable processes, which we may want to analyze with statistical tools, are stochastic processes, that is, they contain some element of randomness. The course
|
13,763
|
How will studying "stochastic processes" help me as a statistician?
|
Just for the sake of completeness, an IID sequence of random variables is also a stochastic process (a very simple one).
|
How will studying "stochastic processes" help me as a statistician?
|
Just for the sake of completeness, an IID sequence of random variables is also a stochastic process (a very simple one).
|
How will studying "stochastic processes" help me as a statistician?
Just for the sake of completeness, an IID sequence of random variables is also a stochastic process (a very simple one).
|
How will studying "stochastic processes" help me as a statistician?
Just for the sake of completeness, an IID sequence of random variables is also a stochastic process (a very simple one).
|
13,764
|
How will studying "stochastic processes" help me as a statistician?
|
In medical statistics, you need stochastic processes to calculate how to adjust significance levels when stopping a clinical trial early. In fact, the whole area of monitoring clinical trials as emerging evidence points to one hypothesis or another, is based on the theory of stochastic processes. So yes, this course is a win.
|
How will studying "stochastic processes" help me as a statistician?
|
In medical statistics, you need stochastic processes to calculate how to adjust significance levels when stopping a clinical trial early. In fact, the whole area of monitoring clinical trials as emerg
|
How will studying "stochastic processes" help me as a statistician?
In medical statistics, you need stochastic processes to calculate how to adjust significance levels when stopping a clinical trial early. In fact, the whole area of monitoring clinical trials as emerging evidence points to one hypothesis or another, is based on the theory of stochastic processes. So yes, this course is a win.
|
How will studying "stochastic processes" help me as a statistician?
In medical statistics, you need stochastic processes to calculate how to adjust significance levels when stopping a clinical trial early. In fact, the whole area of monitoring clinical trials as emerg
|
13,765
|
How will studying "stochastic processes" help me as a statistician?
|
Other areas of application for stochastic processes: (1) Asymptotic theory: This builds on PeterR's comment about an IID sequence. Law of large numbers and central limit theorem results require an understanding of stochastic processes. This is so fundamental in so many areas of application that I am inclined to say that anyone with a graduate degree in stats or a field that uses sampling or frequentist inference ought to have key stochastic processes results under their belt. (2) Structural equation modeling for causal inference à la Judea Pearl: Analyzing directed acyclic graphs (DAGs) of causal processes requires some handle on stochastic process theory.
|
How will studying "stochastic processes" help me as a statistician?
|
Other areas of application for stochastic processes: (1) Asymptotic theory: This builds on PeterR's comment about an IID sequence. Law of large numbers and central limit theorem results require an un
|
How will studying "stochastic processes" help me as a statistician?
Other areas of application for stochastic processes: (1) Asymptotic theory: This builds on PeterR's comment about an IID sequence. Law of large numbers and central limit theorem results require an understanding of stochastic processes. This is so fundamental in so many areas of application that I am inclined to say that anyone with a graduate degree in stats or a field that uses sampling or frequentist inference ought to have key stochastic processes results under their belt. (2) Structural equation modeling for causal inference à la Judea Pearl: Analyzing directed acyclic graphs (DAGs) of causal processes requires some handle on stochastic process theory.
|
How will studying "stochastic processes" help me as a statistician?
Other areas of application for stochastic processes: (1) Asymptotic theory: This builds on PeterR's comment about an IID sequence. Law of large numbers and central limit theorem results require an un
|
13,766
|
(Why) Is absolute loss not a proper scoring rule?
|
Let's first make sure we agree on definitions. Consider a binary random variable $Y \sim \text{Ber}(p)$, and consider a loss function $L(y_i|s)$, where $s$ is an estimate of $p$ given the data. In your examples, $s$ is a function of observed data $y_1,\dots,y_n$ with $s = \hat{p}$. The Brier score loss function is $L_b(y_i | s) = |y_i - s|^2$, and the absolute loss function is $L_a(y_i|s) = |y_i - s|$. A loss function has an expected loss $E_Y(L(Y|s)) := R(p|s)$. A loss function is a proper score rule if the expected loss $R(p|s)$ is minimized with respect to $s$ by setting $s=p$ for any $p\in(0,1)$.
A handy trick for verifying this is using the binary nature of $Y$, as for any expected loss, we have
$$R(p|s) = pL(1|s) + (1-p)L(0|s)$$
Let's start by verifying that the Brier loss function is a proper score rule. Note that $L_b(1|s) = |1-s|^2 = (1-s)^2$, and $L_b(0|s) = s^2$, so using the above, we have
$$R_b(p|s) = p(1-s)^2 + (1-p)s^2$$
and taking the derivative of that function with respect to $s$ and setting it to $0$ will give you that the choice $s = p$ minimizes the expected risk. So the Brier score is indeed a proper score rule.
In contrast, recalling the binary nature of $Y$, we can write the absolute loss $L_a$ as
$$L_a(y|s) = y(1-s) + (1-y)s$$
as $y\in\{0,1\}$. As such, we have that
$$R_a(p|s) = p(1-s) + (1-p)s = p + s - 2ps$$
Unfortunately, $R_a(p|s)$ is not minimized by $s=p$; by considering edge cases, you can show that $R_a(p|s)$ is minimized by $s=1$ when $p>.5$ and by $s=0$ when $p<.5$, and that any choice of $s$ does equally well when $p=.5$.
So to answer your questions, absolute loss is not a proper scoring rule, and that does not have to do with the number of output categories. As for whether it can be wrestled into one, I certainly can't think of a way... I think such attempts will probably lead you back to the Brier score :).
Edit:
In response to OP's comment, note that the absolute loss approach is basically estimating the median of $Y$, which in the binary case is either $0$ or $1$ depending on $p$. The absolute loss just doesn't penalize the alternative choice enough to make you want to choose anything but the value that shows up the most. In contrast, the squared error penalizes the alternative enough to find a middle ground that coincides with the mean $p$. This should also highlight that there's nothing wrong with using absolute loss as a classifier, and you can think of it as related to determining, for a given problem, whether you care more about the mean or the median. For binary data, I'd personally say the mean is more interesting (knowing the median tells you whether $p > .5$, but knowing the mean tells you something more precise about $p$), but it depends. As the other post also emphasizes, there's nothing wrong with absolute loss, it just isn't a proper score rule.
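A quick Monte Carlo illustration of this point (my own sketch; the seed, sample size, and $p = 0.7$ are arbitrary choices): minimizing the mean Brier loss over a grid of candidate estimates $s$ lands near the true $p$, while minimizing the mean absolute loss pushes $s$ all the way to $1$.

```python
import random

random.seed(0)
p = 0.7
ys = [1 if random.random() < p else 0 for _ in range(10_000)]  # Bernoulli(p) draws

def mean_brier(s):
    return sum((y - s) ** 2 for y in ys) / len(ys)

def mean_abs(s):
    return sum(abs(y - s) for y in ys) / len(ys)

grid = [i / 100 for i in range(101)]     # candidate estimates s in [0, 1]
best_brier = min(grid, key=mean_brier)   # lands near the true p
best_abs = min(grid, key=mean_abs)       # degenerate: pushed to s = 1

print(best_brier, best_abs)
```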
|
(Why) Is absolute loss not a proper scoring rule?
|
Let's first make sure we agree on definitions. Consider a binary random variable $Y \sim \text{Ber}(p)$, and consider a loss function $L(y_i|s)$, where $s$ is an estimate of $p$ given the data. In you
|
(Why) Is absolute loss not a proper scoring rule?
Let's first make sure we agree on definitions. Consider a binary random variable $Y \sim \text{Ber}(p)$, and consider a loss function $L(y_i|s)$, where $s$ is an estimate of $p$ given the data. In your examples, $s$ is a function of observed data $y_1,\dots,y_n$ with $s = \hat{p}$. The Brier score loss function is $L_b(y_i | s) = |y_i - s|^2$, and the absolute loss function is $L_a(y_i|s) = |y_i - s|$. A loss function has an expected loss $E_Y(L(Y|s)) := R(p|s)$. A loss function is a proper score rule if the expected loss $R(p|s)$ is minimized with respect to $s$ by setting $s=p$ for any $p\in(0,1)$.
A handy trick for verifying this is using the binary nature of $Y$, as for any expected loss, we have
$$R(p|s) = pL(1|s) + (1-p)L(0|s)$$
Let's start by verifying that the Brier loss function is a proper score rule. Note that $L_b(1|s) = |1-s|^2 = (1-s)^2$, and $L_b(0|s) = s^2$, so using the above, we have
$$R_b(p|s) = p(1-s)^2 + (1-p)s^2$$
and taking the derivative of that function with respect to $s$ and setting it to $0$ will give you that the choice $s = p$ minimizes the expected risk. So the Brier score is indeed a proper score rule.
In contrast, recalling the binary nature of $Y$, we can write the absolute loss $L_a$ as
$$L_a(y|s) = y(1-s) + (1-y)s$$
as $y\in\{0,1\}$. As such, we have that
$$R_a(p|s) = p(1-s) + (1-p)s = p + s - 2ps$$
Unfortunately, $R_a(p|s)$ is not minimized by $s=p$; by considering edge cases, you can show that $R_a(p|s)$ is minimized by $s=1$ when $p>.5$ and by $s=0$ when $p<.5$, and that any choice of $s$ does equally well when $p=.5$.
So to answer your questions, absolute loss is not a proper scoring rule, and that does not have to do with the number of output categories. As for whether it can be wrestled into one, I certainly can't think of a way... I think such attempts will probably lead you back to the Brier score :).
Edit:
In response to OP's comment, note that the absolute loss approach is basically estimating the median of $Y$, which in the binary case is either $0$ or $1$ depending on $p$. The absolute loss just doesn't penalize the alternative choice enough to make you want to choose anything but the value that shows up the most. In contrast, the squared error penalizes the alternative enough to find a middle ground that coincides with the mean $p$. This should also highlight that there's nothing wrong with using absolute loss as a classifier, and you can think of it as related to determining, for a given problem, whether you care more about the mean or the median. For binary data, I'd personally say the mean is more interesting (knowing the median tells you whether $p > .5$, but knowing the mean tells you something more precise about $p$), but it depends. As the other post also emphasizes, there's nothing wrong with absolute loss, it just isn't a proper score rule.
|
(Why) Is absolute loss not a proper scoring rule?
Let's first make sure we agree on definitions. Consider a binary random variable $Y \sim \text{Ber}(p)$, and consider a loss function $L(y_i|s)$, where $s$ is an estimate of $p$ given the data. In you
|
13,767
|
(Why) Is absolute loss not a proper scoring rule?
|
Take a simple example where $p_i$ are known probabilities and $y_i$ are Bernoulli($p_i$).
What is $\hat y_i$? The best choice is obviously $\hat y_i=p_i$. Alternatively, we might take $\check y_i = 1$ if $p_i>0.5$ and $\check y_i=0$ if $p_i<0.5$.
Suppose $p_i>0.5$ (for simplicity).
The expected Brier loss of $\hat y_i$ is $(1-p_i)^2p_i+p_i^2(1-p_i)=p_i(1-p_i)$. The expected Brier loss of $\check y_i$ is $0^2\times p_i + 1^2\times (1-p_i)=1-p_i$, and since $p_i<1$, $p_i(1-p_i)<1-p_i$, so $\hat y_i$ is preferred over $\check y_i$.
The expected absolute loss of $\hat y_i$ is $(1-p_i)p_i+p_i(1-p_i)=2p_i(1-p_i)$. The expected absolute loss of $\check y_i$ is $0\times p_i + 1\times (1-p_i)=1-p_i$, and since $p_i>0.5$, $2p_i(1-p_i)>1-p_i$, so $\check y_i$ is preferred over $\hat y_i$.
So, minimising absolute loss makes you say $\check y_i$ is better than the true probability $\hat y_i$, which is what it means to be improper.
Note that $\check y_i$ is the median of $Y_i|p_i$, so it's not necessarily a bad estimator. And absolute error isn't necessarily a bad loss function. It's just not a proper scoring rule.
If you're going to have a continuous loss like this be proper it will have to penalise big errors more than small errors, so it will not have the interpretation you want it to have.
No, you get the same problems
No, you get the same problems
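Plugging a few values of $p_i>0.5$ into these expected losses (written out directly from the Bernoulli distribution; the particular values of $p$ are my own picks) confirms the two rankings:

```python
# Exact expected losses for the two candidate predictions, for y ~ Bernoulli(p):
#   Brier:    E[(y - p)^2] = p(1-p),   E[(y - 1)^2] = 1 - p
#   absolute: E[|y - p|]   = 2p(1-p),  E[|y - 1|]   = 1 - p
for p in (0.6, 0.75, 0.9):
    brier_phat = p * (1 - p) ** 2 + (1 - p) * p ** 2   # = p(1-p)
    brier_check = (1 - p) * 1.0                        # = 1 - p
    abs_phat = p * (1 - p) + (1 - p) * p               # = 2p(1-p)
    abs_check = (1 - p) * 1.0                          # = 1 - p
    assert brier_phat < brier_check   # Brier prefers the true probability
    assert abs_phat > abs_check       # absolute loss prefers the 0/1 guess
    print(p, brier_phat, abs_phat)
```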
|
(Why) Is absolute loss not a proper scoring rule?
|
Take a simple example where $p_i$ are known probabilities and $y_i$ are Bernoulli($p_i$).
What is $\hat y_i$? The best choice is obviously $\hat y_i=p_i$. Alternatively, we might take $\check y_i =
|
(Why) Is absolute loss not a proper scoring rule?
Take a simple example where $p_i$ are known probabilities and $y_i$ are Bernoulli($p_i$).
What is $\hat y_i$? The best choice is obviously $\hat y_i=p_i$. Alternatively, we might take $\check y_i = 1$ if $p_i>0.5$ and $\check y_i=0$ if $p_i<0.5$.
Suppose $p_i>0.5$ (for simplicity).
The expected Brier loss of $\hat y_i$ is $(1-p_i)^2p_i+p_i^2(1-p_i)=1-p_i^2$. The expected Brier loss of $\check y_i$ is $0^2\times p_i + 1^2\times (1-p_i)=1$, so $\hat y_i$ is preferred over $\check y_i$.
The expected absolute loss of $\hat y_i$ is $(1-p_i)p_i+p_i(1-p_i)=2p_i(1-p_i)$. The expected Brier loss of $\check y_i$ is $0\times p_i + 1\times (1-p_i)=1-p_i$, and since $p_i>0.5$, $2p_i(1-p_i)>(1-p_i)$ so $\check y_i$ is preferred over $\hat y_i$.
So, minimising absolute loss makes you say $\check y_i$ is better than the true probability $\hat y_i$, which is what it means to be improper.
Note that $\check y_i$ is the median of $Y_i|p_i$, so it's not necessarily a bad estimator. And absolute error isn't necessarily a bad loss function. It's just not a proper scoring rule.
If you're going to have a continuous loss like this be proper it will have to penalise big errors more than small errors, so it will not have the interpretation you want it to have.
No, you get the same problems
No, you get the same problems
|
(Why) Is absolute loss not a proper scoring rule?
Take a simple example where $p_i$ are known probabilities and $y_i$ are Bernoulli($p_i$).
What is $\hat y_i$? The best choice is obviously $\hat y_i=p_i$. Alternatively, we might take $\check y_i =
|
13,768
|
(Why) Is absolute loss not a proper scoring rule?
|
In a slightly different direction, one way to look at this is to consider more generally the continuous ranked probability score (CRPS), which is a proper scoring rule.
For a predicted CDF $F$ and an observation $y$, the CRPS is defined like this:
$$\text{CRPS}(F,y) = \int (F(z)-I(y\leq z))^2dz$$
Intuitively it is a measure of the distance between $F$ and a perfect predicted CDF which is exact and without uncertainty (i.e. $P[Y=y]=1$).
Let's restrict ourselves to $y$ being either 0 or 1. If our prediction $F$ is the CDF of a Bernoulli distribution with parameter $\hat{p}$, then you can show fairly easily that:
$$\text{CRPS}(F,y) = (y-\hat{p})^2$$
That is, the CRPS just reduces to the Brier score when the observations are 0-1 and $F$ is Bernoulli.
We'd like to find a distribution $F$ for which the CRPS reduces to absolute error instead. One possibility is to take the degenerate forecast $P[Y=\hat{y}]=1$. That is, this prediction is that $Y$ is not really random at all, and instead of being either 0 or 1, it is always $\hat{y}$. Then, we can show:
$$\text{CRPS}(F,y) = |y-\hat{y}|$$
As the other answers have shown, this is minimized at either $\hat{y}=0$ or $\hat{y}=1$. This shouldn't be particularly surprising; any other value means that, in our prediction $F$, the probability of observing either 0 or 1 is zero, which shouldn't give you a good score given that we've assumed those are the only possibilities.
Then, in the context of 0-1 data, minimizing the absolute error is kind of like minimizing CRPS (which is proper) but over a class of distributions which does not contain Bernoulli distributions with $0 < p < 1$, so isn't proper in general.
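The reduction of the CRPS to the Brier score for 0–1 data can also be checked numerically; the crude discretization below is my own sketch:

```python
def crps_bernoulli(p_hat, y, lo=-2.0, hi=3.0, n=200_000):
    """Riemann-sum approximation of the CRPS integral for a Bernoulli(p_hat)
    predictive CDF F and a 0/1 observation y."""
    dz = (hi - lo) / n
    total = 0.0
    for k in range(n):
        z = lo + (k + 0.5) * dz
        F = 0.0 if z < 0 else (1.0 - p_hat if z < 1 else 1.0)  # Bernoulli CDF
        H = 1.0 if y <= z else 0.0                             # step at y
        total += (F - H) ** 2 * dz
    return total

# matches (y - p_hat)^2 up to discretization error
for p_hat in (0.2, 0.5, 0.8):
    for y in (0, 1):
        assert abs(crps_bernoulli(p_hat, y) - (y - p_hat) ** 2) < 1e-3
print("CRPS reduces to the Brier score on 0-1 outcomes")
```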
|
(Why) Is absolute loss not a proper scoring rule?
|
In a slightly different direction, one way to look at this is to consider more generally the continuous ranked probability score (CRPS), which is a proper scoring rule.
For a predicted CDF $F$ and an
|
(Why) Is absolute loss not a proper scoring rule?
In a slightly different direction, one way to look at this is to consider more generally the continuous ranked probability score (CRPS), which is a proper scoring rule.
For a predicted CDF $F$ and an observation $y$, the CRPS is defined like this:
$$\text{CRPS}(F,y) = \int (F(z)-I(y\leq z))^2dz$$
Intuitively it is a measure of the distance between $F$ and a perfect predicted CDF which is exact and without uncertainty (i.e. $P[Y=y]=1$).
Let's restrict ourselves to $y$ being either 0 or 1. If our prediction $F$ is the CDF of a Bernoulli distribution with parameter $\hat{p}$, then you can show fairly easily that:
$$\text{CRPS}(F,y) = (y-\hat{p})^2$$
That is, the CRPS just reduces to the Brier score when the observations are 0-1 and $F$ is Bernoulli.
We'd like to find a distribution $F$ for which the CRPS reduces to absolute error instead. One possibility is to take the degenerate forecast $P[Y=\hat{y}]=1$. That is, this prediction is that $Y$ is not really random at all, and instead of being either 0 or 1, it is always $\hat{y}$. Then, we can show:
$$\text{CRPS}(F,y) = |y-\hat{y}|$$
As the other answers have shown, this is minimized at either $\hat{y}=0$ or $\hat{y}=1$. This shouldn't be particularly surprising; any other value means that, in our prediction $F$, the probability of observing either 0 or 1 is zero, which shouldn't give you a good score given that we've assumed those are the only possibilities.
Then, in the context of 0-1 data, minimizing the absolute error is kind of like minimizing CRPS (which is proper) but over a class of distributions which does not contain Bernoulli distributions with $0 < p < 1$, so isn't proper in general.
|
(Why) Is absolute loss not a proper scoring rule?
In a slightly different direction, one way to look at this is to consider more generally the continuous ranked probability score (CRPS), which is a proper scoring rule.
For a predicted CDF $F$ and an
|
13,769
|
Split data into N equal groups
|
If I understand the question correctly, this will get you what you want. Assuming your data frame is called df and you have N defined, you can do this:
split(df, sample(1:N, nrow(df), replace=T))
This will return a list of data frames where each data frame consists of randomly selected rows from df. By default sample() will assign equal probability to each group.
|
Split data into N equal groups
|
If I understand the question correctly, this will get you what you want. Assuming your data frame is called df and you have N defined, you can do this:
split(df, sample(1:N, nrow(df), replace=T))
Thi
|
Split data into N equal groups
If I understand the question correctly, this will get you what you want. Assuming your data frame is called df and you have N defined, you can do this:
split(df, sample(1:N, nrow(df), replace=T))
This will return a list of data frames where each data frame consists of randomly selected rows from df. By default sample() will assign equal probability to each group.
|
Split data into N equal groups
If I understand the question correctly, this will get you what you want. Assuming your data frame is called df and you have N defined, you can do this:
split(df, sample(1:N, nrow(df), replace=T))
Thi
|
13,770
|
Split data into N equal groups
|
Edit: The minDiff package has been superseded by the anticlust package.
This is a very late answer, but I found this page while googling whether
the problem as stated has ever been discussed anywhere. Maybe my answer
will help if someone finds this page from now on.
I wrote an R package, which does exactly what the question
asked for: it takes a data.frame and creates N different groups while
trying to minimize the differences between groups in one or several
criteria. It uses a simple method based on repeated random
assignment, which is also the suggested method in the approved response.
This is the link to the package minDiff:
To tackle the stated problem, you could use:
library(minDiff)
assignment <- create_groups(dataframe, criteria_scale = c("price", "click count", "rating"), sets_n = N, repetitions = 1000)
The repetitions argument will determine how often you randomly create
different groups. The best assignment - the one that has minimal
differences between groups - will be returned.
|
13,771
|
Split data into N equal groups
|
Although Alex A's answer gives an equal probability for each group, it does not meet the question's request for the groups to have an equal number of rows. In R:
stopifnot(nrow(df) %% N == 0)
df <- df[order(runif(nrow(df))), ]
bins <- rep(1:N, nrow(df) / N)
split(df, bins)
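For readers outside R, the same shuffle-then-deal idea can be sketched in plain Python (a minimal stdlib illustration, not part of the original answer):

```python
import random

def split_equal(rows, n_groups, seed=0):
    """Shuffle, then deal rows round-robin into n_groups equal-sized bins
    (same idea as the R shuffle + rep(1:N, ...) recipe above)."""
    if len(rows) % n_groups != 0:
        raise ValueError("row count must be divisible by N")
    rng = random.Random(seed)
    shuffled = list(rows)
    rng.shuffle(shuffled)
    # Slice with stride n_groups: every bin gets exactly len(rows)/n_groups rows.
    return [shuffled[i::n_groups] for i in range(n_groups)]

groups = split_equal(range(12), 3)
print([len(g) for g in groups])  # [4, 4, 4]
```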
|
13,772
|
Split data into N equal groups
|
This can be solved with nesting using tidyr/dplyr
require(dplyr)
require(tidyr)
num_groups = 10
iris %>%
group_by((row_number()-1) %/% (n()/num_groups)) %>%
nest %>% pull(data)
|
13,773
|
Significance testing or cross validation?
|
First, let's be explicit and put the question into the context of multiple linear regression where we regress a response variable, $y$, on several different variables $x_1, \ldots, x_p$ (correlated or not), with parameter vector $\beta = (\beta_0, \beta_1, \ldots, \beta_p)$ and regression function
$$f(x_1, \ldots, x_p) = \beta_0 + \beta_1 x_1 + \ldots + \beta_p x_p,$$
which could be a model of the mean of the response variable for a given observation of $x_1, \ldots, x_p$.
The question is how to select a subset of the $\beta_i$'s to be non-zero, and, in particular, a comparison of significance testing versus cross validation.
To be crystal clear about the terminology, significance testing is a general concept, which is carried out differently in different contexts. It depends, for instance, on the choice of a test statistic. Cross validation is really an algorithm for estimation of the expected generalization error, which is the important general concept, and which depends on the choice of a loss function.
The expected generalization error is a little technical to define formally, but in words it is the expected loss of a fitted model when used for prediction on an independent data set, where expectation is over the data used for the estimation as well as the independent data set used for prediction.
To make a reasonable comparison, let's focus on whether $\beta_1$ could be taken equal to 0 or not.
For significance testing of the null hypothesis that $\beta_1 = 0$ the main procedure is to compute a $p$-value, which is the probability that the chosen test-statistic is larger than observed for our data set under the null hypothesis, that is, when assuming that $\beta_1 = 0$. The interpretation is that a small $p$-value is evidence against the null hypothesis. There are commonly used rules for what "small" means in an absolute sense such as the famous 0.05 or 0.01 significance levels.
For the expected generalization error we compute, perhaps using cross-validation, an estimate of the expected generalization error under the assumption that $\beta_1 = 0$. This quantity tells us how well models fitted by the method we use, and with $\beta_1 = 0$, will perform on average when used for prediction on independent data. A large expected generalization error is bad, but there are no rules in terms of its absolute value on how large it needs to be to be bad. We will have to estimate the expected generalization error for the model where $\beta_1$ is allowed to be different from 0 as well, and then we can compare the two estimated errors. Whichever is the smallest corresponds to the model we choose.
Using significance testing we are not directly concerned with the "performance" of the model under the null hypothesis versus other models, but we are concerned with documenting that the null is wrong. This makes most sense (to me) in a confirmatory setup where the main objective is to confirm and document an a priori well-specified scientific hypothesis, which can be formulated as $\beta_1 \neq 0$.
The expected generalization error is, on the other hand, only concerned with average "performance" in terms of expected prediction loss, and concluding that it is best to allow $\beta_1$ to be different from 0 in terms of prediction is not an attempt to document that $\beta_1$ is "really" different from 0 $-$ whatever that means.
I have personally never worked on a problem where I formally needed significance testing, yet $p$-values find their way into my work and do provide sensible guides and first impressions for variable selection. I am, however, mostly using penalization methods like lasso in combination with the generalization error for any formal model selection, and I am slowly trying to suppress my inclination to even compute $p$-values.
For exploratory analysis I see no argument in favor of significance testing and $p$-values, and I will definitely recommend focusing on a concept like expected generalization error for variable selection. In other contexts where one might consider using a $p$-value for documenting that $\beta_1$ is not 0, I would say that it is almost always a better idea to report an estimate of $\beta_1$ and a confidence interval instead.
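The comparison described above — estimating the expected generalization error with $\beta_1 = 0$ and with $\beta_1$ free, then keeping whichever model scores lower — can be sketched with a small stdlib-only simulation. Python is used here purely for illustration; the data-generating model, fold count, and seeds are invented for the demo:

```python
import random

def kfold_mse(x, y, k, use_predictor):
    """K-fold CV estimate of expected squared-error generalization loss
    for the null model (beta1 = 0) vs. y ~ b0 + b1 * x."""
    n = len(y)
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sse = 0.0
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        xt = [x[i] for i in train]
        yt = [y[i] for i in train]
        b0, b1 = sum(yt) / len(yt), 0.0
        if use_predictor:  # simple OLS, closed form
            mx, my = sum(xt) / len(xt), b0
            sxx = sum((v - mx) ** 2 for v in xt)
            b1 = sum((xv - mx) * (yv - my) for xv, yv in zip(xt, yt)) / sxx
            b0 = my - b1 * mx
        # Loss is evaluated only on the held-out fold.
        sse += sum((y[i] - (b0 + b1 * x[i])) ** 2 for i in fold)
    return sse / n

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(200)]
y = [2 + 1.5 * xi + rng.gauss(0, 1) for xi in x]
err_null = kfold_mse(x, y, 5, use_predictor=False)
err_full = kfold_mse(x, y, 5, use_predictor=True)
print(err_full < err_null)  # True: allowing beta1 != 0 predicts better
```

Here the model with $\beta_1$ free wins; note the decision is purely about estimated predictive loss, with no significance threshold anywhere.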
|
13,774
|
Significance testing or cross validation?
|
Simply using significance tests and a stepwise procedure to perform model selection can lead you to believe that you have a very strong model with significant predictors when you, in fact, do not; you may get strong correlations by chance and these correlations can seemingly be enhanced as you remove other unnecessary predictors.
The selection procedure, of course, keeps only those variables with the strongest correlations with the outcome and, as the stepwise procedure moves forward, the probability of committing a Type I error becomes larger than you would imagine. This is because the standard errors (and thus p-values) are not adjusted to take into account the fact that the variables were not selected for inclusion in the model randomly and multiple hypothesis tests were conducted to choose that set.
David Freedman has a cute paper in which he demonstrates these points called "A Note on Screening Regression Equations." The abstract:
Consider developing a regression model in a context where substantive
theory is weak. To focus on an extreme case, suppose that in fact
there is no relationship between the dependent variable and the
explanatory variables. Even so, if there are many explanatory
variables, the $R^2$ will be high. If explanatory variables with small
t statistics are dropped and the equation refitted, the $R^2$ will stay high and the overall F will become highly significant. This is
demonstrated by simulation and by asymptotic calculation.
One potential solution to this problem, as you mentioned, is using a variant of cross validation. When I don't have a good economic (my area of research) or statistical reason to believe my model, this is my preferred approach to selecting an appropriate model and performing inference.
Other respondents might mention that stepwise procedures using the AIC or BIC are asympotically equivalent to cross validation. This only works as the number of observations relative to the number of predictors gets large, however. In the context of having many variables relative to the number of observations (Freedman says 1 variable per 10 or fewer observations), selection in this manner can exhibit the poor properties discussed above.
In an age of powerful computers, I don't see any reason not to use cross validation as a model selection procedure over stepwise selection.
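Freedman's screening point can be illustrated with a tiny simulation (a stdlib Python sketch; the sample size, predictor count, and seed are arbitrary choices for the demo): among many pure-noise predictors, the best-looking correlation is reliably far larger than the sampling noise of any single one.

```python
import math
import random

def corr(a, b):
    """Pearson correlation, plain stdlib."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    ca = [v - ma for v in a]
    cb = [v - mb for v in b]
    num = sum(u * v for u, v in zip(ca, cb))
    den = math.sqrt(sum(v * v for v in ca) * sum(v * v for v in cb))
    return num / den

rng = random.Random(42)
n, p = 100, 50
y = [rng.gauss(0, 1) for _ in range(n)]
# Screen 50 predictors of pure noise and keep the best-looking one.
best = max(abs(corr([rng.gauss(0, 1) for _ in range(n)], y)) for _ in range(p))
# A single noise correlation has sd ~ 1/sqrt(n) = 0.1; the screened
# maximum is systematically larger, i.e. it looks "significant" by chance.
print(round(best, 2))
```

Naive p-values computed for the surviving predictor ignore the 50-way search that produced it, which is exactly the inflation of the Type I error rate described above.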
|
13,775
|
Is there ever a reason to solve a regression problem as a classification problem?
|
In line with @delaney's reply: I have not seen and I'm unable to imagine a reason for doing so.
Borrowing from the discussion in https://github.com/scikit-learn/scikit-learn/issues/15850#issuecomment-896285461 :
One loses information by binning the response. Why would one want to do that in the first place (except data compression)?
Continuous targets have an order (<). (Standard) Classification classes don’t (except ordinal categorical regression/classification).
Continuous targets usually have some kind of smoothness: Proximity in feature space (for continuous features) means proximity in target space.
All this loss of information is accompanied by possibly more parameters in the model, e.g. logistic regression has number of coefficients proportional to number of classes.
The binning obfuscates whether one is trying to predict the expectation/mean or a quantile.
One can end up with a badly (conditionally) calibrated regression model, i.e. biased. (This can also happen with standard regression techniques.)
From V. Fedorov, F. Mannino, Rongmei Zhang "Consequences of dichotomization" (2009) doi: 10.1002/pst.331
While the analysis of dichotomized outcomes may be easier, there are no benefits to this approach when the true outcomes can be observed and the ‘working’ model is flexible enough to describe the
population at hand. Thus, dichotomization should be avoided in most cases.
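Point 1 (information loss from binning) has a hard floor that is easy to demonstrate: even a perfect classifier over equal-width bins can only return a bin midpoint. A minimal sketch (Python for illustration; the uniform target and 5 bins are assumptions of the demo):

```python
import random

# Even a *perfect* 5-class classifier over equal-width bins of a
# Uniform(0,1) target can only predict the bin midpoint; the residual
# MSE of w**2 / 12 (w = bin width) is exactly the information that
# binning throws away, and no downstream model can recover it.
rng = random.Random(0)
n_bins = 5
w = 1.0 / n_bins
y = [rng.random() for _ in range(100_000)]
midpoints = [(int(v / w) + 0.5) * w for v in y]
mse = sum((a - b) ** 2 for a, b in zip(y, midpoints)) / len(y)
print(abs(mse - w ** 2 / 12) < 1e-3)  # True
```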
|
13,776
|
Is there ever a reason to solve a regression problem as a classification problem?
|
In general, there is no good reason. Grouping the data as you describe means that some information is being thrown away, and that can't be a good thing.
The reason you see people do this is probably out of practical convenience. Libraries for classification might be more common and easily accessible, and they also automatically provide answers that are in the correct range (while regression for example can output negative values etc.).
One slightly better motivation I can think of is that the typical outputs of classification algorithms can be interpreted as class probabilities, which can provide a measure of uncertainty on the result (for example, you can read a result as giving 40% probability for the range 10-20, 50% for the range 20-30, etc.). Of course regression models can in general provide uncertainty estimates as well, but it is a feature that is lacking in many standard tools and is not "automatic" as in the classification case.
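As a sketch of that last point (the bin edges and probabilities below are hypothetical, not the output of any particular library): class probabilities over value-ranges can be read as a crude predictive distribution, e.g. by taking the smallest prefix of bins holding at least 80% of the mass.

```python
# Hypothetical output of a 3-class "range classifier": one probability
# per value-range. Accumulating mass over bins gives a rough 80%
# predictive interval that a bare point prediction would not provide.
bins = [(10, 20), (20, 30), (30, 40)]
probs = [0.4, 0.5, 0.1]

mass, hi_edge = 0.0, None
for (lo, hi), p in zip(bins, probs):
    mass += p
    hi_edge = hi
    if mass >= 0.8:
        break
print((bins[0][0], hi_edge), round(mass, 1))  # (10, 30) 0.9
```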
|
13,777
|
Is there ever a reason to solve a regression problem as a classification problem?
|
In addition to the good answers by users J. Delaney and Soeren Soerensen: One motivation for doing this might be that they think the response will not work well with a linear model, that its expectation is badly modeled as a linear function of the predictors. But then there are better alternatives, like response transformations (see How to choose the best transformation to achieve linearity? and When (and why) should you take the log of a distribution (of numbers)?).
But another, newer, idea is to use ordinal regression. User Frank Harrell has written much about this here; search for his posts. Some starting points: Which model should I use to fit my data ? ordinal and non-ordinal, not normal and not homoscedastic, proportional odds (PO) ordinal logistic regression model as nonparametric ANOVA that controls for covariates, Analysis for ordinal categorical outcome
|
13,778
|
Is there ever a reason to solve a regression problem as a classification problem?
|
One counter-example that I see often:
Outcomes that are proportions (e.g. 10% = 2/20, 20% = 1/5, etc.) should not get dumped through OLS; instead use a logistic regression with the denominator specified. This will weight the cases correctly even though they have different variances.
OTOH, logistic regression is a proper regression model, despite it mostly being taught as a classifier. So maybe this doesn't count.
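As a quick illustration of the "different variances" point (using the two example proportions from above): the binomial variance $p(1-p)/n$ depends on the denominator, which specifying the denominator in a logistic regression accounts for and a plain OLS fit on the raw proportions ignores.

```python
# Same-looking proportions with different denominators carry different
# amounts of information: binomial variance p * (1 - p) / n.
for p, n in [(0.10, 20), (0.20, 5)]:
    print(p, n, round(p * (1 - p) / n, 4))
# 0.1 20 0.0045  -- the 2/20 observation is far more precise
# 0.2 5  0.032   -- than the 1/5 one
```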
|
13,779
|
Is there ever a reason to solve a regression problem as a classification problem?
|
I found this a very interesting question and I struggled to think of scenarios where binning a response variable would lead to better predictions.
The best I could come up with is a scenario like this one (all code is attached at the end), where the red class corresponds to $y \leq 1$ and the blue class to $y>1$ and we have one (or of course more) predictor that is within class uncorrelated with $y$, but separates the classes perfectly.
Here, a Firth penalized logistic regression
Predicted
Truth red blue
red 5000 0
blue 2 4998
beats a simple linear model (followed by classifying based on whether predictions are >1):
Predicted
Truth red blue
red 4970 30
blue 0 5000
However, let's be honest, part of the problem is that a linear regression is not such a great model for this problem. Replacing the linear regression and the logistic regression with a regression and a classification random forest, respectively, deals with this perfectly. Both produce this result (see below):
Predicted
Truth red blue
red 5000 0
blue 0 5000
However, I guess that's at least an example where you seem to do a little better within the class of models with a linear regression equation (of course, this still totally ignores the possibility of using splines etc.).
library(tidyverse)
library(ranger)
library(ggrepel)
library(logistf)
# Set defaults for ggplot2 ----
theme_set( theme_bw(base_size=18) +
theme(legend.position = "none"))
scale_colour_discrete <- function(...) {
# Alternative: ggsci::scale_color_nejm(...)
scale_colour_brewer(..., palette="Set1")
}
scale_fill_discrete <- function(...) {
# Alternative: ggsci::scale_fill_nejm(...)
scale_fill_brewer(..., palette="Set1")
}
scale_colour_continuous <- function(...) {
scale_colour_viridis_c(..., option="turbo")
}
update_geom_defaults("point", list(size=2))
update_geom_defaults("line", list(size=1.5))
# To allow adding label to points e.g. as geom_text_repel(data=. %>% filter(1:n()==n()))
update_geom_defaults("text_repel", list(label.size = NA, fill = rgb(0,0,0,0),
segment.color = "transparent", size=6))
# Start program ----
set.seed(1234)
records = 5000
# Create the example data including a train-test split
example = tibble(y = c(runif(n=records*2, min = 0, max=1),
runif(n=records*2, min = 1, max=2)),
class = rep(c(0L,1L), each=records*2),
test = factor(rep(c(0,1,0,1), each=records),
levels=0:1, labels=c("Train", "Test")),
predictor = c(runif(n=records*2, min = 0, max=1),
runif(n=records*2, min = 1, max=2)))
# Plot the dataset
example %>%
ggplot(aes(x=predictor, y=y, col=factor(class))) +
geom_point(alpha=0.3) +
facet_wrap(~test)
# Linear regression
lm1 = lm(data=example %>% filter(test=="Train"),
y ~ predictor)
# Performance of linear regression prediction followed by classifying by prediction>1
table(example %>% filter(test=="Test") %>% pull(class),
predict(lm1,
example %>% filter(test=="Test")) > 1)
# Firth penalized logistic regression
glm1 = logistf(data=example %>% filter(test=="Train"),
class ~ predictor,
pl=F)
# Performance of classifying by predicted log-odds from Firth LR being >0
table(example %>% filter(test=="Test") %>% pull(class),
predict(glm1,
example %>% filter(test=="Test"))>0)
# Now, let's try this with RF instead:
# First, binary classification RF
rf1 = ranger(formula = class ~ predictor,
data=example %>% filter(test=="Train"),
classification = T)
table(example %>% filter(test=="Test") %>% pull(class),
predict(rf1, example %>% filter(test=="Test"))$predictions)
# Now regression RF
rf2 = ranger(formula = y ~ predictor,
data=example %>% filter(test=="Train"),
classification = F)
table(example %>% filter(test=="Test") %>% pull(class),
predict(rf2, example %>% filter(test=="Test"))$predictions>1)
|
scale_colour_viridis_c(..., option="turbo")
}
update_geom_defaults("point", list(size=2))
update_geom_defaults("line", list(size=1.5))
# To allow adding label to points e.g. as geom_text_repel(data=. %>% filter(1:n()==n()))
update_geom_defaults("text_repel", list(label.size = NA, fill = rgb(0,0,0,0),
segment.color = "transparent", size=6))
# Start program ----
set.seed(1234)
records = 5000
# Create the example data including a train-test split
example = tibble(y = c(runif(n=records*2, min = 0, max=1),
runif(n=records*2, min = 1, max=2)),
class = rep(c(0L,1L), each=records*2),
test = factor(rep(c(0,1,0,1), each=records),
levels=0:1, labels=c("Train", "Test")),
predictor = c(runif(n=records*2, min = 0, max=1),
runif(n=records*2, min = 1, max=2)))
# Plot the dataset
example %>%
ggplot(aes(x=predictor, y=y, col=factor(class))) +
geom_point(alpha=0.3) +
facet_wrap(~test)
# Linear regression
lm1 = lm(data=example %>% filter(test=="Train"),
y ~ predictor)
# Performance of linear regression prediction followed by classifying by prediction>1
table(example %>% filter(test=="Test") %>% pull(class),
predict(lm1,
example %>% filter(test=="Test")) > 1)
# Firth penalized logistic regression
glm1 = logistf(data=example %>% filter(test=="Train"),
class ~ predictor,
pl=F)
# Performance of classifying by predicted log-odds from Firth LR being >0
table(example %>% filter(test=="Test") %>% pull(class),
predict(glm1,
example %>% filter(test=="Test"))>0)
# Now, let's try this with RF instead:
# First, binary classification RF
rf1 = ranger(formula = class ~ predictor,
data=example %>% filter(test=="Train"),
classification = T)
table(example %>% filter(test=="Test") %>% pull(class),
predict(rf1, example %>% filter(test=="Test"))$predictions)
# Now regression RF
rf2 = ranger(formula = y ~ predictor,
data=example %>% filter(test=="Train"),
classification = F)
table(example %>% filter(test=="Test") %>% pull(class),
predict(rf2, example %>% filter(test=="Test"))$predictions>1)
|
Is there ever a reason to solve a regression problem as a classification problem?
I found this a very interesting question and I struggled to think of scenarios where binning a response variable would lead to better predictions.
The best I could come up with is a scenario like this
|
13,780
|
Is there ever a reason to solve a regression problem as a classification problem?
|
Bayesian regression does something like this on a continuous scale.
To each value of the parameter a probability is assigned indicating how likely the parameter has that value.
For instance, for each value of sales (a continuum of classes) a probability is assigned predicting how likely that sales value/class is.
Hypothesis testing is also much like this, in a discrete form. One performs regression, fits some parameters, and subsequently classifies the observation as indicating whether the hypothesis is true or not. Neyman-Pearson hypothesis testing is very explicit about this: it compares a null hypothesis and an alternative hypothesis and uses the likelihood ratio to decide between the two.
For instance, a hypothesis might be that the growth in sales is going to be more than some hypothetical percentage $x$, and the regression leads to a rejection or non-rejection of that percentage/class.
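As a minimal sketch of this idea (the data, thresholds, and numbers below are all invented), a grid approximation assigns a posterior probability to each candidate value of the mean-sales parameter, so every "class" of values gets a probability, including the hypothesis that the mean exceeds some threshold:

```r
# Sketch: grid-approximation posterior over a continuum of parameter "classes".
set.seed(1)
sales <- rnorm(30, mean = 105, sd = 10)   # hypothetical monthly sales data
grid  <- seq(90, 120, by = 0.5)           # candidate values of the mean ("classes")
# log-likelihood of the data at each grid value (sd treated as known here)
loglik <- sapply(grid, function(mu) sum(dnorm(sales, mean = mu, sd = 10, log = TRUE)))
post <- exp(loglik - max(loglik))         # flat prior over the grid
post <- post / sum(post)                  # normalise to probabilities
# probability assigned to the "class" of means above a hypothetical threshold
sum(post[grid > 100])
```

Each grid value plays the role of a class with its own probability, which is exactly the continuous analogue described above.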
|
Is there ever a reason to solve a regression problem as a classification problem?
|
Bayesian regression does something like this on a continuous scale.
To each value of the parameter a probability is assigned indicating how likely the parameter has that value.
For instance, for each
|
Is there ever a reason to solve a regression problem as a classification problem?
Bayesian regression does something like this on a continuous scale.
To each value of the parameter a probability is assigned indicating how likely the parameter has that value.
For instance, for each value of sales (a continuum of classes) a probability is assigned predicting how likely that sales value/class is.
Hypothesis testing is also much like this, in a discrete form. One performs regression, fits some parameters, and subsequently classifies the observation as indicating whether the hypothesis is true or not. Neyman-Pearson hypothesis testing is very explicit about this: it compares a null hypothesis and an alternative hypothesis and uses the likelihood ratio to decide between the two.
For instance, a hypothesis might be that the growth in sales is going to be more than some hypothetical percentage $x$, and the regression leads to a rejection or non-rejection of that percentage/class.
|
Is there ever a reason to solve a regression problem as a classification problem?
Bayesian regression does something like this on a continuous scale.
To each value of the parameter a probability is assigned indicating how likely the parameter has that value.
For instance, for each
|
13,781
|
Is there ever a reason to solve a regression problem as a classification problem?
|
You can discretize the regression problem, for example into the classification of having an illness ("yes"/"no"), thereby making it possible to read the probabilities of each class (yes/no) from an ML classification model.
You might have perhaps ten different intensities of this illness, and you know the thresholds for them from experience, so that you have labels, perhaps derived from a point system over many input columns or simply from years of experience.
The advantage of a classification model is that each of the ten classes gets its own probability, while in a regression model you do not see the probabilities; you get just the one most probable predicted value instead.
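A minimal sketch of this setup (the marker variable, thresholds, and class labels are all invented for illustration), using `nnet::multinom` so that every intensity class receives its own probability:

```r
# Sketch: bin a continuous severity outcome into classes, then fit a
# classifier that returns one probability per class instead of a point value.
library(nnet)                               # multinom(); ships with R
set.seed(42)
n <- 500
marker   <- runif(n, 0, 10)                 # hypothetical input column
severity <- marker + rnorm(n)               # latent continuous outcome
# thresholds assumed known "from experience"; here three intensity classes
intensity <- cut(severity, breaks = c(-Inf, 3, 7, Inf),
                 labels = c("mild", "moderate", "severe"))
fit <- multinom(intensity ~ marker, trace = FALSE)
probs <- predict(fit, type = "probs")       # one column per class
head(probs)                                 # each row sums to 1
```

For genuinely ordered intensities, an ordinal model (e.g. `MASS::polr`) would additionally respect the ordering; `multinom` is used here only to show the per-class probabilities.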
|
Is there ever a reason to solve a regression problem as a classification problem?
|
You can discretize the regression problem for example into the classification of having an illness "yes" and "no", by this making it possible to read the probabilities of each class (yes/no) from an M
|
Is there ever a reason to solve a regression problem as a classification problem?
You can discretize the regression problem, for example into the classification of having an illness ("yes"/"no"), thereby making it possible to read the probabilities of each class (yes/no) from an ML classification model.
You might have perhaps ten different intensities of this illness, and you know the thresholds for them from experience, so that you have labels, perhaps derived from a point system over many input columns or simply from years of experience.
The advantage of a classification model is that each of the ten classes gets its own probability, while in a regression model you do not see the probabilities; you get just the one most probable predicted value instead.
|
Is there ever a reason to solve a regression problem as a classification problem?
You can discretize the regression problem for example into the classification of having an illness "yes" and "no", by this making it possible to read the probabilities of each class (yes/no) from an M
|
13,782
|
Is there ever a reason to solve a regression problem as a classification problem?
|
I actually do this quite often, in general because the data may work for regression, but the scenario isn't necessarily a regression problem even if it could be. Here's a common scenario:
Let's pretend you're a data scientist at a company and they say to you that they want to forecast monthly sales. They hand you a bunch of data that includes historical sales, perhaps other continuous data, and a large number of categorical data about the products, consumers, marketing approaches, etc. You immediately see this data and think regression is a likely good choice.
You dig into the data to see if regression is a good fit, perhaps doing an EDA, and find that there are hundreds of categorical variables with hundreds of levels each. You then go back and ask the sales team if all of the categoricals are useful to them. They say yes, but then they clarify that they really only care whether they're making 10x above spend (which is also one of the pieces of data you have).
Suddenly you have a choice: regress on monthly sales and report whether it's 10x or not, or lump sales into levels of <10x or >=10x. Now you have logistic regression as an option.
You then one-hot all your categoricals as a first pass and find that the data are too expansive (too many fields and levels) for you to run the regression quickly. The sales team needs the model by the end of the week. You go back and propose that they give you more time, but they say no. You also tell them your option of logistic regression, but they say that maybe they want to know 0.5x, 2x and then 10x and above for it to be really useful.
You still have regression on the table, but now you have a clear classification possibility.
At this point, you can hash the categoricals quickly, greatly reducing the number of features and the size of the problem. You can bin the sales numbers into 0.5x, 2x, and >=10x. You can quickly run a tree-based classifier like Random Forest, XGBoost, or LightGBM classifier on your local machine, pull out feature importance, look at some trees, etc. and gain insight into what features matter without having to figure out how all the coding should work for a regression.
Perhaps at this point the prediction quality is poor, but nevertheless, you've delivered a predictive model in time, gained insight on the potentially useful features via classification, and opened up a few more options for proceeding on a better model.
That said, if you have multiple data types that require different loss functions and regularizations, GLRM helps formulate all that quite nicely.
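The binning step described above can be sketched roughly like this (the column names and data are invented, and `ranger` stands in for any tree-based classifier):

```r
# Sketch: bin the sales-to-spend multiple into the levels the sales team
# asked for, then fit a quick tree-based classifier and inspect importance.
library(ranger)
set.seed(7)
n <- 1000
dat <- data.frame(
  channel = factor(sample(letters[1:5], n, replace = TRUE)),  # categoricals
  region  = factor(sample(LETTERS[1:4], n, replace = TRUE)),
  spend   = runif(n, 1, 100))
dat$sales <- dat$spend * exp(rnorm(n))       # hypothetical sales outcome
# turn the continuous target into the requested classes
dat$multiple <- cut(dat$sales / dat$spend,
                    breaks = c(-Inf, 0.5, 2, 10, Inf),
                    labels = c("<0.5x", "0.5-2x", "2-10x", ">=10x"))
rf <- ranger(multiple ~ channel + region + spend, data = dat,
             importance = "impurity")
rf$variable.importance                        # quick look at what matters
```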
|
Is there ever a reason to solve a regression problem as a classification problem?
|
I actually do this quite often, in general because the data may work for regression, but the scenario isn't necessarily a regression problem even if it could be. Here's a common scenario:
Let pretend
|
Is there ever a reason to solve a regression problem as a classification problem?
I actually do this quite often, in general because the data may work for regression, but the scenario isn't necessarily a regression problem even if it could be. Here's a common scenario:
Let's pretend you're a data scientist at a company and they say to you that they want to forecast monthly sales. They hand you a bunch of data that includes historical sales, perhaps other continuous data, and a large number of categorical data about the products, consumers, marketing approaches, etc. You immediately see this data and think regression is a likely good choice.
You dig into the data to see if regression is a good fit, perhaps doing an EDA, and find that there are hundreds of categorical variables with hundreds of levels each. You then go back and ask the sales team if all of the categoricals are useful to them. They say yes, but then they clarify that they really only care whether they're making 10x above spend (which is also one of the pieces of data you have).
Suddenly you have a choice: regress on monthly sales and report whether it's 10x or not, or lump sales into levels of <10x or >=10x. Now you have logistic regression as an option.
You then one-hot all your categoricals as a first pass and find that the data are too expansive (too many fields and levels) for you to run the regression quickly. The sales team needs the model by the end of the week. You go back and propose that they give you more time, but they say no. You also tell them your option of logistic regression, but they say that maybe they want to know 0.5x, 2x and then 10x and above for it to be really useful.
You still have regression on the table, but now you have a clear classification possibility.
At this point, you can hash the categoricals quickly, greatly reducing the number of features and the size of the problem. You can bin the sales numbers into 0.5x, 2x, and >=10x. You can quickly run a tree-based classifier like Random Forest, XGBoost, or LightGBM classifier on your local machine, pull out feature importance, look at some trees, etc. and gain insight into what features matter without having to figure out how all the coding should work for a regression.
Perhaps at this point the prediction quality is poor, but nevertheless, you've delivered a predictive model in time, gained insight on the potentially useful features via classification, and opened up a few more options for proceeding on a better model.
That said, if you have multiple data types that require different loss functions and regularizations, GLRM helps formulate all that quite nicely.
|
Is there ever a reason to solve a regression problem as a classification problem?
I actually do this quite often, in general because the data may work for regression, but the scenario isn't necessarily a regression problem even if it could be. Here's a common scenario:
Let pretend
|
13,783
|
Variability in cv.glmnet results
|
The point here is that in cv.glmnet the K folds ("parts") are picked randomly.
In $K$-fold cross validation the dataset is divided into $K$ parts, and $K-1$ parts are used to predict the remaining part (this is done $K$ times, holding out a different part each time). This is done for all the lambdas, and lambda.min is the one that gives the smallest cross-validation error.
This is why the results don't change when you use $nfolds = n$: each fold contains a single observation, so there is no real choice in how the $K$ folds are formed.
From the cv.glmnet() reference manual:
Note also that the results of cv.glmnet are random, since the folds
are selected at random. Users can reduce this randomness by running
cv.glmnet many times, and averaging the error curves.
### cycle for doing 100 cross validations
### and take the average of the mean error curves
### initialize matrix for the mean cross-validated errors
MSEs <- NULL
for (i in 1:100){
  cv <- cv.glmnet(x, y, alpha=alpha, nfolds=k)
MSEs <- cbind(MSEs, cv$cvm)
}
rownames(MSEs) <- cv$lambda
lambda.min <- as.numeric(names(which.min(rowMeans(MSEs))))
MSEs is the matrix containing the mean CV errors for all lambdas (one column per each of the 100 runs),
lambda.min is your lambda with minimum average error.
|
Variability in cv.glmnet results
|
The point here is that in cv.glmnet the K folds ("parts") are picked randomly.
In K-folds cross validation the dataset is divided in $K$ parts, and $K-1$ parts are used to predict the K-th part (this
|
Variability in cv.glmnet results
The point here is that in cv.glmnet the K folds ("parts") are picked randomly.
In $K$-fold cross validation the dataset is divided into $K$ parts, and $K-1$ parts are used to predict the remaining part (this is done $K$ times, holding out a different part each time). This is done for all the lambdas, and lambda.min is the one that gives the smallest cross-validation error.
This is why the results don't change when you use $nfolds = n$: each fold contains a single observation, so there is no real choice in how the $K$ folds are formed.
From the cv.glmnet() reference manual:
Note also that the results of cv.glmnet are random, since the folds
are selected at random. Users can reduce this randomness by running
cv.glmnet many times, and averaging the error curves.
### cycle for doing 100 cross validations
### and take the average of the mean error curves
### initialize matrix for the mean cross-validated errors
MSEs <- NULL
for (i in 1:100){
  cv <- cv.glmnet(x, y, alpha=alpha, nfolds=k)
MSEs <- cbind(MSEs, cv$cvm)
}
rownames(MSEs) <- cv$lambda
lambda.min <- as.numeric(names(which.min(rowMeans(MSEs))))
MSEs is the matrix containing the mean CV errors for all lambdas (one column per each of the 100 runs),
lambda.min is your lambda with minimum average error.
|
Variability in cv.glmnet results
The point here is that in cv.glmnet the K folds ("parts") are picked randomly.
In K-folds cross validation the dataset is divided in $K$ parts, and $K-1$ parts are used to predict the K-th part (this
|
13,784
|
Variability in cv.glmnet results
|
Lately I faced the same problem. I tried repeating the CV many times (100, 200, 1000) on my data set, trying to find the best $\lambda$ and $\alpha$ (I'm using an elastic net). But even if I create 3 CV experiments, each with 1000 iterations, and average the min MSEs for each $\alpha$, I get 3 different best ($\lambda$, $\alpha$) couples.
I won't touch the $\alpha$ problem here, but I decided that my best solution is not averaging the min MSEs, but instead extracting the coefficients at each iteration's best $\lambda$ and then treating them as a distribution of values (a random variable).
Then, for each predictor I get:
mean coefficient
standard deviation
5 number summary (median, quartiles, min and max)
percentage of times it is different from zero (i.e. has an influence)
This way I get a pretty solid description of the effect of each predictor.
Once you have distributions for the coefficients, you could run any statistical procedure you think is worthwhile to get CIs, p-values, etc., but I haven't investigated this yet.
This method can be used with more or less any selection method I can think of.
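A rough sketch of this procedure on simulated data (the repetition count and elastic-net mixing value are arbitrary choices, not from the answer):

```r
# Sketch: repeat CV, keep the coefficients at each run's lambda.min, and
# summarise each predictor's coefficients as a distribution.
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)        # invented data: 10 predictors
y <- 2 * x[, 1] + rnorm(100)                 # only predictor 1 matters
runs <- 50
coefs <- replicate(runs, {
  cv <- cv.glmnet(x, y, alpha = 0.5)         # elastic net
  as.numeric(coef(cv, s = "lambda.min"))[-1] # coefficients, intercept dropped
})
# one row per predictor: mean, sd, and fraction of runs where it was selected
data.frame(mean    = rowMeans(coefs),
           sd      = apply(coefs, 1, sd),
           nonzero = rowMeans(coefs != 0))
```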
|
Variability in cv.glmnet results
|
Lately I faced the same problem. I tried repeating the CV many times, like 100, 200, 1000 on my data set trying to find the best $\lambda$ and $\alpha$ (i'm using an elastic net). But even if I create
|
Variability in cv.glmnet results
Lately I faced the same problem. I tried repeating the CV many times (100, 200, 1000) on my data set, trying to find the best $\lambda$ and $\alpha$ (I'm using an elastic net). But even if I create 3 CV experiments, each with 1000 iterations, and average the min MSEs for each $\alpha$, I get 3 different best ($\lambda$, $\alpha$) couples.
I won't touch the $\alpha$ problem here, but I decided that my best solution is not averaging the min MSEs, but instead extracting the coefficients at each iteration's best $\lambda$ and then treating them as a distribution of values (a random variable).
Then, for each predictor I get:
mean coefficient
standard deviation
5 number summary (median, quartiles, min and max)
percentage of times it is different from zero (i.e. has an influence)
This way I get a pretty solid description of the effect of each predictor.
Once you have distributions for the coefficients, you could run any statistical procedure you think is worthwhile to get CIs, p-values, etc., but I haven't investigated this yet.
This method can be used with more or less any selection method I can think of.
|
Variability in cv.glmnet results
Lately I faced the same problem. I tried repeating the CV many times, like 100, 200, 1000 on my data set trying to find the best $\lambda$ and $\alpha$ (i'm using an elastic net). But even if I create
|
13,785
|
Variability in cv.glmnet results
|
I'll add another solution, which handles the bug in @Alice's due to missing lambdas, but doesn't require extra packages like @Max Ghenis. Thanks are owed to all the other answers - everyone makes useful points!
lambdas = NULL
for (i in 1:n)
{
fit <- cv.glmnet(xs,ys)
errors = data.frame(fit$lambda,fit$cvm)
lambdas <- rbind(lambdas,errors)
}
# take mean cvm for each lambda
lambdas <- aggregate(lambdas[, 2], list(lambdas$fit.lambda), mean)
# select the best one
bestindex = which(lambdas[2]==min(lambdas[2]))
bestlambda = lambdas[bestindex,1]
# and now run glmnet once more with it
fit <- glmnet(xs,ys,lambda=bestlambda)
|
Variability in cv.glmnet results
|
I'll add another solution, which handles the bug in @Alice's due to missing lambdas, but doesn't require extra packages like @Max Ghenis. Thanks are owed to all the other answers - everyone makes use
|
Variability in cv.glmnet results
I'll add another solution, which handles the bug in @Alice's due to missing lambdas, but doesn't require extra packages like @Max Ghenis. Thanks are owed to all the other answers - everyone makes useful points!
lambdas = NULL
for (i in 1:n)
{
fit <- cv.glmnet(xs,ys)
errors = data.frame(fit$lambda,fit$cvm)
lambdas <- rbind(lambdas,errors)
}
# take mean cvm for each lambda
lambdas <- aggregate(lambdas[, 2], list(lambdas$fit.lambda), mean)
# select the best one
bestindex = which(lambdas[2]==min(lambdas[2]))
bestlambda = lambdas[bestindex,1]
# and now run glmnet once more with it
fit <- glmnet(xs,ys,lambda=bestlambda)
|
Variability in cv.glmnet results
I'll add another solution, which handles the bug in @Alice's due to missing lambdas, but doesn't require extra packages like @Max Ghenis. Thanks are owed to all the other answers - everyone makes use
|
13,786
|
Variability in cv.glmnet results
|
Alice's answer works well in most cases, but sometimes errors out due to cv.glmnet$lambda sometimes returning results of different length, e.g.:
Error in `rownames<-`(`*tmp*`, value = c(0.135739830284452, 0.12368107787663, : length of 'dimnames' [1] not equal to array extent.
OptimLambda below should work in the general case, and is also faster by leveraging mclapply for parallel processing and avoidance of loops.
Lambdas <- function(...) {
cv <- cv.glmnet(...)
return(data.table(cvm=cv$cvm, lambda=cv$lambda))
}
OptimLambda <- function(k, ...) {
# Returns optimal lambda for glmnet.
#
# Args:
# k: # times to loop through cv.glmnet.
# ...: Other args passed to cv.glmnet.
#
# Returns:
# Lambda associated with minimum average CV error over runs.
#
# Example:
# OptimLambda(k=100, y=y, x=x, alpha=alpha, nfolds=k)
#
require(parallel)
require(data.table)
require(plyr)  # for rbind.fill()
MSEs <- data.table(rbind.fill(mclapply(seq(k), function(dummy) Lambdas(...))))
return(MSEs[, list(mean.cvm=mean(cvm)), lambda][order(mean.cvm)][1]$lambda)
}
|
Variability in cv.glmnet results
|
Alice's answer works well in most cases, but sometimes errors out due to cv.glmnet$lambda sometimes returning results of different length, e.g.:
Error in rownames<-(tmp, value = c(0.135739830284452
|
Variability in cv.glmnet results
Alice's answer works well in most cases, but sometimes errors out due to cv.glmnet$lambda sometimes returning results of different length, e.g.:
Error in `rownames<-`(`*tmp*`, value = c(0.135739830284452, 0.12368107787663, : length of 'dimnames' [1] not equal to array extent.
OptimLambda below should work in the general case, and is also faster by leveraging mclapply for parallel processing and avoidance of loops.
Lambdas <- function(...) {
cv <- cv.glmnet(...)
return(data.table(cvm=cv$cvm, lambda=cv$lambda))
}
OptimLambda <- function(k, ...) {
# Returns optimal lambda for glmnet.
#
# Args:
# k: # times to loop through cv.glmnet.
# ...: Other args passed to cv.glmnet.
#
# Returns:
# Lambda associated with minimum average CV error over runs.
#
# Example:
# OptimLambda(k=100, y=y, x=x, alpha=alpha, nfolds=k)
#
require(parallel)
require(data.table)
require(plyr)  # for rbind.fill()
MSEs <- data.table(rbind.fill(mclapply(seq(k), function(dummy) Lambdas(...))))
return(MSEs[, list(mean.cvm=mean(cvm)), lambda][order(mean.cvm)][1]$lambda)
}
|
Variability in cv.glmnet results
Alice's answer works well in most cases, but sometimes errors out due to cv.glmnet$lambda sometimes returning results of different length, e.g.:
Error in rownames<-(tmp, value = c(0.135739830284452
|
13,787
|
Variability in cv.glmnet results
|
You can control the randomness if you explicitly set foldid. Here is an example for 5-fold CV:
library(caret)
set.seed(284)
cvfold <- 5  # number of folds
flds <- createFolds(responseDiffs, k = cvfold, list = TRUE, returnTrain = FALSE)
foldids = rep(1,length(responseDiffs))
foldids[flds$Fold2] = 2
foldids[flds$Fold3] = 3
foldids[flds$Fold4] = 4
foldids[flds$Fold5] = 5
Now run cv.glmnet with these foldids.
lassoResults<-cv.glmnet(x=countDiffs,y=responseDiffs,alpha=1,foldid = foldids)
You will get the same results each time.
|
Variability in cv.glmnet results
|
You can control the randomness if you explicitly set foldid. Here an example for 5-fold CV
library(caret)
set.seed(284)
flds <- createFolds(responseDiffs, k = cvfold, list = TRUE, returnTrain = FALSE)
|
Variability in cv.glmnet results
You can control the randomness if you explicitly set foldid. Here is an example for 5-fold CV:
library(caret)
set.seed(284)
cvfold <- 5  # number of folds
flds <- createFolds(responseDiffs, k = cvfold, list = TRUE, returnTrain = FALSE)
foldids = rep(1,length(responseDiffs))
foldids[flds$Fold2] = 2
foldids[flds$Fold3] = 3
foldids[flds$Fold4] = 4
foldids[flds$Fold5] = 5
Now run cv.glmnet with these foldids.
lassoResults<-cv.glmnet(x=countDiffs,y=responseDiffs,alpha=1,foldid = foldids)
You will get the same results each time.
|
Variability in cv.glmnet results
You can control the randomness if you explicitly set foldid. Here an example for 5-fold CV
library(caret)
set.seed(284)
flds <- createFolds(responseDiffs, k = cvfold, list = TRUE, returnTrain = FALSE)
|
13,788
|
How to assess skewness from a boxplot?
|
One measure of skewness is based on mean-median - Pearson's second skewness coefficient.
Another measure of skewness is based on the relative quartile differences, (Q3-Q2) vs (Q2-Q1), expressed as a ratio.
When (Q3-Q2) vs (Q2-Q1) is instead expressed as a difference (or equivalently midhinge-median), that must be scaled to make it dimensionless (as usually needed for a skewness measure), say by the IQR, as here (by putting $u=0.25$).
The most common measure is of course third-moment skewness.
There's no reason that these three measures will necessarily be consistent. Any one of them could be different from the other two.
What we regard as "skewness" is a somewhat slippery and ill-defined concept. See here for more discussion.
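For concreteness, the three measures can be computed side by side; this uses an invented 8-point sample (not the original data), skewed to the right by one large value:

```r
# Hypothetical sample: six "well-behaved" points plus one large one.
x <- c(1.2, 2.1, 2.8, 3.0, 3.3, 3.9, 4.1, 9.5)
q <- quantile(x, c(0.25, 0.5, 0.75))
# Pearson's second skewness coefficient: 3 * (mean - median) / sd
pearson2 <- 3 * (mean(x) - median(x)) / sd(x)
# quartile (Bowley) skewness: ((Q3-Q2) - (Q2-Q1)) / (Q3-Q1)
bowley <- unname(((q[3] - q[2]) - (q[2] - q[1])) / (q[3] - q[1]))
# third-moment skewness (here using the n-1 sd, one of several conventions)
m3 <- mean((x - mean(x))^3) / sd(x)^3
c(pearson2 = pearson2, bowley = bowley, moment = m3)
```

On this sample all three happen to be positive, but as noted above there is no guarantee they agree in general.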
If we look at your data with a normal qqplot:
[The line marked there is based on the first 6 points only, because I want to discuss the deviation of the last two from the pattern there.]
We see that the smallest 6 points lie almost perfectly on the line.
Then the 7th point is below the line (closer to the middle relatively than the corresponding second point in from the left end), while the eighth point sits way above.
The 7th point suggests mild left skew, the last, stronger right skew. If you ignore either point, the impression of skewness is entirely determined by the other.
If I had to say it was one or the other, I'd call that "right skew" but I'd also point out that the impression was entirely due to the effect of that one very large point. Without it there's really nothing to say it's right skew. (On the other hand, without the 7th point instead, it's clearly not left skew.)
We must be very careful when our impression is entirely determined by single points, and can be flipped around by removing one point. That's not much of a basis to go on!
I start with the premise that what makes an outlier 'outlying' is the model (what's an outlier with respect to one model may be quite typical under another model).
I think an observation at the 0.01 upper percentile (1/10000) of a normal (3.72 sds above the mean) is just as much an outlier to the normal model as an observation at the 0.01 upper percentile of an exponential distribution is to the exponential model. (If we transform each distribution by its own probability integral transform, both go to the same uniform.)
To see the problem with applying the boxplot rule to even a moderately right skew distribution, simulate large samples from an exponential distribution.
E.g. if we simulate samples of size 100 from a normal, we average less than 1 outlier per sample. If we do it with an exponential, we average around 5. But there's no real basis on which to say that a higher proportion of exponential values are "outlying" unless we do it by comparison with (say) a normal model. In particular situations we might have specific reasons to have an outlier rule of some particular form, but there's no general rule, which leaves us with general principles like the one I started with on this subsection - to treat each model/distribution on its own lights (if a value isn't unusual with respect to a model, why call it an outlier in that situation?)
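A sketch of that simulation (the counts are random, so the averages will vary slightly from run to run):

```r
# Count boxplot-rule outliers per sample of size 100 under a normal
# versus an exponential model.
set.seed(2)
count_out <- function(x) {
  q <- quantile(x, c(0.25, 0.75))
  iqr <- q[2] - q[1]
  sum(x < q[1] - 1.5 * iqr | x > q[2] + 1.5 * iqr)
}
norm_out <- replicate(2000, count_out(rnorm(100)))
expo_out <- replicate(2000, count_out(rexp(100)))
mean(norm_out)   # less than 1 per sample on average
mean(expo_out)   # around 5 per sample on average
```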
To turn to the question in the title:
While it's a pretty crude instrument (which is why I looked at the QQ-plot) there are several indications of skewness in a boxplot - if there's at least one point marked as an outlier, there's potentially (at least) three:
In this sample (n=100), the outer points (green) mark the extremes and, together with the median, suggest left skewness. The fences (blue), when combined with the median, suggest right skewness. The hinges (quartiles, brown), combined with the median, suggest left skewness.
As we see, they needn't be consistent. Which you would focus on depends on the situation you're in (and possibly your preferences).
However, a warning on just how crude the boxplot is. The example toward the end here -- which includes a description of how to generate the data --
gives four quite different distributions with the same boxplot:
As you can see there's a quite skewed distribution with all of the above-mentioned indicators of skewness showing perfect symmetry.
--
Let's take this from the point of view "what answer was your teacher expecting, given that this is a boxplot, which marks one point as an outlier?".
We're left with first answering "do they expect you to assess skewness excluding that point, or with it in the sample?". Some would exclude it, and assess skewness from what remains, as jsk did in another answer. While I have disputed aspects of that approach, I can't say it's wrong -- that depends on the situation. Some would include it (not least because excluding 12.5% of your sample because of a rule derived from normality seems a big step*).
* Imagine a population distribution which is symmetric except for the far right tail (I constructed one such in answering this - normal but with the extreme right tail being Pareto - but didn't present it in my answer). If I draw samples of size 8, often 7 of the observations come from the normal-looking part and one comes from the upper tail. If we exclude the points marked as boxplot-outliers in that case, we're excluding the point that's telling us that it is actually skew! When we do, the truncated distribution that remains in that situation is left-skew, and our conclusion would be the opposite of the correct one.
|
How to assess skewness from a boxplot?
|
One measure of skewness is based on mean-median - Pearson's second skewness coefficient.
Another measure of skewness is based on the relative quartile differences (Q3-Q2) vs (Q2-Q1) expressed as a rat
|
How to assess skewness from a boxplot?
One measure of skewness is based on mean-median - Pearson's second skewness coefficient.
Another measure of skewness is based on the relative quartile differences (Q3-Q2) vs (Q2-Q1) expressed as a ratio
When (Q3-Q2) vs (Q2-Q1) is instead expressed as a difference (or equivalently midhinge-median), that must be scaled to make it dimensionless (as usually needed for a skewness measure), say by the IQR, as here (by putting $u=0.25$).
The most common measure is of course third-moment skewness.
There's no reason that these three measures will necessarily be consistent. Any one of them could be different from the other two.
What we regard as "skewness" is a somewhat slippery and ill-defined concept. See here for more discussion.
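As an illustrative sketch (not part of the original answer), the three measures can be computed directly; the sample below is invented precisely to show they need not even agree in sign:

```python
import numpy as np

# Invented sample: left-skewed bulk plus one large right outlier.
x = np.array([0.0, 9.0, 9.5, 10.0, 10.2, 10.4, 10.5, 30.0])

q1, q2, q3 = np.percentile(x, [25, 50, 75])

# Pearson's second skewness coefficient: 3 * (mean - median) / sd.
pearson2 = 3 * (x.mean() - q2) / x.std(ddof=1)

# Quartile (Bowley) skewness: ((Q3 - Q2) - (Q2 - Q1)) / (Q3 - Q1).
bowley = ((q3 - q2) - (q2 - q1)) / (q3 - q1)

# Third-moment skewness (population form).
z = (x - x.mean()) / x.std(ddof=0)
moment = np.mean(z ** 3)

print(pearson2, bowley, moment)  # positive, negative, positive
```

Here the quartile measure calls the batch left-skew while the mean-median and moment measures, dominated by the single large point, call it right-skew.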
If we look at your data with a normal qqplot:
[The line marked there is based on the first 6 points only, because I want to discuss the deviation of the last two from the pattern there.]
We see that the smallest 6 points lie almost perfectly on the line.
Then the 7th point is below the line (closer to the middle relatively than the corresponding second point in from the left end), while the eighth point sits way above.
The 7th point suggests mild left skew, the last, stronger right skew. If you ignore either point, the impression of skewness is entirely determined by the other.
If I had to say it was one or the other, I'd call that "right skew" but I'd also point out that the impression was entirely due to the effect of that one very large point. Without it there's really nothing to say it's right skew. (On the other hand, without the 7th point instead, it's clearly not left skew.)
We must be very careful when our impression is entirely determined by single points, and can be flipped around by removing one point. That's not much of a basis to go on!
I start with the premise that what makes an outlier 'outlying' is the model (what's an outlier with respect on one model may be quite typical under another model).
I think an observation at the 0.01 upper percentile (1/10000) of a normal (3.72 sds above the mean) is equally an outlier to the normal model as an observation at the 0.01 upper percentile of an exponential distribution is to the exponential model. (If we transform a distribution by its own probability integral transform, each will go to the same uniform)
To see the problem with applying the boxplot rule to even a moderately right skew distribution, simulate large samples from an exponential distribution.
E.g. if we simulate samples of size 100 from a normal, we average less than 1 outlier per sample. If we do it with an exponential, we average around 5. But there's no real basis on which to say that a higher proportion of exponential values are "outlying" unless we do it by comparison with (say) a normal model. In particular situations we might have specific reasons to have an outlier rule of some particular form, but there's no general rule, which leaves us with general principles like the one I started with on this subsection - to treat each model/distribution on its own lights (if a value isn't unusual with respect to a model, why call it an outlier in that situation?)
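A quick simulation (my sketch, not part of the original answer; exact averages vary with the seed) illustrates the claim:

```python
import numpy as np

rng = np.random.default_rng(0)

def n_boxplot_outliers(x):
    """Count points beyond the 1.5*IQR fences (the usual boxplot rule)."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return int(np.sum((x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)))

reps, n = 2000, 100
normal_avg = np.mean([n_boxplot_outliers(rng.normal(size=n)) for _ in range(reps)])
expon_avg = np.mean([n_boxplot_outliers(rng.exponential(size=n)) for _ in range(reps)])
print(normal_avg, expon_avg)  # roughly 1 per sample vs roughly 5 per sample
```

The exponential samples trip the fence rule several times as often, even though none of those points are "surprising" under the exponential model.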
To turn to the question in the title:
While it's a pretty crude instrument (which is why I looked at the QQ-plot) there are several indications of skewness in a boxplot - if there's at least one point marked as an outlier, there's potentially (at least) three:
In this sample (n=100), the outer points (green) mark the extremes and, combined with the median, suggest left skewness. The fences (blue), combined with the median, suggest right skewness. The hinges (quartiles, brown), combined with the median, suggest left skewness.
As we see, they needn't be consistent. Which you would focus on depends on the situation you're in (and possibly your preferences).
However, a warning on just how crude the boxplot is. The example toward the end here -- which includes a description of how to generate the data --
gives four quite different distributions with the same boxplot:
As you can see there's a quite skewed distribution with all of the above-mentioned indicators of skewness showing perfect symmetry.
--
Let's take this from the point of view "what answer was your teacher expecting, given that this is a boxplot, which marks one point as an outlier?".
We're left with first answering "do they expect you to assess skewness excluding that point, or with it in the sample?". Some would exclude it, and assess skewness from what remains, as jsk did in another answer. While I have disputed aspects of that approach, I can't say it's wrong -- that depends on the situation. Some would include it (not least because excluding 12.5% of your sample because of a rule derived from normality seems a big step*).
* Imagine a population distribution which is symmetric except for the far right tail (I constructed one such in answering this - normal but with the extreme right tail being Pareto - but didn't present it in my answer). If I draw samples of size 8, often 7 of the observations come from the normal-looking part and one comes from the upper tail. If we exclude the points marked as boxplot-outliers in that case, we're excluding the point that's telling us that it is actually skew! When we do, the truncated distribution that remains in that situation is left-skew, and our conclusion would be the opposite of the correct one.
|
13,789
|
How to assess skewness from a boxplot?
|
No, you did not miss anything: you are actually seeing beyond the simplistic summaries that were presented. These data are both positively and negatively skewed (in the sense of "skewness" suggesting some form of asymmetry in the data distribution).
John Tukey described a systematic way to explore asymmetry in batches of data by means of his "N-number summary." A boxplot is a graphic of a 5-number summary and thereby is amenable to this analysis.
A boxplot displays a 5-number summary: the median $M$, the two hinges $H^{+}$ and $H^{-}$, and the extremes $X^{+}$ and $X^{-}$. The key idea in Tukey's generalized approach is to choose some statistics $T_i^{+}$ reflecting the upper half of the batch (based on ranks or, equivalently, percentiles), with increasing $i$ corresponding to more extreme data. Each statistic $T_i^{+}$ has a counterpart $T_i^{-}$ obtained by computing the same statistic after turning the data upside-down (by negating the values, for instance). In a symmetric batch, each pair of matching statistics must be centered at the middle of the batch (and this center will coincide with $M = M^{+}=M^{-}$). Thus, a plot of how much the mid-statistic $(T_i^{+} + T_i^{-})/2$ varies with $i$ provides a graphical diagnostic and can furnish a quantitative estimate of asymmetry.
To apply this idea to a boxplot, just draw the midpoints of each pair of corresponding parts: the median (which is already there), the midpoint of the hinges (ends of the box, shown in blue), and the midpoint of the extremes (shown in red).
In this example the lower value of the mid-hinge compared to the median indicates the middle of the batch is slightly negatively skewed (thereby corroborating the assessment quoted in the question, while at the same time suitably limiting its scope to the middle of the batch) while the (much) higher value of the mid-extreme indicates the tails of the batch (or at least its extremes) are positively skewed (albeit, on closer inspection, this is due to a single high outlier). Although this is almost a trivial example, the relative richness of this interpretation compared to a single "skewness" statistic already reveals the descriptive power of this approach.
With a small amount of practice you do not have to draw these mid-statistics: you can imagine where they are and read the resulting skewness information directly off any boxplot.
An example from Tukey's EDA (p. 81) uses a nine-number summary of heights of 219 volcanoes (expressed in hundreds of feet). He calls these statistics $M$, $H$, $E$, $D$, and $X$: they correspond (roughly) to the middle, the upper and lower quartiles, the eighths, the sixteenths and the extremes, respectively. I have indexed them in this order by $i=1, 2, 3, 4, 5$. The left hand plot in the next figure is the diagnostic plot for the midpoints of these paired statistics. From the accelerating slope, it is clear the data are becoming more and more positively skewed as we reach out into their tails.
The middle and right plots show the same thing for the square roots (of the data, not of the mid-number statistics!) and the (base-10) logarithms. The relative stability of the values of the roots (notice the relatively small vertical range and the level slope in the middle) indicates that this batch of 219 values becomes approximately symmetric both in its middle portions and in all parts of its tails, almost out to the extremes when the heights are re-expressed as square roots. This result is a strong--almost compelling--basis for continuing further analysis of these heights in terms of their square roots.
Among other things, these plots reveal something quantitative about the asymmetry of the data: on the original scale, they immediately reveal the varying skewness of the data (casting considerable doubt on the utility of using a single statistic to characterize its skewness), whereas on the square root scale, the data are close to symmetric about their middle--and therefore can succinctly be summarized with a five-number summary, or equivalently a boxplot. The skewness again varies appreciably on a log scale, showing the logarithm is too "strong" a way to re-express these data.
The generalization of a boxplot to seven-, nine-, and more-number summaries is straightforward to draw. Tukey calls them "schematic plots." Today many plots serve a similar purpose, including standbys like Q-Q plots and relative novelties such as "bean plots" and "violin plots." (Even the lowly histogram can be pressed into service for this purpose.) Using points from such plots, one can assess asymmetry in a detailed fashion and perform a similar evaluation of ways to re-express the data.
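Tukey's mid-statistic diagnostic is easy to compute from percentiles. The sketch below is my own illustration on a simulated right-skewed batch (not Tukey's volcano data); the depths only roughly echo his $M$, $H$, $E$, $D$ letter values. The midsummaries climb with depth on the raw scale and flatten after a log re-expression:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(size=2000)  # stand-in for a right-skewed batch

# Mid-summaries: average each lower percentile with its upper mirror.
depths = [50, 25, 12.5, 6.25]  # roughly M, hinges, eighths, sixteenths

def midsummaries(v):
    return [(np.percentile(v, p) + np.percentile(v, 100 - p)) / 2 for p in depths]

mids_raw = midsummaries(x)
mids_log = midsummaries(np.log(x))
print(mids_raw)  # climbing values: increasingly positive skew in the tails
print(mids_log)  # nearly constant: the log re-expression symmetrizes this batch
```

Reading the drift of the midsummaries against depth is the numerical counterpart of the diagnostic plots described above.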
|
13,790
|
How to assess skewness from a boxplot?
|
The mean being less than or greater than the median is a shortcut that often works for determining the direction of skew so long as there are no outliers. In this case, the distribution is negatively skewed but the mean is larger than the median due to the outlier.
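A small made-up sample (my illustration, not data from the question) shows how a single outlier can flip the mean-median comparison:

```python
import numpy as np

# Left-skewed bulk plus one large outlier (invented values).
x = np.array([0.0, 9.0, 9.5, 10.0, 10.2, 10.4, 10.5, 30.0])

with_outlier = (np.mean(x), np.median(x))              # mean > median
without_outlier = (np.mean(x[:-1]), np.median(x[:-1]))  # mean < median
print(with_outlier, without_outlier)
```

Dropping the single largest point reverses the sign of mean minus median, which is why the shortcut is unreliable in the presence of outliers.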
|
13,791
|
Mann-Whitney U test with unequal sample sizes
|
Yes, the Mann-Whitney test works fine with unequal sample sizes.
|
13,792
|
Mann-Whitney U test with unequal sample sizes
|
@HarveyMotulsky is right, you can use the Mann-Whitney U-test with unequal sample sizes. Note however, that your statistical power (i.e., the ability to detect a difference that really is there) will diminish as the group sizes become more unequal. For an example, I have a simulation (actually of a t-test, but the principle is the same) that demonstrates this here.
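The same point can be shown with a numpy-only sketch of my own (it uses the large-sample normal approximation to the Mann-Whitney U statistic; the group sizes and shift are arbitrary choices): holding the total N fixed, power drops as the split becomes more lopsided.

```python
import numpy as np

rng = np.random.default_rng(2)

def mw_power(n1, n2, shift, reps=2000, crit_z=1.96):
    """Rejection rate of a two-sided Mann-Whitney U test (normal approximation)."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(loc=shift, size=n1)
        y = rng.normal(size=n2)
        ranks = np.concatenate([x, y]).argsort().argsort() + 1  # no ties here
        u = ranks[:n1].sum() - n1 * (n1 + 1) / 2
        z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
        hits += abs(z) > crit_z
    return hits / reps

power_balanced = mw_power(20, 20, shift=1.0)  # 20 vs 20
power_lopsided = mw_power(35, 5, shift=1.0)   # same total N, 35 vs 5
print(power_balanced, power_lopsided)
```

With a one-standard-deviation shift, the balanced design detects the difference far more often than the 35-vs-5 split.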
|
13,793
|
How is the bayesian framework better in interpretation when we usually use uninformative or subjective priors?
|
To give a more narrow response than the excellent ones that have already been posted, and focus on the advantage in interpretation - the Bayesian interpretation of a, e.g., "95% credible interval" is that the probability that the true parameter value lies within the interval equals 95%. One of the two common frequentist interpretations of a, e.g., "95% confidence interval", even if numerically the two are identical, is that in the long run, if we were to perform the procedure many many times, the frequency with which the interval would cover the real value would converge to 95%. The former is intuitive, the latter is not. Try explaining to a manager some time that you can't say "The probability that our solar panels will degrade by less than 20% over 25 years is 95%", but must instead say "If the true degradation rate was 20% over 25 years, and we could somehow repeat our sampling but with different results blah blah parallel identical universes blah, the long run frequency of times that the one-sided confidence interval I would calculate would lie entirely below 20%/25 years would be 5%", or whatever the equivalent frequentist statement would be.
An alternative frequentist interpretation would be "Before the data was generated, there was a 5% chance the interval I would calculate using the procedure I settled on would fall entirely below the true parameter value. However, now that we've collected the data, we can't make any such statement, because we're not subjectivists and the probability is either 0 or 1, depending upon whether it does or does not lie entirely below the true parameter value." That'll help with the auditors and when calculating a warranty reserve. (I actually find this definition reasonable, albeit not usually useful; it's also not easy to understand intuitively, and especially not if you're not a statistician.)
Neither frequentist interpretation is intuitive. The Bayesian version is. Hence the "big advantage in interpretation" held by the Bayesian approach.
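For a concrete, numpy-only illustration (the data are invented): with a flat prior on a binomial proportion, a 95% posterior credible interval and the usual Wald 95% confidence interval come out numerically very close, even though their interpretations differ as described above.

```python
import numpy as np

n, s = 100, 60  # invented data: 60 successes in 100 trials
rng = np.random.default_rng(3)

# Bayesian: flat Beta(1, 1) prior => posterior is Beta(s + 1, n - s + 1).
post = rng.beta(s + 1, n - s + 1, size=200_000)
credible = np.percentile(post, [2.5, 97.5])

# Frequentist: Wald 95% confidence interval.
p_hat = s / n
half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
wald = np.array([p_hat - half, p_hat + half])

print(credible, wald)  # numerically similar; the interpretations differ
```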
|
13,794
|
How is the bayesian framework better in interpretation when we usually use uninformative or subjective priors?
|
In my opinion, the reason that Bayesian statistics are "better" for interpretation is nothing to do with the priors, but is due to the definition of a probability. The Bayesian definition (the relative plausibility of the truth of some proposition) is more closely in accord with our everyday usage of the word than is the frequentist definition (the long run frequency with which something occurs). In most practical situations $p(\theta|x)$ is what we actually want to know, not $p(x|\theta)$, and the difficulty arises with frequentist statistics due to a tendency to interpret the results in a frequentist calculation as if it were a Bayesian one, i.e. $p(x|\theta)$ as if it were $p(\theta|x)$ (for example the p-value fallacy, or interpreting a confidence interval as if it were a credible interval).
Note that informative priors are not necessarily subjective, for instance I would not consider it subjective knowledge to assert that prior knowledge of some physical system should be independent of the units of measurement (as they are essentially arbitrary), leading to the idea of transformation groups and "minimally informative" priors.
The flip side of ignoring subjective knowledge is that your system may be sub-optimal because you are ignoring expert knowledge, so subjectivity is not necessarily a bad thing. For instance in the usual "infer the bias of a coin" problem, often used as a motivating example, you will learn relatively slowly with a uniform prior as the data comes in. But is it reasonable to assume all amounts of bias are equally likely? No, it is easy to make a slightly biased coin, or one that is completely biased (two heads or two tails), so if we build that assumption into our analysis, via a subjective prior, we will need less data to identify what the bias actually is.
Frequentist analyses also often contain subjective elements (for instance the decision to reject the null hypothesis if the p-value is less than 0.05, there is no logical compulsion to do so, it is merely a tradition that has proven useful). The advantage of the Bayesian approach is that the subjectivity is made explicit in the calculation, rather than leaving it implicit.
At the end of the day, it is a matter of "horses for courses", you should have both sets of tools in your toolbox, and be prepared to use the best tool for the task at hand.
Having said which, Bayesian $\gg$ frequentist !!! ;oP
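The coin example can be made concrete with conjugate Beta updating (a sketch; the particular priors and flip counts are my own invented choices): an informative prior reaches a tight posterior with fewer flips than a flat one.

```python
import numpy as np

def beta_sd(a, b):
    """Standard deviation of a Beta(a, b) distribution."""
    return np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

heads, tails = 6, 4  # a small batch of flips (invented)

# Flat prior Beta(1, 1) vs an informative prior Beta(10, 10)
# (encoding a belief that coins are usually close to fair).
sd_flat = beta_sd(1 + heads, 1 + tails)
sd_informative = beta_sd(10 + heads, 10 + tails)
print(sd_flat, sd_informative)  # the informative posterior is tighter
```

The subjectivity is explicit in the choice Beta(10, 10); a reader who disagrees with it can rerun the same calculation with their own prior.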
|
13,795
|
How is the bayesian framework better in interpretation when we usually use uninformative or subjective priors?
|
The Bayesian framework has a big advantage over frequentist because it does not depend on having a "crystal ball" in terms of knowing the correct distributional assumptions to make. Bayesian methods depend on using what information you have, and knowing how to encode that information into a probability distribution.
Using Bayesian methods is basically using probability theory in its full power. Bayes theorem is nothing but a restatement of the classic product rule of probability theory:
$$p(\theta x|I)=p(\theta|I)p(x|\theta I)=p(x|I)p(\theta|xI)$$
So long as $p(x|I)\neq 0$ (i.e. the prior information didn't say what was observed was impossible) we can divide by it, and arrive at Bayes' theorem. I have used $I$ to denote the prior information, which is always present - you can't assign a probability distribution without information.
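Since nothing beyond the product rule is being used, Bayes' theorem can be checked mechanically on any discrete joint distribution (a toy example of my own):

```python
import numpy as np

# Toy joint distribution p(theta, x) over 2 parameter values and 3 data values.
joint = np.array([[0.10, 0.20, 0.10],
                  [0.25, 0.05, 0.30]])

p_theta = joint.sum(axis=1)                 # p(theta | I)
p_x = joint.sum(axis=0)                     # p(x | I)
p_x_given_theta = joint / p_theta[:, None]  # p(x | theta, I)
p_theta_given_x = joint / p_x[None, :]      # p(theta | x, I)

# Bayes' theorem: p(theta | x, I) = p(theta | I) p(x | theta, I) / p(x | I).
bayes = p_theta[:, None] * p_x_given_theta / p_x[None, :]
print(np.allclose(bayes, p_theta_given_x))  # True
```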
Now, if you think that Bayes theorem is suspect, then logically, you must also think that the product rule is also suspect. You can find a deductive argument here, which derives the product and sum rules, similar to Cox's theorem. A more explicit list of the assumptions required can be found here.
As far as I know, frequentist inference is not based on a set of foundations within a logical framework. Because it uses the Kolmogorov axioms of probability, there does not seem to be any connection between probability theory and statistical inference. There are not any axioms for frequentist inference which lead to a procedure that is to be followed. There are principles and methods (maximum likelihood, confidence intervals, p-values, etc.), and they work well, but they tend to be isolated and specialised to particular problems. I think frequentist methods are best left vague in their foundations, at least in terms of a strict logical framework.
For point $1$, getting the same result is somewhat irrelevant, from the perspective of interpretation. Two procedures may lead to the same result, but this need not mean that they are equivalent. If I was to just guess $\theta$, and happened to guess the maximum likelihood estimate (MLE), this would not mean that my guessing is just as good as MLE.
For point $2$, why should you be worried that people with different information will come to different conclusions? Someone with a PhD in mathematics would, and should, come to different conclusions than someone with high-school level mathematics. They have different amounts of information - why would we expect them to agree? When you are presented with new information, you tend to change your mind. How much depends on what kind of information it was. Bayes' theorem contains this feature, as it should.
Using a uniform prior is often a convenient approximation to make when the likelihood is sharp compared to the prior. It is sometimes not worth the effort to go through and properly set up a prior. Similarly, don't make the mistake of confusing Bayesian statistics with MCMC. MCMC is just an algorithm for integration, same as Gaussian quadrature, and in a similar class to the Laplace approximation. It is a bit more useful than quadrature because you can re-use the algorithm's output to do all your integrals (posterior means and variances are integrals), and a bit more general than Laplace because you don't need a big sample, or a well-rounded peak in the posterior (Laplace is quicker though).
|
How is the bayesian framework better in interpretation when we usually use uninformative or subjecti
|
The Bayesian framework has a big advantage over frequentist because it does not depend on having a "crystal ball" in terms of knowing the correct distributional assumptions to make. Bayesian methods
|
How is the bayesian framework better in interpretation when we usually use uninformative or subjective priors?
The Bayesian framework has a big advantage over frequentist because it does not depend on having a "crystal ball" in terms of knowing the correct distributional assumptions to make. Bayesian methods depend on using what information you have, and knowing how to encode that information into a probability distribution.
Using Bayesian methods is basically using probability theory in its full power. Bayes theorem is nothing but a restatement of the classic product rule of probability theory:
$$p(\theta x|I)=p(\theta|I)p(x|\theta I)=p(x|I)p(\theta|xI)$$
So long as $p(x|I)\neq 0$ (i.e. the prior information didn't say what was observed was impossible) we can divide by it, and arrive at Bayes' theorem. I have used $I$ to denote the prior information, which is always present - you can't assign a probability distribution without information.
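As a concrete illustration of this update, here is a minimal numeric sketch (a hypothetical coin-bias example, not taken from the discussion above): a discrete prior over three candidate biases, a binomial likelihood for an observed two-heads-in-three-flips outcome, and the posterior obtained exactly by the product rule plus division by $p(x|I)$.

```python
# Hypothetical example: Bayes' theorem as a restatement of the product rule.
import numpy as np

theta = np.array([0.2, 0.5, 0.8])      # candidate biases of a coin
prior = np.array([1/3, 1/3, 1/3])      # p(theta | I)

# observed: 2 heads in 3 flips -> binomial likelihood p(x | theta, I)
lik = 3 * theta**2 * (1 - theta)

evidence = np.sum(prior * lik)         # p(x | I); must be nonzero to divide
posterior = prior * lik / evidence     # p(theta | x, I)
print(posterior)
```

The posterior shifts weight toward the larger biases, exactly as the observed data and the prior information jointly dictate.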
Now, if you think that Bayes theorem is suspect, then logically, you must also think that the product rule is also suspect. You can find a deductive argument here, which derives the product and sum rules, similar to Cox's theorem. A more explicit list of the assumptions required can be found here.
As far as I know, frequentist inference is not based on a set of foundations within a single logical framework. It uses the Kolmogorov axioms of probability, but there does not seem to be any direct connection between those axioms and its procedures of statistical inference. There are no axioms for frequentist inference that lead to a procedure to be followed. There are principles and methods (maximum likelihood, confidence intervals, p-values, etc.), and they work well, but they tend to be isolated and specialised to particular problems. I think frequentist methods are best left vague in their foundations, at least in terms of a strict logical framework.
For point $1$, getting the same result is somewhat irrelevant, from the perspective of interpretation. Two procedures may lead to the same result, but this need not mean that they are equivalent. If I was to just guess $\theta$, and happened to guess the maximum likelihood estimate (MLE), this would not mean that my guessing is just as good as MLE.
For point $2$, why should you be worried that people with different information will come to different conclusions? Someone with a PhD in mathematics would, and should, come to different conclusions than someone with high-school-level mathematics. They have different amounts of information - why would we expect them to agree? When you are presented with new information, you tend to change your mind. How much depends on what kind of information it was. Bayes' theorem contains this feature, as it should.
Using a uniform prior is often a convenient approximation to make when the likelihood is sharp compared to the prior. Sometimes it is not worth the effort to go through and properly set up a prior. Similarly, don't make the mistake of confusing Bayesian statistics with MCMC. MCMC is just an algorithm for integration, the same as Gaussian quadrature, and in a similar class to the Laplace approximation. It is a bit more useful than quadrature because you can re-use the algorithm's output to do all your integrals (posterior means and variances are integrals), and a bit more general than Laplace because you don't need a big sample, or a well-rounded peak in the posterior (Laplace is quicker, though).
|
13,796
|
How is the bayesian framework better in interpretation when we usually use uninformative or subjective priors?
|
I have typically seen the uniform prior used in either "instructive" type examples, or in cases in which truly nothing is known about a particular hyperparameter. Typically, I see uninformed priors that provide little information about what the solution will be, but which encode mathematically what a good solution probably looks like. For example, one typically sees a Gaussian prior ($\mu=0$) placed over a regression coefficient, encoding the knowledge that all things being equal, we prefer solutions in which the coefficients have lower magnitudes. This is to avoid overfitting a data set, by finding solutions that do maximize the objective function but which don't make sense in the particular context of our problem. In a sense, they provide a way to give the statistical model some "clues" about a particular domain.
However, this isn't (in my opinion) the most important aspect of Bayesian methodologies. Bayesian methods are generative, in that they provide a complete "story" for how the data came into existence. Thus, they aren't simply pattern finders, but rather they are able to take into account the full reality of the situation at hand. For example, consider LDA (latent Dirichlet allocation), which provides a full generative story for how a text document comes to be, that goes something like this:
Select some mix of topics based on the likelihood of particular topics co-occurring; and
Select some set of words from the vocabulary, conditioned based on the selected topics.
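The two-step generative story above can be sketched as a toy simulation. All the numbers here are made-up illustrative values (two hypothetical topics over a five-word vocabulary); in a real LDA fit, the topic mixture prior and topic-word distributions would be learned from a corpus.

```python
# Toy sketch of the LDA generative story (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["gene", "dna", "ball", "team", "season"]

alpha = np.array([1.0, 1.0])                             # Dirichlet prior over 2 topics
topic_word = np.array([[0.45, 0.45, 0.02, 0.04, 0.04],   # hypothetical "biology" topic
                       [0.02, 0.02, 0.40, 0.30, 0.26]])  # hypothetical "sports" topic

# Step 1: select a mix of topics for this document
theta = rng.dirichlet(alpha)

# Step 2: for each word slot, pick a topic, then a word conditioned on it
doc = []
for _ in range(8):
    z = rng.choice(2, p=theta)
    doc.append(rng.choice(vocab, p=topic_word[z]))
print(doc)
```

Running the simulation many times produces documents whose word mixtures reflect the sampled topic proportions, which is exactly the "story" the model inverts at fitting time.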
Thus, the model is fit based on a very specific understanding of the objects in the domain (here, text documents) and how they got created; therefore, the information we get back is tailored directly to our problem domain (likelihoods of words given topics, likelihoods of topics being mentioned together, likelihoods of documents containing topics and to what extent, etc.). The fact that Bayes' theorem is required to do this is almost secondary, hence the little joke, "Bayes wouldn't be a Bayesian, and Christ wouldn't be a Christian."
In short, Bayesian models are all about rigorously modeling the domain objects using probability distributions; therefore, we are able to encode knowledge that wouldn't otherwise be available with a simple discriminative technique.
|
13,797
|
What is the intuitive meaning behind a random variable being defined as a "lattice"?
|
It means that $X$ is discrete, and there is some kind of regular spacing to its distribution; that is, the probability mass is concentrated on a finite/countable set of points $\{d, 2d, 3d, \dots\}$.
Note that not all discrete distributions are lattices. E.g., if $X$ can take on the values $\{1, e, \pi, 5\}$, this is not a lattice, since there is no $d$ such that all the values can be expressed as integer multiples of $d$.
|
13,798
|
What is the intuitive meaning behind a random variable being defined as a "lattice"?
|
This terminology connects the random variable with concepts of group theory used to study geometric symmetries. You might therefore enjoy seeing the more general connection, which will illuminate the meaning and potential applications of lattice random variables.
Background
In mathematics, a "lattice" $\mathcal{L}$ is a discrete subgroup of a topological group $G$ (usually assumed to have a finite covolume).
"Discrete" means that around each element $g\in\mathcal{L}$ is an open set $\mathcal{O}_g\subset G$ containing only $g$ itself: $\mathcal{O}_g\cap\mathcal{L}=\{g\}$. It would be fair to think of $\mathcal{L}$ as being a "patterned" or "regular" arrangement of points in $G$.
The group $G$ acts on $\mathcal{L}$ by "moving points in $\mathcal{L}$ around in $G$," forming an orbit out of each one. A fundamental domain of this action consists of a single point in each orbit. $G$ can be equipped with a measure--the Haar measure--used to measure the sizes, or volumes, of Borel measurable subsets of $G$. A measurable fundamental domain can be found. Its volume is the covolume of $\mathcal{L}$. When it is finite, we can think of $G$ as being tiled by this fundamental domain and the elements of $\mathcal{L}$ as moving the tiles around.
Any pair of these sea horse figures--where one is right side up and the other upside down--can be a fundamental domain for the visually evident lattice in the Euclidean plane. M.C. Escher, Sea Horse (No. 11).
A "lattice" random variable $X$ is supported on a lattice in $(\mathbb{R}^n, {+})$. This means that all its probability is contained in the closure of the lattice. Because a lattice is discrete, it is closed, so the values of $X$ are on the lattice almost surely: $\Pr(X\in\mathcal{L})=1$.
Application
The group implied by the question is the additive group of real numbers, $(\mathbb{R}, {+})$, with its usual (Euclidean) topology. As a subgroup, a lattice $\mathcal{L}$ must include $0$. That alone will not suffice, because the quotient $\mathbb{R}/\{0\}$ has infinite volume ("volume" = "length" in this 1D case). Thus there is at least one nonzero element $g\in\mathcal{L}$. All of the powers of this element must also be in the subgroup. Since the operation is addition, the $n^\text{th}$ power of $g$ is $ng$. Therefore $\mathcal{L}$ contains all integral multiples of $g$ (including the negative ones).
If there are two elements $h,g\in\mathcal{L}$ which are not powers of each other, it is easy to show (using a tiny bit of number theory) that (1) all the combinations $ng+mh$, for $n,m\in\mathbb{Z}$, are in one-to-one correspondence with the ordered pairs $(m,n)$ and (2) these combinations are dense in $\mathbb{R}$, which would mean $\mathcal{L}$ is not discrete. From this it is straightforward to conclude that all elements in $\mathcal{L}$ are powers of a single number $g$. This is the generator of $\mathcal{L}$.
(An analogous argument shows that lattices in $(\mathbb{R}^n, {+})$ must have $n$ generators. Generators for the Escher watercolor could be, say, a translation of two units down and a translation one unit down and one unit to the right, approximately.)
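The single-generator conclusion can be made concrete with a small sketch. It rests on an illustrative assumption not made above: the sample points are rational multiples of a common value, so a gcd computation applies (incommensurable points, as in the density argument, have no generator at all).

```python
# Sketch: recover the generator g of a 1-D lattice from a finite sample
# of its points, assuming (hypothetically) all points are rational.
from fractions import Fraction
from math import gcd
from functools import reduce

def lattice_generator(points):
    # gcd of fractions in lowest terms: gcd of numerators over lcm of denominators
    fracs = [Fraction(p).limit_denominator() for p in points if p != 0]
    num = reduce(gcd, (f.numerator for f in fracs))
    den = reduce(lambda a, b: a * b // gcd(a, b), (f.denominator for f in fracs))
    return Fraction(num, den)

print(lattice_generator([0, 1.5, -3.0, 4.5]))  # 3/2: every point is an integer multiple of 3/2
```

Every sample point is then $ng$ for some integer $n$, matching the structure derived above.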
Consequently, corresponding to any real-valued lattice random variable $X$ on $(\mathbb{R}, {+})$ must be a generator $g\ne 0$, whence
$$\sum_{n=0}^\infty \Pr(X=ng) \le \sum_{n=-\infty}^\infty \Pr(X=ng) = \Pr(X\in\mathcal{L}) = 1.$$
The definition in the question therefore can be understood as that of a non-negative lattice variable. We might also want to stipulate that $\Pr(X=0) \lt 1$, for otherwise $X$ is supported on the subgroup $\{0\}$ which, having infinite covolume, is not a lattice.
Generalization
The positive real numbers $(\mathbb{R}^{+}, {\times})$ form a multiplicative group. A lattice on this group will be of the form $\mathcal{L} = \{g^n\,|\,n\in\mathbb{Z}\}$ for some $g \gt 0$. (The covolume of this lattice is $|\log(g)|$.) Accordingly, any random variable $Y$ for which
$$\sum_{n=-\infty}^\infty \Pr(Y=g^n) = 1$$
could be considered a lattice variable on this group. Evidently, $\log(Y)$ would be a lattice variable on $(\mathbb{R}, {+})$.
|
13,799
|
Big disagreement in the slope estimate when groups are treated as random vs. fixed in a mixed model
|
There are several things going on here. These are interesting issues, but it will take a fair amount of time/space to explain it all.
First of all, this all becomes a lot easier to understand if we plot the data. Here is a scatter plot where the data points are colored by group. Additionally, we have a separate group-specific regression line for each group, as well as a simple regression line (ignoring groups) in dashed bold:
# scatter plot, points colored by group
plot(y ~ x, data=dat, col=f, pch=19)
# simple regression line ignoring the groups (dashed bold)
abline(coef(lm(y ~ x, data=dat)), lwd=3, lty=2)
# separate within-group regression lines, one per level of f
by(dat, dat$f, function(i) abline(coef(lm(y ~ x, data=i)), col=i$f))
The fixed-effect model
What the fixed-effect model is going to do with these data is fairly straightforward. The effect of $x$ is estimated "controlling for" groups. In other words, $x$ is first orthogonalized with respect to the group dummies, and then the slope of this new, orthogonalized $x$ is what is estimated. In this case, this orthogonalization is going to remove a lot of the variance in $x$ (specifically, the between-cluster variability in $x$), because the group dummies are highly correlated with $x$. (To recognize this intuitively, think about what would happen if we regressed $x$ on just the set of group dummies, leaving $y$ out of the equation. Judging from the plot above, it certainly seems that we would expect to have some high $t$-statistics on each of the dummy coefficients in this regression!)
So basically what this ends up meaning for us is that only the within-cluster variability in $x$ is used to estimate the effect of $x$. The between-cluster variability in $x$ (which, as we can see above, is substantial), is "controlled out" of the analysis. So the slope that we get from lm() is the average of the 4 within-cluster regression lines, all of which are relatively steep in this case.
The mixed model
What the mixed model does is slightly more complicated. The mixed model attempts to use both within-cluster and between-cluster variability on $x$ to estimate the effect of $x$. Incidentally this is really one of the selling points of the model, as its ability/willingness to incorporate this additional information means it can often yield more efficient estimates. But unfortunately, things can get tricky when the between-cluster effect of $x$ and the average within-cluster effect of $x$ do not really agree, as is the case here. Note: this situation is what the "Hausman test" for panel data attempts to diagnose!
Specifically, what the mixed model will attempt to do here is to estimate some sort of compromise between the average within-cluster slope of $x$ and the simple regression line that ignores the clusters (the dashed bold line). The exact point within this compromising range that mixed model settles on depends on the ratio of the random intercept variance to the total variance (also known as the intra-class correlation). As this ratio approaches 0, the mixed model estimate approaches the estimate of the simple regression line. As the ratio approaches 1, the mixed model estimate approaches the average within-cluster slope estimate.
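This compromise can be demonstrated directly with a small GLS sketch (in Python/numpy rather than the R used elsewhere in this answer; the 12 observations are the same). A random-intercept model with variance ratio $r = \sigma^2_u/\sigma^2_e$ implies a compound-symmetric covariance $V = I + rJ$ within each cluster, and the GLS slope moves from the simple regression slope at $r = 0$ toward the average within-cluster slope as $r$ grows.

```python
# GLS slope under a compound-symmetric (random-intercept) covariance,
# as a function of the variance ratio r = sigma_u^2 / sigma_e^2.
import numpy as np

y = np.array([-0.5, 0.0, 0.5, -0.6, 0.0, 0.6, -0.2, 0.1, 0.4, -0.5, -0.1, 0.4])
x = np.array([2, 3, 4, -4, -3, -2, 13, 14, 15, -15, -14, -13], dtype=float)

def gls_slope(r, m=3, k=4):
    """GLS slope of y on x with k clusters of size m and V = I + r*J per cluster."""
    block = np.eye(m) + r * np.ones((m, m))
    Vinv = np.linalg.inv(np.kron(np.eye(k), block))  # block-diagonal covariance
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    return beta[1]

print(gls_slope(0))      # approx 0.008643: the simple regression slope
print(gls_slope(1e6))    # approx 0.4625: the average within-cluster slope
```

With the random-intercept variance estimated as 0, the mixed model sits at the $r = 0$ end of this range, which is why it reproduces the simple regression line below.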
Here are the coefficients for the simple regression model (the dashed bold line in the plot):
> lm(y ~ x, data=dat)
Call:
lm(formula = y ~ x, data = dat)
Coefficients:
(Intercept) x
0.008333 0.008643
As you can see, the coefficients here are identical to what we obtained in the mixed model. This is exactly what we expected to find, since as you already noted, we have an estimate of 0 variance for the random intercepts, making the previously mentioned ratio/intra-class correlation 0. So the mixed model estimates in this case are just the simple linear regression estimates, and as we can see in the plot, the slope here is far less pronounced than the within-cluster slopes.
This brings us to one final conceptual issue...
Why is the variance of the random intercepts estimated to be 0?
The answer to this question has the potential to become a little technical and difficult, but I'll try to keep it as simple and nontechnical as I can (for both our sakes!). But it will maybe still be a little long-winded.
I mentioned earlier the notion of intra-class correlation. This is another way of thinking about the dependence in $y$ (or, more correctly, the errors of the model) induced by the clustering structure. The intra-class correlation tells us how similar on average are two errors drawn from the same cluster, relative to the average similarity of two errors drawn from anywhere in the dataset (i.e., may or may not be in the same cluster). A positive intra-class correlation tells us that errors from the same cluster tend to be relatively more similar to each other; if I draw one error from a cluster and it has a high value, then I can expect above chance that the next error I draw from the same cluster will also have a high value. Although somewhat less common, intra-class correlations can also be negative; two errors drawn from the same cluster are less similar (i.e., further apart in value) than would typically be expected across the dataset as a whole. All of this intra-class correlation business is just a useful alternative way of describing the dependence in the data.
The mixed model we are considering is not using the intra-class correlation method of representing the dependence in the data. Instead it describes the dependence in terms of variance components. This is all fine as long as the intra-class correlation is positive. In those cases, the intra-class correlation can be easily written in terms of variance components, specifically as the previously mentioned ratio of the random intercept variance to the total variance. (See the wiki page on intra-class correlation for more info on this.) But unfortunately variance-components models have a difficult time dealing with situations where we have a negative intra-class correlation. After all, writing the intra-class correlation in terms of the variance components involves writing it as a proportion of variance, and proportions cannot be negative.
Judging from the plot, it looks like the intra-class correlation in these data would be slightly negative. (What I am looking at in drawing this conclusion is the fact that there is a lot of variance in $y$ within each cluster, but relatively little variance in the cluster means on $y$, so two errors drawn from the same cluster will tend to have a difference that nearly spans the range of $y$, whereas errors drawn from different clusters will tend to have a more moderate difference.) So your mixed model is doing what, in practice, mixed models often do in this case: it gives estimates that are as consistent with a negative intra-class correlation as it can muster, but it stops at the lower bound of 0 (this constraint is usually programmed into the model fitting algorithm). So we end up with an estimated random intercept variance of 0, which is still not a very good estimate, but it's as close as we can get with this variance-components type of model.
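One way to check this reading of the plot is the classic ANOVA estimator of the intra-class correlation (a Python/numpy sketch using the same 12 observations; unlike the variance-components parameterization, this estimator is free to go negative).

```python
# ANOVA (mean-squares) estimate of the intra-class correlation.
import numpy as np

y = np.array([-0.5, 0.0, 0.5, -0.6, 0.0, 0.6, -0.2, 0.1, 0.4, -0.5, -0.1, 0.4])
groups = y.reshape(4, 3)   # k = 4 clusters of size n = 3
n, k = 3, 4

msb = n * np.sum((groups.mean(axis=1) - y.mean())**2) / (k - 1)   # between-cluster MS
msw = np.sum((groups - groups.mean(axis=1, keepdims=True))**2) / (k * (n - 1))  # within MS
icc = (msb - msw) / (msb + (n - 1) * msw)
print(icc)  # approx -0.45: a negative intra-class correlation
```

The estimate is negative, which a nonnegative variance-components model cannot represent; the best it can do is the boundary value of 0 for the random-intercept variance, as seen in the lmer() output.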
So what can we do?
One option is to just go with the fixed-effects model. This would be reasonable here because these data have two separate features that are tricky for mixed models (random group effects correlated with $x$, and negative intra-class correlation).
Another option is to use a mixed model, but set it up in such a way that we separately estimate the between- and within-cluster slopes of $x$ rather than awkwardly attempting to pool them together. At the bottom of this answer I reference two papers that talk about this strategy; I follow the approach advocated in the first paper by Bell & Jones.
To do this, we take our $x$ predictor and split it into two predictors, $x_b$ which will contain only between-cluster variation in $x$, and $x_w$ which will contain only within-cluster variation in $x$. Here's what this looks like:
> dat <- within(dat, x_b <- tapply(x, f, mean)[paste(f)])
> dat <- within(dat, x_w <- x - x_b)
> dat
y x f x_b x_w
1 -0.5 2 1 3 -1
2 0.0 3 1 3 0
3 0.5 4 1 3 1
4 -0.6 -4 2 -3 -1
5 0.0 -3 2 -3 0
6 0.6 -2 2 -3 1
7 -0.2 13 3 14 -1
8 0.1 14 3 14 0
9 0.4 15 3 14 1
10 -0.5 -15 4 -14 -1
11 -0.1 -14 4 -14 0
12 0.4 -13 4 -14 1
>
> mod <- lmer(y ~ x_b + x_w + (1|f), data=dat)
> mod
Linear mixed model fit by REML
Formula: y ~ x_b + x_w + (1 | f)
Data: dat
AIC BIC logLik deviance REMLdev
6.547 8.972 1.726 -23.63 -3.453
Random effects:
Groups Name Variance Std.Dev.
f (Intercept) 0.000000 0.00000
Residual 0.010898 0.10439
Number of obs: 12, groups: f, 4
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.008333 0.030135 0.277
x_b 0.005691 0.002977 1.912
x_w 0.462500 0.036908 12.531
Correlation of Fixed Effects:
(Intr) x_b
x_b 0.000
x_w 0.000 0.000
A few things to notice here. First, the coefficient for $x_w$ is exactly the same as what we got in the fixed-effect model. So far so good. Second, the coefficient for $x_b$ is the slope of the regression we would get from regressing $y$ on just a vector of the cluster means of $x$. As such it is not quite equivalent to the bold dashed line in our first plot, which used the total variance in $x$, but it is close. Third, although the coefficient for $x_b$ is smaller than the coefficient from the simple regression model, the standard error is also substantially smaller and hence the $t$-statistic is larger. This also is unsurprising because the residual variance is far smaller in this mixed model due to the random group effects eating up a lot of the variance that the simple regression model had to deal with.
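As a cross-check on the between/within split, the fixed-effect estimates above can be reproduced with ordinary least squares, since the estimated random-intercept variance is 0 (a Python/numpy sketch of the R analysis; the lmer() call remains the authoritative fit).

```python
# Reproduce the fixed-effect estimates from the between/within split via OLS.
import numpy as np

y = np.array([-0.5, 0.0, 0.5, -0.6, 0.0, 0.6, -0.2, 0.1, 0.4, -0.5, -0.1, 0.4])
x = np.array([2, 3, 4, -4, -3, -2, 13, 14, 15, -15, -14, -13], dtype=float)
f = np.repeat([0, 1, 2, 3], 3)                     # group labels

x_b = np.array([x[f == g].mean() for g in f])      # between: cluster means of x
x_w = x - x_b                                      # within: deviations from cluster mean

# OLS of y on an intercept, x_b, and x_w (equals the lmer fixed effects here,
# because the random-intercept variance was estimated as 0)
X = np.column_stack([np.ones_like(x), x_b, x_w])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # approx [0.008333, 0.005691, 0.462500]
```

The three coefficients match the (Intercept), x_b, and x_w rows of the lmer() output to the printed precision.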
Finally, we still have an estimate of 0 for the variance of the random intercepts, for the reasons I elaborated in the previous section. I'm not really sure what all we can do about that one at least without switching to some software other than lmer(), and I'm also not sure to what extent this is still going to be adversely affecting our estimates in this final mixed model. Maybe another user can chime in with some thoughts about this issue.
References
Bell, A., & Jones, K. (2014). Explaining fixed effects: Random effects modelling of time-series cross-sectional and panel data. Political Science Research and Methods. PDF
Bafumi, J., & Gelman, A. E. (2006). Fitting multilevel models when predictors and group effects correlate. PDF
|
Big disagreement in the slope estimate when groups are treated as random vs. fixed in a mixed model
|
There are several things going on here. These are interesting issues, but it will take a fair amount of time/space to explain it all.
First of all, this all becomes a lot easier to understand if we pl
|
Big disagreement in the slope estimate when groups are treated as random vs. fixed in a mixed model
There are several things going on here. These are interesting issues, but it will take a fair amount of time/space to explain it all.
First of all, this all becomes a lot easier to understand if we plot the data. Here is a scatter plot where the data points are colored by group. Additionally, we have a separate group-specific regression line for each group, as well as a simple regression line (ignoring groups) in dashed bold:
plot(y ~ x, data=dat, col=f, pch=19)
abline(coef(lm(y ~ x, data=dat)), lwd=3, lty=2)
by(dat, dat$f, function(i) abline(coef(lm(y ~ x, data=i)), col=i$f))
The fixed-effect model
What the fixed-effect model is going to do with these data is fairly straightforward. The effect of $x$ is estimated "controlling for" groups. In other words, $x$ is first orthogonalized with respect to the group dummies, and then the slope of this new, orthogonalized $x$ is what is estimated. In this case, this orthogonalization is going to remove a lot of the variance in $x$ (specifically, the between-cluster variability in $x$), because the group dummies are highly correlated with $x$. (To recognize this intuitively, think about what would happen if we regressed $x$ on just the set of group dummies, leaving $y$ out of the equation. Judging from the plot above, it certainly seems that we would expect to have some high $t$-statistics on each of the dummy coefficients in this regression!)
So basically what this ends up meaning for us is that only the within-cluster variability in $x$ is used to estimate the effect of $x$. The between-cluster variability in $x$ (which, as we can see above, is substantial), is "controlled out" of the analysis. So the slope that we get from lm() is the average of the 4 within-cluster regression lines, all of which are relatively steep in this case.
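To make the "average of the within-cluster lines" point concrete, here is a small numeric check. The answer's code is in R, but this sketch is plain Python (hard-coding the 12 observations from the `dat` table shown later in this answer) so the arithmetic is explicit: demean $x$ and $y$ within each group, then pool the usual slope sums across groups.

```python
# Sketch: the fixed-effect slope is the pooled within-group slope,
# computed from group-demeaned x and y. Data hard-coded from `dat`.
from statistics import mean

x = [2, 3, 4, -4, -3, -2, 13, 14, 15, -15, -14, -13]
y = [-0.5, 0, 0.5, -0.6, 0, 0.6, -0.2, 0.1, 0.4, -0.5, -0.1, 0.4]
groups = [0]*3 + [1]*3 + [2]*3 + [3]*3

sxy = sxx = 0.0
for j in range(4):
    idx = [i for i in range(12) if groups[i] == j]
    mx, my = mean(x[i] for i in idx), mean(y[i] for i in idx)
    sxy += sum((x[i] - mx) * (y[i] - my) for i in idx)
    sxx += sum((x[i] - mx) ** 2 for i in idx)

within_slope = sxy / sxx
print(within_slope)  # 0.4625
```

The result, 0.4625, matches the x_w coefficient in the lmer() output further down, which (as noted there) is exactly the fixed-effect estimate.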
The mixed model
What the mixed model does is slightly more complicated. The mixed model attempts to use both within-cluster and between-cluster variability on $x$ to estimate the effect of $x$. Incidentally this is really one of the selling points of the model, as its ability/willingness to incorporate this additional information means it can often yield more efficient estimates. But unfortunately, things can get tricky when the between-cluster effect of $x$ and the average within-cluster effect of $x$ do not really agree, as is the case here. Note: this situation is what the "Hausman test" for panel data attempts to diagnose!
Specifically, what the mixed model will attempt to do here is to estimate some sort of compromise between the average within-cluster slope of $x$ and the simple regression line that ignores the clusters (the dashed bold line). The exact point within this compromising range that the mixed model settles on depends on the ratio of the random intercept variance to the total variance (also known as the intra-class correlation). As this ratio approaches 0, the mixed model estimate approaches the estimate of the simple regression line. As the ratio approaches 1, the mixed model estimate approaches the average within-cluster slope estimate.
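For a balanced random-intercept design, this compromise has a standard closed form (this is the textbook GLS/random-effects estimator, not lmer()'s actual algorithm): the slope is $(W_{xy} + \lambda B_{xy})/(W_{xx} + \lambda B_{xx})$, where $W$ and $B$ are within- and between-cluster sums and $\lambda = \sigma^2_e/(\sigma^2_e + n\,\sigma^2_u)$. Zero intercept variance gives $\lambda = 1$ (simple regression); an intra-class correlation approaching 1 gives $\lambda \to 0$ (within slope). The sketch below is plain Python with the data hard-coded from the `dat` table in this answer:

```python
# Sketch of the mixed-model compromise for a balanced random-intercept design:
#   slope(lam) = (w_xy + lam*b_xy) / (w_xx + lam*b_xx)
# where lam = sigma2_e / (sigma2_e + n_per_group * sigma2_u).
from statistics import mean

x = [2, 3, 4, -4, -3, -2, 13, 14, 15, -15, -14, -13]
y = [-0.5, 0, 0.5, -0.6, 0, 0.6, -0.2, 0.1, 0.4, -0.5, -0.1, 0.4]
groups = [0]*3 + [1]*3 + [2]*3 + [3]*3
gx, gy = mean(x), mean(y)

b_xy = b_xx = w_xy = w_xx = 0.0
for j in range(4):
    idx = [i for i in range(12) if groups[i] == j]
    mx, my = mean(x[i] for i in idx), mean(y[i] for i in idx)
    b_xy += 3 * (mx - gx) * (my - gy)   # between-cluster sums (3 obs per cluster)
    b_xx += 3 * (mx - gx) ** 2
    w_xy += sum((x[i] - mx) * (y[i] - my) for i in idx)  # within-cluster sums
    w_xx += sum((x[i] - mx) ** 2 for i in idx)

def gls_slope(lam):
    return (w_xy + lam * b_xy) / (w_xx + lam * b_xx)

print(gls_slope(1.0))  # ~0.00864: zero intercept variance -> simple regression
print(gls_slope(0.0))  # 0.4625:   the pooled within-cluster slope
```

Sweeping lam between 1 and 0 traces out the full "compromising range" between the dashed bold line and the steep within-cluster lines.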
Here are the coefficients for the simple regression model (the dashed bold line in the plot):
> lm(y ~ x, data=dat)
Call:
lm(formula = y ~ x, data = dat)
Coefficients:
(Intercept) x
0.008333 0.008643
As you can see, the coefficients here are identical to what we obtained in the mixed model. This is exactly what we expected to find, since as you already noted, we have an estimate of 0 variance for the random intercepts, making the previously mentioned ratio/intra-class correlation 0. So the mixed model estimates in this case are just the simple linear regression estimates, and as we can see in the plot, the slope here is far less pronounced than the within-cluster slopes.
This brings us to one final conceptual issue...
Why is the variance of the random intercepts estimated to be 0?
The answer to this question has the potential to become a little technical and difficult, but I'll try to keep it as simple and nontechnical as I can (for both our sakes!). But it will maybe still be a little long-winded.
I mentioned earlier the notion of intra-class correlation. This is another way of thinking about the dependence in $y$ (or, more correctly, the errors of the model) induced by the clustering structure. The intra-class correlation tells us how similar on average are two errors drawn from the same cluster, relative to the average similarity of two errors drawn from anywhere in the dataset (i.e., may or may not be in the same cluster). A positive intra-class correlation tells us that errors from the same cluster tend to be relatively more similar to each other; if I draw one error from a cluster and it has a high value, then I can expect above chance that the next error I draw from the same cluster will also have a high value. Although somewhat less common, intra-class correlations can also be negative; two errors drawn from the same cluster are less similar (i.e., further apart in value) than would typically be expected across the dataset as a whole. All of this intra-class correlation business is just a useful alternative way of describing the dependence in the data.
The mixed model we are considering is not using the intra-class correlation method of representing the dependence in the data. Instead it describes the dependence in terms of variance components. This is all fine as long as the intra-class correlation is positive. In those cases, the intra-class correlation can be easily written in terms of variance components, specifically as the previously mentioned ratio of the random intercept variance to the total variance. (See the wiki page on intra-class correlation for more info on this.) But unfortunately variance-components models have a difficult time dealing with situations where we have a negative intra-class correlation. After all, writing the intra-class correlation in terms of the variance components involves writing it as a proportion of variance, and proportions cannot be negative.
Judging from the plot, it looks like the intra-class correlation in these data would be slightly negative. (What I am looking at in drawing this conclusion is the fact that there is a lot of variance in $y$ within each cluster, but relatively little variance in the cluster means on $y$, so two errors drawn from the same cluster will tend to have a difference that nearly spans the range of $y$, whereas errors drawn from different clusters will tend to have a more moderate difference.) So your mixed model is doing what, in practice, mixed models often do in this case: it gives estimates that are as consistent with a negative intra-class correlation as it can muster, but it stops at the lower bound of 0 (this constraint is usually programmed into the model fitting algorithm). So we end up with an estimated random intercept variance of 0, which is still not a very good estimate, but it's as close as we can get with this variance-components type of model.
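One rough way to check this numerically (a sketch, not the estimator lmer() uses): compute a one-way ANOVA estimate of the intra-class correlation on the residuals of the simple regression, using the coefficients reported above. It comes out negative, consistent with the eyeball judgment here.

```python
# Rough check of the negative intra-class correlation claim: one-way ANOVA
# ICC estimate on the residuals of the simple regression y ~ x (coefficients
# taken from the lm() output above). Data hard-coded from `dat`.
from statistics import mean

x = [2, 3, 4, -4, -3, -2, 13, 14, 15, -15, -14, -13]
y = [-0.5, 0, 0.5, -0.6, 0, 0.6, -0.2, 0.1, 0.4, -0.5, -0.1, 0.4]
groups = [0]*3 + [1]*3 + [2]*3 + [3]*3
r = [y[i] - 0.008333 - 0.008643 * x[i] for i in range(12)]  # OLS residuals

k, n = 4, 3                       # clusters, observations per cluster
gm = mean(r)
ss_b = ss_w = 0.0
for j in range(k):
    idx = [i for i in range(12) if groups[i] == j]
    mj = mean(r[i] for i in idx)
    ss_b += n * (mj - gm) ** 2
    ss_w += sum((r[i] - mj) ** 2 for i in idx)
ms_b, ms_w = ss_b / (k - 1), ss_w / (k * (n - 1))

icc = (ms_b - ms_w) / (ms_b + (n - 1) * ms_w)
print(icc)  # negative: between-cluster mean square < within-cluster mean square
```

A negative ANOVA estimate is exactly the situation a nonnegative variance component cannot represent, which is why the fitted random intercept variance piles up at the 0 boundary.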
So what can we do?
One option is to just go with the fixed-effects model. This would be reasonable here because these data have two separate features that are tricky for mixed models (random group effects correlated with $x$, and negative intra-class correlation).
Another option is to use a mixed model, but set it up in such a way that we separately estimate the between- and within-cluster slopes of $x$ rather than awkwardly attempting to pool them together. At the bottom of this answer I reference two papers that talk about this strategy; I follow the approach advocated in the first paper by Bell & Jones.
To do this, we take our $x$ predictor and split it into two predictors, $x_b$ which will contain only between-cluster variation in $x$, and $x_w$ which will contain only within-cluster variation in $x$. Here's what this looks like:
> dat <- within(dat, x_b <- tapply(x, f, mean)[paste(f)])
> dat <- within(dat, x_w <- x - x_b)
> dat
y x f x_b x_w
1 -0.5 2 1 3 -1
2 0.0 3 1 3 0
3 0.5 4 1 3 1
4 -0.6 -4 2 -3 -1
5 0.0 -3 2 -3 0
6 0.6 -2 2 -3 1
7 -0.2 13 3 14 -1
8 0.1 14 3 14 0
9 0.4 15 3 14 1
10 -0.5 -15 4 -14 -1
11 -0.1 -14 4 -14 0
12 0.4 -13 4 -14 1
>
> mod <- lmer(y ~ x_b + x_w + (1|f), data=dat)
> mod
Linear mixed model fit by REML
Formula: y ~ x_b + x_w + (1 | f)
Data: dat
AIC BIC logLik deviance REMLdev
6.547 8.972 1.726 -23.63 -3.453
Random effects:
Groups Name Variance Std.Dev.
f (Intercept) 0.000000 0.00000
Residual 0.010898 0.10439
Number of obs: 12, groups: f, 4
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.008333 0.030135 0.277
x_b 0.005691 0.002977 1.912
x_w 0.462500 0.036908 12.531
Correlation of Fixed Effects:
(Intr) x_b
x_b 0.000
x_w 0.000 0.000
A few things to notice here. First, the coefficient for $x_w$ is exactly the same as what we got in the fixed-effect model. So far so good. Second, the coefficient for $x_b$ is the slope of the regression we would get from regressing $y$ on just a vector of the cluster means of $x$. As such it is not quite equivalent to the bold dashed line in our first plot, which used the total variance in $x$, but it is close. Third, although the coefficient for $x_b$ is smaller than the coefficient from the simple regression model, the standard error is also substantially smaller and hence the $t$-statistic is larger. This too is unsurprising, because the residual variance is far smaller in this mixed model, due to the random group effects eating up a lot of the variance that the simple regression model had to deal with.
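A quick check of that second point (plain Python sketch, data hard-coded from the `dat` table above): regressing $y$ on the vector of cluster means of $x$ reproduces the $x_b$ coefficient.

```python
# Sketch: slope from regressing y on the cluster means of x (the x_b column).
from statistics import mean

x = [2, 3, 4, -4, -3, -2, 13, 14, 15, -15, -14, -13]
y = [-0.5, 0, 0.5, -0.6, 0, 0.6, -0.2, 0.1, 0.4, -0.5, -0.1, 0.4]
groups = [0]*3 + [1]*3 + [2]*3 + [3]*3

# Expand each observation's x to its cluster mean (same construction as x_b)
xb = [mean(x[k] for k in range(12) if groups[k] == groups[i]) for i in range(12)]
mx, my = mean(xb), mean(y)
between_slope = (sum((xb[i] - mx) * (y[i] - my) for i in range(12))
                 / sum((xb[i] - mx) ** 2 for i in range(12)))
print(between_slope)  # ~0.005691, matching the x_b row of the lmer output
```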
Finally, we still have an estimate of 0 for the variance of the random intercepts, for the reasons I elaborated in the previous section. I'm not really sure what we can do about that, at least without switching to some software other than lmer(), and I'm also not sure to what extent this is still going to be adversely affecting our estimates in this final mixed model. Maybe another user can chime in with some thoughts about this issue.
References
Bell, A., & Jones, K. (2014). Explaining fixed effects: Random effects modelling of time-series cross-sectional and panel data. Political Science Research and Methods. PDF
Bafumi, J., & Gelman, A. E. (2006). Fitting multilevel models when predictors and group effects correlate. PDF
Big disagreement in the slope estimate when groups are treated as random vs. fixed in a mixed model
After considerable contemplation, I believe I have discovered my own answer. I believe an econometrician would call my independent variable endogenous: it is correlated with variables that also affect the dependent variable, and in this case those variables are omitted or unobserved. However, I do observe the groupings across which the omitted variable ought to vary.
I believe the econometrician would suggest a fixed effect model: in this case, a model that includes a dummy for every grouping level (or an equivalent specification that conditions the model so that the grouping dummies are not explicitly required). With a fixed effect model, the hope is that all unobserved, time-invariant variables can be controlled for by conditioning out the across-group (or across-individual) variation. Indeed, the second model in my question is precisely a fixed effect model, and as such it gives the estimate I expect.
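The dummy-variable (LSDV) formulation and the demeaning formulation mentioned here give identical slope estimates; this is the Frisch-Waugh-Lovell result. A small pure-Python sketch (data hard-coded from the `dat` table in the answer above) solves the dummy regression directly via its normal equations and recovers the within-group slope:

```python
# Sketch: least squares with one dummy per group (LSDV, no global intercept)
# gives the same slope as within-group demeaning.
x = [2, 3, 4, -4, -3, -2, 13, 14, 15, -15, -14, -13]
y = [-0.5, 0, 0.5, -0.6, 0, 0.6, -0.2, 0.1, 0.4, -0.5, -0.1, 0.4]
g = [0]*3 + [1]*3 + [2]*3 + [3]*3

# Design matrix: four group dummies plus the slope column.
X = [[1.0 if g[i] == j else 0.0 for j in range(4)] + [float(x[i])]
     for i in range(12)]

# Normal equations A b = c, solved by plain Gauss-Jordan elimination
# (the matrix is symmetric positive definite, so no pivoting is needed).
p = 5
A = [[sum(X[i][r] * X[i][c] for i in range(12)) for c in range(p)] for r in range(p)]
c = [sum(X[i][r] * y[i] for i in range(12)) for r in range(p)]
for k in range(p):
    piv = A[k][k]
    for j in range(k, p):
        A[k][j] /= piv
    c[k] /= piv
    for r in range(p):
        if r != k:
            f = A[r][k]
            for j in range(k, p):
                A[r][j] -= f * A[k][j]
            c[r] -= f * c[k]

slope_lsdv = c[4]
print(slope_lsdv)  # 0.4625, the within-group (fixed-effect) slope
```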
I welcome comments that will further illuminate this circumstance.