idx int64 1 56k | question stringlengths 15 155 | answer stringlengths 2 29.2k ⌀ |
|---|---|---|
35,501 | Non-random small sample - What conclusions can I get? | To expand on my comments, here's one approach. Someone more used to the area you're working in (and there are many here) may have a better suggestion for solving the same problems:
1) Assume that confounding variables are of two kinds.
i) The first kind (the main one) consists of confounders that are always the same for a given subject. "School effects", "teacher effects", and socio-economic variables, for example, may reasonably be assumed to be the same before and after for each subject
ii) The second kind (which may not exist for your problem) can change within subjects (these would be time-related things like 'learning effects' from having been tested before rather than from the intervention itself)
2) Assume no confounders interact with any of the effects you're interested in
A model that reflects that could be written as follows:
Let $i$ represent the subject, and let $t$ represent the time (0/1). Let $Y_{it}$ be the response for subject $i$ at time $t$. The variable $\text{Treatment}$ is $1$ for those in the treatment group and $0$ for the control
$\alpha_i$ incorporates all the individual-level confounders above.
$\gamma$ incorporates any time confounders, including the effect of the first round of testing.
$\beta$ incorporates the treatment effect - the additional before/after change in the treatment group
$Y_{it} = \alpha_i + \gamma \cdot t + \beta \text{ Treatment}\cdot t +\varepsilon_{it}$
Normally with a model like this I'd be tempted to use a mixed effects model with a random intercept, but in this case you don't have randomization. Nonetheless, because of the before/after pairing, with the assumption of no interaction of confounders with treatment you can tease out the treatment effect.
For example, if you take $D_i = Y_{i1}-Y_{i0}$, you get:
$Y_{i1} = \alpha_i + \gamma + \beta \text{ Treatment} +\varepsilon_{i1}$
$Y_{i0} = \alpha_i + 0 + 0 +\varepsilon_{i0}$
$D_i = \gamma + \beta \text{ Treatment} + \eta_i$
where $\eta_i =\varepsilon_{i1}-\varepsilon_{i0}$.
Then - assuming sample sizes are large enough, a straight two-sample test of equality of population means of the $D$'s between control and treatment should arguably pick up a treatment effect. |
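The differencing argument in this answer can be checked with a short simulation (an illustrative Python/numpy sketch; the effect sizes and confounding structure are made up): even when the groups differ strongly in the subject-level confounder $\alpha_i$, comparing the mean of $D_i$ between groups recovers $\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                    # subjects per group
beta, gamma = 2.0, 0.5      # true treatment and time effects

# Subject-level confounders alpha_i, deliberately different across groups
# to mimic non-random assignment.
alpha_ctrl = rng.normal(0.0, 3.0, n)
alpha_trt = rng.normal(5.0, 3.0, n)

def differences(alpha, treated):
    """Simulate Y_i0 and Y_i1 from the model and return D_i = Y_i1 - Y_i0."""
    y0 = alpha + rng.normal(0.0, 1.0, alpha.size)
    y1 = alpha + gamma + beta * treated + rng.normal(0.0, 1.0, alpha.size)
    return y1 - y0

d_ctrl = differences(alpha_ctrl, 0)   # E[D_i] = gamma
d_trt = differences(alpha_trt, 1)     # E[D_i] = gamma + beta

beta_hat = d_trt.mean() - d_ctrl.mean()
print(round(beta_hat, 2))  # close to beta = 2.0 despite the confounded groups
```

The subject effects $\alpha_i$ cancel in each $D_i$, which is why the group difference in mean $D$ estimates $\beta$ even though the raw outcomes are badly confounded.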
35,502 | Non-random small sample - What conclusions can I get? | In this case, I would be much more interested in the individual story of the 10 kids than in any kind of statistics.
History: Statistics with small samples. Cite Mark Twain, "History doesn't often repeat itself, but it rhymes."
Roger Koenker, Dictionary of Received Ideas of Statistics |
35,503 | Conditional vs unconditional expectation | As @whuber said, the second calculation should give you a random variable rather than a number.
Now to complete these calculations (and to see @whuber's fact) we calculate the following:
$$E(Y)=\sum_y yf(y)=1(0.5) + 2(0.5) = 1.5$$
which is what you correctly calculated before. Now, the second expectation is the following:
$$E(Y|X=x)=\sum_y y f(y|X=x)= \begin{cases}
1\times1 + 0\times 2=1 & \text{if } x = 1\\
1\times 1 + 0\times 2=1 & \text{if } x = -1\\
0\times 1 + 1\times2=2 & \text{if }x = 2
\end{cases}$$
So the value that $X$ takes determines what the expected value of $Y$ (conditioned on $X$) will be. |
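Both expectations can be computed directly from the joint pmf implied by the figures above (reconstructed here as a Python dict; assumed to match the original question's table):

```python
# Joint pmf implied by the numbers above: P(X=1, Y=1) = 0.25, etc.
joint = {(1, 1): 0.25, (-1, 1): 0.25, (2, 2): 0.5}

def p_x(x):
    """Marginal probability P(X = x)."""
    return sum(p for (xi, _), p in joint.items() if xi == x)

def e_y_given(x):
    """Conditional expectation E[Y | X = x]."""
    return sum(y * p for (xi, y), p in joint.items() if xi == x) / p_x(x)

e_y = sum(y * p for (_, y), p in joint.items())
print(e_y)                                      # 1.5
print({x: e_y_given(x) for x in (1, -1, 2)})    # {1: 1.0, -1: 1.0, 2: 2.0}
```

The first print is the single number $E(Y)$; the second shows $E(Y\mid X=x)$ taking a different value for each $x$, i.e. a random variable.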
35,504 | Conditional vs unconditional expectation | If $p_{X,Y}(x,y)$ denotes the joint probability mass function of discrete
random variables $X$ and $Y$, then the marginal mass functions are
$$\begin{align}
p_X(x) &= \sum_y p_{X,Y}(x,y)\\
p_Y(y) &= \sum_x p_{X,Y}(x,y)
\end{align}$$
and so we have that
$$E[Y] = \sum_y y\cdot p_{Y}(y) = \sum_y y\cdot \sum_xp_{X,Y}(x,y)
= \sum_x\sum_y y\cdot p_{X,Y}(x,y).\tag{1}$$
Now, the conditional probability mass function of $Y$ given that
$X = x$ is
$$p_{Y\mid X}(y \mid X=x) = \frac{p_{X,Y}(x,y)}{p_X(x)}
= \frac{p_{X,Y}(x,y)}{\sum_y p_{X,Y}(x,y)}\tag{2}$$
and
$$E[Y\mid X=x] = \sum_y y\cdot p_{Y\mid X}(y \mid X=x).\tag{3}$$
The value of this expectation depends on our choice of the value $x$
taken on by $X$ and is thus a random variable; indeed, it is a function
of the random variable $X$, and this random variable is denoted
$E[Y\mid X]$. It happens to take on values
$E[Y\mid X = x_1], E[Y\mid X=x_2], \cdots $ with probabilities
$p_X(x_1), p_X(x_2), \cdots$ and so its expected value is
$$\begin{align}E\bigl[E[Y\mid X]\bigr] &= \sum_x E[Y\mid X = x]\cdot p_X(x)
&\text{note the sum is w.r.t}~x\\
&= \sum_x \left[\sum_y y\cdot p_{Y\mid X}(y \mid X=x)\right]\cdot p_X(x)
&\text{using}~ (3)\\
&= \sum_x \left[\sum_y y\cdot \frac{p_{X,Y}(x,y)}{p_X(x)}\right]\cdot p_X(x)
&\text{using}~ (2)\\
&= \sum_x \sum_y y\cdot p_{X,Y}(x,y)\\
&= E[Y] &\text{using}~(1)
\end{align}$$
In general, the number $E[Y\mid X = x]$ need not equal
the number $E[Y]$ for any $x$. But, if $X$ and $Y$ are
independent random variables and so $p_{X,Y}(x,y) = p_X(x)p_Y(y)$
for all $x$ and $y$, then
$$p_{Y\mid X}(y \mid X=x) = \frac{p_{X,Y}(x,y)}{p_X(x)}
= \frac{p_X(x)p_Y(y)}{p_X(x)} = p_Y(y)\tag{4}$$
and so $(3)$ gives
$$E[Y\mid X=x] = \sum_y y\cdot p_{Y\mid X}(y \mid X=x)
= \sum_y y\cdot p_Y(y) = E[Y]$$
for all $x$, that is, $E[Y\mid X]$ is a degenerate random
variable that equals the number $E[Y]$ with probability $1$.
In your particular example, BabakP's answer after correction
by moderator whuber shows that $E[Y\mid X]$ is a random
variable that takes on values $1, 1, 2$ with probabilities
$0.25, 0.25, 0.5$ respectively and so its expectation is
$0.25\times 1 + 0.25\times 1 + 0.5\times 2 = 1.5$ while the
$Y$ itself is a random variable taking on values $1$ and $2$
with equal probability $0.5$ and so $E[Y] = 1\times 0.5 + 2\times 0.5 = 1.5$
as indeed one expects from the law of iterated expectation
$$E\left[E[Y\mid X]\right] = E[Y].$$
If the joint pmf was intended
to illustrate the difference between conditional
expectation and expectation, then it was a spectacularly bad choice
because the random variable $E[Y\mid X]$ turns out to have the
same distribution as the random variable $Y$, and so the expected
values are necessarily the same. More generally, $E[Y\mid X]$ does
not have the same distribution as $Y$ but their expected values are
the same.
Consider, for example, the joint pmf
$$\begin{array}{c|ccc}
 & X=1 & X=-1 & X=2 \\
Y=1 & 0.2 & 0.2 & 0.1 \\
Y=2 & 0.2 & 0.1 & 0.2
\end{array}$$
for which the conditional pmfs of $Y$ are
$$X=1: \qquad p_{Y\mid X}(1\mid X = 1) = \frac{1}{2}, \quad p_{Y\mid X}(2\mid X = 1) = \frac{1}{2}\\
X=-1: \qquad p_{Y\mid X}(1\mid X = -1) = \frac{2}{3}, \quad p_{Y\mid X}(2\mid X = -1) = \frac{1}{3}\\
X=2: \qquad p_{Y\mid X}(1\mid X = 2) = \frac{1}{3}, \quad p_{Y\mid X}(2\mid X = 2) = \frac{2}{3}$$
the conditional means are
$$\begin{align}
E[Y\mid X = 1] &= 1\times \frac{1}{2} + 2 \times \frac{1}{2} = \frac{3}{2}\\
E[Y\mid X = -1] &= 1\times \frac{2}{3} + 2 \times \frac{1}{3} = \frac{4}{3}\\
E[Y\mid X = 2] &= 1\times \frac{1}{3} + 2 \times \frac{2}{3} = \frac{5}{3}
\end{align}$$
that is, $E[Y\mid X]$ is a random variable that takes on values
$\frac{3}{2}, \frac{4}{3}, \frac{5}{3}$ with probabilities
$\frac{4}{10}, \frac{3}{10}, \frac{3}{10}$ respectively which is
not the same as the distribution of $Y$. Note also that $E[Y] = \frac{3}{2}$
happens to equal $E[Y\mid X=1]$ but not the
other two conditional expectations. While $E[Y\mid X]$ and $Y$ have
different distributions, their expected values are the same:
$$E\left[E[Y\mid X]\right] = \frac{3}{2}\times\frac{4}{10}
+\frac{4}{3}\times\frac{3}{10} + \frac{5}{3}\times \frac{3}{10}
= \frac{3}{2} = E[Y].$$ |
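The second example's arithmetic can be verified exactly with a short Python sketch using exact fractions, confirming both the conditional means and the law of iterated expectation:

```python
from fractions import Fraction as F

# Joint pmf of the second example, keyed as (x, y) and stored exactly.
joint = {(1, 1): F(2, 10), (-1, 1): F(2, 10), (2, 1): F(1, 10),
         (1, 2): F(2, 10), (-1, 2): F(1, 10), (2, 2): F(2, 10)}

def p_x(x):
    """Marginal probability P(X = x)."""
    return sum(p for (xi, _), p in joint.items() if xi == x)

def e_y_given(x):
    """E[Y | X = x] computed from the joint pmf."""
    return sum(y * p for (xi, y), p in joint.items() if xi == x) / p_x(x)

cond = [e_y_given(x) for x in (1, -1, 2)]
print(cond)   # [Fraction(3, 2), Fraction(4, 3), Fraction(5, 3)]

# Law of iterated expectation: E[E[Y|X]] equals E[Y] = 3/2.
iterated = sum(e_y_given(x) * p_x(x) for x in (1, -1, 2))
e_y = sum(y * p for (_, y), p in joint.items())
print(iterated == e_y == F(3, 2))   # True
```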
35,505 | LASSO in R for variable selection: how to choose the tuning parameter | By calling cv.glmnet with default arguments you're k-fold cross-validating on lambda with k = 10. The fitted model will use the 1-standard-error-from-min value of lambda by default, and you can get the value by calling cv.glmnet.object$lambda.1se. See page 5 of the vignette: http://cran.r-project.org/web/packages/glmnet/glmnet.pdf. |
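The 1-standard-error rule behind lambda.1se can be illustrated with made-up cross-validation results (a Python sketch with hypothetical numbers; cv.glmnet computes the same quantities for you in R):

```python
import numpy as np

# Hypothetical CV output: one mean error and one standard error per lambda,
# with lambdas sorted from strongest to weakest penalty (as in glmnet).
lambdas = np.array([1.0, 0.5, 0.25, 0.1, 0.05, 0.01])
cv_mean = np.array([1.30, 1.10, 0.98, 0.95, 0.96, 0.99])
cv_se = np.array([0.05, 0.05, 0.04, 0.04, 0.05, 0.06])

i_min = int(np.argmin(cv_mean))
lambda_min = float(lambdas[i_min])          # analogue of lambda.min

# 1-SE rule: the largest lambda whose CV error is within one standard
# error of the minimum -- a sparser, more conservative model.
threshold = cv_mean[i_min] + cv_se[i_min]
lambda_1se = float(lambdas[np.nonzero(cv_mean <= threshold)[0][0]])

print(lambda_min, lambda_1se)   # 0.1 0.25
```

Here the error curve is nearly flat past lambda = 0.25, so the 1-SE rule trades a statistically negligible increase in CV error for a model with fewer selected variables.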
35,506 | Converting odds ratios to Cohen's d for meta analysis | Hi Erica and welcome to the site. Have a look at this (page 3) document and this paper. The basic formula for the conversion is
$$
d=\mathrm{LogOR}\times \frac{\sqrt{3}}{\pi}
$$
Applying the delta method, we get the following expression for the variance of $d$ (the standard error of $d$ is just the square root of its variance):
$$
\mathrm{Var}_{d}=\mathrm{Var}_{\mathrm{LogOR}}\times \frac{3}{\pi^{2}}
$$
Where $\mathrm{LogOR}$ denotes the log of the odds ratio and $\mathrm{Var}_{\mathrm{LogOR}}$ denotes the variance of the log odds ratio.
To get the variance of the log odds ratio, you can use the information given by the confidence interval. To get the standard error of the log odds ratio, use the following formula:
$$
\mathrm{SE}_{\mathrm{LogOR}}=\frac{\log(\mathrm{CI}_{upper}) - \log(\mathrm{CI}_{lower})}{2\times z_{1-\alpha/2}}
$$
Where $\mathrm{CI}_{upper}$ denotes the upper and $\mathrm{CI}_{lower}$ the lower bound of the confidence interval for the odds ratio (as given in the papers) and $z_{1-\alpha/2}$ is the $1-\alpha/2$ quantile of the standard normal distribution. For a 95%-CI, $\alpha = 0.05$ and $z_{1-\alpha/2}\approx 1.96$. To get the variance of the log odds ratio, just square the standard error. In your example, the standard error of the log odds ratio is about $0.247$. Hence, the variance of the log odds ratio is $0.247^{2}\approx0.061$. To calculate the confidence interval of $d$, you need the standard error of $d$, which is simply
$$
\mathrm{SE}_{d}=\mathrm{SE}_{\mathrm{LogOR}}\times\frac{\sqrt{3}}{\pi}
$$ |
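The conversion formulas above translate directly into code (a sketch; the odds ratio and confidence interval below are hypothetical values, not taken from the question):

```python
import math

def or_to_d(odds_ratio, ci_lower, ci_upper, z=1.959964):
    """Convert an odds ratio and its confidence interval to Cohen's d
    and the standard error of d, via d = log(OR) * sqrt(3) / pi.
    z defaults to the 0.975 standard normal quantile (95% CI)."""
    factor = math.sqrt(3) / math.pi
    log_or = math.log(odds_ratio)
    se_log_or = (math.log(ci_upper) - math.log(ci_lower)) / (2 * z)
    return log_or * factor, se_log_or * factor

# Hypothetical study: OR = 2.5 with 95% CI [1.5, 4.2]
d, se_d = or_to_d(2.5, 1.5, 4.2)
ci_d = (d - 1.959964 * se_d, d + 1.959964 * se_d)
print(round(d, 3), round(se_d, 3))
```

From `se_d` you can also form a confidence interval for $d$ (as `ci_d` above) or the variance `se_d ** 2` needed by most meta-analysis software.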
35,507 | In MCMC, how is burn-in time chosen? | There are several diagnostics, including the Geweke Diagnostic, the Heidelberg and Welch Diagnostic, the Raftery and Lewis Diagnostic, and the Gelman and Rubin Multiple Sequence Diagnostic. Also, visual examination of the trace plot can help. All of these are only indications, not guarantees.
You might check out:
http://www.people.fas.harvard.edu/~plam/teaching/methods/convergence/convergence_print.pdf or
http://www.stat.duke.edu/courses/Fall10/sta290/Lectures/Diagnostics/param-diag.pdf
EDIT: Also, you cannot determine the burn-in length in advance. You look at your run -- as suggested above -- and if it looks like things have converged by the end of your burn-in, the burn-in you did is long enough. |
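A rough version of the Geweke idea can be sketched in a few lines (Python; note the real diagnostic uses spectral-density variance estimates, so this simplified z-score is only indicative):

```python
import numpy as np

def geweke_z(chain, first=0.1, last=0.5):
    """Compare the mean of the early part of a chain with the mean of the
    late part. Simplification: plain sample variances are used instead of
    the spectral-density estimates of the real Geweke diagnostic."""
    a = chain[: int(first * len(chain))]
    b = chain[-int(last * len(chain)):]
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)

rng = np.random.default_rng(1)
stationary = rng.normal(0.0, 1.0, 5000)                  # no transient
drifting = np.concatenate([np.linspace(5.0, 0.0, 2500),  # long transient
                           rng.normal(0.0, 1.0, 2500)])

print(round(abs(geweke_z(stationary)), 2))  # small: consistent with convergence
print(round(abs(geweke_z(drifting)), 2))    # large: burn-in not yet discarded
```

As the answer stresses, a small z-score is only an indication, not a guarantee, that the chain has reached its stationary distribution.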
35,508 | In MCMC, how is burn-in time chosen? | I would run the MCMC many times (with different starting values) and plot the log-likelihood along with parameter estimates across time (or iteration number). Hopefully you see a trend for what the iteration number is for the chain to enter the stationary distribution. I would then use this value (and add a little more to be conservative) as the burn-in time.
Of course there is no guarantee this will work across all scenarios, or that you have entered the true stationary distributions in your simulations. Therefore this advice should be taken with a grain of salt. |
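The multi-start comparison this answer suggests is essentially what the Gelman-Rubin statistic formalizes; a minimal sketch follows (Python, classic R-hat without the modern split/rank refinements):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m equal-length chains.
    Values close to 1 are consistent with convergence."""
    chains = np.asarray(chains)               # shape (m, n)
    m, n = chains.shape
    b = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * w + b / n         # pooled variance estimate
    return float(np.sqrt(var_hat / w))

rng = np.random.default_rng(2)
mixed = [rng.normal(0.0, 1.0, 2000) for _ in range(4)]
stuck = [rng.normal(mu, 1.0, 2000) for mu in (0.0, 0.0, 3.0, 3.0)]

print(round(gelman_rubin(mixed), 3))  # close to 1
print(round(gelman_rubin(stuck), 3))  # well above 1: chains disagree
```

A common rule of thumb is to worry when R-hat exceeds roughly 1.1, though as with the other diagnostics this is indicative rather than a guarantee.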
35,509 | How are the convergence conditions / KKT conditions for the soft-margin SVM derived | Unfortunately, I am late a couple of years, but after reading Ng's lecture notes, I was asking myself the same question. The complementarity conditions you have listed follow from the other KKT conditions, namely:
$$
\begin{align}
\alpha_i &\geq 0 \tag{1},\\
g_i(w) &\leq 0 \tag{2},\\
\alpha_i g_i(w) &= 0 \tag{3},\\
r_i &\geq 0 \tag{4},\\
\xi_i &\geq 0 \tag{5},\\
r_i \xi_i &= 0 \tag{6},
\end{align}
$$
where
$$
g_i(w) = - y^{(i)} \left(w^T x^{(i)} + b\right) +1 -\xi_i.
$$
Furthermore, from
$$
\begin{equation}
\frac{\partial \mathcal{L}}{\partial \xi_i} \overset{!}{=} 0,
\end{equation}
$$
we obtain the relation
$$
\alpha_i = C - r_i \tag{7}.
$$
Now we can distinguish the following cases:
$\alpha_i = 0 \implies r_i = C \implies \xi_i = 0$ (from Eq. (7) and (6)), which together with Eq. (2) gives
$$
\begin{equation}
y^{(i)} \left(w^T x^{(i)} + b\right) \geq 1 \tag{8}
\end{equation}
$$
$0 < \alpha_i < C \implies r_i > 0 \implies \xi_i = 0$ (from Eq. (7) and (6)), which, since $\alpha_i > 0$, via Eq. (3) gives
$$
\begin{equation}
y^{(i)} \left(w^T x^{(i)} + b\right) = 1 \tag{9}
\end{equation}
$$
And finally $\alpha_i = C \implies r_i = 0$ (from Eq. (7)), so Eq. (6) no longer forces $\xi_i = 0$ and only $\xi_i \geq 0$ remains. Since $\alpha_i > 0$, Eq. (3) makes Eq. (2) hold with equality:
$$
\begin{equation}
\xi_i = 1 - y^{(i)} \left(w^T x^{(i)} + b\right),
\end{equation}
$$
which can only be fulfilled simultaneously with Eq. (5) if
$$
\begin{equation}
y^{(i)} \left(w^T x^{(i)} + b\right) \leq 1. \tag{10}
\end{equation}
$$
Note that points satisfying Eq. (8) do not contribute to the SVM, as they are classified with sufficient confidence ($\alpha_i = 0$, as in the linearly separable case). For $0 < \alpha_i < C$ (so $\xi_i = 0$) the points lie exactly on the margin, and for $\alpha_i = C$ the points lie within the margin (where, depending on the value of $\xi_i$, they are classified either correctly or incorrectly). |
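The three cases can be exercised numerically on a tiny hand-worked problem (a Python sketch; the two-point dataset and its known solution $w=1$, $b=0$, $\alpha_i=0.5$ for $C=1$ are chosen purely for illustration):

```python
def kkt_case(alpha, margin, xi, C, tol=1e-9):
    """Classify a training point by the three cases above, where
    margin = y_i * (w^T x_i + b). Raises AssertionError if the stated
    implications are violated."""
    r = C - alpha                          # Eq. (7)
    assert -tol <= alpha <= C + tol and r >= -tol and xi >= -tol
    if alpha < tol:                        # Case 1: alpha = 0
        assert xi < tol and margin >= 1 - tol
        return "on or outside margin"
    if alpha < C - tol:                    # Case 2: 0 < alpha < C
        assert xi < tol and abs(margin - 1) < tol
        return "exactly on margin"
    assert margin <= 1 + tol               # Case 3: alpha = C
    return "inside margin"

# Two points x = -1 (y = -1) and x = +1 (y = +1) with C = 1: the soft-margin
# solution works out to w = 1, b = 0, alpha = (0.5, 0.5), xi = (0, 0).
w, b, C = 1.0, 0.0, 1.0
for x, y, a, xi in [(-1.0, -1.0, 0.5, 0.0), (1.0, 1.0, 0.5, 0.0)]:
    print(kkt_case(a, y * (w * x + b), xi, C))
```

Both points fall in Case 2 ($0 < \alpha_i < C$, on the margin), matching Eq. (9); feeding in values such as $\alpha_i = C$ with $\xi_i > 0$ exercises Case 3 instead.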
35,510 | How are the convergence conditions / KKT conditions for the soft-margin SVM derived | Considering a general primal problem:
$$\min_w f(w)$$
subject to
$$g_i(w)\leq 0 \forall i, h_j(w)=0 \forall j$$
This gives us the Lagrangian:
$$ \mathcal{L}(w,\alpha,\beta) = f(w) +\sum_i\alpha_i g_i(w) + \sum_j\beta_j h_j(w) $$
The KKT complementarity condition says that $\alpha_i^* g_i(w^*) = 0 \ \forall i$.
Now let us state the soft margin SVM:
$$\min_w \frac 12 w^Tw + C\sum_{i=1}^m \xi_i$$
subject to
$$y_i(w^Tx_i+b)\geq 1-\xi_i \forall i \in \{1,2,..,m\}$$
$$\xi_i\geq 0 \forall i \in \{1,2,..,m\}$$
It is important to notice that both of these constraints are inequalities. This means both of them will appear in the KKT conditions. This resolves your concern:
I am troubled that $\xi_i$ doesn't appear in the complementary condition.
(It does).
The Lagrangian can then be written as:
$$\mathcal{L}(w,b,\xi,\alpha,r) = \frac{1}{2}w^Tw + C\sum_{i=1}^m{\xi_i} - \sum_{i=1}^m \alpha_i [y_i (w^Tx_i+b)-1+\xi_i] - \sum_{i=1}^m r_i \xi_i$$
We have by setting derivative wrt $\xi_i$ to 0:
$$\alpha_i+r_i=C \forall i \in \{1,2,..,m\}$$
Applying the KKT conditions at the optimal point:
$$\alpha_i[y_i(w^Tx_i+b)-1+\xi_i] = 0 \forall i \in \{1,2,..,m\}$$
$$r_i\xi_i = 0 \forall i \in \{1,2,..,m\}$$
Case 1:
$$\alpha_i=0\implies r_i=C\implies \xi_i = 0 \implies y_i(w^Tx_i+b)-1+0\geq 0 $$ $$ \implies y_i(w^Tx_i+b)\geq1$$
Case 2:
$$0<\alpha_i<C\implies 0<r_i<C\implies \xi_i = 0 \implies y_i(w^Tx_i+b)-1+0=0 $$ $$ \implies y_i(w^Tx_i+b)=1$$
Case 3:
$$\alpha_i=C\implies r_i=0\implies \xi_i \geq 0 \implies y_i(w^Tx_i+b)-1+\xi_i=0, \xi_i\geq 0$$ $$\implies y_i(w^Tx_i+b)\leq1$$ | How are the convergence conditions / KKT conditions for the soft-margin SVM derived | Considering a general primal problem:
35,511 | How does the generalized linear model generalize the general linear model? | Consider a case where your response variable is a set of 'successes' and 'failures' (also represented as 'yeses' and 'nos', $1$s and $0$s, etc.). If this were true, it cannot be the case that your error term is normally distributed. Instead, your error term would be Bernoulli, by definition. Thus, one of the assumptions that are alluded to is violated. Another such assumption is that of homoskedasticity, but this would be violated as well, because the variance is a function of the mean. So we can see that the (OLS) GLM is inappropriate for this case.
Note that, for a typical linear regression model, what you are predicting (i.e., $\hat y_i$) is $\mu_i$, the mean of the conditional normal distribution of the response at that exact spot where $X=x_i$. What we need in this case is to predict $\hat\pi_i$, the probability of 'success' at that spot. So we think of our response distribution as Bernoulli, and we are predicting the parameter that controls the behavior of that distribution. There is one important complication here, however. Specifically, there will be some values for $\bf X$ that, in combination with your estimates $\boldsymbol\beta$ will yield predicted values of $\hat y_i$ (i.e, $\hat\pi_i$) that will be either $<0$ or $>1$. But this is impossible, because the range of $\pi$ is $(0,~1)$. Thus we need to transform the parameter $\pi$ so that it can range $(-\infty,~\infty)$, just as the right hand side of your GLiM can. Hence, you need a link function.
At this point, we have stipulated a response distribution (Bernoulli) and a link function (perhaps the logit transformation). We already have a structural part of our model: $\bf X \boldsymbol \beta$. So now we have all the required parts of our model. This is now the generalized linear model, because we have 'relaxed' the assumptions about our response variable and the errors.
To answer your specific questions more directly, the generalized linear model relaxes assumptions about $\bf Y$ and $\bf U$ by positing a response distribution (in the exponential family) and a link function that maps the parameter in question to the interval $(-\infty,~\infty)$.
For more on this topic, it may help you to read my answer to this question: Difference between logit and probit models.
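To make the link-function point concrete, here is a minimal Python sketch (my illustration; the function names are mine): the logit maps $(0,1)$ onto $(-\infty,\infty)$, and its inverse guarantees that fitted probabilities stay inside $(0,1)$ no matter what $\bf X\boldsymbol\beta$ is.

```python
from math import log, exp

def logit(p):
    """Link: maps a probability in (0, 1) to the whole real line."""
    return log(p / (1 - p))

def inv_logit(eta):
    """Inverse link: maps any linear predictor back into (0, 1)."""
    return 1.0 / (1.0 + exp(-eta))

# the round trip recovers the probability, and even an extreme linear
# predictor cannot produce a fitted value outside (0, 1):
for eta in (-30.0, -1.0, 0.0, 1.0, 30.0):
    p = inv_logit(eta)
    assert 0.0 < p < 1.0
```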
35,512 | How to interpret odds ratio? | It means that the odds of a case having had exposure #1 are 1.5 times the odds of its having the baseline exposure. This is not the same as being 1.5 times as probable: odds are not the same as probability (odds of 2:1 against means a probability of $\frac{1}{3}$). So it comes down to what you mean by 'likely'.
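To see why "times as likely" is ambiguous, a quick Python sketch (my numbers, chosen for illustration): an odds ratio of 1.5 does not make the event 1.5 times as probable.

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

p_control = 1 / 3          # control's probability of exposure (odds of 1:2)
OR = 1.5                   # the reported odds ratio
p_case = prob(OR * odds(p_control))

# the case's odds are 1.5x the control's, but the probability is not:
# p_case = 3/7 ≈ 0.429, so p_case / p_control ≈ 1.29, not 1.5
```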
35,513 | How to interpret odds ratio? | In answer to the second part of your question, what you say is correct (assuming that by "likely" you mean in terms of odds not probability). Both odds ratios should be relative to the controls.
However, you should check carefully the definition of the exposures - it is possible that having the second exposure is only possible if you have already had the first exposure, in which case the second odds ratio is not the relative odds of having an exposure at time point 2, but rather of having two exposures. If you can provide more information on the methods, I can give a clearer answer specific to your case.
35,514 | How to interpret odds ratio? | Your language, "cases are 1.5 times as likely to have exposure 1 than the controls" is a fine description of the interpretation of an odds ratio. As some have noted "likely" is something of an ambiguous phrase, though I doubt anyone in epidemiology is going to raise an eyebrow at your language.
You might consider either "The odds of having exposure 1 in cases was 1.5 times that of controls" or some such as a slightly more precise wording.
Your language, "cases are 1.5 times as likely to have exposure 1 than the controls" is a fine description of the interpretation of an odds ratio. As some have noted "likely" is something of an ambiguous phrase, though I doubt anyone in epidemiology is going to raise an eyebrow at your language.
You might consider either "The odds of having exposure 1 in cases was 1.5 times that of controls" or some such as a slightly more precise wording. | How to interpret odds ratio?
Your language, "cases are 1.5 times as likely to have exposure 1 than the controls" is a fine description of the interpretation of an odds ratio. As some have noted "likely" is something of an ambiguo |
35,515 | Homework: Bayesian Data Analysis: Priors on both binomial parameters | I did all of the questions from the first four chapters six years ago. Here's what I have:
$p(\mu,\theta)\propto\left|\frac{\partial \lambda}{\partial \mu}\right|p(\lambda,\theta)=\mu^{-1}.$
So
$\begin{array}{lrl}p\left(N,\theta\right) & = & \int_{0}^{\infty}p\left(\mu,N,\theta\right)\mathrm{d}\mu\\ & = & \int_{0}^{\infty}p\left(\mu,\theta\right)\Pr\left(N\,|\,\mu\right)\mathrm{d}\mu\\ & \propto & \int_{0}^{\infty}\mu^{-1}\left(\frac{\mu^{N}}{N!}e^{-\mu}\right)\mathrm{d}\mu\\ & = & \frac{\left(N-1\right)!}{N!}=N^{-1}\end{array}$
You don't need to be worried that $p(N,\theta)$ doesn't depend on $\theta$. This just means that the prior for $\theta$ is uniform on $[0,1]$, which is cool for a Bernoulli parameter.
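The integral step can be checked numerically; this little sketch (mine) approximates $\int_0^\infty \mu^{-1}\frac{\mu^{N}}{N!}e^{-\mu}\,\mathrm{d}\mu$ by the trapezoid rule and recovers $N^{-1}$.

```python
from math import exp, factorial

def integrand(mu, N):
    # mu^{-1} * Pr(N | mu) = mu^{N-1} e^{-mu} / N!
    return mu ** (N - 1) * exp(-mu) / factorial(N)

def marginal(N, upper=80.0, step=0.001):
    """Trapezoid-rule approximation of the integral over mu;
    the Gamma tail beyond `upper` is negligible for small N."""
    m = int(upper / step)
    vals = [integrand(i * step, N) for i in range(m + 1)]
    return step * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# marginal(N) comes out at 1/N, matching (N-1)!/N! above
```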
$p(\mu,\theta)\propto\left|\frac{\partial \lambda}{\partial \mu}\right|p(\lambda,\theta)=\mu^{-1}.$
So
$\begi | Homework: Bayesian Data Analysis: Priors on both binomial parameters
I did all of the questions from the first four chapters six years ago. Here's what I have:
$p(\mu,\theta)\propto\left|\frac{\partial \lambda}{\partial \mu}\right|p(\lambda,\theta)=\mu^{-1}.$
So
$\begin{array}{lrl}p\left(N,\theta\right) & = & \int_{0}^{\infty}p\left(\mu,N,\theta\right)\mathrm{d}\mu\\ & = & \int_{0}^{\infty}p\left(\mu,\theta\right)\Pr\left(N\,|\,\mu\right)\mathrm{d}\mu\\ & \propto & \int_{0}^{\infty}\mu^{-1}\left(\frac{\mu^{N}}{N!}e^{-\mu}\right)\mathrm{d}\mu\\ & = & \frac{\left(N-1\right)!}{N!}=N^{-1}\end{array}$
You don't need to be worried that $p(N,\theta)$ doesn't depend on $\theta$. This just means that the prior for $\theta$ is uniform on $[0,1]$, which is cool for a Bernoulli parameter. | Homework: Bayesian Data Analysis: Priors on both binomial parameters
I did all of the questions from the first four chapters six years ago. Here's what I have:
$p(\mu,\theta)\propto\left|\frac{\partial \lambda}{\partial \mu}\right|p(\lambda,\theta)=\mu^{-1}.$
So
$\begi |
35,516 | How to statistically compare two algorithms across three datasets in feature selection and classification? | Unless your algorithms have huge differences in performance and you have huge numbers of test cases, you won't be able to detect differences by just looking at the performance.
However, you can make use of a paired design:
run all three algorithms on exactly the same train/test split of a data set, and
do not aggregate the test results into % correct, but keep them at the single test case level as correct or wrong.
For the comparison, have a look at McNemar's test. The idea behind exploiting a paired design here is that all cases that both algorithms got right and those that both got wrong do not help you distinguishing the algorithms. But if one algorithm is better than the other, there should be many cases the better algorithm got right but not the worse, and few that were predicted correctly by the worse method but wrong by the better one.
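As a sketch of how the discordant pairs drive the test, here is a pure-Python version of McNemar's chi-square (my illustration; for small discordant counts you would use the exact binomial form instead).

```python
from math import erf, sqrt

def mcnemar(b, c, continuity=True):
    """McNemar's test from the discordant counts.

    b: cases only algorithm A got right; c: cases only algorithm B got right.
    Returns the chi-square statistic (1 df) and its p-value."""
    num = (abs(b - c) - 1) ** 2 if continuity else (b - c) ** 2
    chi2 = num / (b + c)
    # chi-square(1 df) survival function via the standard normal CDF:
    # P(X > x) = 2 * (1 - Phi(sqrt(x)))
    p = 2 * (1 - 0.5 * (1 + erf(sqrt(chi2) / sqrt(2))))
    return chi2, p

# e.g. 15 cases only A got right vs 5 cases only B got right:
chi2, p = mcnemar(15, 5)    # chi2 = 4.05, p ≈ 0.044
```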
Also, Cawley and Talbot: On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, JMLR, 2010, 11, 2079-2107. is highly relevant.
Because of the random aspects of your algorithms, you'll also want to check the same split of the same data set multiple times. From that you can estimate the variation between different runs that are otherwise equal. It may be difficult to judge how different the selected sets of variables are. But if your ultimate goal is predictive performance, then you can also use the variation between predictions of the same test case during different runs to measure the stability of the resulting models.
You'll then also want to check (as indicated above) variation due to different splits of the data set and put this into relation with the first variance.
Fractions (like % correctly recognized samples) are usually assumed to be distributed binomially; in certain cases a normal approximation is possible, but the small print for this is hardly ever met in fields with wide data matrices. This has the consequence that confidence intervals are huge for small numbers of test cases. In R, binom::binom.confint calculates confidence intervals for the true proportion given no. of tests and no. of successes.
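If R is not at hand, the Wilson score interval (one of the methods binom.confint implements) is short enough to code directly; a Python sketch of mine:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% for z=1.96)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# with only 25 test cases and 20 correct, the interval is wide:
lo, hi = wilson_ci(20, 25)    # roughly (0.61, 0.91)
```

Unlike the naive Wald interval, it behaves sensibly at the boundaries (e.g. 0 successes still gives a lower limit of exactly 0 and a nonzero upper limit).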
Finally, my experience with genetic optimization for spectroscopic data (my Diplom thesis in German) suggests that you should also check the training errors. GAs tend to overfit very fast, arriving at very low training errors. Low training errors are not only overoptimistic, but they also have the consequence that the GA cannot differentiate between lots of models that seem to be equally perfect. (You may have less of a problem with that if the GA internally also randomly subsamples train and internal test sets).
Papers in English:
Beleites, Steiner, Sowa, Baumgartner, Sobottka, Schackert and Salzer: Classification of human gliomas by infrared imaging spectroscopy and chemometric image processing, Vib Spec, 2005, 38, 143 - 149.
application oriented, shows how we interpret the selected variates and that there are known "changes" in selections that are equivalent for chemical reasons.
Beleites and Salzer: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 2008, 390, 1261 - 1271.
discusses binomial confidence intervals for hit rate etc, why we could not calculate it. Instead, we use variation observed over repeated cross validation. The discussion about guesstimate for the effective sample size needs to be taken with caution, nowadays I'd say it is somewhere between misleading and wrong (I guess that is science moving on :-) ).
fig. 11 shows the independent (outer cross validation with splitting by specimen) test results from 40 different models for the same specimen. The corresponding internal estimate of the GA was 97.5% correct predictions (one model got all spectra of the specimen wrong, the remaining models were correct for all spectra - not shown in the paper). This is the problem with overfitting I mentioned above.
In this recent paper Beleites, Neugebauer, Bocklitz, Krafft and Popp: Sample Size Planning for Classification Models, Anal Chim Acta, 2013, 760, 25 - 33. we illustrate the problems with classification error estimation with too few patients/independent samples. Also illustrates how to estimate necessary patient numbers if you cannot do paired tests.
35,517 | How to statistically compare two algorithms across three datasets in feature selection and classification? | You are running feature selection with GA 10 times and every time you get a different output!
First, if you start from the same seed, you should always get the same selected feature subset. However, if you are using a random seed, you should still, most probably, get almost the same selected features. One reason for getting the same selected features is stated in your post. Also, for a fair comparison, you may use the same seeds from the runs of A in B's experiments.
Second, you may use cross-validation or bootstrapping for comparison. This way you get a more representative comparison. In this case, there is a source of variation, i.e. random training samples, which seems stronger than random seeds. Thus, the comparison may reveal which algorithm is really better.
Finally, you may use a t-test as you proposed or directly use some non-parametric tests like the Kruskal-Wallis test.
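If you take the non-parametric route, the Kruskal-Wallis H statistic is easy to compute from pooled ranks; a pure-Python sketch (my illustration; ties get average ranks, and no tie correction is applied):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic for a list of samples (no tie correction)."""
    pooled = sorted(v for g in groups for v in g)
    N = len(pooled)
    # assign each distinct value the average of its (1-based) ranks
    rank = {}
    i = 0
    while i < N:
        j = i
        while j < N and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = ((i + 1) + j) / 2   # mean of ranks i+1 .. j
        i = j
    total = sum(len(g) * (sum(rank[v] for v in g) / len(g) - (N + 1) / 2) ** 2
                for g in groups)
    return 12 / (N * (N + 1)) * total

# e.g. accuracies from 3 runs of each algorithm:
h = kruskal_h([[1, 2, 3], [4, 5, 6]])   # H = 27/7 ≈ 3.857
```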
35,518 | Testing equality of two binomial proportions (one near 100 %) | If you trust Wikipedia for rules of thumb for the validity of the normal approximation of the binomial distribution:
One rule is that both $np$ and $n(1-p)$ must be greater than $5$. However, the specific number varies from source to source, and depends on how good an approximation one wants; some sources give $10$, which gives virtually the same results as the following rule for large $n$ until $n$ is very large (ex: $x=11$, $n=7752$).
A second rule is that for $n > 5$ the normal approximation is adequate if
$$\left|(1/\sqrt{n})(\sqrt{(1-p)/p}-\sqrt{p/(1-p)})\right|<0.3$$
Another commonly used rule holds that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values, that is, if
$$ \mu \pm 3 \sigma = np \pm 3 \sqrt{np(1-p)} \in [0,n]. $$
All of those fail when $p$ is close to either $0$ or $1$. The intuitive idea is that then the distribution is:
very skewed, and
the normal approximation will place non-negligible probability mass outside the actual bounds of the binomial distribution, $[0, n]$.
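The first and third rules are easy to check programmatically; here is a Python sketch of mine, applied to the question's own groups (87 of 88 and 48 of 60):

```python
from math import sqrt

def rule_np(n, p, threshold=5):
    """Rule of thumb: both np and n(1-p) must exceed the threshold."""
    return n * p > threshold and n * (1 - p) > threshold

def rule_3sigma(n, p):
    """Rule of thumb: mean +/- 3 sd must stay inside [0, n]."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return mu - 3 * sigma >= 0 and mu + 3 * sigma <= n

# group 1: 87 of 88 -- both rules fail, p is too close to 1
print(rule_np(88, 87 / 88), rule_3sigma(88, 87 / 88))   # False False
# group 2: 48 of 60 -- both rules pass
print(rule_np(60, 48 / 60), rule_3sigma(60, 48 / 60))   # True True
```

This is why the single-group normal approximation is dubious here, even though (as the other answer notes) the pooled test under the null can still be acceptable.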
35,519 | Testing equality of two binomial proportions proportion (one near 100 %) | +1 for @Jaime. But as it happens in this case your null hypothesis is that both proportions equal a pooled figure of (87+48)/(88+60) = 0.91. With your sample size this is in the acceptable area for approximations such as this z test or the equivalent chi square test. See that the values in the "expected" (meaning expected under the null hypothesis of equal proportions) matrix below are all more than 5, usually accepted as an ok rule of thumb.
I would advocate as a simple solution a Chi square test with continuity correction - which agrees with you (low p value) that it is unlikely a common underlying proportion would produce these two observed sets of data.
> p <- (87+48)/(88+60)
> p
[1] 0.9121622
> obs <- matrix(c(87,1,48,12), nrow=2)
> obs
[,1] [,2]
[1,] 87 48
[2,] 1 12
> expected <- rbind(p * margin.table(obs,2),(1-p) * margin.table(obs,2))
> expected
[,1] [,2]
[1,] 80.27027 54.72973
[2,] 7.72973 5.27027
> chisq.test(obs)
Pearson's Chi-squared test with Yates' continuity correction
data: obs
X-squared = 13.5773, df = 1, p-value = 0.0002289
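As a cross-check, the Yates-corrected statistic reported above can be recomputed in a few lines of Python (my sketch):

```python
def yates_chi2(table):
    """Chi-square with Yates' continuity correction for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    obs = (a, b, c, d)
    # expected counts under the null of equal proportions
    exp = tuple(rows[i] * cols[j] / n for i in (0, 1) for j in (0, 1))
    return sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(obs, exp))

yates_chi2([[87, 48], [1, 12]])   # ≈ 13.5773, matching the X-squared above
```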
35,520 | What practical application is there for the Asymptotic Mean Integrated Squared Error in kernel density estimation? | The AMISE allows one to obtain an expression for the optimal bandwidth for the unknown density $f$. Unfortunately, the expression is in terms of derivatives of $f$. However, it is possible to derive a similar expression giving the optimal bandwidth for a kernel estimate of those derivatives. This is expressed in terms of even higher derivatives of $f$. And so on.
This seems like it might be an unending sequence of pointless theory. But the neat thing is, that for some sufficiently high order derivatives you can just assume that $f$ is normal. Then you can work your way back through the levels to find the bandwidth for $f$. It turns out that this works really well and almost nothing is lost provided $f$ is sufficiently smooth and enough levels of iteration are used (usually only 2 or 3 levels are needed).
The practical result is a bandwidth selection method which is general and quite robust. The most popular version of this is the Sheather-Jones plug-in method which is implemented in several software packages. In R, you can get a density estimate using the Sheather-Jones method:
density(x, bw="SJ")
That usually gives better results than the default bandwidth.
35,521 | Probability of collision (two bivariate normal distributions) | Summary
The problem is not trivial, but obtaining a solution is straightforward. Exact analytical expressions for the distribution of the inter-ship distance can be found (in terms of Bessel functions): it is the square root of a scaled non-central chi-squared variate. Provided the ships are far apart compared to the standard deviation of the position estimates, formulas for the mean and variance of this distribution provide an excellent Normal approximation. This can be used to develop either confidence intervals or a posterior distribution for the distance.
A comment describes the data:
The data that I have is 2 pairs of x,y coordinates that mark the estimated positions of 2 ships. Also, positional errors are bivariate normal with 95% probability of ship's actual position being within 1 mile of the expected position.
It will be convenient to obtain conventional parameters of the positional errors. A bivariate normal distribution with no correlation and variances of $\sigma^2$ for each of the coordinates has a total probability of $1 - \exp(-x^2/(2\sigma^2))$ within a distance $x$ of its mean. Letting $x$ be one mile and setting this expression to $0.95$ determines $\sigma^2$. In general, when the probability is $1-\alpha$ ($\alpha=0.05$ here) at a radius of $x$, then
$$\sigma^2 = \frac{x^2}{-2 \log(\alpha)}.$$
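As a quick numerical check of this formula (a Python sketch using only the standard library; the numbers follow from the 1-mile / 95% specification quoted above):

```python
import math

alpha = 0.05   # 1 - 0.95
x = 1.0        # radius in miles that carries 95% of the probability

# Per-coordinate variance from sigma^2 = x^2 / (-2 log(alpha)).
sigma2 = x**2 / (-2 * math.log(alpha))
sigma = math.sqrt(sigma2)   # ~0.4085 miles

# Verify: probability within radius x of the mean is 1 - exp(-x^2 / (2 sigma^2)),
# which should recover 0.95 exactly.
coverage = 1 - math.exp(-x**2 / (2 * sigma2))
print(round(sigma, 4), round(coverage, 4))
```

Plugging $\sigma^2$ back into the coverage expression returns $1-\alpha = 0.95$, confirming the algebra.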
Let $(X_1,Y_1)$ be the observed location of ship 1, assumed to be at the unknown location $(\mu_{x1}, \mu_{y1})$ and $(X_2,Y_2)$ the observed location of ship 2, assumed to be at $(\mu_{x2}, \mu_{y2})$. Their squared distance,
$$D^2 = (X_1 - X_2)^2 + (Y_1 - Y_2)^2,$$
is a sum of squares of two Normal variates: $X_1-X_2$ has an expectation of $\mu_{x1}-\mu_{x2}$ and a variance of $2\sigma^2 = \sigma^2 + \sigma^2$ while $Y_1-Y_2$ has an expectation of $\mu_{y1}-\mu_{y2}$ and a variance of $2\sigma^2$. This makes $D^2$ equal to $2\sigma^2$ times a non-central $\chi^2$ distribution with $\nu=2$ degrees of freedom and noncentrality parameter
$$\lambda = \frac{(\mu_{x1}-\mu_{x2})^2 + (\mu_{y1}-\mu_{y2})^2}{2\sigma^2}.$$
Consequently, $D$ itself could be called a (scaled) "noncentral $\chi$ distribution."
Calculations indicate that the mean of $D$ equals $\sqrt{2}\sigma$ times $$\frac{1}{2} e^{-\lambda /4} \sqrt{\frac{\pi }{2}} \left((2+\lambda ) \text{BesselI}\left[0,\frac{\lambda }{4}\right]+\lambda \text{BesselI}\left[1,\frac{\lambda }{4}\right]\right)$$ and (somewhat surprisingly) its raw second moment is $2\sigma^2$ times $2+\lambda$. As we would intuitively expect, the mean (upper blue curve) is close to $\sqrt{\lambda}$ (lower red curve), especially for large $\lambda$, which occurs when the ships are well separated:
From these, by matching moments, we obtain a Normal approximation to $D$. It is remarkably good when the ships are separated by several $\sigma$'s. (The Normal approximation has slightly shorter tails.) For instance, here are plots of the distribution of $D$ and its Normal approximation when the two ships are actually $5$ miles apart in the circumstances of the initial quotation:
At this resolution, they perfectly coincide. The correct probability that $D$ is less than $5$, $\Pr(D\le 5)$, is equal to $0.476912$, while the probability given by the Normal approximation is $0.476807$: just $0.0001$ off.
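The quoted probability can also be checked by direct simulation (a Monte Carlo sketch in Python, not part of the original analysis; $\sigma^2$ is taken from the 1-mile / 95% specification above):

```python
import math, random

random.seed(1)
sigma = math.sqrt(1 / (-2 * math.log(0.05)))   # ~0.4085 miles per coordinate
true_dist = 5.0                                 # actual separation in miles

n = 200_000
hits = 0
for _ in range(n):
    # Each ship's observed position has independent N(0, sigma^2) coordinate
    # errors, so the coordinate differences are N(true_dist, 2 sigma^2)
    # and N(0, 2 sigma^2).
    dx = random.gauss(true_dist, sigma * math.sqrt(2))
    dy = random.gauss(0.0, sigma * math.sqrt(2))
    if math.hypot(dx, dy) <= 5.0:
        hits += 1

print(hits / n)   # close to the exact value 0.476912
```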
However, these calculations do not directly answer the question, which is: given the observed value of $D$, what can we say about the true distance between the ships (equal to $\delta = \sqrt{(\mu_{x1}-\mu_{x2})^2 + (\mu_{y1}-\mu_{y2})^2}$)? This usually has two kinds of answers:
For any desired level of confidence, we can compute an associated confidence interval for $\delta$, or
If we adopt a prior distribution for $\delta$, we can update that distribution (via Bayes' Theorem) based on $D$ to obtain a posterior distribution.
Either method is easy and straightforward when the Normal approximation to the distribution of $D$ is good. Both require some heavy computation otherwise--but that is perhaps a discussion for another day.
35,522 | Probability of collision (two bivariate normal distributions) | I'm not sure how much this will help you but I hope it gives some pointers.
Here is a Mathematica function which computes the probability under the distributions for a circle of radius separation/2 for two ships with a normal distribution of position with variance 0.2. A variance of 0.2 is close to the 95% certainty level.
In brief, it defines a mixture distribution in 2 dimensions with covariance matrix {{0.2,0},{0,0.2}} (other covariance matrices would account for elliptical distributions), forms the probability density function for that mixture, and then numerically integrates it over the required range.
(* Uses absolute separation distance rotated to the x axis *)
probProximity[reportedSeparationMiles_, probabilityRangeMiles_] :=
 With[{dist = MixtureDistribution[{1, 1},
    {MultinormalDistribution[{-(reportedSeparationMiles/2), 0}, {{0.2, 0}, {0, 0.2}}],
     MultinormalDistribution[{ reportedSeparationMiles/2, 0}, {{0.2, 0}, {0, 0.2}}]}]},
  NIntegrate[
   PDF[dist][{x, y}]
    Boole[Abs[\[Sqrt]((0 - x)^2 + (0 - y)^2)] <= probabilityRangeMiles/2],
   {x, -(probabilityRangeMiles/2), probabilityRangeMiles/2},
   {y, -(probabilityRangeMiles/2), probabilityRangeMiles/2}]]
The probability distribution of position for two ships 5 miles apart with a 95% confidence of being within one mile of reported position.
For a range of 5 miles, the calculated value is
probProximity[5, 5]
0.464173
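By symmetry of the mixture, this integral equals the probability that a single ship reported at distance 2.5 from the midpoint (per-coordinate variance 0.2) actually lies within 2.5 miles of that midpoint, so the figure can be cross-checked by simulation (a Python sketch, not part of the original Mathematica session):

```python
import math, random

random.seed(2)
sigma = math.sqrt(0.2)   # per-coordinate standard deviation
n = 200_000
hits = 0
for _ in range(n):
    # Ship reported at (-2.5, 0); actual position has independent
    # N(0, 0.2) errors in each coordinate.
    x = random.gauss(-2.5, sigma)
    y = random.gauss(0.0, sigma)
    if math.hypot(x, y) <= 2.5:   # within 2.5 miles of the midpoint (origin)
        hits += 1

print(hits / n)   # close to the NIntegrate value 0.464173
```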
Here is the probability of proximity over a range of distances:
35,523 | Probability of collision (two bivariate normal distributions) | I've thrown up a simulation with R with increasing dimension in response to Erik's answer. This code also answers the question, by providing a numerical solution (when fixing the dimension to 2) to a special case which can be easily generalized (though it's not very efficient).
Erik proposed to look at the problem as a one-dimensional problem by choosing a coordinate system that lies on the line connecting the two ships. This can't work: as the dimension increases, the probability of being close decreases.
I'm sampling from two $d$-dimensional Gaussians with means $(1,0,\dots)$ and $(0,0,\dots)$. Covariance is the identity matrix. The code plots the frequency of the points being within 1 of each other (euclidean distance) and uses 1000 samples for each dimension.
Using simulation you can easily model ellipses simply by adjusting the covariance matrix (which my code doesn't allow, since I don't use a library for sampling from Gaussians).
require(ggplot2)
euc.dist <- function(x1, x2) {
  sqrt(sum((x1 - x2)^2))
}
drawDist <- function(d) {
v1 <- replicate(d,rnorm(1))
v2 <- replicate(d,rnorm(1))
#resample the first component of v1 to get 1,0,0,...
v1[1] <- rnorm(1,1)
euc.dist(v1,v2)
}
png("hit-prob.png")
qplot(1:10,
sapply(1:10,function(d)
mean(replicate(1000,drawDist(d) < 1))), #This is the important line
xlab="dimension",ylab="Hit Freq")
dev.off()
35,524 | Why can bigger sample size increase power of a test? | The power of the test depends on the distribution of the test statistic when the null hypothesis is false. If $R_n$ is the rejection region for the test statistic under the null hypothesis and for sample size $n$, the power is $$\beta = \mbox{Prob}(X_n \in R_n | H_A)$$ where $H_A$ is the alternative hypothesis and $X_n$ is the test statistic for a sample of size $n$. I am assuming a simple alternative --- although in practice, we usually care about a range of parameter values.
Typically, a test statistic is some sort of average whose long term behaviour is governed by the strong and/or weak law of large numbers. As the sample size gets large, the distribution of the test statistic approaches that of a point mass --- under either the null or the alternative hypotheses.
Thus, as $n$ gets large, the acceptance region (complement of the rejection region), gets smaller and closer to the value of the null. Intuitively, probable outcomes under the null and probable outcomes under the alternative no longer overlap - meaning that the rejection probability approaches 1 (under $H_A$) and 0 under $H_0$. Intuitively, increasing sample size is like increasing the magnification of a telescope. From a distance, two dots might seem indistinguishably close: with the telescope, you realize there is space between them. Sample size puts "probability space" between the null and alternative.
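To make this concrete, here is a sketch (in Python; the one-sided z-test and the values $\delta = 0.5$, $\sigma = 1$, $\alpha = 0.05$ are illustrative assumptions, not taken from the answer) of power rising toward 1 as $n$ grows:

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power(n, delta=0.5, sigma=1.0, z_alpha=1.6449):
    # One-sided z-test of H0: mu = 0 vs H_A: mu = delta, known sigma.
    # Reject when the standardized sample mean exceeds z_alpha; under H_A
    # that statistic is centered at sqrt(n) * delta / sigma.
    return 1 - norm_cdf(z_alpha - math.sqrt(n) * delta / sigma)

for n in (10, 30, 100):
    print(n, round(power(n), 3))   # power increases monotonically in n
```

With these assumed values, power climbs from under 0.5 at $n=10$ to essentially 1 at $n=100$: the "probability space" between null and alternative grows with $\sqrt{n}$.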
I am trying to think of an example where this does not occur --- but it is hard to imagine oneself using a test statistic whose behaviour does not ultimately lead to certainty. I can imagine situations where things don't work: if the number of nuisance parameters increases with sample size, things can fail to converge. In time series estimation, if the series is "insufficiently random" and the influence of the past fails to diminish at a reasonable rate, problems can arise as well.
35,525 | Why can bigger sample size increase power of a test? | Here's one intuitive answer: In the real world, you are almost always sampling from a finite population (although it may be very large). If you managed to measure the entire population, power would be infinite (well, 1.0, which is essentially like infinite power - you could detect any difference) - you would know the exact difference. The closer you get to the whole population (given that your sample is random) the more precise your estimate can be.
However, if you get away from random samples, this is no longer the case. Intuitively again, suppose you are testing the difference in height between adult men and adult women. One extreme way to be non-random is to test a sample of very short men (say, you sample from a population of jockeys) against a sample of very tall women (basketball players).
35,526 | Why can bigger sample size increase power of a test? | Thinking a little more about the part of the question:
Does bigger sample size always increase testing power?
If we are only talking about tests generally covered in an introductory statistics course and the conditions for those tests hold (e.g. simple random sample, central limit theorem gives an approximate normal, null hypothesis is false, etc.) then yes, increasing the sample size will increase the power. However, here are some cases where increasing the sample size may not increase the power:
If the underlying distribution is a Cauchy (undefined mean, infinite variance, CLT does not apply), then increasing the sample size may not increase the power (but I don't know what test you would be doing on such data, or even a realistic case that would follow a Cauchy).
Excessive sampling causes subjects to lose interest and stop cooperating. I remember a presentation about an election in one of the Caribbean island countries where the polling got so out of hand that all the registered voters were being surveyed on average every week and got so fed up that they stopped answering or just lied. The presentation showed that if they had used smaller samples for each survey then the population would not have been as frustrated and they probably would have received better results.
Response rates and cost. If you plan a mail-out survey and send the survey to 1,000 people but do no other follow up then you might only get 100 responses, but if you use the same money to only send out 200 surveys, but also send follow-up letters and/or offer incentives, then you may receive 150 responses, so the actual amount of data from the smaller planned study of 200 subjects will be more than that for the planned 1,000 subjects. This can also influence data quality: an in-person interview of 50 people, or a telephone interview of 100 people, may yield better quality data than a mail-out survey of 1,000.
The concept of power is only for the cases where the null hypothesis is false, so if the null is true then power will not be affected by sample size.
When taking multiple measurements per subject the concept of sample size is more complicated. Which gives more power, 10 measurements on each of 20 subjects (for a total of 200 measured values) or 2 measurements on each of 50 subjects (for a total of 100 measured values)? Often the 2nd will give more power even though the total number of measurements is smaller.
If the parameter of interest changes over time (think of election polling) and getting the bigger sample will require more time in which things could change, then that could affect the power. Think of comparing a sample of 100 taken on a single day vs. a sample of 1,000 taken over a 2-week period (and what if there is a publicised debate, scandal, etc. during those 2 weeks).
If you have a test whose type I error is not exactly alpha, and it depends on sample size, then increasing the sample size can actually decrease the power. Consider a binomial test with null that the probability is $0.5$, the alternative to test is that it is greater, and we want to test with $\alpha=0.05$. With a sample size of $n=5$ we can only reject if we see 5 successes (Type I error rate 0.03125); with a sample of $n=10$ we will reject if we see 9 or 10 successes (Type I error rate 0.01074; it would be 0.05469 if we rejected at 8 successes). If the true probability is $0.6$ then the probability of rejecting (power) with $n=5$ is $0.07776$ and with $n=10$ it is $0.04636$, so doubling the sample size decreased the power, but also decreased the type I error rate, so not really a fair comparison. Increasing the sample size to where the power is meaningful ($>80\%$) will make it much harder to find examples like this (though there are probably some where increasing n by 1 decreases the power slightly).
If you are running the wrong test where assumptions are violated (e.g. using a test that assumes equal variances when they are quite different) then your power might not increase.
If you first run a normality test and then choose a second test based on the results, then larger sample sizes are more likely to reject the normality test (even when the difference does not matter), and if the test you run as a result is less powerful than the one you would have used had normality not been rejected, then increasing the sample size could reduce the power (which is one argument against pre-testing the data for normality).
There are probably other cases which, like these, are beyond the scope of what the Wikipedia article was trying to cover.
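The binomial numbers in the list above are easy to reproduce with exact tail sums (a Python sketch using only the standard library):

```python
from math import comb

def binom_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p), summed exactly.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Type I error rates under H0: p = 0.5
print(binom_tail(5, 5, 0.5))    # 0.03125
print(binom_tail(10, 9, 0.5))   # ~0.01074
print(binom_tail(10, 8, 0.5))   # ~0.05469

# Power when the true probability is 0.6
print(binom_tail(5, 5, 0.6))    # ~0.07776
print(binom_tail(10, 9, 0.6))   # ~0.04636
```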
Does bigger sample size always increase testing power?
If we are only talking about tests generally covered in an introductory statistics cours | Why can bigger sample size increase power of a test?
Thinking a little more about the part of the question:
Does bigger sample size always increase testing power?
If we are only talking about tests generally covered in an introductory statistics course and the conditions for those tests hold (e.g. simple random sample, central limit theorem gives an approximate normal, null hypothesis is false, etc.) then yes, increasing the sample size will increase the power. However, here are some cases where increasing the sample size may not increase the power:
If the underlyng distribution is a Cauchy (undefined mean, infinite variance, CLT does not apply), then increasing the sample size may not increase the power (but I don't know what test you would be doing on such data, or even a realistic case that would follow a Cauchy).
Excessive sampling causes subjects to loose interest and stop cooperating. I remember a presentation about an election in one of the Caribean island countries where the polling got so out of hand that all the registered voters were being surveyed on average every week and got so fed up that they stopped answering or just lied. The presentation showed that if they had used smaller samples for each survey then the population would not have been as frustrated and they probably would have received better results.
Response rates and cost. If you plan a mail out survey and send the survey to 1,000 people but do no other follow up then you might only get 100 responses, but if you use the same money to only send out 200 surveys, but you also send follow-up letters and/or offer incentives then you may receive 150 responses, so the actual amount of data from the smaller planned study of 200 subjects will be more than that for the planned 1,000 subjects. This can also influence data quality, an in person interview of 50 people, or a telephone interview of 100 people may yield better quality data than a mail out survey of 1,000.
The concept of power is only for the cases where the null hypothesis is false, so if the null is true then power will not be affected by sample size.
When taking multiple measurements per subject, the concept of sample size is more complicated. Which gives more power: 10 measurements on each of 20 subjects (for a total of 200 measured values) or 2 measurements on each of 50 subjects (for a total of 100 measured values)? Often the 2nd will give more power even though the total number of measurements is smaller.
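One way to make that concrete is the standard design-effect calculation for correlated measurements. A Python sketch (the answers in this thread otherwise use R; the within-subject intraclass correlation of 0.5 is an assumed value, not something from the question):

```python
def effective_n(n_subjects, m_reps, icc):
    """Effective sample size of n_subjects * m_reps correlated measurements,
    using the standard design effect 1 + (m_reps - 1) * icc."""
    total = n_subjects * m_reps
    return total / (1 + (m_reps - 1) * icc)

# 10 measurements on each of 20 subjects vs 2 on each of 50 subjects,
# assuming a within-subject correlation (ICC) of 0.5:
print(round(effective_n(20, 10, 0.5), 1))  # 200 raw values -> 36.4 effective
print(round(effective_n(50, 2, 0.5), 1))   # 100 raw values -> 66.7 effective
```

Under this assumed ICC the design with half as many raw measurements carries nearly twice the effective information, which is the point of the paragraph above.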
If the parameter of interest changes over time (think of election polling) and getting the bigger sample will require more time in which things could change, then that could affect the power. Think of comparing a sample of 100 taken on a single day vs. a sample of 1,000 taken over a 2 week period (and what if there is a publicised debate, scandal, etc. during those 2 weeks).
If you have a test whose type I error is not exactly alpha, and it depends on sample size, then increasing the sample size can actually decrease the power. Consider a binomial test with null that the probability is $0.5$, the alternative to test is that it is greater, and we want to test with $\alpha=0.05$. With a sample size of $n=5$ we can only reject if we see 5 successes (Type I error rate 0.03125), with a sample of $n=10$ we will reject if we see 9 or 10 successes (Type I error rate 0.01074, it would be 0.05469 if we rejected at 8 successes). If the true probability is $0.6$ then the probability of rejecting (power) with $n=5$ is $0.07776$ and with $n=10$ it is $0.04636$, so doubling the sample size decreased the power, but also decreased the type I error rate, so not really a fair comparison. Increasing the sample size to where the power is meaningful ($>80\%$) will make it much harder to find examples like this (though there are probably some where increasing n by 1 decreases the power slightly).
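Those exact binomial numbers are easy to check with a quick standard-library sketch (Python here for illustration, though the surrounding answers use R):

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Type I error rates under the null p = 0.5
print(binom_tail(5, 5, 0.5))   # reject only at 5/5 successes: 0.03125
print(binom_tail(9, 10, 0.5))  # reject at 9 or 10 successes: ~0.01074
print(binom_tail(8, 10, 0.5))  # rejecting at 8 would give ~0.05469
# Power when the true probability is 0.6
print(binom_tail(5, 5, 0.6))   # ~0.07776
print(binom_tail(9, 10, 0.6))  # ~0.04636
```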
If you are running the wrong test where assumptions are violated (e.g. using a test that assumes equal variances when they are quite different) then your power might not increase.
If you first run a normality test and then choose a second test based on the results, then larger sample sizes are more likely to reject in the normality test (even when the departure from normality does not matter), and if the test you run as a result is less powerful than the one you would have run had normality not been rejected, then increasing the sample size could reduce the power (which is one argument against pre-testing the data for normality).
There are probably other cases which, like these, are beyond the scope of what the Wikipedia article was trying to cover.
35,527 | Why can bigger sample size increase power of a test? | For a more intuitive understanding look at the power.examp function in the TeachingDemos package for R.
35,528 | Why can bigger sample size increase power of a test? | Imagine a penny that's just slightly imperfect, where heads will come up a tiny bit more often than tails. If you try to detect that imperfection with, for example, just 10 flips, your chances are low because of sampling error. For instance, you might happen to get a sample where you get more tails than heads. As you increase the number of flips, the relative impact of sampling error diminishes, essentially because you're getting closer to the total size of the population and thus to a result more reflective of the true value being measured. If you flip the coin 10,000 times you're more likely to accurately detect even very small differences between heads and tails.
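A quick Monte Carlo sketch of this penny story (Python for illustration; the 0.51 bias, trial count and seed are arbitrary assumptions):

```python
import random

def frac_showing_bias(n_flips, p_heads=0.51, trials=400, seed=1):
    """Fraction of repeated experiments in which heads outnumber tails,
    i.e. in which the slight bias is visible in the sample at all."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        heads = sum(rng.random() < p_heads for _ in range(n_flips))
        if heads > n_flips - heads:
            hits += 1
    return hits / trials

print(frac_showing_bias(10))      # with 10 flips, tails often win despite the bias
print(frac_showing_bias(10_000))  # with 10,000 flips, the bias shows almost every time
```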
35,529 | Meaning of p-values for an interaction between a categorical variable (w/ >2 cats) & continuous variable | The general way to do this is with a full-reduced model test using the anova function. Refit your model without the interaction of interest (the update function can be convenient for this), then run the anova function on the full and reduced models; for a glm fit you probably want to include test="Chisq".
Something like:
model_glm3_reduced <- update( model_glm3, .~. - lg_hag:as.factor(educa) )
anova( model_glm3_reduced, model_glm3, test="Chisq" )
Here the null hypothesis is that the 2 models fit equally well (any differences due to chance) and the alternative is that the full model contains at least one term that is an improvement in the fit. Since the difference in the 2 models is only the interaction term, this gives a single test (p-value) for testing the entire interaction.
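What anova(..., test="Chisq") reports is a likelihood-ratio (deviance) test. A rough Python sketch on made-up deviances (the glm objects from the post aren't available here), using the closed-form chi-square tail that holds when the number of dropped interaction coefficients is even:

```python
import math

def chisq_sf_even_df(x, df):
    """P(Chi2_df > x) for even df, via the closed-form Poisson sum."""
    assert df > 0 and df % 2 == 0
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= (x / 2) / k
        total += term
    return math.exp(-x / 2) * total

# Hypothetical deviances: the reduced model drops 4 interaction coefficients
dev_reduced, dev_full = 210.4, 198.1
lr_stat = dev_reduced - dev_full        # ~12.3, the drop in deviance
p_value = chisq_sf_even_df(lr_stat, 4)  # one p-value for the whole interaction
print(round(p_value, 4))                # ~0.015
```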
35,530 | Can I perform an exhaustive search with cross-validation for feature selection? | Yes, you are likely to end up with over-fitting in this case; see my answer to this previous question. The important thing to remember is that cross-validation is an estimate of generalisation performance based on a finite sample of data. As it is based on a finite sample of data, the estimator has a non-zero variance, so to some extent reducing the cross-validation error will result in a combination of model choices that genuinely improve generalisation error and model choices that simply exploit the random peculiarities of the particular sample of data on which it is evaluated. The latter type of model choice is likely to make generalisation performance worse rather than better.
Over-fitting is a potential problem whenever you minimise any statistic based on a finite sample of data; cross-validation is no different.
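The "random peculiarities" point can be seen in a toy Python simulation: score many candidate models that are all, in truth, guessing at 50% accuracy on the same finite validation set, then pick the best-looking one (all numbers here are arbitrary):

```python
import random

rng = random.Random(0)
n_validation, n_candidates = 50, 100

# Every candidate "model" truly has 50% accuracy; scores differ only by chance
scores = [sum(rng.random() < 0.5 for _ in range(n_validation)) / n_validation
          for _ in range(n_candidates)]

print(sum(scores) / n_candidates)  # near 0.5, the true accuracy of every model
print(max(scores))                 # the selected "winner" looks far better than 0.5
```

The maximum score is a badly optimistic estimate of the chosen model's true performance, which is exactly the over-fitting mechanism described above.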
35,531 | Can I perform an exhaustive search with cross-validation for feature selection? | I think this is a valid procedure for feature selection, which is no more prone to overfitting than other feature selection procedures. The problem with this procedure is that it has large computational complexity and can barely be used for real data sets.
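To put numbers on that computational complexity (Python sketch; the 10-fold figure is an assumption):

```python
def n_subsets(p):
    """Non-empty feature subsets an exhaustive search must consider."""
    return 2 ** p - 1

def n_model_fits(p, k=10):
    """Total model fits when every subset is scored by k-fold cross-validation."""
    return k * n_subsets(p)

print(n_model_fits(10))  # 10,230 fits for just 10 features
print(n_model_fits(30))  # 10,737,418,230 fits for 30 features
```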
35,532 | Can I perform an exhaustive search with cross-validation for feature selection? | I think if you do feature selection inside each fold of the cross-validation you'll be fine. As posters above state, you will overfit in any model using the selected features obtained from the procedure outlined above. This is because all data had some influence on the feature selection routine.
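A skeleton of the "selection inside each fold" idea, in Python with placeholder callbacks (select_features, fit and score are hypothetical stand-ins, not functions from this thread):

```python
import random

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint test folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cv_with_inner_selection(data, k, select_features, fit, score):
    """Redo feature selection inside every fold, using training rows only."""
    fold_scores = []
    for test_idx in kfold_indices(len(data), k):
        test_set = set(test_idx)
        train = [row for i, row in enumerate(data) if i not in test_set]
        test = [data[i] for i in test_idx]
        feats = select_features(train)  # the held-out fold never influences selection
        model = fit(train, feats)
        fold_scores.append(score(model, test, feats))
    return sum(fold_scores) / k
```

Because the held-out fold is excluded before selection happens, the resulting average score is an honest estimate of the whole select-then-fit procedure.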
35,533 | Contribution of each covariate to a single prediction in a logistic regression model | You can use the predict function in R. Call it with type='terms' and it will give you the contribution of each term in the model (the coefficient times the variable value). This will be on the log-odds scale.
Another option is to use the TkPredict function from the TeachingDemos package. This will show a graph of the predicted value vs. one of the predictors, then let the user interactively change the value of the various predictors to see how that affects the prediction.
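The idea behind type='terms' (each term's contribution is its coefficient times the covariate value, on the log-odds scale) can be imitated by hand. A Python sketch with made-up coefficients, ignoring the mean-centering that R's predict applies to terms by default:

```python
import math

# Hypothetical fitted logistic-regression coefficients (not from the original model)
coefs = {"(Intercept)": -1.2, "age": 0.04, "dose": 0.8}
newdata = {"age": 50, "dose": 1.5}

# Per-term contribution on the log-odds scale: coefficient times covariate value
terms = {name: coefs[name] * value for name, value in newdata.items()}
log_odds = coefs["(Intercept)"] + sum(terms.values())
prob = 1 / (1 + math.exp(-log_odds))

print(terms)           # how much each covariate pushes the log-odds
print(round(prob, 3))  # 0.881, the resulting predicted probability
```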
35,534 | Construct artificial slightly overlapping data for PCA plot | To simulate a PCA, start with the desired result and work backwards: add some random error in the orthogonal directions and rotate that randomly.
The following example stipulates a two-dimensional result (i.e., two principal components) with two "blobs"; it is readily extended to more blobs. (More dimensions would take a bit more work in modifying sigma to accommodate higher-dimensional covariance matrices.)
Let's start with a random rotation matrix.
set.seed(17)
p <- 5 # dimensions
rot <- qr.Q(qr(matrix(rnorm(p^2), p)))
We generate the blobs separately from different multivariate normal distributions. The parameters (their means and covariance matrices) are buried in the mvrnorm arguments. To make it easy and reliable to specify the shape of such a distribution, we create a small function sigma to convert the angle of the principal axis and the two variances into a covariance matrix.
sigma <- function(theta=0, lambda=c(1,1)) {
cos.t <- cos(theta); sin.t <- sin(theta)
a <- matrix(c(cos.t, sin.t, -sin.t, cos.t), ncol=2)
t(a) %*% diag(lambda) %*% a
}
library(MASS)
n1 <- 50 # First group population
n2 <- 75 # Second group population
x <- rbind(mvrnorm(n1, c(-2,-1), sigma(0, c(1/2,1))),
mvrnorm(n2, c(0,1), sigma(pi/3, c(1, 1/3))))
Adjoin the orthogonal error and rotate:
eps <- 0.25 # Error SD should be small compared to the SDs for the blobs
x <- cbind(x, matrix(rnorm(dim(x)[1]*(p-2), sd=eps), ncol=p-2))
y <- x %*% rot
That's the simulated dataset. To check it, apply PCA:
fit <- prcomp(y) # PCA
summary(fit) # Brief summary showing two principal components
par(mfrow=c(2,2)) # Prepare to plot
plot(fit$x[, 1:2], asp=1) # Display the first two components
plot(x[, 1:2], asp=1); # Display the original data for comparison
points(x[1:n1,1:2], col="Blue") #...distinguish one of the blobs
screeplot(fit) # The usual screeplot, supporting the summary
zapsmall(rot %*% fit$rotation, digits=2) # Compare the fitted and simulated rotations.
The final check produces a matrix whose block form (an upper 2 by 2 block and lower 3 by 3 block with near-zeros elsewhere) confirms the accuracy of the PCA estimates:
       PC1   PC2   PC3   PC4   PC5
[1,]  0.58  0.81  0.05  0.00 -0.01
[2,]  0.81 -0.58  0.00  0.00  0.00
[3,] -0.01 -0.01  0.23 -0.62 -0.75
[4,]  0.03  0.03 -0.96 -0.28 -0.06
[5,]  0.00  0.00  0.18 -0.73  0.66
(The upper 2 by 2 block, suitably scaled by the first two eigenvalues, approximately describes the relationship between the points in the "fit" plot and those in the "original components" plot. This one looks like a rotation and a reflection.)
35,535 | How is the inverse gamma distribution related to $n$ and $\sigma$? | I think it is more correct to speak of the posterior distribution of your parameter $\sigma'^{2}$ rather than its posterior estimate. For clarity of notations, I will drop the prime in $\sigma'^{2}$ in what follows.
Suppose that $X$ is distributed as $\mathcal{N}(0, \sigma^2)$, — I drop $\mu$ for now to make a heuristic example — and $1/\sigma^2 = \sigma^{-2}$ is distributed as $\Gamma(\alpha, \beta)$ and is independent of $X$.
The pdf of $X$ given $\sigma^{-2}$ is Gaussian, i.e.
$$f(x|\sigma^{-2}) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left(-\frac{x^2}{2\sigma^2}\right).$$
The joint pdf of $(X, \sigma^{-2})$, $f(x,\sigma^{-2})$ is obtained by multiplying $f(x|\sigma^{-2})$ by $g(\sigma^{-2})$ — the pdf of $\sigma^{-2}$. This comes out as
$$f(x, \sigma^{-2}) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left(-\frac{x^2}{2\sigma^2}\right) \frac{\beta^{\alpha}}{\Gamma(\alpha)}\exp \left(-\frac{\beta}{ \sigma^2}\right)\frac{1}{\sigma^{2(\alpha-1)}}.$$
We can group similar terms and rewrite this as follows
$$f(x, \sigma^{-2}) \propto \sigma^{-2(\alpha-1/2)} \exp\left(-\sigma^{-2} \left(\beta + x^2/2 \right)\right).$$
The posterior distribution of $\sigma^{-2}$ is by definition the pdf of $\sigma^{-2}$ given $x$, which is $f(x, \sigma^{-2}) / f(x)$ by Bayes' formula. To answer your question 1. I don't think there is a way to express $f(\sigma^{-2}|x)$ from $f(x, \sigma^{-2})$ without using Bayes' formula. On with the computation, we recognize in the formula above something that looks like a $\Gamma$ function, so integrating $\sigma^{-2}$ out to get $f(x)$ is fairly easy.
$$ f(x) \propto (\beta + x^2/2)^{-(\alpha+1/2)}, $$
so by dividing we get
$$ f(\sigma^{-2}|x) \propto \left(\beta + x^2/2 \right) \left( \sigma^{-2} \left(\beta + x^2/2 \right) \right)^{\alpha-1/2} \exp\left(-\sigma^{-2} \left(\beta + x^2/2 \right)\right) \\
\propto \left( \sigma^{-2} \left(\beta + x^2/2 \right) \right)^{\alpha-1/2} \exp\left(-\sigma^{-2} \left(\beta + x^2/2 \right)\right). $$
And here in the last formula we recognize a $\Gamma$ distribution with parameters $(\alpha + 1/2, \beta + x^2/2)$.
If you have an IID sample $((x_1, \sigma_1^{-2}), ..., (x_n, \sigma^{-2}_n))$, by integrating out all the $\sigma_i^{-2}$, you would get $f(x_1, ..., x_n)$ and then $f(\sigma_1^{-2}, ..., \sigma_n^{-2}|x_1, ..., x_n)$ as a product of the following terms:
$$ f(\sigma_1^{-2}, ..., \sigma_n^{-2}|x_1, ..., x_n) \propto \prod_{i=1}^n \left( \sigma_i^{-2} \left(\beta + x_i^2/2 \right) \right)^{\alpha-1/2} \exp\left(-\sigma_i^{-2} \left(\beta + x_i^2/2 \right)\right), $$
Which is a product of $\Gamma$ variables. And we are stuck here because of the multiplicity of the $\sigma_i^{-2}$. Besides, the distribution of the mean of those independent $\Gamma$ variables is not straightforward to compute.
However, if we assume that all the observations $x_i$ share the same value of $\sigma^{-2}$ (which seems to be your case) i.e. that the value of $\sigma^{-2}$ was drawn only once from a $\Gamma(\alpha, \beta)$ and that all $x_i$ were then drawn with that value of $\sigma^{-2}$, we obtain
$$ f(x_1, ..., x_n, \sigma^{-2}) \propto \sigma^{-2 (\alpha + n/2 - 1)} \exp\left(-\sigma^{-2} \left(\beta + \frac{1}{2} \sum_{i=1}^n x_i^2\right) \right), $$
from which we derive the posterior distribution of $\sigma^{-2}$ as your equation 1 by applying Bayes' formula.
The posterior distribution of $\sigma^{-2}$ is a $\Gamma$ that depends on $\alpha$ and $\beta$, your prior parameters, the sample size $n$ and the observed sum of squares. The prior mean of $\sigma^{-2}$ is $\alpha/\beta$ and the variance is $\alpha/\beta^2$, so if $\alpha = \beta$ and the value is very small, the prior carries very little information about $\sigma^{-2}$ because the variance becomes huge. The values being small, you can drop them from the above equations and you end up with your equation 3.
In that case the posterior distribution becomes independent of the prior. This formula says that the inverse of the variance has a $\Gamma$ distribution that depends only on the sample size and the sum of squares. You can show that for Gaussian variables of known mean, $S^2$, the estimator of the variance, has the same distribution, except that it is a function of the sample size and the true value of the parameter $\sigma^2$. In the Bayesian case, this is the distribution of the parameter; in the frequentist case, it is the distribution of the estimator.
Regarding your question 2. you can of course use the values obtained in a previous experiment as your priors. Because we established a parallel between Bayesian and frequentist interpretation in the above, we can elaborate and say that it is like computing a variance from a small sample size and then collecting more data points: you would update your estimate of the variance rather than throw away the first data points.
Regarding your question 3. I like the Introduction to Mathematical Statistics by Hogg, McKean and Craig, which usually gives the detail of how to derive these equations.
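A numerical check of this conjugate update in the known-mean case (Python sketch; the prior values, true precision and sample size are all made up):

```python
import random

alpha0, beta0 = 2.0, 2.0          # Gamma prior on the precision sigma^-2
true_precision = 4.0              # pretend sigma^-2 = 4, i.e. sigma = 0.5
rng = random.Random(42)
xs = [rng.gauss(0, true_precision ** -0.5) for _ in range(500)]

# Posterior is Gamma(alpha + n/2, beta + sum(x_i^2)/2) when mu = 0 is known
alpha_post = alpha0 + len(xs) / 2
beta_post = beta0 + sum(x * x for x in xs) / 2
print(alpha_post / beta_post)     # posterior mean of the precision, near the true 4
```

With 500 observations the data swamp the weak prior, so the posterior mean of the precision sits close to the value the sample was drawn with, as the discussion above predicts.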
35,536 | How is the inverse gamma distribution related to $n$ and $\sigma$? | For question 1, the second equation follows from Bayes' rule as you point out, and I don't see how to avoid that.
For question 2, yes, you can do this. Just use a prior of the same form as your second equation.
For question 3, I would look for something about exponential families. Maybe someone will recommend a good resource.
35,537 | Does using bootstrapping change how you deal with problems of Type I errors when testing multiple correlations? | I also don't follow your situation 100%, but I suspect it doesn't matter. The problem of multiple comparisons arises simply due to the mathematics of looking at lots of random things. That is, each statistical test can be understood as a Bernoulli trial. If the null hypothesis holds in every case, you have a Binomial distribution with probability .05 and N equal to the number of tests. (If the null never holds, you have a binomial with probability equal to the statistical power and the same N.) Thus, if the null is always true, and the tests are independent, the probability of not making any type I errors is $.95^N$.
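To make that arithmetic concrete, here is a quick sketch in Python (the function name and the alpha = .05 level are just illustrative choices for this example):

```python
def familywise_error_rate(n_tests, alpha=0.05):
    """Probability of at least one type I error across n_tests
    independent tests when every null hypothesis is true."""
    return 1 - (1 - alpha) ** n_tests

# With 10 independent tests at alpha = .05, the chance of avoiding all
# type I errors is .95^10, about 0.60, so the familywise error rate is
# roughly 0.40.
print(familywise_error_rate(10))
```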
Bootstrapping does not get you out of this fact. Bootstrapping offers a way to deal with situations in which your test statistic may not follow the distribution assumed by large sample theory. (This can occur because the distribution of the data is too non-normal, and your sample isn't large enough to compensate; n.b. in some cases, e.g. Cauchy data, a sample can never be large enough.) Provided your data are representative of the population in question, Bootstrapping may allow you to calculate an appropriate p-value (some conditions apply). However, this issue is orthogonal to the problem of multiple comparisons; that is, bootstrapping would give you the appropriate p-value for a 'family' of size 1.
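As a toy illustration of what the bootstrap does for a single statistic, here is a percentile-bootstrap confidence interval for a mean, sketched in Python with only the standard library; the data values and the 2,000 resamples are arbitrary choices for the example:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
data = [2.1, 3.4, 1.8, 5.0, 4.2, 2.9, 3.7, 4.8, 2.2, 3.1]

# Resample the data with replacement and recompute the statistic each time.
boot_means = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]
    boot_means.append(sum(resample) / len(resample))

boot_means.sort()
lo, hi = boot_means[49], boot_means[1949]  # ~2.5th and ~97.5th percentiles
print(lo, hi)  # an approximate 95% percentile interval for the mean
```

The same recipe works for any statistic (a correlation, a median, and so on) whose analytic sampling distribution is doubtful, but it still addresses only one test at a time.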
The problem of multiple comparisons is typically discussed in terms of multiple t-tests. I gather you are clear about the fact that using correlations instead of t-tests is irrelevant. Using bootstrapped sampling distributions instead of analytical sampling distributions is completely analogous in this respect.
Having made these points, the question arises of what to do about the problem of multiple comparisons in your case, given that bootstrapping is not offering you any protection. You should know that this topic has long been somewhat controversial, with scholars debating different strategies and even whether it's worthwhile to bother with the issue. There is a good deal of discussion about multiple comparisons on CV; if you search on the tag (i.e., click on it) you will be able to get a lot of information.
35,538 | Does using bootstrapping change how you deal with problems of Type I errors when testing multiple correlations? | But bootstrapping does offer a simple way to do multiple comparisons (including simultaneous intervals if you don't like testing) in a way that incorporates dependence structures in an asymptotically consistent way. The statement that "bootstrapping does not get you out of this fact" is misleading because it follows a statement about independence. So in fact, bootstrapping does offer a distinct advantage over what is suggested in the previous post, which assumes independence. Estimated correlations are in fact dependent random variables, and these dependencies are simply and naturally incorporated by bootstrap vector resampling. Such resampling also incorporates non-normal characteristics of the multivariate data-generating process.
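A small Python sketch of the point about vector resampling (the data-generating numbers are invented purely for illustration): resampling whole rows keeps the dependence between the two estimated correlations intact in the joint bootstrap distribution.

```python
import math
import random

random.seed(1)

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Fake trivariate data: z depends on both x and y, so the two sample
# correlations corr(x, z) and corr(y, z) are dependent estimates.
rows = []
for _ in range(100):
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    rows.append((x, y, 0.5 * x + 0.5 * y + random.gauss(0, 1)))

boot_pairs = []
for _ in range(500):
    # Resample whole rows (vectors), never columns independently.
    sample = [random.choice(rows) for _ in rows]
    xs, ys, zs = zip(*sample)
    boot_pairs.append((corr(xs, zs), corr(ys, zs)))

# boot_pairs is a draw from the JOINT bootstrap distribution of the two
# correlations, which is what simultaneous intervals are built from.
```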
35,539 | Time series modeling with high-frequency data | ARIMA and ETS models are designed for relatively short seasonality (e.g., monthly or quarterly) and do not work well for seasonal periods that are much longer. The ets() error should be captured though --- I'll fix it in the next version of the forecast package.
You might try instead a Fourier series model:
library(forecast)
X <- fourier(x, 3)   # matrix of the first 3 Fourier harmonic pairs of the seasonal period
m3 <- tslm(x ~ X + trend)   # linear model on the Fourier terms plus a linear trend
plot(forecast(m3, newdata = data.frame(X = I(fourierf(x, 3, 2880))), h = 2880))   # forecast 2880 steps ahead
See my blog post on forecasting with long seasonal periods for more discussion.
35,540 | Interpreting coefficients for Poisson regression | Not to be critical, but this is kind of a strange example. It's not clear that you're really doing time series analysis, nor what the NASDAQ would have to do with the number of games won by some team. If you're interested in saying something about the number of games a team won, I think it would be best to use binary logistic regression, given that you presumably know how many games are played. Poisson regression is most appropriate for talking about counts when the total possible is not constrained well, or at least not known.
How you would interpret your betas depends, in part, on the link used--it is possible to use the identity link, even though the log link is more common (and typically more appropriate). If you are using the log link, you probably wouldn't take the log of your response variable--the link in essence is doing that for you. Let's take an abstract case, you have a Poisson model using the log link as follows:
$$
\hat{y}=\text{exp}(\hat{\beta}_0)*\text{exp}(\hat{\beta}_1)^x
$$
alternatively,
$$
\hat{y}=\text{exp}(\hat{\beta}_0+\hat{\beta}_1x)
$$
(EDIT: I'm removing the "hats" from the betas in what follows, because they're ugly, but they should still be understood.)
With normal OLS regression, you are predicting the mean of a Gaussian distribution of the response variable conditional on the values of the covariates. In this case, you are predicting the mean of a Poisson distribution of the response variable conditional on the values of the covariates. For OLS, if a given case were 1 unit higher on your covariate, you expect, all things being equal, the mean of that conditional distribution to be ${\beta}_1$ units higher. Here, if a given case were 1 unit higher, ceteris paribus, you expect the conditional mean to be $e^{{\beta}_1}$ times higher. For instance, say ${\beta}_1=2$, then in normal regression it is 2 units higher (i.e., +2), and here it is 7.4 times higher (i.e., $\times$ 7.4). In both cases, ${\beta}_0$ is your intercept; in our equation above, consider the situation when $x=0$: then $\text{exp}({\beta}_1)^x=1$, and the right-hand side reduces to $\text{exp}({\beta}_0)$, which gives you the mean of $y$ when all covariates equal 0.
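A tiny numeric illustration of the multiplicative reading (a Python sketch; the coefficient values are made up for the example):

```python
import math

b0, b1 = 0.3, 2.0  # hypothetical fitted coefficients

def predicted_mean(x):
    """Conditional mean of a Poisson GLM with a log link."""
    return math.exp(b0 + b1 * x)

# Moving x up by one unit MULTIPLIES the predicted mean by exp(b1),
# about 7.4 here, rather than adding b1 as in OLS.
ratio = predicted_mean(3) / predicted_mean(2)
print(ratio, math.exp(b1))
```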
There are a couple of things that can be confusing about this. First, predicting the mean of a Poisson distribution isn't the same as predicting the mean of a Gaussian. With a normal distribution, the mean is the single most likely value. But with the Poisson, the mean is often an impossible value (e.g., if your predicted mean is 2.7, that's not a count that could exist). In addition, normally the mean is unrelated to the level of dispersion (i.e., the SD), but with the Poisson distribution, the variance necessarily equals the mean (although, it often doesn't in practice, leading to additional complexities). Finally, those exponentiations make it more complicated; if, instead of a relative change, you wanted to know the exact value, you would have to start at 0 (i.e., $e^{{\beta}_0}$) and multiply your way up $x$ times. For predicting a specific value, it's easier to solve the expression inside the parentheses in the bottom equation and then exponentiate; this makes the meaning of the beta less clear, but the math easier and reduces the possibility of error.
35,541 | Gibbs sampler from conditional distribution | There you go-
Gibbs Sampler:
# The burn-in period lets the chain approach stationarity before samples are kept
burn_in = 5000
iterations = 1000
y = matrix(nrow = burn_in + iterations, ncol = 3)
y[1, ] = c(0.5, 0.6, 0.2)   # initial sample
for (i in 2:(burn_in + iterations)) {
  # Example with theta_12 = 3, theta_13 = 4, theta_23 = 5; you can make the
  # code generic for any choice of thetas. Each full conditional is
  # exponential, so we draw with rexp() at the appropriate rate.
  y[i, 1] = rexp(1, rate = 1 + 3 * y[i - 1, 2] + 4 * y[i - 1, 3])
  y[i, 2] = rexp(1, rate = 1 + 3 * y[i, 1] + 5 * y[i - 1, 3])
  y[i, 3] = rexp(1, rate = 1 + 4 * y[i, 1] + 5 * y[i, 2])
}
posteriorSample = y[(burn_in + 1):(burn_in + iterations), ]
35,542 | Gibbs sampler from conditional distribution | The full conditional distributions can be found by fixing the values of the two "other" variables, then combining all the terms that you can combine and seeing what you get:
$p(y_1|y_2,y_3) \propto \exp\{-(1+\theta_{12}y_2+\theta_{13}y_3)y_1\}$
which is evidently an exponential distribution with parameter $1+\theta_{12}y_2+\theta_{13}y_3$, and similarly for $p(y_2|y_1,y_3)$, which is an exponential distribution with parameter $1 + \theta_{12}y_1 + \theta_{23}y_3$, and $p(y_3|y_1,y_2)$, which is an exponential distribution with parameter $1 + \theta_{13}y_1 + \theta_{23}y_2$.
Your Gibbs sampler will start with some values for $y_1, y_2, y_3$ and just loop over the three conditional distributions, again and again, generating exponential variates from the appropriate distribution and substituting in the current values for the appropriate $y_i$ as it goes. Naturally you'll have to discard some initial block of variates to get rid of burn-in effects.
35,543 | Arithmetic for updating likelihoods using Bayes theorem | I'll start by answering your question about updating events with the "fourth and fifth extensions." As you suspected, the arithmetic is indeed quite simple.
First, recall how Bayes theorem is derived from the definition of conditional probability:
$$
P(A|B)=\frac{P(A,B)}{P(B)}
$$
By conditioning on A in the numerator we can get to the more familiar form:
$$
P(A|B)=\frac{P(B|A)P(A)}{P(B)}
$$
Now consider if we don't have just B, but rather 2 or more events $B_1, B_2, \ldots$ For that, we can derive the three-event Bayes extension you cite using the chain rule of probability, which is (from Wikipedia):
$$
P(A_n,\ldots,A_1)=\prod_{k=1}^{n}P(A_k|A_{k-1},\ldots,A_1)
$$
For $B_1$ and $B_2$, we start with the definition of conditional probability:
$$
P(A|B_1,B_2)=\frac{P(A,B_1,B_2)}{P(B_1,B_2)}
$$
And use the chain rule on both the numerator and the denominator:
$$
P(A|B_1,B_2)=\frac{P(B_2|A,B_1)P(B_1|A)P(A)}{P(B_2|B_1)P(B_1)}
$$
And just like that we've rederived the equation you cite from Wikipedia. Let's try adding another event:
$$
P(A|B_1,B_2,B_3)=\frac{P(B_3|A,B_1,B_2)P(B_2|A,B_1)P(B_1|A)P(A)}{P(B_3|B_1,B_2)P(B_2|B_1)P(B_1)}
$$
Adding a fifth event is equally simple (an exercise for the reader). But you'll surely notice a pattern, namely that the answer to the three-event version is held within the answer to the four-event version, so that we can rewrite this as:
$$
P(A|B_1,B_2,B_3)=P(A|B_1,B_2)\cdot\frac{P(B_3|A,B_1,B_2)}{P(B_3|B_1,B_2)}
$$
Or more generally, the rule for updating the posterior after the nth piece of evidence:
$$
P(A|B_1,\ldots,B_n)=P(A|B_1,\ldots,B_{n-1})\cdot\frac{P(B_n|A,B_1,\ldots,B_{n-1})}{P(B_n|B_1,\ldots,B_{n-1})}
$$
That fraction there is what you're interested in. Now, what you're talking about is that this might not be easy to calculate, not because of any arithmetic difficulty, but because of dependencies within the B's. If we say each B is independently distributed, updating becomes very simple:
$$
P(A|B_1,\ldots,B_n)=P(A|B_1,\ldots,B_{n-1})\cdot\frac{P(B_n|A)}{P(B_n)}
$$
(In fact, you'll notice that is a simple application of Bayes' theorem!) The complexity of that fraction depends on which of previous pieces of evidence your new piece of evidence depends on. The importance of conditional dependence between your variables and your pieces of evidence is precisely why Bayesian networks were developed (in fact, the above describes factorization of Bayesian networks).
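Assuming conditionally independent pieces of evidence, that updating rule is easy to run numerically; here is a Python sketch (the prior and the likelihoods are invented numbers for the example):

```python
def update(prior_a, p_b_given_a, p_b_given_not_a):
    """One Bayes update of P(A) after observing evidence B, assuming B is
    conditionally independent of the earlier evidence given A (and not-A)."""
    p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    return p_b_given_a * prior_a / p_b

# Start from P(A) = 0.5 and fold in two pieces of evidence in turn.
p = 0.5
p = update(p, 0.8, 0.3)  # first piece of evidence
p = update(p, 0.9, 0.4)  # second piece of evidence
print(p)  # posterior after both updates
```

Under that independence assumption the order of the updates does not matter; dropping the assumption is exactly where Bayesian networks come in.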
Now, let's talk about your example. First, your interpretation of the word problem has an issue. Your interpretations of 70% and 80% are, respectively,
P(B1|A) = .7
P(B2|A) = .8
But (per your definitions) A means the car will be completed on time, B_1 means GM tests the transmission successfully, and B_2 means there is a successful engine test, which means you're getting them backwards- they should be
P(A|B1) = .7
P(A|B2) = .8
Now, however, the word problem doesn't really make sense. Here are the three problems:
1) They're effectively giving you what you're looking for: saying "given this transmission test a car can be completed within that time frame 70% of the time", and then asking "what is the probability a car will be completed in that time".
2) The evidence pushes you in the opposite direction from what common sense would expect. The probability was 90% before you knew about the transmission; how can knowing about a successful test lower it to 70%?
3) There is a difference between a "95% success rate" and a 95% chance that a test was successful. Success rate can mean a lot of things (for example, what proportion a part doesn't break), which makes it an engineering question about the quality of the part, not a subjective assessment of "how sure are we the test succeeded?" As an illustrative example, imagine we were talking about a critical piece of a rocket ship, which needs at least a 99.999% chance of working during a flight. Saying "The piece breaks 20% of the time" does not mean there is an 80% chance the test succeeded, and thus an 80% chance you can launch the rocket next week. Perhaps the part will take 20 years to develop and fix- there is no way of knowing based on the information you're given.
For these reasons, the problem is very poorly worded. But, as I indicated above, the arithmetic involved in updating based on multiple events is quite straightforward. In that sense, I hope I answered your question.
ETA: Based on your comments, I'd say you should rework the question from the ground up. You should certainly get rid of the idea of the 95%/98% "success rate", which in this context is an engineering question and not a Bayesian statistics one. Secondly, the estimates of "We are 70% confident, given that this part works, that the car will be ready in two years" is a posterior probability, not a piece of evidence; you can't use it to update what you already have.
In the situation you are describing, you need all four parts to work by the deadline. Thus, the smartest thing to do would be simply to say "What is the probability each part will be working in two years?" Then you take the product of those probabilities (assuming independence), and you have the probability the entire thing will be working in two years.
Stepping back, it sounds like you are actually trying to combine multiple subjective predictions into one. In that case, my recommendation would be to fire your engineers. Why? Because they are telling you that they are 90% confident that it will be ready in two years, but then, after learning of a successful test of the transmission, downgrading their estimates to 70%. If that's the talent we're working with, no Bayesian statistics is going to help us :-)
More seriously: perhaps if you were more specific about the type of problem (which is probably something like combining P(A|B1) and P(A|B2)), I could give you some more advice.
35,544 | Arithmetic for updating likelihoods using Bayes theorem | There are lots of ways to extend this result. The general form is that
$$P(A|B,C,D...)=\frac{P(A,B,C,D...)}{P(B,C,D,...)}$$
There are many ways to write both numerator and denominator. Your formulae give two examples (assuming $B_2$ and $C$ are the same thing). Of course, for a given problem, you have to formulate the LHS by writing the RHS in terms of quantities you actually know; whether that can be done for your particular problem is probably worth a more specific question on this site.
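As a small sanity check of the general form in the discrete case, $P(A\mid B,C)=P(A,B,C)/P(B,C)$ can be verified with exact arithmetic over a made-up joint distribution (the weights below are arbitrary, chosen only so the numbers come out simple):

```python
from fractions import Fraction as F
from itertools import product

# A made-up joint pmf over three binary events (A, B, C).
weights = {(a, b, c): 1 + a + 2 * b + c for a, b, c in product((0, 1), repeat=3)}
total = sum(weights.values())  # 24
pmf = {k: F(w, total) for k, w in weights.items()}

def prob(pred):
    """Probability of the set of outcomes satisfying pred(a, b, c)."""
    return sum(p for k, p in pmf.items() if pred(*k))

p_abc = prob(lambda a, b, c: a == 1 and b == 1 and c == 1)  # P(A,B,C) = 5/24
p_bc = prob(lambda a, b, c: b == 1 and c == 1)              # P(B,C)   = 9/24
print(p_abc / p_bc)  # P(A|B,C) = 5/9
```

The same marginalization trick gives any of the other conditionals mentioned above, e.g. $P(A\mid B)$ from $P(A,B)/P(B)$.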
When the variables ($A,B,C,D$) etc are continuous, calculating the posterior does indeed get much more complicated, in most problems, and graduate level math/stat techniques are required.
35,545 | Recurrent neural networks in R | There is the RSNNS package that provides access to the "Stuttgart Neural Network Simulator" (SNNS). It contains the classical recurrent network structures of types 'Jordan' and 'Elman'. SNNS is a bit old (before 2000), but may still be worth a try. The R package itself has been updated in September this year.
35,546 | Recurrent neural networks in R | I am hoping someone with more R knowledge than me will submit an R answer, but I am not aware of anything. Here is one option: Use one of the multiple Python-based implementations (e.g. PyBrain or PyNeurGen) and interface back to R via Rpy or (my preference) pyRserve. I know this is not ideal, but it could give you an easier way forward than writing your own package, at least at first. Also, I am guessing it would be preferable to call Python from R, but I don't think the RSPython package in R has been updated for some time.
EDIT: It looks like PyNeurGen may not have been updated in some time either. PyBrain seems to have the largest following and is under active development.
35,547 | Recurrent neural networks in R | There is a new package out: rnn (on CRAN, on github), which implements a recurrent neural network in native R code.
A nice example can be found here:
http://firsttimeprogrammer.blogspot.de/2016/08/plain-vanilla-recurrent-neural-networks.html
35,548 | Finding degree of polynomial in regression analysis | Sorry if this is too elementary, I just wanted to make this answer as self-contained as possible. In fact, you can't do what you're describing: the best polynomial of degree $k+1$ will always fit at least as well as the best polynomial of degree $k$, since the set of $k+1$ degree polynomials includes all $k$ degree polynomials (just set $a_{k+1} = 0$). As you continue to increase $k$, at a certain point you will be able to find a polynomial that fits the data perfectly (i.e. with zero error).
This usually isn't a very attractive solution because it's hard to imagine a process that ought to be described by e.g. a million-degree polynomial, and it's almost certain that this kind of model will be more complex than is necessary to adequately describe the data. This phenomenon is called overfitting, and a good example is this Wikipedia image. The data is clearly close to linear, but it's possible (but not desirable) to get a lower error with a more complex model.
In general, the goal is to minimize the error that would occur on new data from the same underlying model, rather than on the current set of data. Often it isn't possible or practical to just get more data, so usually one would employ some form of cross-validation to find the model that generalizes the best to unseen data. There are lots of forms of cross-validation, and you can read about them in the Wikipedia article or in numerous answers on CrossValidated (ha!). But in effect they all can be reduced to: fit a model on some of your data and use this to predict the values for the remainder of your data. Do this repeatedly and choose the model (in this case, the degree of polynomial) that gives you the best performance on average.
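That recipe can be sketched end to end with hypothetical data and plain least squares via the normal equations (a toy illustration, not production code): training error can only go down as the degree grows, but leave-one-out cross-validation penalizes the overfit high-degree fit.

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations (Gaussian elimination)."""
    n = deg + 1
    X = [[x ** j for j in range(n)] for x in xs]
    A = [[sum(row[j] * row[k] for row in X) for k in range(n)] for j in range(n)]
    b = [sum(row[j] * y for row, y in zip(X, ys)) for j in range(n)]
    for col in range(n):  # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][k] * coef[k] for k in range(r + 1, n))) / A[r][r]
    return coef

def predict(coef, x):
    return sum(c * x ** j for j, c in enumerate(coef))

def train_mse(xs, ys, deg):
    coef = polyfit(xs, ys, deg)
    return sum((predict(coef, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def loocv_mse(xs, ys, deg):
    # Leave-one-out cross-validation: fit on all but one point, score on it.
    err = 0.0
    for i in range(len(xs)):
        coef = polyfit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], deg)
        err += (predict(coef, xs[i]) - ys[i]) ** 2
    return err / len(xs)

# Nearly linear data with small, fixed "noise" (made-up numbers).
noise = [0.2, -0.1, 0.15, -0.2, 0.1, -0.15, 0.2, -0.1, 0.05, -0.2]
xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 + e for x, e in zip(xs, noise)]

assert train_mse(xs, ys, 6) <= train_mse(xs, ys, 1) + 1e-9  # higher degree never fits worse
assert loocv_mse(xs, ys, 1) < loocv_mse(xs, ys, 6)          # ...but generalizes worse here
```

In practice you would loop `loocv_mse` over candidate degrees and keep the minimizer, which is exactly the "best performance on average" rule described above.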
35,549 | Finding degree of polynomial in regression analysis | One of the ways to solve this "search" problem is to first start with some meta-heuristic algorithm like Genetic Programming and once the program is able to create a "near" function (of decent fitness), start with traditional machine learning regression algorithms of degree identified by GP. You will still need to perform cross validations for fitting your n-degree polynomial model.
A few things to watch while running GP: make sure not to provide functions which should not be used, or else GP has a tendency to create complex models mimicking decision tree + linear + quadratic combinations, etc.
35,550 | Fisher's exact test in R | You're doing everything fine. However, I would recommend Barnard's exact test rather than Fisher's exact test.
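For reference, the two-sided p-value that R's `fisher.test` reports for a 2x2 table comes from the hypergeometric distribution; a language-neutral sketch (Python here, computed from first principles rather than by calling R):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities of all tables with the same margins
    that are no more probable than the observed one.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p(k):  # P(top-left cell = k) under fixed margins
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    p_obs = p(a)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs * (1 + 1e-9))

# The classic "lady tasting tea" table [[3, 1], [1, 3]]:
print(round(fisher_exact_2x2(3, 1, 1, 3), 4))  # 0.4857
```

The small `1 + 1e-9` factor guards against floating-point ties when two tables have exactly the same probability.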
35,551 | Looking for mathematical account of ANOVA | Everything you ask for is beautifully accomplished by Jack Kiefer in his classic Introduction to Statistical Inference (Springer-Verlag 1987). ANOVA is introduced as a special case of the General Linear Model (i.e., regression) in chapter 5, then taken up again at the end of chapter 8 as an application of normal-theory tests. Chapter 8 begins with a statement of the distributions associated with the Normal--t, F, and chi-squared along with their noncentral versions--and shows from first principles how they arise in each setting. General background on estimation is developed in chapters 1-4.
Because this is not a math text, no effort is made to derive explicit formulations of the PDFs of these distributions: the focus is on specifying certain optimal properties and additional criteria (such as invariance, unbiasedness, or linearity) and then deriving, from first principles, the tests that satisfy them and critically evaluating the characteristics of those tests. The math is reasonably rigorous and all statistical terminology is developed ab initio and clearly defined in mathematical terms.
I'm not sure what you mean by "freely available," but this book remains in print, is not expensive, and used copies can be had for very little.
35,552 | Looking for mathematical account of ANOVA | This paper of Terry Speed may interest you.
Special invited paper:
What is an Analysis of Variance?
T. P. Speed
Annals of Statistics Vol 15 No 3 (1987)
Available from project Euclid with commentary from some big names including Tukey (who says he's not equipped to comment adequately on the mathematical niceties and careful craftsmanship of the paper.)
Link
35,553 | Is there a good reason for the name "Missing at Random"? | You gave a perfect explanation, and there isn't a better one (at least I did not find any during my dissertation work and subsequent research on missing data). There is little excuse for poor terminology, and this is one of the most striking examples (along with formative/reflective indicators in psychology, as well as mediation and moderation that are so easily confused).
35,554 | Is there a good reason for the name "Missing at Random"? | When, given the observed data, the missingness mechanism does not depend on the unobserved data.
Examples of MAR mechanisms:
A subject may be removed from a trial if his/her condition is not controlled sufficiently well (according to pre-defined criteria on the response).
Two measurements of the same variable are made at the same time. If they differ by more than a given amount a third is taken. This third measurement is missing for those that do not differ by the given amount.
For more information visit this URL
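A tiny sketch of the first mechanism (entirely made-up numbers): the week-2 response is missing exactly when the observed week-1 response exceeds a control threshold, so missingness depends only on observed data, which is MAR.

```python
# Made-up (week1, week2) responses; subjects are withdrawn (week2 set to None)
# when the *observed* week-1 response exceeds the control threshold.
THRESHOLD = 7.0
subjects = [(5.1, 5.0), (6.4, 6.0), (8.2, 8.5), (7.5, 7.9), (6.9, 6.5), (9.0, 9.4)]

observed = [(w1, (w2 if w1 <= THRESHOLD else None)) for w1, w2 in subjects]

# Missingness is a deterministic function of the observed week-1 value alone:
missing = [w2 is None for _, w2 in observed]
print(missing)  # [False, False, True, True, False, True]
```

Had withdrawal depended on the unobserved week-2 value itself, the mechanism would instead be MNAR.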
35,555 | Ordered logit in JAGS | By default, JAGS will initialize all elements of alpha0 to the prior mean 0. So the initial value of p is c(0.5, 0, 0, 0.5). Under these prior conditions, it is impossible to have y[i] equal to 2 or 3. But, in your simulated data, y[3] = 3.
The solution is to initialize the elements of alpha0 to distinct values
inits <- list("alpha0" = c(-0.5, 0, 0.5))
mcmcmodel <- jags.model(file="ordlog.jag", data=jagsdf,
                        inits=inits, n.chains=3)
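To see why equal initial cutpoints are fatal, note that ordered-logit category probabilities are differences of inverse-logit values at the cutpoints; a quick sketch of just that arithmetic (Python rather than JAGS, with the linear predictor fixed at 0):

```python
import math

def ilogit(x):
    return 1.0 / (1.0 + math.exp(-x))

def category_probs(cutpoints, eta=0.0):
    """P(y = k) from ordered-logit cutpoints: differences of ilogit(alpha_k - eta)."""
    cdf = [ilogit(a - eta) for a in cutpoints] + [1.0]
    return [cdf[0]] + [cdf[k + 1] - cdf[k] for k in range(len(cutpoints))]

print(category_probs([0.0, 0.0, 0.0]))   # [0.5, 0.0, 0.0, 0.5]: middle categories impossible
print(category_probs([-0.5, 0.0, 0.5]))  # all four categories get positive probability
```

With all cutpoints at 0, any observation in category 2 or 3 has likelihood zero, which is exactly why the sampler cannot start there.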
35,556 | Distribution of a ratio of uniforms: What is wrong? | Here is a hint.
Consider carefully the term $\mathbb P( X \leq z Y \mid z Y < 1 )$. In particular, for concreteness, choose $z = 2$, so that we are considering the event $\mathbb P( X \leq 2 Y \mid Y < 1/2 )$.
Now, look at this picture (which is very closely related to the above probability).
Now, does that conditional probability depend on our particular choice of $z$?
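(A spoiler sketch, not part of the original hint.) A seeded Monte Carlo check suggests the answer: for any $z \ge 1$ the conditional probability is $1/2$, independent of $z$, which is what makes the naive conditioning argument break down.

```python
import random

def cond_prob(z, n=200_000, seed=1):
    """Monte Carlo estimate of P(X <= zY | zY < 1) for X, Y iid Uniform(0, 1)."""
    rng = random.Random(seed)
    hits = trials = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if z * y < 1:  # condition on the event {zY < 1}
            trials += 1
            hits += (x <= z * y)
    return hits / trials

for z in (1, 2, 5):
    print(z, round(cond_prob(z), 2))  # each estimate close to 0.5
```

Analytically, $P(X \le zY \mid Y < 1/z) = E[zY \mid Y < 1/z] = z \cdot \tfrac{1}{2z} = \tfrac12$ for $z \ge 1$.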
35,557 | How do joint test, r-squared behave when using autocorrelation / heteroskedasticity robust std. errors? | The $R^2$ is one minus the sum of squared residuals divided by the total variation in your outcome; neither of those quantities changes when robust/HAC standard errors are applied, so the $R^2$ doesn't change either. Adjusted $R^2$ alters the formula somewhat, but only based upon the number of observations and the number of predictors in your model, which don't change under robust standard errors, so this value remains unchanged as well.
The F statistic can't incorporate heteroskedasticity or autocorrelation---it requires homoskedasticity and no correlations among the errors (actually, it requires that the errors be distributed according to identical normal distributions conditional on your predictors). This statistic can't be corrected. Instead, to perform robust joint tests, you need to use a Wald test that follows the $\chi^2$ distribution under the null hypothesis. So, yes, the F statistic doesn't change when you apply the robust/HAC standard errors, but that's because it isn't robust.
35,558 | Explanatory power of a variable | The relaimpo R package does exactly what you want to do, and it also provides bootstrap CIs when assessing relative contribution of individual predictor to the overall $R^2$.
An example of use can be found at the end of this tutorial: Getting Started with a Modern Approach to Regression.
35,559 | Median value on ordinal scales | Definitional issues:
The median is the middle value of the data; it is not by definition the middle value of the scale.
When the sample size is even, the median is the mean of the two values on either side of the middle-most point after rank ordering all values (see the Wikipedia description).
When to use median on ordinal data
In theory the median can be used on data from any variable where the values can be ordered.
In practice, the median is often not the most useful summary of central tendency with ordinal variables.
This partially depends on what you want to get out of your measure of central tendency.
When you are describing the central tendency of data on an ordinal variable with only a small number of response options (i.e., perhaps less than 20 or 50 or 100), the median can be quite gross (e.g., 1,1,3,3,3 and 1,3,3,5,5 both have a median of 3, but the second example would have a higher mean).
When it comes to summarising the central tendency of Likert items, I find the mean to be much more useful and sensitive to meaningful differences.
Ordinal variables that are ranks do not suffer from this problem of "grossness".
Interpolated medians are another way of overcoming the gross nature of the median on ordinal data with few values.
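The "grossness" example above, plus the interpolated-median fix, in a quick sketch (one common interpolated-median formula treats each ordinal value $k$ as the interval $[k - 0.5, k + 0.5]$ and interpolates within the interval containing the median):

```python
from statistics import mean, median

def interpolated_median(data, interval=1.0):
    """Interpolated median for discrete/ordinal data (odd-length samples here)."""
    data = sorted(data)
    n = len(data)
    m = data[n // 2]                       # the value whose interval holds the median
    lower = m - interval / 2.0
    below = sum(1 for x in data if x < m)  # cumulative frequency below that interval
    freq = data.count(m)
    return lower + interval * (n / 2.0 - below) / freq

a = [1, 1, 3, 3, 3]
b = [1, 3, 3, 5, 5]

print(median(a), median(b))  # 3 3: the median can't tell the samples apart
print(mean(a), mean(b))      # 2.2 3.4: the mean can
print(interpolated_median(a), interpolated_median(b))  # roughly 2.667 vs 3.25
```

Both plain medians equal 3, while the means and interpolated medians separate the two samples, which is the point being made above.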
35,560 | Median value on ordinal scales | No, the median is the value where half the data is less than or equal to that value and half the data is greater than or equal to that value.
So if your ordinal scale had 100 respondents then find the value that has at least 50 less or equal and 50 greater than or equal. It would only be 3 if half the responses were to either side. If 1 person said 1, 2 people said 2, 3 said 3, 4 said 4, and the remaining 90 said 5, then 5 would be the median.
The median works when the data is ordered, but would not make sense for nominal/unordered data, like what is your favorite color?
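The 100-respondent example above checks out directly:

```python
from statistics import median

# 1 person answered 1, 2 answered 2, 3 answered 3, 4 answered 4, 90 answered 5.
responses = [1] * 1 + [2] * 2 + [3] * 3 + [4] * 4 + [5] * 90
print(median(responses))  # 5.0
```

With 100 values, the 50th and 51st ordered responses are both 5, so the median is 5.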
35,561 | Data preparation for regression | (You may start from the after line section, for a shorter answer) To begin with, you are absolutely right saying that it firstly depends on the purposes of your analysis: forecasting of average price (at macro level) or a particular price (at micro level), causal analysis of consumer preferences (district, size, age, number of bedrooms, gas, travelling to the job, level of noise, etc.). This verbal specialization secondly will guide you to an appropriate choice of a model and, finally, requirements for your data.
From what you have written, I assume that you deal with real estate pricing models. Quick googling shows there are many ways to specify a model. A good starting reference could be Simon P. Leblond's article Comparing predictive accuracy of real estate pricing models: an applied study for the city of Montreal. From a practical point of view, you have to choose between additive and multiplicative regression models. The latter has several advantages over additive models:
parameter estimates (except the intercept term, a junk regression parameter anyway) are not affected by changes in scale
parameters for log-transformed variables have a nice elasticity interpretation, which naturally allows for diminishing-returns-to-scale restrictions (in real estate this could be a crucial restriction)
if one studies average prices, the weighted geometric mean is a more robust average than the arithmetic mean (this will not matter much at the micro level, though)
you may set the price to zero if, for instance, an apartment has no bedrooms at all (this is hard to do with additive models)
One more important thing before you proceed is to think of each of your observations as a unique data point that was jointly set on the market by a decision maker on the basis of utility-maximizing behaviour. Jointly meaning that you can't separate the variables from each other (for instance, the value of an apartment without a bedroom is zero for most consumers): a consumer may or may not like the whole bundle of attributes together, and after that his or her budget (money in the pocket) is all that matters. Therefore standardization is useful for analysing the relative importance of explanatory variables, but be careful judging which variables are not significant (all factors may be important). Heterogeneity of preferences and budgets (buyers are different households) across your observations also shows why regression at the micro level (without averaging) could be misleading. Finally, you have cross-sectional (static) data. Static pictures predict prices poorly for periods other than the year of your observations (say you build a model on 2009 data; it will not be very useful for retrospectively predicting prices for, say, 2007, or for 2011). In that case, at least try to correct the outcomes on the basis of the change in average price for the particular year.
Regarding your particular questions (what I personally do for my projects, or at least pretend to do):
List all the variables you have and their measurement units
Check and re-check the data for imputation errors
Make additional imputation for the points with missing values (you may also simply exclude the observations if you have a large dataset with not too many missing values)
Make all measurement units the same across similar variables (sq. meters, currency units, etc.)
Think of a simple data frame structure at once (you need to communicate with $R$ conveniently)
Bring only raw data into $R$; make all log, difference and fraction transformations in $R$ directly (logarithms are important for multiplicative models, some pros are given in the prelude above; fractions are also nice since you may want to eliminate the scale (size) effect at once and emphasize the differences caused by other factors)
Leave dummies as they are, but always drop one level of each qualitative attribute for the intercept term (otherwise this would be a source of pure multicollinearity in your model)
For your purposes you may apply ordinary least squares (OLS), though in pricing models I would also consider tobit or Heckman models, which do need special treatment (one of my early, maybe-not-so-successful posts on pricing was about this)
OLS is straightforward and the usual residual analysis (found in textbooks on econometrics) applies. If some of the assumptions are violated, you may go for generalized methods, instrumental variables, ridge regression, cures for autoregressive residuals, but... What you really need to know: are the parameter estimates theoretically reasonable (values, signs, etc.)?
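To make the elasticity interpretation of the multiplicative model concrete, here is a small Python sketch. The sizes and prices are made-up toy numbers, and the single-predictor log-log OLS is done by hand via the usual closed-form slope:

```python
import math

# hypothetical toy data: apartment size in sq. m and price in currency units
sizes = [30, 45, 60, 80, 100]
prices = [60000, 87000, 112000, 144000, 175000]

# multiplicative model: log(price) = a + b*log(size),
# so b is read directly as the elasticity of price with respect to size
x = [math.log(s) for s in sizes]
y = [math.log(p) for p in prices]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
# b below 1 for these numbers: a 1% increase in size raises the price by b%,
# i.e. diminishing returns to scale
```

The point is only the reading of $b$; in practice you would of course fit the full multivariate model in $R$.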
Just a nice number... any additions from the community are welcome.
35,562 | Data preparation for regression | The real estate prices that you are trying to predict: are they consecutive/chronological values (i.e. time series data), or are they prices for different classes (e.g. this year's prices for different classes over the same time frame)? You might want to read something I wrote on these two kinds of problems, as it warns that if you are dealing with longitudinal (time series) data then the tools of ordinary cross-sectional regression will not ordinarily apply. It is entitled "Regression vs Box-Jenkins": http://www.autobox.com/pdfs/regvsbox.pdf
35,563 | Data preparation for regression | Binning your data is usually a bad idea because it causes you to lose information, which will likely result in a loss of power. Also, I would rarely standardise variables before doing regression, although some people may like to.
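The information-loss point can be illustrated with a toy example (made-up numbers): dichotomize a predictor at its median and watch its correlation with the outcome drop, even when the original relationship is perfect.

```python
import statistics

def pearson(a, b):
    """Pearson correlation using population standard deviations."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (statistics.pstdev(a) * statistics.pstdev(b) * len(a))

x = list(range(1, 21))           # a continuous predictor
y = [float(v) for v in x]        # outcome perfectly related to x
med = statistics.median(x)
x_binned = [1.0 if v > med else 0.0 for v in x]  # median split

r_full = pearson(x, y)           # 1.0
r_binned = pearson(x_binned, y)  # noticeably smaller after binning
```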
A really good book to read, if you can get it, is "Regression Modeling Strategies" by Frank Harrell.
35,564 | Data preparation for regression | For preprocessing I always like to include outlier detection and removing bad data. If your variables are on different scales, normalizing the data (standardization) is a good idea. As far as technique goes, it always pays to graph and plot all of your variables against each other, as well as against the predicted variable. That will tell you a lot about which assumptions you can make about the data, such as linearity, equality of variances and normality, and can help you choose a technique.
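Standardization itself is one line; a minimal sketch with toy numbers using only the standard library:

```python
import statistics

raw = [150.0, 180.0, 120.0, 210.0]  # hypothetical values on some arbitrary scale
mu = statistics.mean(raw)
sd = statistics.pstdev(raw)         # population SD; use stdev for the sample SD
z = [(v - mu) / sd for v in raw]    # standardized: mean 0, SD 1
```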
35,565 | How to visualize the true dimensionality of the data? | A standard approach would be to do PCA and then show a scree plot, which you ought to be able to get out of any software you might choose. A little tinkering and you could make it more interpretable for your particular audience if necessary. Sometimes they can be convincing, but often they're ambiguous and there's always room to quibble about how to read them, so a scree plot may (edit: not!) be ideal. Worth a look though.
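The numbers behind a scree plot are just the eigenvalue shares; a quick sketch with made-up eigenvalues:

```python
# hypothetical PCA eigenvalues (variances of the components), largest first
eig = [4.2, 1.8, 0.5, 0.3, 0.2]

total = sum(eig)
prop = [v / total for v in eig]                       # height of each scree-plot bar
cum = [sum(prop[: i + 1]) for i in range(len(prop))]  # cumulative variance explained
# here the first two components carry most of the variance,
# which is the kind of "elbow" a scree plot makes visible
```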
35,566 | How to visualize the true dimensionality of the data? | One way to visualize this would be as follows:
Perform a PCA on the data.
Let $V$ be the vector space spanned by the first two principal component vectors, and let $V^\top$ be the complement.
Decompose each vector $x_i$ in your data set as the sum of an element in $V$ plus a remainder term (which is in $V^\top$). Write this as $x_i = v_i + c_i$. (this should be easy using the results of the PCA.)
Create a scatter plot of $||c_i||$ versus $||v_i||$.
If the data is truly $\le 2$ dimensional, the plot should look like a flat line.
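In spirit, steps 3-5 just split each point into an in-plane part and a residual and compare their norms. A tiny hand-rolled sketch (toy numbers; here $V$ is simply taken to be the $xy$-plane, whereas the Matlab code below estimates it from the data via PCA):

```python
import math

# toy points lying almost in the xy-plane, so V = span(e1, e2)
points = [(1.0, 2.0, 0.01), (-3.0, 0.5, -0.02), (2.5, -1.0, 0.015)]

pairs = []
for x, y, z in points:
    norm_v = math.hypot(x, y)  # ||v_i||, the component inside V
    norm_c = abs(z)            # ||c_i||, the residual in the orthogonal complement
    pairs.append((norm_v, norm_c))
# plotted as (||v_i||, ||c_i||), these points hug the horizontal axis: a flat line
```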
In Matlab (ducking from all the shoes being thrown):
lat_d = 2; %the latent dimension of the generating process
vis_d = 16; %manifest dimension
n = 10000; %number of samples
x = randn(n,lat_d) * randn(lat_d,vis_d) + 0.1 * randn(n,vis_d); %add some noise
xmu = mean(x,1);
xc = bsxfun(@minus,x,xmu); %Matlab syntax for element recycling: ugly, weird.
[U,S,V] = svd(xc); %this will be slow;
prev = U(:,1:2) * S(1:2,1:2);
prec = U(:,3:end) * S(3:end,3:end);
normv = sqrt(sum(prev .^2,2));
normc = sqrt(sum(prec .^2,2));
scatter(normv,normc);
axis equal; %to illustrate the differences in scaling, make the axes equally scaled
This generates a scatter plot in which the points hug the horizontal axis: a flat line.
If you change lat_d to 4, the line is less flat.
35,567 | How to visualize the true dimensionality of the data? | I've done something similar using PROC Varclus in SAS. The basic idea is to generate a 4-cluster solution, pick the variable most highly correlated with each cluster, and then demonstrate that this 4-cluster solution explains more of the variation than the 2-cluster solution. For the 2-cluster solution you could use either Varclus or the first 2 principal components, but I like Varclus since everything is explained via variables and not the components. There is a varclus in R, but I'm not sure if it does the same thing.
-Ralph Winters
35,568 | Which one should be applied first: data sampling or dimensionality reduction? | Generally, you want your training and validation data sets to be as separate as possible. Ideally, the validation data would have been obtained only after the model has been trained. If you perform dimensionality reduction before splitting your data into separate sets, you break this isolation between training and validation, and you won't know whether the dimensionality reduction process was over-fitted until your model is tested in real life.
Having said that, there are cases where efficient separation into training, testing and validation sets is not feasible and other sampling techniques, such as cross-validation, leave-k-out, etc., are used. In these cases reducing the dimensionality before the sampling might be the right approach.
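The isolation point can be sketched in a few lines. Whatever preprocessing you use (here plain standardization stands in for dimensionality reduction, and the numbers are made up), estimate its parameters on the training part only and merely apply them to the validation part:

```python
import statistics

train = [2.0, 4.0, 6.0, 8.0]
valid = [3.0, 9.0]

# parameters estimated from the training data only
mu = statistics.mean(train)
sd = statistics.pstdev(train)

train_z = [(v - mu) / sd for v in train]
valid_z = [(v - mu) / sd for v in valid]  # no peeking at validation statistics
```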
35,569 | Which one should be applied first: data sampling or dimensionality reduction? | Do the dimensionality reduction first: your error in estimating the principal components will be smaller due to the larger sample (the corr/cov matrix used in PCA has to be estimated!).
The other way around only makes sense for computational reasons.
35,570 | Which one should be applied first: data sampling or dimensionality reduction? | Devil's advocate: I could imagine the principal components differing depending on who's sampled. I'd think this validity issue would take precedence over the precision issue Richard points out.
35,571 | Which one should be applied first: data sampling or dimensionality reduction? | You should perform sampling and dimensionality reduction in combination.
The best way to do this is to undersample the majority class and run a decision tree. It is the best variable selector you can imagine.
Perform this a number of times (each time with another sample). The result will be a number of lists of candidate predictors.
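The resampling step might look like this (a sketch: the class sizes, labels and number of rounds are arbitrary, and the decision tree fit on each balanced sample is left out):

```python
import random

random.seed(0)
# hypothetical imbalanced data: 90 majority-class rows, 10 minority-class rows
rows = [("majority", i) for i in range(90)] + [("minority", i) for i in range(10)]
minority = [r for r in rows if r[0] == "minority"]
majority = [r for r in rows if r[0] == "majority"]

balanced_samples = []
for _ in range(5):  # each round: a fresh undersample -> one candidate training set
    sub = random.sample(majority, len(minority))
    balanced_samples.append(sub + minority)
# fit one tree per balanced sample, then compare the variables each tree selects
```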
And... yes: the combination of your decision trees is already a great model.
Find out why decision trees are the best data mining algorithm at http://bit.ly/a2qDWJ
35,572 | Proving that the squares of normal rv's is Chi-square distributed | We have $X_1\sim N(\mu,\sigma^2)$ and $X_2\sim N(\mu,\sigma^2)$, independent with a common mean $\mu$ (without a common mean, $EY_1$ below would not vanish), hence
$$EY_1=E(-X_1/\sqrt{2}+X_2/\sqrt{2})=-1/\sqrt{2}EX_1+1/\sqrt{2}EX_2=0$$
\begin{align*}
EY_1^2&=E(-X_1/\sqrt{2}+X_2/\sqrt{2})^2\\\\
&=E(X_1/\sqrt{2})^2-2E(X_1X_2/2)+E(X_2/\sqrt{2})^2\\\\
&=\tfrac{1}{2}(\sigma^2+\mu^2)-\mu^2+\tfrac{1}{2}(\sigma^2+\mu^2)=\sigma^2,
\end{align*}
using independence, so that $E(X_1X_2)=EX_1\,EX_2=\mu^2$.
Hence $Y_1\sim N(0,\sigma^2)$, since it is a linear combination of independent normal variables.
Similarly we get $Y_2\sim N(0,\sigma^2)$ and $Y_3\sim N(0,\sigma^2)$
Now
$$EY_1Y_2=1/\sqrt{6}E(X_1)^2-1/\sqrt{6}EX_2^2=0$$
and similarly $EY_2Y_3=EY_1Y_3=0$, hence $Y_1$, $Y_2$ and $Y_3$ are independent, since for jointly normal variables independence coincides with zero correlation.
Having established that, we have
$$(Y_1^2+Y_2^2+Y_3^2)/\sigma^2=\left(\frac{Y_1}{\sigma}\right)^2+\left(\frac{Y_2}{\sigma}\right)^2+\left(\frac{Y_3}{\sigma}\right)^2=Z_1^2+Z_2^2+Z_3^2,$$
where $Z_i=Y_i/\sigma$. Since $Y_i\sim N(0,\sigma^2)$, we have $Z_i\sim N(0,1)$.
We have shown that our quantity of interest is a sum of squares of 3 independent standard normal variables, which by definition is $\chi^2$ with 3 degrees of freedom.
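This conclusion is easy to sanity-check by simulation; a small sketch (the mean of a $\chi^2_3$ variable should be 3):

```python
import random

random.seed(42)
n = 200_000
# average of Z1^2 + Z2^2 + Z3^2 over many draws of independent standard normals
mean = sum(random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2
           for _ in range(n)) / n
# mean lands close to 3, the expectation of a chi-square with 3 degrees of freedom
```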
As I've said in the comments, you do not need to calculate the densities. If on the other hand you want to do that, your formula is wrong. Here is why. Denote by $G(x)$ the distribution of $Y_1^2$ and by $F(x)$ the distribution of $Y_1$. Then we have
$$G(x)=P(Y_1^2<x)=P(-\sqrt{x}<Y_1<\sqrt{x})=F(\sqrt{x})-F(-\sqrt{x})$$
Now the density of $Y_1^2$ is $G'(x)$, so
$$G'(x)=\frac{1}{2\sqrt{x}}\left(F'(\sqrt{x})+F'(-\sqrt{x})\right)$$
We have that
$$F'(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{x^2}{2\sigma^2}},$$
so
$$G'(x)=\frac{1}{\sigma\sqrt{2\pi x}}e^{-\frac{x}{2\sigma^2}}$$
If $\sigma^2=1$ we have the pdf of $\chi^2$ with one degree of freedom. (Note that for $Z_1$ instead of $Y_1$ the calculation is similar, with $\sigma^2=1$.) As @whuber pointed out, this is a gamma distribution, and a sum of independent gamma distributions (with a common scale) is again gamma; the exact formula is provided on the Wikipedia page.
35,573 | Proving that the squares of normal rv's is Chi-square distributed | I want to offer a solution from the point of view of matrix algebra.
Let
$$ U' = \left[\begin{matrix}
-1/\sqrt{2} & 1/\sqrt{2} & 0 \\
-1/\sqrt{3} & -1/\sqrt{3} & 1/\sqrt{3} \\
1/\sqrt{6} & 1/\sqrt{6} & 2/\sqrt{6}
\end{matrix}\right]$$
As $Y = U'X$,
$$E(Y) = E(U'X) = U'E(X) = U' \cdot 0 = 0$$
$$Var(Y) = U' Var(X) U = \sigma^2 U' U = \sigma^2 I $$
You can see that $Y_1, Y_2, Y_3$ in $Y$ all follow $N(0, \sigma^2)$ and are independent of each other. Then you can prove $Y_i^2/\sigma^2 \sim \chi^2(1)$, and finally, $(Y_1^2 + Y_2^2 + Y_3^2)/\sigma^2 \sim \chi^2(3)$
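The key fact used above, $U'U = I$ (i.e. the rows of $U'$ are orthonormal), can be checked numerically; a quick pure-Python sketch:

```python
import math

s2, s3, s6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
# the matrix U' from above, rows as lists
Ut = [[-1 / s2,  1 / s2,  0.0],
      [-1 / s3, -1 / s3,  1 / s3],
      [ 1 / s6,  1 / s6,  2 / s6]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# row i . row j should be 1 on the diagonal and 0 off it
gram = [[dot(Ut[i], Ut[j]) for j in range(3)] for i in range(3)]
```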
35,574 | Proving that the squares of normal rv's is Chi-square distributed | Have you tried simply multiplying out the squared Y^2's in terms of the X[1:3] terms. I suspect that when you are all done that you will see that you simply have: (1/2 +1/3 +1/6)* X1^2 + (1/2 +1/3 +1/6)*X2^2 + (1/2 +1/3 +1/6)*X3^2 . This, of course, assumes that X1X3=X3X1, i.e. that your random variable algebra is commutative, but unless you are working on complex variables in particle physics, that assumption should hold. So far I have gotten about halfway there, and my approach seems to be holding up. It would seem to be useful that you go through the exercise, rather than for me to display it. | Proving that the squares of normal rv's is Chi-square distributed | Have you tried simply multiplying out the squared Y^2's in terms of the X[1:3] terms. I suspect that when you are all done that you will see that you simply have: (1/2 +1/3 +1/6)* X1^2 + (1/2 +1/3 +1/ | Proving that the squares of normal rv's is Chi-square distributed
Have you tried simply multiplying out the squared Y^2's in terms of the X[1:3] terms. I suspect that when you are all done that you will see that you simply have: (1/2 +1/3 +1/6)* X1^2 + (1/2 +1/3 +1/6)*X2^2 + (1/2 +1/3 +1/6)*X3^2 . This, of course, assumes that X1X3=X3X1, i.e. that your random variable algebra is commutative, but unless you are working on complex variables in particle physics, that assumption should hold. So far I have gotten about halfway there, and my approach seems to be holding up. It would seem to be useful that you go through the exercise, rather than for me to display it. | Proving that the squares of normal rv's is Chi-square distributed
Have you tried simply multiplying out the squared Y^2's in terms of the X[1:3] terms. I suspect that when you are all done that you will see that you simply have: (1/2 +1/3 +1/6)* X1^2 + (1/2 +1/3 +1/ |
35,575 | Problems with libRblas.so on ubuntu with rpy2 [closed] | It looks like you tried to do things locally but didn't quite get there. I happen to maintain the Debian packages of R (which get rebuilt for Ubuntu and are accessible at CRAN). These builds use an external BLAS; rpy2 then builds just fine as well.
I would recommend that you read the README, try to install r-base-core and r-base-dev from the repositories and then try to install rpy2 from source. Or live with the slightly older rpy2 package in Ubuntu. | Problems with libRblas.so on ubuntu with rpy2 [closed] | It looks like you tried to do things locally but didn't quite get there. I happen to maintain the Debian packages of R (which get rebuilt for Ubuntu and are accessible at CRAN. These builds use extern | Problems with libRblas.so on ubuntu with rpy2 [closed]
It looks like you tried to do things locally but didn't quite get there. I happen to maintain the Debian packages of R (which get rebuilt for Ubuntu and are accessible at CRAN). These builds use an external BLAS; rpy2 then builds just fine as well.
I would recommend that you read the README, try to install r-base-core and r-base-dev from the repositories and then try to install rpy2 from source. Or live with the slightly older rpy2 package in Ubuntu. | Problems with libRblas.so on ubuntu with rpy2 [closed]
It looks like you tried to do things locally but didn't quite get there. I happen to maintain the Debian packages of R (which get rebuilt for Ubuntu and are accessible at CRAN. These builds use extern |
35,576 | Problems with libRblas.so on ubuntu with rpy2 [closed] | For others running into this issue, I was able to solve it by making sure to add the R libraries to my library path in bashrc:
export LD_LIBRARY_PATH="R-install-location/lib64/R/lib:$LD_LIBRARY_PATH" | Problems with libRblas.so on ubuntu with rpy2 [closed] | For others running into this issue, I was able to solve it by making sure to add the R libraries to my library path in bashrc:
export LD_LIBRARY_PATH="R-install-location/lib64/R/lib:$LD_LIBRARY_PATH" | Problems with libRblas.so on ubuntu with rpy2 [closed]
For others running into this issue, I was able to solve it by making sure to add the R libraries to my library path in bashrc:
export LD_LIBRARY_PATH="R-install-location/lib64/R/lib:$LD_LIBRARY_PATH" | Problems with libRblas.so on ubuntu with rpy2 [closed]
For others running into this issue, I was able to solve it by making sure to add the R libraries to my library path in bashrc:
export LD_LIBRARY_PATH="R-install-location/lib64/R/lib:$LD_LIBRARY_PATH"
35,577 | Why "supremum", not "maximum" in Kolmogorov-Smirnov test? | Suppose the difference is an increasing continuous function on an open interval and zero everywhere else. Then the maximum will not exist, but the supremum will. | Why "supremum", not "maximum" in Kolmogorov-Smirnov test? | Suppose the difference is an increasing continuous function on an open interval and zero everywhere else. Then the maximum will not exist, but the supremum will. | Why "supremum", not "maximum" in Kolmogorov-Smirnov test?
Suppose the difference is an increasing continuous function on an open interval and zero everywhere else. Then the maximum will not exist, but the supremum will. | Why "supremum", not "maximum" in Kolmogorov-Smirnov test?
Suppose the difference is an increasing continuous function on an open interval and zero everywhere else. Then the maximum will not exist, but the supremum will.
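To make the point concrete (an illustration added here, not part of the original answer), take the difference function

$$
D(x) = \begin{cases} x, & x \in (0,1), \\ 0, & \text{otherwise.} \end{cases}
$$

Then $\sup_x D(x) = 1$, but the value $1$ is never attained ($D(x) \to 1$ as $x \uparrow 1$, while $D(1) = 0$), so $\max_x D(x)$ does not exist; the supremum is still well defined.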
35,578 | Yates' continuity correction in confidence interval returned by prop.test | The help page indicates that "Continuity correction is used only if it does not exceed the difference between sample and null proportions in absolute value." This is what line 5 is checking: x/n is the empirical proportion, p is the null proportion. (Actually, I find the "if" slightly misleading since it's more of a "insofar as it does not exceed" when looking at line 5.) | Yates' continuity correction in confidence interval returned by prop.test | The help page indicates that "Continuity correction is used only if it does not exceed the difference between sample and null proportions in absolute value." This is what line 5 is checking: x/n is th | Yates' continuity correction in confidence interval returned by prop.test
The help page indicates that "Continuity correction is used only if it does not exceed the difference between sample and null proportions in absolute value." This is what line 5 is checking: x/n is the empirical proportion, p is the null proportion. (Actually, I find the "if" slightly misleading since it's more of a "insofar as it does not exceed" when looking at line 5.) | Yates' continuity correction in confidence interval returned by prop.test
The help page indicates that "Continuity correction is used only if it does not exceed the difference between sample and null proportions in absolute value." This is what line 5 is checking: x/n is th |
35,579 | Yates' continuity correction in confidence interval returned by prop.test | On the second question of where you can find more info on this continuity correction (attributed to Yates in the help for prop.test but not in the refs below, I think as Yates originally proposed a continuity correction only to the chi-squared test for contingency tables):
Newcombe RG. Two-sided confidence intervals for the single proportion: comparison of seven methods. Stat Med 1998; 17(8):857-872. PMID:9595616
Brown LD, Cai TT, DasGupta A. Interval estimation for a binomial proportion (with Comments & Rejoinder). Statistical Science 2001; 16(2):101-133. doi:10.1214/ss/1009213286
The continuity-corrected Wilson score interval is 'method 4' in Newcomb. Brown et al. consider only the uncorrected Wilson score interval in the main text, but George Casella suggests using the continuity-corrected version in his Comment (p121), which Brown et al. discuss in their Rejoinder (p130):
Casella suggests the possibility of performing a continuity correction on the score statistic prior to constructing a confidence interval. We do not agree with this proposal from any perspective. These “continuity-corrected Wilson” intervals have extremely conservative coverage properties, though they may not in principle be guaranteed to be everywhere conservative. But even if one’s goal, unlike ours, is to produce conservative intervals, these intervals will be very inefficient at their normal level relative to Blyth–Still or even Clopper–Pearson.
The Clopper-Pearson 'exact' interval is provided by binom.test in R. I'd suggest using that rather than prop.test if you want a conservative interval, i.e. one that guarantees at least 95% coverage. If you'd prefer an interval that has close to 95% coverage on average (over p) and will therefore often be narrower, you could use prop.test(…, correct=FALSE) to give the uncorrected Wilson score interval.
The standard textbook for such matters is Fleiss Statistical Methods for Rates and Proportions. Newcomb references the original 1981 edition but the latest edition is the 3rd (2003). I haven't checked it myself, however. | Yates' continuity correction in confidence interval returned by prop.test | On the second question of where you can find more info on this continuity correction (attributed to Yates in the help for prop.test but not in the refs below, I think as Yates orginally proposed a con | Yates' continuity correction in confidence interval returned by prop.test
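For illustration (not part of the original answer), the uncorrected Wilson score interval (what prop.test(…, correct=FALSE) returns) is simple enough to compute by hand; here is a Python sketch with made-up counts x = 13 successes out of n = 100:

```python
import math

def wilson_interval(x, n, z=1.959964):
    """Uncorrected Wilson score interval for a binomial proportion (95% by default)."""
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_interval(13, 100)
print(lo, hi)  # about (0.078, 0.210)
```

Note that the interval is not symmetric around the empirical proportion 0.13, which is exactly the behavior the score interval is known for near the boundaries.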
On the second question of where you can find more info on this continuity correction (attributed to Yates in the help for prop.test but not in the refs below, I think as Yates originally proposed a continuity correction only to the chi-squared test for contingency tables):
Newcombe RG. Two-sided confidence intervals for the single proportion: comparison of seven methods. Stat Med 1998; 17(8):857-872. PMID:9595616
Brown LD, Cai TT, DasGupta A. Interval estimation for a binomial proportion (with Comments & Rejoinder). Statistical Science 2001; 16(2):101-133. doi:10.1214/ss/1009213286
The continuity-corrected Wilson score interval is 'method 4' in Newcomb. Brown et al. consider only the uncorrected Wilson score interval in the main text, but George Casella suggests using the continuity-corrected version in his Comment (p121), which Brown et al. discuss in their Rejoinder (p130):
Casella suggests the possibility of performing a continuity correction on the score statistic prior to constructing a confidence interval. We do not agree with this proposal from any perspective. These “continuity-corrected Wilson” intervals have extremely conservative coverage properties, though they may not in principle be guaranteed to be everywhere conservative. But even if one’s goal, unlike ours, is to produce conservative intervals, these intervals will be very inefficient at their normal level relative to Blyth–Still or even Clopper–Pearson.
The Clopper-Pearson 'exact' interval is provided by binom.test in R. I'd suggest using that rather than prop.test if you want a conservative interval, i.e. one that guarantees at least 95% coverage. If you'd prefer an interval that has close to 95% coverage on average (over p) and will therefore often be narrower, you could use prop.test(…, correct=FALSE) to give the uncorrected Wilson score interval.
The standard textbook for such matters is Fleiss Statistical Methods for Rates and Proportions. Newcomb references the original 1981 edition but the latest edition is the 3rd (2003). I haven't checked it myself, however. | Yates' continuity correction in confidence interval returned by prop.test
On the second question of where you can find more info on this continuity correction (attributed to Yates in the help for prop.test but not in the refs below, I think as Yates orginally proposed a con |
35,580 | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints? | If, given an answer to my comment above, you wish to bound the range of the distribution, why not simply fit a Beta distribution where you rescale to the unit interval? In other words, if you know that the parameter of interest should fall between $[2 , 8]$, then why not define $Y = \frac{X - 5}{6} + \frac{1}{2} = \frac{X - 2}{6}$. Where I've first centered the interval on zero, divided by the width so that Y will have a range of 1, and then added $\frac{1}{2}$ back so that the range of Y is $[0,1]$. (You can think of it either way: directly from $[2,8] \rightarrow [0,1]$ or from $[2,8] \rightarrow [-\frac{1}{2},\frac{1}{2}] \rightarrow [0,1]$, but I thought the latter might be easier at first).
Then, with two data points, you could fit a beta posterior with a uniform beta prior? | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints? | If, given an answer to my comment above, you wish to bound the range of the distribution, why not simply fit a Beta distribution where you rescale to the unit interval? In other words, if you know th | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints?
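A small Python illustration of the rescaling step (the two observations 3.0 and 3.6 are the ones mentioned elsewhere in this thread; note the fit below is a crude method-of-moments stand-in rather than the Bayesian beta update the answer suggests, since two points barely pin down two shape parameters):

```python
import numpy as np

data = np.array([3.0, 3.6])      # the two observations
lo, hi = 2.0, 8.0                # expert bounds on the parameter
y = (data - lo) / (hi - lo)      # rescale [2, 8] -> [0, 1]

# method-of-moments fit of Beta(alpha, beta) to the rescaled points
m = y.mean()
v = y.var()                      # population variance; with n = 2 this is only a sketch
common = m * (1 - m) / v - 1
alpha, beta = m * common, (1 - m) * common
print(alpha, beta)               # fitted Beta shape parameters
```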
If, given an answer to my comment above, you wish to bound the range of the distribution, why not simply fit a Beta distribution where you rescale to the unit interval? In other words, if you know that the parameter of interest should fall between $[2 , 8]$, then why not define $Y = \frac{X - 5}{6} + \frac{1}{2} = \frac{X - 2}{6}$. Where I've first centered the interval on zero, divided by the width so that Y will have a range of 1, and then added $\frac{1}{2}$ back so that the range of Y is $[0,1]$. (You can think of it either way: directly from $[2,8] \rightarrow [0,1]$ or from $[2,8] \rightarrow [-\frac{1}{2},\frac{1}{2}] \rightarrow [0,1]$, but I thought the latter might be easier at first).
Then, with two data points, you could fit a beta posterior with a uniform beta prior? | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints?
If, given an answer to my comment above, you wish to bound the range of the distribution, why not simply fit a Beta distribution where you rescale to the unit interval? In other words, if you know th |
35,581 | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints? | What about the Kumaraswamy distribution, which has the following pdf:
$$
f(x; a,b) = a b x^{a-1}{ (1-x^a)}^{b-1}
$$
for $a>0$, $b>0$, $0 < x < 1$. This distribution can be rescaled to have the required support. | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints? | What about the Kumaraswamy distribution, which has the following pdf:
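A sketch of that rescaling in Python (the shape values $a=2$, $b=3$ and the support $[2,8]$ are illustrative assumptions, not fitted to anything): dividing by the interval width supplies the Jacobian, so the rescaled function still integrates to 1.

```python
import numpy as np

def kumaraswamy_pdf(u, a, b):
    # density of the Kumaraswamy(a, b) distribution on (0, 1)
    return a * b * u**(a - 1) * (1 - u**a)**(b - 1)

def rescaled_pdf(x, a, b, lo=2.0, hi=8.0):
    # shift/scale the support from (0, 1) to (lo, hi); 1/(hi - lo) is the Jacobian
    u = (x - lo) / (hi - lo)
    return kumaraswamy_pdf(u, a, b) / (hi - lo)

xs = np.linspace(2.0, 8.0, 100_001)
ys = rescaled_pdf(xs, 2.0, 3.0)
integral = ((ys[:-1] + ys[1:]) / 2 * np.diff(xs)).sum()  # trapezoid rule
print(integral)  # close to 1.0, so the rescaled function is a valid density
```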
$$
f(x; a,b) = a b x^{a-1}{ (1-x^a)}^{b-1}
$$
for $a>0$, $b>0$, $0 < x < 1$. This distribution can be rescaled to have the req | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints?
What about the Kumaraswamy distribution, which has the following pdf:
$$
f(x; a,b) = a b x^{a-1}{ (1-x^a)}^{b-1}
$$
for $a>0$, $b>0$, $0 < x < 1$. This distribution can be rescaled to have the required support. | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints?
What about the Kumaraswamy distribution, which has the following pdf:
$$
f(x; a,b) = a b x^{a-1}{ (1-x^a)}^{b-1}
$$
for $a>0$, $b>0$, $0 < x < 1$. This distribution can be rescaled to have the req |
35,582 | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints? | Since the log-normal distribution has two parameters, you can't satisfactorily fit it to three constraints that don't naturally fit it. With extreme quantiles of 2.5 and 7.5, the mode is ~4, and there is not much you can do about it. Since the scale of the errors for a and b is much smaller than for c, one of them will be pretty much ignored during the optimization.
For a better fit, you can either pick a three-parameter distribution, for example the generalized gamma distribution (implemented in the VGAM package), or add a shift parameter to the lognormal (or gamma, ...) distribution.
As a last note, since the distribution you are looking for is clearly not symmetric, the average of the two given observations is not the right value for the mode. I would maximize the sum of the densities at 3.0 and 3.6 while maintaining the extreme quantiles at 2.5 and 7.5 - this is possible if you have three parameters. | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints? | Since the log-normal distribution has two parameters, you can't satisfactorily fit it to three constraints that don't naturally fit it. With extreme quantiles of 2.5 and 7.5, the mode is ~4, and there | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints?
Since the log-normal distribution has two parameters, you can't satisfactorily fit it to three constraints that don't naturally fit it. With extreme quantiles of 2.5 and 7.5, the mode is ~4, and there is not much you can do about it. Since the scale of the errors for a and b is much smaller than for c, one of them will be pretty much ignored during the optimization.
For a better fit, you can either pick a three-parameter distribution, for example the generalized gamma distribution (implemented in the VGAM package), or add a shift parameter to the lognormal (or gamma, ...) distribution.
As a last note, since the distribution you are looking for is clearly not symmetric, the average of the two given observations is not the right value for the mode. I would maximize the sum of the densities at 3.0 and 3.6 while maintaining the extreme quantiles at 2.5 and 7.5 - this is possible if you have three parameters. | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints?
Since the log-normal distribution has two parameters, you can't satisfactorily fit it to three constraints that don't naturally fit it. With extreme quantiles of 2.5 and 7.5, the mode is ~4, and there |
35,583 | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints? | You could also try the triangular distribution. To fit this, you basically specify a lower bound (this would be X=2), an upper bound (this would be X=8), and a "most likely" value. The Wikipedia page http://en.wikipedia.org/wiki/Triangular_distribution has more info on this distribution. If there is not much faith in the "most likely" value (as it appears to be, prior to observing any data), it may be a good idea to place a non-informative prior distribution on it, and then use the two data points to estimate this value. One good one is the Jeffreys prior, which for this problem would be p(c) = 1/(pi*sqrt((c-2)*(8-c))) (the second factor must be 8-c so the argument of the square root is positive for 2 < c < 8), where "c" is the "most likely value" (consistent with the Wikipedia notation).
Given this prior, you can work out the posterior distribution of c analytically, or via simulation. The analytic form of the likelihood is not particularly nice, so simulation seems to be more attractive. This example is particularly well suited to rejection sampling (see wiki page for a general description of rejection sampling), because the maximised likelihood is 1/3^n regardless of the value of c, which provides the "upper bound". So you generate a "candidate" from the Jeffreys prior (call it c_i), and then evaluate the likelihood at this candidate L(x1,..,xn|c_i), and divide by the maximised likelihood, to give (3^n)*L(x1,..,xn|c_i). You then generate a U(0,1) random variable, and if u is less than (3^n)*L(x1,..,xn|c_i), then accept c_i as a posterior sampled value, otherwise throw away c_i and start again. Repeat this process until you have enough accepted samples (100, 500, 1,000, or more depending on how accurate you want). Then just take the sample average of whatever function of c you are interested in (the likelihood of a new observation is an obvious candidate for your application).
An alternative to accept-reject is to use the value of the likelihood as a weight (and don't generate the u), and then proceed with taking weighted averages using all candidates, rather than un-weighted averages with the accepted candidates | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints? | You could also try the triangular distribution. To fit this, you basically specify a lower bound (this would be X=2), an upper bound (this would be X=8), and a "most likely" value. The wikepedia pag | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints?
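The accept-reject recipe above can be sketched in a few lines of Python (an illustration of the described scheme, not code from the answer; the two observations 3.0 and 3.6 are the ones mentioned elsewhere in this thread, and the Jeffreys prior draw uses the arcsine representation $c = a + (b-a)\sin^2(\pi U/2)$):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 2.0, 8.0
data = np.array([3.0, 3.6])          # the two observations

def tri_pdf(x, c):
    # triangular density on [a, b] with mode c
    left = 2 * (x - a) / ((b - a) * (c - a))
    right = 2 * (b - x) / ((b - a) * (b - c))
    return np.where(x <= c, left, right)

def jeffreys_draw():
    # arcsine (Jeffreys) prior on (a, b): c = a + (b - a) * sin(pi*U/2)**2
    return a + (b - a) * np.sin(np.pi * rng.uniform() / 2) ** 2

bound = (1 / 3) ** len(data)         # each density factor is at most 2/(b - a) = 1/3
posterior = []
while len(posterior) < 1000:
    c = jeffreys_draw()
    like = float(np.prod(tri_pdf(data, c)))
    if rng.uniform() < like / bound:  # accept with probability L(c) / max L
        posterior.append(c)
print(np.mean(posterior))             # posterior mean of the "most likely" value c
```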
You could also try the triangular distribution. To fit this, you basically specify a lower bound (this would be X=2), an upper bound (this would be X=8), and a "most likely" value. The Wikipedia page http://en.wikipedia.org/wiki/Triangular_distribution has more info on this distribution. If there is not much faith in the "most likely" value (as it appears to be, prior to observing any data), it may be a good idea to place a non-informative prior distribution on it, and then use the two data points to estimate this value. One good one is the Jeffreys prior, which for this problem would be p(c) = 1/(pi*sqrt((c-2)*(8-c))) (the second factor must be 8-c so the argument of the square root is positive for 2 < c < 8), where "c" is the "most likely value" (consistent with the Wikipedia notation).
Given this prior, you can work out the posterior distribution of c analytically, or via simulation. The analytic form of the likelihood is not particularly nice, so simulation seems to be more attractive. This example is particularly well suited to rejection sampling (see wiki page for a general description of rejection sampling), because the maximised likelihood is 1/3^n regardless of the value of c, which provides the "upper bound". So you generate a "candidate" from the Jeffreys prior (call it c_i), and then evaluate the likelihood at this candidate L(x1,..,xn|c_i), and divide by the maximised likelihood, to give (3^n)*L(x1,..,xn|c_i). You then generate a U(0,1) random variable, and if u is less than (3^n)*L(x1,..,xn|c_i), then accept c_i as a posterior sampled value, otherwise throw away c_i and start again. Repeat this process until you have enough accepted samples (100, 500, 1,000, or more depending on how accurate you want). Then just take the sample average of whatever function of c you are interested in (the likelihood of a new observation is an obvious candidate for your application).
An alternative to accept-reject is to use the value of the likelihood as a weight (and don't generate the u), and then proceed with taking weighted averages using all candidates, rather than un-weighted averages with the accepted candidates | Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints?
You could also try the triangular distribution. To fit this, you basically specify a lower bound (this would be X=2), an upper bound (this would be X=8), and a "most likely" value. The wikepedia pag |
35,584 | How to compute the steady state gain from the transfer function of a discrete time system? | The "mechanical" result of just plugging in $z = 1$ into the transfer response is essentially a product of two facts. The steady-state gain is (usually, I believe) defined as the (magnitude of the) limiting response as $t \to \infty$ of the system to a unit-step input.
The so-called final-value theorem states that, if the limit $\lim_{n \to \infty} y(n)$ exists, then $\lim_{n \to \infty} y(n) = \lim_{z \to 1} (1-z^{-1}) Y(z)$ where $y(n)$ is the time-domain output and $Y(z)$ is its corresponding $z$-transform.
Now, for the steady-state gain, the input is a unit-step function, so $x(n) = 1$ for each $n \geq 0$. Hence,
$$
X(z) = \sum_{n=0}^\infty x(n) z^{-n} = \sum_{n=0}^\infty z^{-n} = \frac{1}{1-z^{-1}} .
$$
Using the transfer equation, we get that the $z$-transform of the output is
$$
Y(z) = X(z) H(z) = \frac{H(z)}{1-z^{-1}} .
$$
(Assuming that the limit $\lim_{n\to\infty} y(n)$ exists) we have that
$$
\lim_{n \to \infty} y(n) = \lim_{z \to 1} (1-z^{-1}) Y(z) = \lim_{z \to 1}\, H(z) .
$$
The left-hand side is the steady-state value of a step-response (i.e., it is the value of the response as time goes to $\infty$ of a one-unit constant input), and so the steady-state gain is $|\lim_{n \to \infty} y(n)| = \lim_{n \to \infty} |y(n)|$.
Technically, you need to check that the limit exists (which I've tried to emphasize). It seems to me that a sufficient condition would be that all the poles of the transfer response be strictly inside the unit circle. (Caveat lector: I haven't checked that closely at all.)
If this does not sufficiently clarify things, you might try doing Google searches on terms like "dc gain" and "final-value theorem", which are closely related to what you want. | How to compute the steady state gain from the transfer function of a discrete time system? | The "mechanical" result of just plugging in $z = 1$ into the transfer response is essentially a product of two facts. The steady-state gain is (usually, I believe) defined as the (magnitude of the) li | How to compute the steady state gain from the transfer function of a discrete time system?
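As a numeric illustration of the result $\lim_{n\to\infty} y(n) = H(1)$ (a made-up first-order example, not from the answer): take $H(z) = \frac{0.2}{1 - 0.8 z^{-1}}$, whose pole at $z = 0.8$ lies strictly inside the unit circle, and compare $H(1)$ with the step response computed by direct recursion.

```python
# first-order system y[n] = 0.8*y[n-1] + 0.2*x[n], i.e. H(z) = 0.2 / (1 - 0.8 z^{-1})
b = [0.2]          # numerator coefficients (in powers of z^{-1})
a = [1.0, -0.8]    # denominator coefficients (in powers of z^{-1})

# steady-state (DC) gain: evaluate H(z) at z = 1, i.e. sum(b) / sum(a)
dc_gain = sum(b) / sum(a)

# step response by direct recursion with unit-step input x[n] = 1
y = 0.0
for _ in range(500):
    y = 0.8 * y + 0.2 * 1.0

print(dc_gain, y)  # both approach 1.0
```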
The "mechanical" result of just plugging in $z = 1$ into the transfer response is essentially a product of two facts. The steady-state gain is (usually, I believe) defined as the (magnitude of the) limiting response as $t \to \infty$ of the system to a unit-step input.
The so-called final-value theorem states that, if the limit $\lim_{n \to \infty} y(n)$ exists, then $\lim_{n \to \infty} y(n) = \lim_{z \to 1} (1-z^{-1}) Y(z)$ where $y(n)$ is the time-domain output and $Y(z)$ is its corresponding $z$-transform.
Now, for the steady-state gain, the input is a unit-step function, so $x(n) = 1$ for each $n \geq 0$. Hence,
$$
X(z) = \sum_{n=0}^\infty x(n) z^{-n} = \sum_{n=0}^\infty z^{-n} = \frac{1}{1-z^{-1}} .
$$
Using the transfer equation, we get that the $z$-transform of the output is
$$
Y(z) = X(z) H(z) = \frac{H(z)}{1-z^{-1}} .
$$
(Assuming that the limit $\lim_{n\to\infty} y(n)$ exists) we have that
$$
\lim_{n \to \infty} y(n) = \lim_{z \to 1} (1-z^{-1}) Y(z) = \lim_{z \to 1}\, H(z) .
$$
The left-hand side is the steady-state value of a step-response (i.e., it is the value of the response as time goes to $\infty$ of a one-unit constant input), and so the steady-state gain is $|\lim_{n \to \infty} y(n)| = \lim_{n \to \infty} |y(n)|$.
Technically, you need to check that the limit exists (which I've tried to emphasize). It seems to me that a sufficient condition would be that all the poles of the transfer response be strictly inside the unit circle. (Caveat lector: I haven't checked that closely at all.)
If this does not sufficiently clarify things, you might try doing Google searches on terms like "dc gain" and "final-value theorem", which are closely related to what you want. | How to compute the steady state gain from the transfer function of a discrete time system?
The "mechanical" result of just plugging in $z = 1$ into the transfer response is essentially a product of two facts. The steady-state gain is (usually, I believe) defined as the (magnitude of the) li |
35,585 | How to compute the steady state gain from the transfer function of a discrete time system? | I asked this question at another stack exchange site. It's fairly simple:
Steady state gain is the gain the system has when DC is applied to it, which has a frequency of f=0 or omega = 0
The variable z in the z-transform is defined as z = r * exp(j*omega). Set omega to 0 and you have z = r
r = 1 because to get the frequency response, we have to evaluate the z-transform on the unit circle
This is a very brief summary for the answer that has been given here:
https://dsp.stackexchange.com/questions/46337/what-does-g1-1-say-about-a-system#46339 | How to compute the steady state gain from the transfer function of a discrete time system? | I asked this question at another stack exchange site. It's fairly simple:
Steady state gain is the gain the system has when DC is applied to it, which has a frequency of f=0 or omega = 0
The variab | How to compute the steady state gain from the transfer function of a discrete time system?
I asked this question at another stack exchange site. It's fairly simple:
Steady state gain is the gain the system has when DC is applied to it, which has a frequency of f=0 or omega = 0
The variable z in the z-transform is defined as z = r * exp(j*omega). Set omega to 0 and you have z = r
r = 1 because to get the frequency response, we have to evaluate the z-transform on the unit circle
This is a very brief summary for the answer that has been given here:
https://dsp.stackexchange.com/questions/46337/what-does-g1-1-say-about-a-system#46339 | How to compute the steady state gain from the transfer function of a discrete time system?
I asked this question at another stack exchange site. It's fairly simple:
Steady state gain is the gain the system has when DC is applied to it, which has a frequency of f=0 or omega = 0
The variab |
35,586 | How to compute the steady state gain from the transfer function of a discrete time system? | The term Steady state gain comes up when your input function is the unit step function $u(t)=1$. If your input is the unit step function, then the gain is the system's value at steady state, $t= \infty$. The steady state value is also called the final value.
The Final Value Theorem for the $z$-transform lets you calculate this steady state value quite easily:
$\lim_{n \to \infty} y(n) = \lim_{z \to 1} (z-1)Y(z)$, where $y(n)$ is in the time domain and $Y(z)$ is its $z$-transform.
So if your transfer function is $H(z) = \frac{Y(z)}{X(z)} = \frac{.8}{z(z-.8)}$,
you can find $y(n\rightarrow \infty)$ with $\lim_{z \to 1} (z-1)Y(z)$.
First, solve for $Y(z)$, which is the output of the system:
$Y(z) = H(z)X(z) = \frac{.8}{z(z-.8)}\cdot X(z)=\frac{.8}{z(z-.8)}\cdot\frac{z}{z-1}$
(recall that $X(z)=\frac{z}{z-1}$ because $x(n) = u(n)$, the unit step function.)
Now multiplying $Y(z)$ by $(z-1)$ cancels the $(z-1)$ factors and gives $\lim_{z \to 1} \frac{z \cdot .8}{z(z-.8)}$
One of the $z$-terms cancels and you get $\lim_{z \to 1} \frac{.8}{z-.8}$
Plugging in $z=1$ gives $\frac{.8}{1-.8} = 4$, so the steady-state gain is 4. The Final Value Theorem applies because the system's poles, $z=0$ and $z=.8$, lie strictly inside the unit circle; equivalently, the steady-state gain is just $H(1) = \frac{.8}{1\cdot(1-.8)} = 4$. | How to compute the steady state gain from the transfer function of a discrete time system? | The term Steady state gain comes up when your input function is the unit step function $u(t)=1$. If your input is the unit step function, then the gain is the system's value at steady state, $t= \inft | How to compute the steady state gain from the transfer function of a discrete time system?
The term Steady state gain comes up when your input function is the unit step function $u(t)=1$. If your input is the unit step function, then the gain is the system's value at steady state, $t= \infty$. The steady state value is also called the final value.
The Final Value Theorem for the $z$-transform lets you calculate this steady state value quite easily:
$\lim_{n \to \infty} y(n) = \lim_{z \to 1} (z-1)Y(z)$, where $y(n)$ is in the time domain and $Y(z)$ is its $z$-transform.
So if your transfer function is $H(z) = \frac{Y(z)}{X(z)} = \frac{.8}{z(z-.8)}$,
you can find $y(n\rightarrow \infty)$ with $\lim_{z \to 1} (z-1)Y(z)$.
First, solve for $Y(z)$, which is the output of the system:
$Y(z) = H(z)X(z) = \frac{.8}{z(z-.8)}\cdot X(z)=\frac{.8}{z(z-.8)}\cdot\frac{z}{z-1}$
(recall that $X(z)=\frac{z}{z-1}$ because $x(n) = u(n)$, the unit step function.)
Now multiplying $Y(z)$ by $(z-1)$ cancels the $(z-1)$ factors and gives $\lim_{z \to 1} \frac{z \cdot .8}{z(z-.8)}$
One of the $z$-terms cancels and you get $\lim_{z \to 1} \frac{.8}{z-.8}$
Plugging in $z=1$ gives $\frac{.8}{1-.8} = 4$, so the steady-state gain is 4. The Final Value Theorem applies because the system's poles, $z=0$ and $z=.8$, lie strictly inside the unit circle; equivalently, the steady-state gain is just $H(1) = \frac{.8}{1\cdot(1-.8)} = 4$. | How to compute the steady state gain from the transfer function of a discrete time system?
The term Steady state gain comes up when your input function is the unit step function $u(t)=1$. If your input is the unit step function, then the gain is the system's value at steady state, $t= \inft |
35,587 | How to compute the steady state gain from the transfer function of a discrete time system? | To find the steady state gain of a continuous-time transfer function $H(s)$, we let $s=0$.
Since $z=e^{sT}$, to find the steady state gain of $H(z)$ we let $z=e^{0}=1$, i.e. we evaluate $H(1)$.
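As a quick numeric check, using the example transfer function $H(z) = \frac{.8}{z(z-.8)}$ from this thread, evaluating at $z=1$ gives the steady state gain directly:

```python
# Steady state (DC) gain of a discrete-time system: evaluate H(z) at z = 1.
def H(z):
    return 0.8 / (z * (z - 0.8))

gain = H(1)
print(gain)  # approximately 4, i.e. 0.8 / 0.2
```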
35,588 | Subsetting a data-frame in R based on dates [closed] | OK, a time series seems to have done the trick:
aets <- as.xts(read.zoo("n8_energy_actual2009_2010.csv", header=TRUE, sep=",", FUN=as.POSIXct))
aets.2010 <- aets["2010-01::2010-10"]
35,589 | Subsetting a data-frame in R based on dates [closed] | A few points:
1. I'm not sure why that's happening. Clearly the POSIXlt slots are wrong. I typically use POSIXct unless I absolutely need to adjust the slots.
2. One option is to use the dates directly rather than messing with the slots, and use <= and >= to subset. Something like ae[ae$date >= as.POSIXlt("2009-10-01") & ae$date < as.POSIXlt("2009-11-01"),]
3. You should consider using a time series for this, since that's the exact purpose of that data structure (and they provide many useful functions for dealing with data over time). One of the most common is zoo. xts also includes a number of functions that can help with this kind of thing.
35,590 | Statistical test for difference between two odds ratios? | Assuming the odds ratios are independent, you can proceed as you would in general with any estimate, only you have to look at the log odds.
Take the difference of the log odds, $\delta$. The standard error of $\delta$ is $\sqrt{SE_{1}^2 + SE_{2}^2}$. Then you can obtain a p-value for the ratio $z = \delta/SE(\delta)$ from the standard normal.
UPDATE
The standard error of $\log OR$ is the square root of the sum of the reciprocals of the frequencies:
$SE(\log OR) = \sqrt{ {1 \over n_1} + {1 \over n_2} + {1 \over n_3} + {1 \over n_4} }$
In your case, each $n_i$ corresponds to one of TP, FP, TN, FN.
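A sketch of this procedure in Python (the two confusion matrices below are made-up numbers, purely for illustration):

```python
import numpy as np
from scipy.stats import norm

def log_or_and_se(tp, fp, fn, tn):
    """Log odds ratio of a 2x2 table and its standard error."""
    log_or = np.log((tp * tn) / (fp * fn))
    se = np.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
    return log_or, se

# two hypothetical classifiers evaluated on the same task
log_or1, se1 = log_or_and_se(tp=80, fp=20, fn=30, tn=70)
log_or2, se2 = log_or_and_se(tp=60, fp=40, fn=45, tn=55)

# difference of the log odds ratios and its standard error
delta = log_or1 - log_or2
se_delta = np.sqrt(se1**2 + se2**2)

z = delta / se_delta
p = 2 * norm.sf(abs(z))  # two-sided p-value from the standard normal
print(z, p)
```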
35,591 | Statistical test for difference between two odds ratios? | If you have the 95% CIs for all of your odds ratios, you can do a quick check by looking for overlap: if the 95% CIs for two ORs do not overlap, the ORs are significantly different. Be aware that the converse does not hold — overlapping 95% CIs do not by themselves show that the difference is non-significant, so in that case fall back on a formal test of the log odds ratios.
35,592 | Difference between Spline and Piecewise Regression | Here are some of the differences:
Piecewise regression yields continuous functions which are not, generally, differentiable and hence not smooth.
Regression with splines yields smooth continuous functions. The degree of smoothness will depend on what kind of spline you use, but cubic splines are differentiable at least twice.
Good references include Frank Harrell's Regression Modeling Strategies. Libraries in R include {splines} and {rms}. In particular, the {splines} library can expand predictors into a linear or cubic spline basis through the degree argument of the bs function. It is worthwhile to note that piecewise regression is just spline regression where the basis functions are linear polynomials as opposed to cubic or restricted cubic polynomials.
Here is an example of using the splines library.
library(splines)
x <- runif(100)
y <- sin(2*pi*x) + rnorm(100, 0, 0.3)
xx <- seq(0, 1, 0.01)
fit1 <- lm(y~bs(x, degree=1, df=4))
fit2 <- lm(y~bs(x, degree=3, df=4))
plot(x, y, pch=19)
lines(xx, predict(fit1, newdata=list(x=xx)), col='red')
lines(xx, predict(fit2, newdata=list(x=xx)), col='blue')
35,593 | Non negative least square on some coefficient | Your approach with the extra parameters will be problematic, as multiple values of the coefficients will yield the same solution (the solution won't be unique).
Non-negative least squares regression is often solved by an active set method that updates in steps the active constraints (see for instance How do Lawson and Hanson solve the unconstrained least squares problem?). In those steps various regular least squares estimates are computed for different active sets. Those steps will fail with your approach because of the multiple solutions (you will get some errors during the computations like non-invertible matrices).
An alternative nnls for Python (aside from Galen's response) could be scipy.optimize.lsq_linear, which allows setting individual bounds on each coefficient and offers an active set method (method='bvls').
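A small sketch of how that might look (the data here are simulated; the first two coefficients are left unconstrained while the last two are forced to be non-negative):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 4))
x_true = np.array([-2.0, 3.0, 1.5, 0.5])
b = A @ x_true + rng.normal(scale=0.1, size=100)

# per-coefficient bounds: unconstrained for the first two,
# non-negative for the last two
lower = np.array([-np.inf, -np.inf, 0.0, 0.0])
upper = np.full(4, np.inf)

res = lsq_linear(A, b, bounds=(lower, upper), method='bvls')  # active set method
print(res.x)
```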
35,594 | Non negative least square on some coefficient | Here is an example of multiple linear regression in which some of the variables are unbounded while others are bounded. I'm going to have ten unbounded parameters and ten non-negative parameters in this example. The quick gist is to use scipy.optimize.Bounds in an optimizer that supports this argument such as 'trust-constr', along with the insight that it allows the use of np.inf. You just set some of the bounds to be -np.inf and np.inf for the unbounded parameters, and set the bounds to be 0 and np.inf for the non-negative parameters.
import numpy as np
from scipy.optimize import minimize, Bounds
np.random.seed(2022)
bounds = Bounds([-np.inf] * 10 + [0] * 10, [np.inf] * 20)
x = np.random.normal(size=100 * 20).reshape(100, 20)
true_a = np.arange(1,21)
true_y = x @ true_a + np.random.normal(size=100)
a0 = np.random.normal(size=20)
def f(a):
y_hat = x @ a
resid = true_y - y_hat
lsq = np.mean(np.power(resid, 2))
return lsq
result = minimize(f, x0=a0, method='trust-constr', bounds=bounds)
print(result.x)
Note that the last line accesses result.x rather than result.a, because x is privileged in scipy.optimize's programming to be the parameter name. But this prints a result for the optimized vector of parameters a as hoped. Here is the printout for this seed:
[ 0.9363189 2.01708136 3.07371156 4.03469035 4.97227273 5.91210627
6.86926581 8.05433955 8.83633234 9.93401828 10.96973645 12.05863185
12.95428506 13.9473809 14.93419422 16.05477142 17.13887755 18.37746539
19.05598047 19.78565871]
35,595 | Non negative least square on some coefficient | The brute force approach would be using constrained minimization, applying the non-negativity constraints only to certain variables - along the lines of the python implementation provided in an answer to How to include constraint to Scipy NNLS function solution so that it sums to 1.
Another trick is variable transformation that makes certain variables always non-negative, like $x=y^2$. One can then use any general purpose minimization routine.
The only problem with these approaches is that they will perform much slower than NNLS for a large number of variables - whether this is a constraint depends on the problem, the available computational capacities, and the programming language used.
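A minimal sketch of the $x=y^2$ trick with a general purpose optimizer (simulated data; only the last two of three coefficients are constrained to be non-negative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
beta_true = np.array([-1.0, 2.0, 0.5])
b = A @ beta_true + rng.normal(scale=0.1, size=200)

def beta_of(t):
    # leave t[0] free; squaring t[1] and t[2] makes those coefficients >= 0
    return np.array([t[0], t[1]**2, t[2]**2])

def sse(t):
    resid = b - A @ beta_of(t)
    return resid @ resid

opt = minimize(sse, x0=np.ones(3), method='BFGS')
beta_hat = beta_of(opt.x)
print(beta_hat)
```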
35,596 | Lasso regression Mathematical intuition | You have three different questions. Let's tackle them in a somewhat different order. I'll refer to ISLR 2nd edition.
First off, here are the two different formulations of the lasso:
$$ \text{minimize } ||\mathbf{y}-\mathbf{X}\mathbf{\beta}||_2^2+\lambda||\mathbf{\beta}||_1\text{ with respect to }\mathbf{\beta}\text{ for a given }\lambda\geq 0 \quad (6.7)$$
and
$$ \text{minimize } ||\mathbf{y}-\mathbf{X}\mathbf{\beta}||_2^2\text{ with respect to }\mathbf{\beta}\text{ subject to }||\mathbf{\beta}||_1\leq s\text{ for a given }s\geq0\quad (6.8) $$
The squared 2-norm $||\cdot||_2^2$ of a vector is the sum of its squared entries; the 1-norm $||\cdot||_1$ is the sum of its absolute entries.
On to your questions:
How do $\lambda$ and $s$ hang together?
There is a one-to-one relationship between the two, in the following sense: for every $\lambda$, there is one $s$ such that the minimizer $\mathbf{\beta}$ of (6.7) for $\lambda$ is equal to the minimizer of (6.8). And vice versa. (Note that the relationship is not unique, see below.)
One interesting boundary case is that $\lambda=0$ (no lasso penalty), which leads to the OLS estimate of $\mathbf{\beta}$ (assuming your design matrix $\mathbf{X}$ is of full rank), corresponds to $s=\infty$ (no constraint on the parameter estimates). If you are uncomfortable with $s=\infty$, just take $s$ to be any number larger than the 1-norm of the OLS estimate of $\mathbf{\beta}$.
Why are the two formulations equivalent?
This derivation is usually not given in standard statistics/data science textbooks. Most people just accept this fact. I don't think it is formally proven even in the original lasso paper by Tibshirani (1996), but it does not seem to be very hard. Starting with (6.7), pick a $\lambda$, find the optimal $\mathbf{\beta}$ in (6.7), use the 1-norm of this estimate for $s$ in (6.8) and argue that you won't find a better solution to (6.8) with smaller 1-norm. And vice versa. A rigorous proof would probably be a nice exercise for an undergraduate math student.
How do we choose $\lambda$ (or equivalently $s$)?
The algorithm usually used to fit a lasso model will give you an entire sequence of parameter coefficient vectors, one for each one of multiple values of $\lambda$ (e.g., Figure 6.6 in ISLR). You can then choose one. This is often done via cross-validation using some accuracy measure, and in fact, your software may already perform this cross-validation internally and just report the optimal $\lambda$.
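As a sketch of this in practice with scikit-learn on simulated data (note that scikit-learn calls the penalty $\lambda$ "alpha"):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

# toy data with 10 informative features out of 30
X, y = make_regression(n_samples=200, n_features=30, n_informative=10,
                       noise=5.0, random_state=0)

# fit the lasso over an automatic grid of penalties, picking the best by 5-fold CV
model = LassoCV(cv=5, random_state=0).fit(X, y)

print(model.alpha_)               # selected penalty (lambda)
print(np.abs(model.coef_).sum())  # the corresponding 1-norm budget s
```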
35,597 | Lasso regression Mathematical intuition | The other answer here covers the specific transformation issues you have asked about, so I'll focus solely on the intuition of the LASSO regression. One useful way to look at LASSO regression is that it is equivalent to Bayesian maximum-posterior (MAP) estimation when you use an IID Laplace prior for the coefficient vector. This prior has density given by:
$$\pi(\boldsymbol{\beta}|\lambda) = \prod_{i=1}^p \text{Laplace} \Big( \beta_i \Big| 0, \frac{2}{\lambda} \Big) = \prod_{i=1}^p \frac{\lambda}{4} \cdot \exp \bigg( -\frac{\lambda}{2} |\beta_i| \bigg),$$
Taking $\text{RSS}_{\mathbf{x}, \mathbf{y}}(\boldsymbol{\beta})$ as the residual-sum-of-squares (taken as a function of the coefficient vector), the log-likelihood for the regression model is:
$$\ell_{\mathbf{x}, \mathbf{y}}(\boldsymbol{\beta}) = \text{const} - \frac{1}{2} \cdot \text{RSS}_{\mathbf{x}, \mathbf{y}}(\boldsymbol{\beta}).$$
Consequently, the MAP estimator is the argument that maximises the log-posterior for the model, which is:
$$\begin{align}
\hat{\boldsymbol{\beta}}_\text{MAP}
&= \underset{\boldsymbol{\beta}}{\text{arg max}} \log p(\boldsymbol{\beta}|\mathbf{x}, \mathbf{y}, \lambda) \\[6pt]
&= \underset{\boldsymbol{\beta}}{\text{arg max}} \bigg[ \ell_{\mathbf{x}, \mathbf{y}}(\boldsymbol{\beta}) + \log \pi(\boldsymbol{\beta}|\lambda) \bigg] \\[6pt]
&= \underset{\boldsymbol{\beta}}{\text{arg max}} \bigg[ \text{const} - \frac{1}{2} \cdot \text{RSS}_{\mathbf{x}, \mathbf{y}}(\boldsymbol{\beta}) - \frac{\lambda}{2} \sum_{i=1}^p |\beta_i| \bigg] \\[6pt]
&= \underset{\boldsymbol{\beta}}{\text{arg max}} \bigg[ - \frac{1}{2} \cdot \text{RSS}_{\mathbf{x}, \mathbf{y}}(\boldsymbol{\beta}) - \frac{\lambda}{2} \sum_{i=1}^p |\beta_i| \bigg] \\[6pt]
&= \underset{\boldsymbol{\beta}}{\text{arg min}} \bigg[ \text{RSS}_{\mathbf{x}, \mathbf{y}}(\boldsymbol{\beta}) + \lambda \sum_{i=1}^p |\beta_i| \bigg] \\[6pt]
\end{align}$$
If you have a look at the Laplace distribution you will see that, in this case, it gives a density with a peak at $\beta_i=0$ and with exponential decay in prior density as we move away from this point in either direction. The estimator above is the one that maximises the posterior density under this prior. You can read more about this in this related question.
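One can verify numerically that the density written above really is the $\text{Laplace}(0, 2/\lambda)$ density:

```python
import numpy as np
from scipy.stats import laplace

lam = 3.0
beta = np.linspace(-2.0, 2.0, 101)

# density as written in the answer: (lambda/4) * exp(-(lambda/2) |beta|)
manual = (lam / 4) * np.exp(-(lam / 2) * np.abs(beta))

# the same thing via scipy's Laplace distribution with location 0, scale 2/lambda
scipy_pdf = laplace(loc=0, scale=2 / lam).pdf(beta)

print(np.allclose(manual, scipy_pdf))  # → True
```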
35,598 | Regression model for integer response | As an alternative, you could try using a Skellam distribution as the response distribution (i.e. the outcome variable is the difference between two Poisson-distributed random variables). It is sometimes used to predict football/rugby goal difference. Do note, though, that it may be more straightforward to just fit two Poisson models and compute their difference. This site has a relevant thread: How can I fit a Skellam regression?
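A small illustration (with made-up goal rates) that the difference of two independent Poissons does follow a Skellam distribution:

```python
import numpy as np
from scipy.stats import skellam

rng = np.random.default_rng(0)
mu1, mu2 = 1.8, 1.2  # hypothetical expected goals for the home and away teams

# simulate the goal difference as a difference of two independent Poissons
diff = rng.poisson(mu1, size=100_000) - rng.poisson(mu2, size=100_000)
sim_prob = np.mean(diff == 1)

# compare with the exact Skellam probability of a one-goal home win
print(sim_prob, skellam.pmf(1, mu1, mu2))
```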
35,599 | Regression model for integer response | Ideally, you would use a regression model with a discrete distribution with support over the integers for your response variable. This could be approximated by a continuous distribution, so long as the standard error in the model (i.e., the standard deviation of the error term) is not too small compared to the unit interval between integers (so that the continuous density is not changing much between integers).
If you want to, you can take a regression model that uses a continuous response distribution $f$ (e.g., from the Gaussian linear model) and then "discretise" the response distribution by taking:
$$\mathbb{P}(Y_i = y | \mathbf{x}_i, \boldsymbol{\beta}) = \int \limits_{y-1/2}^{y+1/2} f(r | \mathbf{x}_i, \boldsymbol{\beta}) \ dr.$$
For example, in the Gaussian linear model, instead of having the likelihood function:
$$\begin{align}
L_{\mathbf{y}, \mathbf{x}}(\boldsymbol{\beta})
&= \prod_{i=1}^n \phi \bigg( \frac{y_i - \mathbf{x}_i \cdot \boldsymbol{\beta}}{\sigma} \bigg),
\end{align}$$
you would instead have the "discretised" version:
$$\begin{align}
L_{\mathbf{y}, \mathbf{x}}(\boldsymbol{\beta})
&= \prod_{i=1}^n \bigg[ \Phi \bigg( \frac{y_i + \tfrac{1}{2} - \mathbf{x}_i \cdot \boldsymbol{\beta}}{\sigma} \bigg) - \Phi \bigg( \frac{y_i - \tfrac{1}{2} - \mathbf{x}_i \cdot \boldsymbol{\beta}}{\sigma} \bigg) \bigg]
\end{align}$$
The MLE in both cases is going to be quite similar, so long as $\sigma$ is substantially bigger than one. The main drawback of the discretised version is that the MLE is no longer the OLS estimator and does not have a closed form, so the resulting theoretical properties are a bit trickier to deal with (but certainly not impossible).
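A small sketch of fitting the discretised version numerically and comparing it with OLS (simulated data, with $\sigma$ treated as known for simplicity):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n, sigma = 500, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([3.0, 2.0])
y = np.round(X @ beta_true + rng.normal(scale=sigma, size=n))  # integer responses

def neg_log_lik(beta):
    # "discretised" Gaussian likelihood: P(Y = y) is the normal mass on (y-1/2, y+1/2)
    mu = X @ beta
    p = norm.cdf((y + 0.5 - mu) / sigma) - norm.cdf((y - 0.5 - mu) / sigma)
    return -np.sum(np.log(p))

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_mle = minimize(neg_log_lik, x0=beta_ols, method='BFGS').x
print(beta_ols, beta_mle)  # nearly identical, since sigma is well above 1
```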
Null hypothesis stated as "there is an effect"

You are, perhaps, thinking about a 'negativist' null hypothesis of the general form $\text{H}_{0}^{-}\text{: }|\theta|\ge \Delta$, with $\text{H}_{\text{A}}^{-}\text{: }|\theta|< \Delta$, as used in tests for equivalence, and where $\Delta$ is the smallest effect size that you care about a priori to the test. (As expressed here, the equivalence region ($-\Delta, \Delta$) is symmetric, although it need not be. For example, one could express an asymmetric equivalence range $\text{H}^{-}_{0}\text{: }\theta \le \Delta_{\text{lower}}\textbf{ OR } \theta \ge \Delta_{\text{upper}}$, where $|\Delta_{\text{lower}}| \ne \Delta_{\text{upper}}$. One place this is sometimes done is when $\theta$ measures a relative difference like an odds ratio or relative risk, and where $\Delta_{\text{lower}} = \frac{1}{\Delta_{\text{upper}}}$.)
In plain language, for a one-sample or two-sample test the null is "There is an effect of magnitude at least as large as $\Delta$," and the alternative is "The magnitude of the effect is less than $\Delta$." The omnibus form of this null hypothesis would be "There is an effect of magnitude at least $\Delta$ between every pair of groups."
Note: For a continuous distribution we cannot simply invert the null hypothesis from the two-sided test for difference (i.e. $\text{H}_{0}^{+}\text{: }\theta = 0$) to obtain $\text{H}_{0}\text{: }\theta \ne 0$, because we would then have to find (probabilistic) evidence in favor of $\text{H}_{\text{A}}\text{: }\theta = 0$, but the probability of a continuously distributed variable (e.g., normal, $t$, etc.) exactly equaling a specific number is 0 (i.e. $P(X = c) = 0$), so we would never reject such an inverted null (i.e. $\text{H}_{0}\text{: }\theta \ne 0$ is no good).
Selected References
Anderson, S., & Hauck, W. W. (1983). A new procedure for testing equivalence in comparative bioavailability and other clinical trials. Communications in Statistics—Theory and Methods, 12(23), 2663–2692.
Reagle, D. P., & Vinod, H. D. (2003). Inference for negativist theory using numerically computed rejection regions. Computational Statistics & Data Analysis, 42(3), 491–512.
Wellek, S. (2010). Testing Statistical Hypotheses of Equivalence and Noninferiority (Second Edition). Chapman and Hall/CRC Press.
See also: [equivalence], equivalence test, [tost]
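As an illustration of testing such a negativist null in practice, here is a minimal sketch of the two one-sided tests (TOST) procedure for two samples; the simulated data, the margin $\Delta = 0.5$, and all variable names are my own choices for illustration, not taken from the references above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two groups whose true means differ by 0.1, well inside the margin below
a = rng.normal(0.0, 1.0, 200)
b = rng.normal(0.1, 1.0, 200)
delta = 0.5                     # smallest effect size we care about a priori

# TOST: H0-: |theta| >= delta is rejected only if BOTH one-sided
# tests against the boundaries -delta and +delta reject.
n1, n2 = len(a), len(b)
diff = a.mean() - b.mean()
sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))   # pooled standard error
df = n1 + n2 - 2

t_lower = (diff + delta) / se           # tests H0: theta <= -delta
t_upper = (diff - delta) / se           # tests H0: theta >= +delta
p_lower = stats.t.sf(t_lower, df)       # upper-tail p-value
p_upper = stats.t.cdf(t_upper, df)      # lower-tail p-value
p_tost = max(p_lower, p_upper)          # overall TOST p-value

print(f"difference = {diff:.3f}, TOST p-value = {p_tost:.4f}")
```

Rejecting at level $\alpha$ here supports $\text{H}_{\text{A}}^{-}\text{: }|\theta| < \Delta$; an asymmetric equivalence range is handled by simply using different boundaries $\Delta_{\text{lower}}$ and $\Delta_{\text{upper}}$ in the two one-sided tests.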