idx
int64 1
56k
| question
stringlengths 15
155
| answer
stringlengths 2
29.2k
⌀ | question_cut
stringlengths 15
100
| answer_cut
stringlengths 2
200
⌀ | conversation
stringlengths 47
29.3k
| conversation_cut
stringlengths 47
301
|
|---|---|---|---|---|---|---|
6,101
|
How trustworthy are the confidence intervals for lmer objects through effects package?
|
It looks like what you have done in the second method is to have computed confidence intervals for the regression coefficients, then transformed those to obtain CIs for the predictions. This ignores the covariances between the regression coefficients.
Try fitting the model without an intercept, so that the batch effects will actually be the predictions, and confint will return the intervals you need.
Addendum 1
I did exactly what I suggested above:
> fm2 <- lmer(strength ~ batch - 1 + (1 | cask), Pastes)
> confint(fm2)
Computing profile confidence intervals ...
2.5 % 97.5 %
.sig01 0.000000 1.637468
.sigma 2.086385 3.007380
batchA 60.234772 64.298581
batchB 57.268105 61.331915
batchC 60.018105 64.081915
batchD 57.668105 61.731915
batchE 53.868105 57.931915
batchF 59.001439 63.065248
batchG 57.868105 61.931915
batchH 61.084772 65.148581
batchI 56.651439 60.715248
batchJ 56.551439 60.615248
These intervals seem to jibe with the results from effects.
Addendum 2
Another alternative is the lsmeans package. It obtains degrees of freedom and an adjusted covariance matrix from the pbkrtest package.
> library("lsmeans")
> lsmeans(fm1, "batch")
Loading required namespace: pbkrtest
batch lsmean SE df lower.CL upper.CL
A 62.26667 1.125709 40.45 59.99232 64.54101
B 59.30000 1.125709 40.45 57.02565 61.57435
C 62.05000 1.125709 40.45 59.77565 64.32435
D 59.70000 1.125709 40.45 57.42565 61.97435
E 55.90000 1.125709 40.45 53.62565 58.17435
F 61.03333 1.125709 40.45 58.75899 63.30768
G 59.90000 1.125709 40.45 57.62565 62.17435
H 63.11667 1.125709 40.45 60.84232 65.39101
I 58.68333 1.125709 40.45 56.40899 60.95768
J 58.58333 1.125709 40.45 56.30899 60.85768
Confidence level used: 0.95
These are even more in line with the effect results: the standard errors are identical, but effect uses different d.f. The confint results in Addendum 1 are even narrower than the asymptotic ones based on using $\pm1.96\times\mbox{se}$. So now I think those are not very trustworthy.
Results from effect and lsmeans are similar, but with an unbalanced multi-factor situation, lsmeans by default averages over unused factors with equal weights, whereas effect weights by the observed frequencies (available as an option in lsmeans).
|
How trustworthy are the confidence intervals for lmer objects through effects package?
|
It looks like what you have done in the second method is to have computed confidence intervals for the regression coefficients, then transformed those to obtain CIs for the predictions. This ignores t
|
How trustworthy are the confidence intervals for lmer objects through effects package?
It looks like what you have done in the second method is to have computed confidence intervals for the regression coefficients, then transformed those to obtain CIs for the predictions. This ignores the covariances between the regression coefficients.
Try fitting the model without an intercept, so that the batch effects will actually be the predictions, and confint will return the intervals you need.
Addendum 1
I did exactly what I suggested above:
> fm2 <- lmer(strength ~ batch - 1 + (1 | cask), Pastes)
> confint(fm2)
Computing profile confidence intervals ...
2.5 % 97.5 %
.sig01 0.000000 1.637468
.sigma 2.086385 3.007380
batchA 60.234772 64.298581
batchB 57.268105 61.331915
batchC 60.018105 64.081915
batchD 57.668105 61.731915
batchE 53.868105 57.931915
batchF 59.001439 63.065248
batchG 57.868105 61.931915
batchH 61.084772 65.148581
batchI 56.651439 60.715248
batchJ 56.551439 60.615248
These intervals seem to jibe with the results from effects.
Addendum 2
Another alternative is the lsmeans package. It obtains degrees of freedom and an adjusted covariance matrix from the pbkrtest package.
> library("lsmeans")
> lsmeans(fm1, "batch")
Loading required namespace: pbkrtest
batch lsmean SE df lower.CL upper.CL
A 62.26667 1.125709 40.45 59.99232 64.54101
B 59.30000 1.125709 40.45 57.02565 61.57435
C 62.05000 1.125709 40.45 59.77565 64.32435
D 59.70000 1.125709 40.45 57.42565 61.97435
E 55.90000 1.125709 40.45 53.62565 58.17435
F 61.03333 1.125709 40.45 58.75899 63.30768
G 59.90000 1.125709 40.45 57.62565 62.17435
H 63.11667 1.125709 40.45 60.84232 65.39101
I 58.68333 1.125709 40.45 56.40899 60.95768
J 58.58333 1.125709 40.45 56.30899 60.85768
Confidence level used: 0.95
These are even more in line with the effect results: the standard errors are identical, but effect uses different d.f. The confint results in Addendum 1 are even narrower than the asymptotic ones based on using $\pm1.96\times\mbox{se}$. So now I think those are not very trustworthy.
Results from effect and lsmeans are similar, but with an unbalanced multi-factor situation, lsmeans by default averages over unused factors with equal weights, whereas effect weights by the observed frequencies (available as an option in lsmeans).
|
How trustworthy are the confidence intervals for lmer objects through effects package?
It looks like what you have done in the second method is to have computed confidence intervals for the regression coefficients, then transformed those to obtain CIs for the predictions. This ignores t
|
6,102
|
Existence of the moment generating function and variance
|
This question provides a nice opportunity to collect some facts on moment-generating functions (mgf).
In the answer below, we do the following:
Show that if the mgf is finite for at least one (strictly) positive value
and one negative value, then all positive moments of $X$ are finite
(including nonintegral moments).
Prove that the condition in the first item above is equivalent to the
distribution of $X$ having exponentially bounded tails. In other
words, the tails of $X$ fall off at least as fast as those of an
exponential random variable $Z$ (up to a constant).
Provide a quick note on the characterization of the distribution by its mgf provided it satisfies the condition in item 1.
Explore some examples and counterexamples to aid our intuition
and, particularly, to show that we should not read undue importance into the lack of
finiteness of the mgf.
This answer is quite long, for which I apologize in advance. If this
would be better placed, e.g., as a blog post or somewhere else,
please feel free to provide such feedback in the comments.
What does the mgf say about the moments?
The mgf of a random variable $X \sim F$ is defined as $m(t) = \mathbb
E e^{tX}$. Note that $m(t)$ always exists since it is the integral
of a nonnegative measurable function. However, if may not be
finite. If it is finite (in the right places), then for all $p >
0$ (not necessarily an integer), the absolute moments $\mathbb E
|X|^p < \infty$ (and, so, also $\mathbb E X^p$ is finite). This is the topic of the next proposition.
Proposition: If there exists $\newcommand{\tn}{t_{n}}\newcommand{\tp}{t_{p}}\tn < 0$ and $\tp > 0$ such that $m(\tn) < \infty$ and $m(\tp) < \infty$, then the moments of all orders of $X$ exist and are finite.
Before diving into a proof, here are two useful lemmas.
Lemma 1: Suppose such $\tn$ and $\tp$ exist. Then for any $t_0 \in [\tn,\tp]$, $m(t_0) < \infty$.
Proof. This follows from convexity of $e^x$ and monotonicity of the integral. For any such $t_0$, there exists $\theta \in [0,1]$ such that $t_0 = \theta \tn + (1-\theta) \tp$. But, then
$$e^{t_0 X} = e^{\theta \tn X + (1-\theta) \tp X} \leq \theta e^{\tn X} + (1-\theta) e^{\tp X} \>.$$
Hence, by monotonicity of the integral, $\mathbb E e^{t_0 X} \leq \theta \mathbb E e^{\tn X} + (1-\theta) \mathbb E e^{\tp X} < \infty$.
So, if the mgf is finite at any two distinct points, it is finite for all values in the interval in between those points.
Lemma 2 (Nesting of $L_p$ spaces): For $0 \leq q \leq p$, if $\mathbb E |X|^p < \infty$, then $\mathbb E |X|^q < \infty$.
Proof: Two approaches are given in this answer and associated comments.
This gives us enough to continue with the proof of the proposition.
Proof of the proposition. If $\tn < 0$ and $\tp > 0$ exist as stated in the proposition, then taking $t_0 = \min(-\tn,\tp) > 0$, we know by the first lemma that $m(-t_0) < \infty$ and $m(t_0) < \infty$. But,
$$
e^{-t_0 X} + e^{t_0 X} = 2 \sum_{n=0}^\infty \frac{t_0^{2n} X^{2n}}{(2n)!} \>,
$$
and the right-hand side is composed of nonnegative terms, so, in particular, for any fixed $k$
$$
e^{-t_0 X} + e^{t_0 X} \geq 2 t_0^{2k} X^{2k}/(2k)! \>.
$$
Now, by assumption $\mathbb E e^{-t_0 X} + \mathbb E e^{t_0 X} < \infty$. Monotonicity of the integral yields $\mathbb E X^{2k} < \infty$. Hence, all even moments of $X$ are finite. Lemma 2 immediately allows us to "fill in the gaps" and conclude that all moments must be finite.
Upshot
The upshot regarding the question at hand is that if any of the
moments of $X$ are infinite or do not exist, we can immediately
conclude that the mgf is not finite in an open interval containing the
origin. (This is just the contrapositive statement of the
proposition.)
Thus, the proposition above provides the "right" condition in order to
say something about the moments of $X$ based on its mgf.
Exponentially bounded tails and the mgf
Proposition: The mgf $m(t)$ is finite in an open interval $(\tn,\tp)$
containing the origin if and only if the tails of $F$ are exponentially
bounded, i.e., $\mathbb P( |X| > x) \leq C e^{-t_0 x}$ for
some $C > 0$ and $t_0 > 0$.
Proof. We'll deal with the right tail separately. The left tail is
handled completely analogously.
$(\Rightarrow)$ Suppose $m(t_0) < \infty$ for some $t_0 > 0$. Then, the right tail of $F$ is exponentially bounded; in other words, there exists $C > 0$ and $b > 0$ such that
$$
\mathbb P(X > x) \leq C e^{-b x} \>.
$$
To see this, note that for any $t > 0$, by Markov's inequality,
$$
\mathbb P(X > x) = \mathbb P(e^{tX} > e^{tx}) \leq e^{-tx} \mathbb E e^{t X} = m(t) e^{-t x} \>.
$$
Take $C = m(t_0)$ and $b = t_0$ to complete this direction of the
proof.
$(\Leftarrow)$ Suppose there exists $C >0$ and $t_0 > 0$ such that
$\mathbb P(X > x) \leq C e^{-t_0 x}$. Then, for $t > 0$,
$$
\mathbb E e^{t X} = \int_0^\infty \mathbb P( e^{t X} > y)\,\mathrm dy
\leq 1 + \int_1^\infty \mathbb P( e^{t X} > y)\,\mathrm dy \leq 1 +
\int_1^\infty C
y^{-t_0/t} \, \mathrm dy \>,
$$
where the first equality follows from a standard fact about the
expectation of nonnegative random variables. Choose any $t$ such that $0 < t < t_0$;
then, the integral on the right-hand side is finite.
This completes the proof.
A note on uniqueness of a distribution given its mgf
If the mgf is finite in an open interval containing zero, then the associated distribution is characterized by its moments, i.e., it is the only distribution with the moments $\mu_n = \mathbb E X^n$. A standard proof is short once one has at hand some (relatively straightforward) facts about characteristic functions. Details can be found in most modern probability texts (e.g., Billingsley or Durrett). A couple related matters are discussed in this answer.
Examples and counterexamples
(a) Lognormal distribution: $X$ is lognormal if $X = e^Y$ for some normal random variable $Y$. So $X \geq 0$ with probability one. Because $e^{-x} \leq 1$ for all $x \geq 0$, this immediately tells us that $m(t) = \mathbb E e^{t X} \leq 1$ for all $t < 0$. So, the mgf is finite on the nonnegative half-line $(-\infty,0]$. (NB We've only used the nonnegativity of $X$ to establish this fact, so this is true from all nonnegative random variables.)
However, $m(t) = \infty$ for all $t > 0$. We'll take the standard lognormal as the canonical case. If $x > 0$, then $e^{x} \geq 1 + x + \frac{1}{2} x^2 + \frac{1}{6} x^3$. By change of variables, we have
$$
\mathbb E e^{t X} = (2\pi)^{-1/2} \int_{-\infty}^\infty e^{t e^u - u^2/2} \,\mathrm d u \>.
$$
For $t > 0$ and large enough $u$, we have $t e^u - u^2/2 \geq t+tu$ by the bounds given above. But,
$$
\int_{K}^\infty e^{t + tu} \,\mathrm du = \infty
$$ for any $K$, and so the mgf is infinite for all $t > 0$.
On the other hand, all moments of the lognormal distribution are finite. So, the existence of the mgf in an interval about zero is not necessary for the conclusion of the above proposition.
(b) Symmetrized lognormal: We can get an even more extreme case by "symmetrizing" the lognormal distribution. Consider the density $f(x)$ for $x \in \mathbb R$ such that
$$
f(x) = \frac{1}{2\sqrt{2\pi}|x|} e^{-\frac{1}{2} (\log |x|)^2} \>.
$$
It is not hard to see in light of the previous example that the mgf is finite only for $t = 0$. Yet, the even moments are exactly the same as those of the lognormal and the odd moments are all zero! So, the mgf exists nowhere (except at the origin where it always exists) and yet we can guarantee finite moments of all orders.
(c) Cauchy distribution: This distribution also has an mgf which is infinite for all $t \neq 0$, but no absolute moments $\mathbb E|X|^p$ are finite for $p \geq 1$. The result for the mgf follows for $t > 0$ since $e^x \geq x^3 / 6$ for $x > 0$ and so
$$
\mathbb E e^{tX} \geq \int_1^\infty \frac{t^3 x^3}{6\pi(1+x^2)} \,\mathrm dx \geq \frac{t^3}{12\pi} \int_1^\infty x \,\mathrm dx = \infty \>.
$$
The proof for $t < 0$ is analogous. (Perhaps somewhat less well known is that the moments for $0 < p < 1$ do exist for the Cauchy. See this answer.)
(d) Half-Cauchy distribution: If $X$ is (standard) Cauchy, call $Y = |X|$ a half-Cauchy random variable. Then, it is easy to see from the previous example that $\mathbb E Y^p = \infty$ for all $p \geq 1$; yet, $\mathbb E e^{tY}$ is finite for $t \in (-\infty,0]$.
|
Existence of the moment generating function and variance
|
This question provides a nice opportunity to collect some facts on moment-generating functions (mgf).
In the answer below, we do the following:
Show that if the mgf is finite for at least one (stric
|
Existence of the moment generating function and variance
This question provides a nice opportunity to collect some facts on moment-generating functions (mgf).
In the answer below, we do the following:
Show that if the mgf is finite for at least one (strictly) positive value
and one negative value, then all positive moments of $X$ are finite
(including nonintegral moments).
Prove that the condition in the first item above is equivalent to the
distribution of $X$ having exponentially bounded tails. In other
words, the tails of $X$ fall off at least as fast as those of an
exponential random variable $Z$ (up to a constant).
Provide a quick note on the characterization of the distribution by its mgf provided it satisfies the condition in item 1.
Explore some examples and counterexamples to aid our intuition
and, particularly, to show that we should not read undue importance into the lack of
finiteness of the mgf.
This answer is quite long, for which I apologize in advance. If this
would be better placed, e.g., as a blog post or somewhere else,
please feel free to provide such feedback in the comments.
What does the mgf say about the moments?
The mgf of a random variable $X \sim F$ is defined as $m(t) = \mathbb
E e^{tX}$. Note that $m(t)$ always exists since it is the integral
of a nonnegative measurable function. However, if may not be
finite. If it is finite (in the right places), then for all $p >
0$ (not necessarily an integer), the absolute moments $\mathbb E
|X|^p < \infty$ (and, so, also $\mathbb E X^p$ is finite). This is the topic of the next proposition.
Proposition: If there exists $\newcommand{\tn}{t_{n}}\newcommand{\tp}{t_{p}}\tn < 0$ and $\tp > 0$ such that $m(\tn) < \infty$ and $m(\tp) < \infty$, then the moments of all orders of $X$ exist and are finite.
Before diving into a proof, here are two useful lemmas.
Lemma 1: Suppose such $\tn$ and $\tp$ exist. Then for any $t_0 \in [\tn,\tp]$, $m(t_0) < \infty$.
Proof. This follows from convexity of $e^x$ and monotonicity of the integral. For any such $t_0$, there exists $\theta \in [0,1]$ such that $t_0 = \theta \tn + (1-\theta) \tp$. But, then
$$e^{t_0 X} = e^{\theta \tn X + (1-\theta) \tp X} \leq \theta e^{\tn X} + (1-\theta) e^{\tp X} \>.$$
Hence, by monotonicity of the integral, $\mathbb E e^{t_0 X} \leq \theta \mathbb E e^{\tn X} + (1-\theta) \mathbb E e^{\tp X} < \infty$.
So, if the mgf is finite at any two distinct points, it is finite for all values in the interval in between those points.
Lemma 2 (Nesting of $L_p$ spaces): For $0 \leq q \leq p$, if $\mathbb E |X|^p < \infty$, then $\mathbb E |X|^q < \infty$.
Proof: Two approaches are given in this answer and associated comments.
This gives us enough to continue with the proof of the proposition.
Proof of the proposition. If $\tn < 0$ and $\tp > 0$ exist as stated in the proposition, then taking $t_0 = \min(-\tn,\tp) > 0$, we know by the first lemma that $m(-t_0) < \infty$ and $m(t_0) < \infty$. But,
$$
e^{-t_0 X} + e^{t_0 X} = 2 \sum_{n=0}^\infty \frac{t_0^{2n} X^{2n}}{(2n)!} \>,
$$
and the right-hand side is composed of nonnegative terms, so, in particular, for any fixed $k$
$$
e^{-t_0 X} + e^{t_0 X} \geq 2 t_0^{2k} X^{2k}/(2k)! \>.
$$
Now, by assumption $\mathbb E e^{-t_0 X} + \mathbb E e^{t_0 X} < \infty$. Monotonicity of the integral yields $\mathbb E X^{2k} < \infty$. Hence, all even moments of $X$ are finite. Lemma 2 immediately allows us to "fill in the gaps" and conclude that all moments must be finite.
Upshot
The upshot regarding the question at hand is that if any of the
moments of $X$ are infinite or do not exist, we can immediately
conclude that the mgf is not finite in an open interval containing the
origin. (This is just the contrapositive statement of the
proposition.)
Thus, the proposition above provides the "right" condition in order to
say something about the moments of $X$ based on its mgf.
Exponentially bounded tails and the mgf
Proposition: The mgf $m(t)$ is finite in an open interval $(\tn,\tp)$
containing the origin if and only if the tails of $F$ are exponentially
bounded, i.e., $\mathbb P( |X| > x) \leq C e^{-t_0 x}$ for
some $C > 0$ and $t_0 > 0$.
Proof. We'll deal with the right tail separately. The left tail is
handled completely analogously.
$(\Rightarrow)$ Suppose $m(t_0) < \infty$ for some $t_0 > 0$. Then, the right tail of $F$ is exponentially bounded; in other words, there exists $C > 0$ and $b > 0$ such that
$$
\mathbb P(X > x) \leq C e^{-b x} \>.
$$
To see this, note that for any $t > 0$, by Markov's inequality,
$$
\mathbb P(X > x) = \mathbb P(e^{tX} > e^{tx}) \leq e^{-tx} \mathbb E e^{t X} = m(t) e^{-t x} \>.
$$
Take $C = m(t_0)$ and $b = t_0$ to complete this direction of the
proof.
$(\Leftarrow)$ Suppose there exists $C >0$ and $t_0 > 0$ such that
$\mathbb P(X > x) \leq C e^{-t_0 x}$. Then, for $t > 0$,
$$
\mathbb E e^{t X} = \int_0^\infty \mathbb P( e^{t X} > y)\,\mathrm dy
\leq 1 + \int_1^\infty \mathbb P( e^{t X} > y)\,\mathrm dy \leq 1 +
\int_1^\infty C
y^{-t_0/t} \, \mathrm dy \>,
$$
where the first equality follows from a standard fact about the
expectation of nonnegative random variables. Choose any $t$ such that $0 < t < t_0$;
then, the integral on the right-hand side is finite.
This completes the proof.
A note on uniqueness of a distribution given its mgf
If the mgf is finite in an open interval containing zero, then the associated distribution is characterized by its moments, i.e., it is the only distribution with the moments $\mu_n = \mathbb E X^n$. A standard proof is short once one has at hand some (relatively straightforward) facts about characteristic functions. Details can be found in most modern probability texts (e.g., Billingsley or Durrett). A couple related matters are discussed in this answer.
Examples and counterexamples
(a) Lognormal distribution: $X$ is lognormal if $X = e^Y$ for some normal random variable $Y$. So $X \geq 0$ with probability one. Because $e^{-x} \leq 1$ for all $x \geq 0$, this immediately tells us that $m(t) = \mathbb E e^{t X} \leq 1$ for all $t < 0$. So, the mgf is finite on the nonnegative half-line $(-\infty,0]$. (NB We've only used the nonnegativity of $X$ to establish this fact, so this is true from all nonnegative random variables.)
However, $m(t) = \infty$ for all $t > 0$. We'll take the standard lognormal as the canonical case. If $x > 0$, then $e^{x} \geq 1 + x + \frac{1}{2} x^2 + \frac{1}{6} x^3$. By change of variables, we have
$$
\mathbb E e^{t X} = (2\pi)^{-1/2} \int_{-\infty}^\infty e^{t e^u - u^2/2} \,\mathrm d u \>.
$$
For $t > 0$ and large enough $u$, we have $t e^u - u^2/2 \geq t+tu$ by the bounds given above. But,
$$
\int_{K}^\infty e^{t + tu} \,\mathrm du = \infty
$$ for any $K$, and so the mgf is infinite for all $t > 0$.
On the other hand, all moments of the lognormal distribution are finite. So, the existence of the mgf in an interval about zero is not necessary for the conclusion of the above proposition.
(b) Symmetrized lognormal: We can get an even more extreme case by "symmetrizing" the lognormal distribution. Consider the density $f(x)$ for $x \in \mathbb R$ such that
$$
f(x) = \frac{1}{2\sqrt{2\pi}|x|} e^{-\frac{1}{2} (\log |x|)^2} \>.
$$
It is not hard to see in light of the previous example that the mgf is finite only for $t = 0$. Yet, the even moments are exactly the same as those of the lognormal and the odd moments are all zero! So, the mgf exists nowhere (except at the origin where it always exists) and yet we can guarantee finite moments of all orders.
(c) Cauchy distribution: This distribution also has an mgf which is infinite for all $t \neq 0$, but no absolute moments $\mathbb E|X|^p$ are finite for $p \geq 1$. The result for the mgf follows for $t > 0$ since $e^x \geq x^3 / 6$ for $x > 0$ and so
$$
\mathbb E e^{tX} \geq \int_1^\infty \frac{t^3 x^3}{6\pi(1+x^2)} \,\mathrm dx \geq \frac{t^3}{12\pi} \int_1^\infty x \,\mathrm dx = \infty \>.
$$
The proof for $t < 0$ is analogous. (Perhaps somewhat less well known is that the moments for $0 < p < 1$ do exist for the Cauchy. See this answer.)
(d) Half-Cauchy distribution: If $X$ is (standard) Cauchy, call $Y = |X|$ a half-Cauchy random variable. Then, it is easy to see from the previous example that $\mathbb E Y^p = \infty$ for all $p \geq 1$; yet, $\mathbb E e^{tY}$ is finite for $t \in (-\infty,0]$.
|
Existence of the moment generating function and variance
This question provides a nice opportunity to collect some facts on moment-generating functions (mgf).
In the answer below, we do the following:
Show that if the mgf is finite for at least one (stric
|
6,103
|
How can I efficiently model the sum of Bernoulli random variables?
|
If it often resembles a Poisson, have you tried approximating it by a Poisson with parameter $\lambda = \sum p_i$ ?
EDIT: I've found a theoretical result to justify this, as well as a name for the distribution of $Y$: it's called the Poisson binomial distribution. Le Cam's inequality tells you how closely its distribution is approximated by the distribution of a Poisson with parameter $\lambda = \sum p_i$. It tells you the quality of this approx is governed by the sum of the squares of the $p_i$s, to paraphrase Steele (1994). So if all your $p_i$s are reasonably small, as it now appears they are, it should be a pretty good approximation.
EDIT 2: How small is 'reasonably small'? Well, that depends how good you need the approximation to be! The Wikipedia article on Le Cam's theorem gives the precise form of the result I referred to above: the sum of the absolute differences between the probability mass function (pmf) of $Y$ and the pmf of the above Poisson distribution is no more than twice the sum of the squares of the $p_i$s. Another result from Le Cam (1960) may be easier to use: this sum is also no more than 18 times the largest $p_i$. There are quite a few more such results... see Serfling (1978) for one review.
|
How can I efficiently model the sum of Bernoulli random variables?
|
If it often resembles a Poisson, have you tried approximating it by a Poisson with parameter $\lambda = \sum p_i$ ?
EDIT: I've found a theoretical result to justify this, as well as a name for the dis
|
How can I efficiently model the sum of Bernoulli random variables?
If it often resembles a Poisson, have you tried approximating it by a Poisson with parameter $\lambda = \sum p_i$ ?
EDIT: I've found a theoretical result to justify this, as well as a name for the distribution of $Y$: it's called the Poisson binomial distribution. Le Cam's inequality tells you how closely its distribution is approximated by the distribution of a Poisson with parameter $\lambda = \sum p_i$. It tells you the quality of this approx is governed by the sum of the squares of the $p_i$s, to paraphrase Steele (1994). So if all your $p_i$s are reasonably small, as it now appears they are, it should be a pretty good approximation.
EDIT 2: How small is 'reasonably small'? Well, that depends how good you need the approximation to be! The Wikipedia article on Le Cam's theorem gives the precise form of the result I referred to above: the sum of the absolute differences between the probability mass function (pmf) of $Y$ and the pmf of the above Poisson distribution is no more than twice the sum of the squares of the $p_i$s. Another result from Le Cam (1960) may be easier to use: this sum is also no more than 18 times the largest $p_i$. There are quite a few more such results... see Serfling (1978) for one review.
|
How can I efficiently model the sum of Bernoulli random variables?
If it often resembles a Poisson, have you tried approximating it by a Poisson with parameter $\lambda = \sum p_i$ ?
EDIT: I've found a theoretical result to justify this, as well as a name for the dis
|
6,104
|
How can I efficiently model the sum of Bernoulli random variables?
|
I came across your question while searching for a solution to this very problem. I wasn't terrifically satisfied with the answers here, but I think there's a pretty simple solution that gives you the exact distribution, and is pretty tractable.
The distribution of the sum of two discrete random variables is the convolution of their densities. So if you have $Z = X + Y$ where you know $P(X)$ and $P(Y)$ then you can compute:
$$P(Z=z) = \sum_{k=-\infty}^{\infty} P(X=k) \; P(Y=z-k)$$
(Of course for Bernoulli random variables you don't need to go quite to infinity.)
You can use this to find the exact distribution of the sum of your RVs. First sum two of the RVs together by convolving their PDFs (e.g. [0.3, 0.7] * [0.6, 0.4] = [0.18, 0.54, 0.28]). Then convolve that new distribution with your next Bernoulli PDF (e.g. [0.18, 0.54, 0.28] * [0.5, 0.5] = [0.09, 0.36, 0.41, 0.14]). Keep repeating this until all RVs have been added. And voila, the resulting vector is the exact PDF of the sum of all your variables.
I've verified with simulation that this produces the correct results. It doesn't rely on any asymptotic assumptions, and has no requirements that the Bernoulli probs are small.
There may also be some way to do this more efficiently than repeated convolution, but I haven't thought about it very deeply. I hope this is helpful to somebody!
|
How can I efficiently model the sum of Bernoulli random variables?
|
I came across your question while searching for a solution to this very problem. I wasn't terrifically satisfied with the answers here, but I think there's a pretty simple solution that gives you the
|
How can I efficiently model the sum of Bernoulli random variables?
I came across your question while searching for a solution to this very problem. I wasn't terrifically satisfied with the answers here, but I think there's a pretty simple solution that gives you the exact distribution, and is pretty tractable.
The distribution of the sum of two discrete random variables is the convolution of their densities. So if you have $Z = X + Y$ where you know $P(X)$ and $P(Y)$ then you can compute:
$$P(Z=z) = \sum_{k=-\infty}^{\infty} P(X=k) \; P(Y=z-k)$$
(Of course for Bernoulli random variables you don't need to go quite to infinity.)
You can use this to find the exact distribution of the sum of your RVs. First sum two of the RVs together by convolving their PDFs (e.g. [0.3, 0.7] * [0.6, 0.4] = [0.18, 0.54, 0.28]). Then convolve that new distribution with your next Bernoulli PDF (e.g. [0.18, 0.54, 0.28] * [0.5, 0.5] = [0.09, 0.36, 0.41, 0.14]). Keep repeating this until all RVs have been added. And voila, the resulting vector is the exact PDF of the sum of all your variables.
I've verified with simulation that this produces the correct results. It doesn't rely on any asymptotic assumptions, and has no requirements that the Bernoulli probs are small.
There may also be some way to do this more efficiently than repeated convolution, but I haven't thought about it very deeply. I hope this is helpful to somebody!
|
How can I efficiently model the sum of Bernoulli random variables?
I came across your question while searching for a solution to this very problem. I wasn't terrifically satisfied with the answers here, but I think there's a pretty simple solution that gives you the
|
6,105
|
How can I efficiently model the sum of Bernoulli random variables?
|
(Because this is approach is independent of the other solutions posted, including one that I have posted, I'm offering it as a separate response).
You can compute the exact distribution in seconds (or less) provided the sum of the p's is small.
We have already seen suggestions that the distribution might approximately be Gaussian (under some scenarios) or Poisson (under other scenarios). Either way, we know its mean $\mu$ is the sum of the $p_i$ and its variance $\sigma^2$ is the sum of $p_i(1-p_i)$. Therefore the distribution will be concentrated within a few standard deviations of its mean, say $z$ SDs with $z$ between 4 and 6 or thereabouts. Therefore we need only compute the probability that the sum $X$ equals (an integer) $k$ for $k = \mu - z \sigma$ through $k = \mu + z \sigma$. When most of the $p_i$ are small, $\sigma^2$ is approximately equal to (but slightly less than) $\mu$, so to be conservative we can do the computation for $k$ in the interval $[\mu - z \sqrt{\mu}, \mu + z \sqrt{\mu}]$. For example, when the sum of the $p_i$ equals $9$ and choosing $z = 6$ in order to cover the tails well, we would need the computation to cover $k$ in $[9 - 6 \sqrt{9}, 9 + 6 \sqrt{9}]$ = $[0, 27]$, which is just 28 values.
The distribution is computed recursively. Let $f_i$ be the distribution of the sum of the first $i$ of these Bernoulli variables. For any $j$ from $0$ through $i+1$, the sum of the first $i+1$ variables can equal $j$ in two mutually exclusive ways: the sum of the first $i$ variables equals $j$ and the $i+1^\text{st}$ is $0$ or else the sum of the first $i$ variables equals $j-1$ and the $i+1^\text{st}$ is $1$. Therefore
$$f_{i+1}(j) = f_i(j)(1 - p_{i+1}) + f_i(j-1) p_{i+1}.$$
We only need to carry out this computation for integral $j$ in the interval from $\max(0, \mu - z \sqrt{\mu})$ to $\mu + z \sqrt{\mu}.$
When most of the $p_i$ are tiny (but the $1 - p_i$ are still distinguishable from $1$ with reasonable precision), this approach is not plagued with the huge accumulation of floating point roundoff errors used in the solution I previously posted. Therefore, extended-precision computation is not required. For example, a double-precision calculation for an array of $2^{16}$ probabilities $p_i = 1/(i+1)$ ($\mu = 10.6676$, requiring calculations for probabilities of sums between $0$ and $31$) took 0.1 seconds with Mathematica 8 and 1-2 seconds with Excel 2002 (both obtained the same answers). Repeating it with quadruple precision (in Mathematica) took about 2 seconds but did not change any answer by more than $3 \times 10^{-15}$. Terminating the distribution at $z = 6$ SDs into the upper tail lost only $3.6 \times 10^{-8}$ of the total probability.
Another calculation for an array of 40,000 double precision random values between 0 and 0.001 ($\mu = 19.9093$) took 0.08 seconds with Mathematica.
This algorithm is parallelizable. Just break the set of $p_i$ into disjoint subsets of approximately equal size, one per processor. Compute the distribution for each subset, then convolve the results (using FFT if you like, although this speedup is probably unnecessary) to obtain the full answer. This makes it practical to use even when $\mu$ gets large, when you need to look far out into the tails ($z$ large), and/or $n$ is large.
The timing for an array of $n$ variables with $m$ processors scales as $O(n(\mu + z \sqrt{\mu})/m)$. Mathematica's speed is on the order of a million per second. For example, with $m = 1$ processor, $n = 20000$ variates, a total probability of $\mu = 100$, and going out to $z = 6$ standard deviations into the upper tail, $n(\mu + z \sqrt{\mu})/m = 3.2$ million: figure a couple seconds of computing time. If you compile this you might speed up the performance two orders of magnitude.
Incidentally, in these test cases, graphs of the distribution clearly showed some positive skewness: they aren't normal.
For the record, here is a Mathematica solution:
pb[p_, z_] := Module[
{\[Mu] = Total[p]},
Fold[#1 - #2 Differences[Prepend[#1, 0]] &,
Prepend[ConstantArray[0, Ceiling[\[Mu] + Sqrt[\[Mu]] z]], 1], p]
]
(NB The color coding applied by this site is meaningless for Mathematica code. In particular, the gray stuff is not comments: it's where all the work is done!)
An example of its use is
pb[RandomReal[{0, 0.001}, 40000], 8]
Edit
An R solution is ten times slower than Mathematica in this test case--perhaps I have not coded it optimally--but it still executes quickly (about one second):
pb <- function(p, z) {
mu <- sum(p)
x <- c(1, rep(0, ceiling(mu + sqrt(mu) * z)))
f <- function(v) {x <<- x - v * diff(c(0, x));}
sapply(p, f); x
}
y <- pb(runif(40000, 0, 0.001), 8)
plot(y)
|
How can I efficiently model the sum of Bernoulli random variables?
|
(Because this is approach is independent of the other solutions posted, including one that I have posted, I'm offering it as a separate response).
You can compute the exact distribution in seconds (or
|
How can I efficiently model the sum of Bernoulli random variables?
(Because this is approach is independent of the other solutions posted, including one that I have posted, I'm offering it as a separate response).
You can compute the exact distribution in seconds (or less) provided the sum of the p's is small.
We have already seen suggestions that the distribution might approximately be Gaussian (under some scenarios) or Poisson (under other scenarios). Either way, we know its mean $\mu$ is the sum of the $p_i$ and its variance $\sigma^2$ is the sum of $p_i(1-p_i)$. Therefore the distribution will be concentrated within a few standard deviations of its mean, say $z$ SDs with $z$ between 4 and 6 or thereabouts. Therefore we need only compute the probability that the sum $X$ equals (an integer) $k$ for $k = \mu - z \sigma$ through $k = \mu + z \sigma$. When most of the $p_i$ are small, $\sigma^2$ is approximately equal to (but slightly less than) $\mu$, so to be conservative we can do the computation for $k$ in the interval $[\mu - z \sqrt{\mu}, \mu + z \sqrt{\mu}]$. For example, when the sum of the $p_i$ equals $9$ and choosing $z = 6$ in order to cover the tails well, we would need the computation to cover $k$ in $[9 - 6 \sqrt{9}, 9 + 6 \sqrt{9}]$ = $[0, 27]$, which is just 28 values.
The distribution is computed recursively. Let $f_i$ be the distribution of the sum of the first $i$ of these Bernoulli variables. For any $j$ from $0$ through $i+1$, the sum of the first $i+1$ variables can equal $j$ in two mutually exclusive ways: the sum of the first $i$ variables equals $j$ and the $i+1^\text{st}$ is $0$ or else the sum of the first $i$ variables equals $j-1$ and the $i+1^\text{st}$ is $1$. Therefore
$$f_{i+1}(j) = f_i(j)(1 - p_{i+1}) + f_i(j-1) p_{i+1}.$$
We only need to carry out this computation for integral $j$ in the interval from $\max(0, \mu - z \sqrt{\mu})$ to $\mu + z \sqrt{\mu}.$
When most of the $p_i$ are tiny (but the $1 - p_i$ are still distinguishable from $1$ with reasonable precision), this approach is not plagued with the huge accumulation of floating point roundoff errors used in the solution I previously posted. Therefore, extended-precision computation is not required. For example, a double-precision calculation for an array of $2^{16}$ probabilities $p_i = 1/(i+1)$ ($\mu = 10.6676$, requiring calculations for probabilities of sums between $0$ and $31$) took 0.1 seconds with Mathematica 8 and 1-2 seconds with Excel 2002 (both obtained the same answers). Repeating it with quadruple precision (in Mathematica) took about 2 seconds but did not change any answer by more than $3 \times 10^{-15}$. Terminating the distribution at $z = 6$ SDs into the upper tail lost only $3.6 \times 10^{-8}$ of the total probability.
Another calculation for an array of 40,000 double precision random values between 0 and 0.001 ($\mu = 19.9093$) took 0.08 seconds with Mathematica.
This algorithm is parallelizable. Just break the set of $p_i$ into disjoint subsets of approximately equal size, one per processor. Compute the distribution for each subset, then convolve the results (using FFT if you like, although this speedup is probably unnecessary) to obtain the full answer. This makes it practical to use even when $\mu$ gets large, when you need to look far out into the tails ($z$ large), and/or $n$ is large.
The timing for an array of $n$ variables with $m$ processors scales as $O(n(\mu + z \sqrt{\mu})/m)$. Mathematica's speed is on the order of a million per second. For example, with $m = 1$ processor, $n = 20000$ variates, a total probability of $\mu = 100$, and going out to $z = 6$ standard deviations into the upper tail, $n(\mu + z \sqrt{\mu})/m = 3.2$ million: figure a couple seconds of computing time. If you compile this you might speed up the performance two orders of magnitude.
Incidentally, in these test cases, graphs of the distribution clearly showed some positive skewness: they aren't normal.
For the record, here is a Mathematica solution:
pb[p_, z_] := Module[
{\[Mu] = Total[p]},
Fold[#1 - #2 Differences[Prepend[#1, 0]] &,
Prepend[ConstantArray[0, Ceiling[\[Mu] + Sqrt[\[Mu]] z]], 1], p]
]
(NB The color coding applied by this site is meaningless for Mathematica code. In particular, the gray stuff is not comments: it's where all the work is done!)
An example of its use is
pb[RandomReal[{0, 0.001}, 40000], 8]
Edit
An R solution is ten times slower than Mathematica in this test case--perhaps I have not coded it optimally--but it still executes quickly (about one second):
pb <- function(p, z) {
mu <- sum(p)
x <- c(1, rep(0, ceiling(mu + sqrt(mu) * z)))
f <- function(v) {x <<- x - v * diff(c(0, x));}
sapply(p, f); x
}
y <- pb(runif(40000, 0, 0.001), 8)
plot(y)
|
How can I efficiently model the sum of Bernoulli random variables?
(Because this is approach is independent of the other solutions posted, including one that I have posted, I'm offering it as a separate response).
You can compute the exact distribution in seconds (or
|
6,106
|
How can I efficiently model the sum of Bernoulli random variables?
|
@onestop provides good references. The Wikipedia article on the Poisson binomial distribution gives a recursive formula for computing the exact probability distribution; it requires $O(n^2)$ effort. Unfortunately, it's an alternating sum, so it will be numerically unstable: it's hopeless to do this computation with floating point arithmetic. Fortunately, when the $p_i$ are small, you only need to compute a small number of probabilities, so the effort is really proportional to $O(n \log(\sum_i{p_i}))$. The precision necessary to carry out the calculation with rational arithmetic (i.e., exactly, so that the numerical instability is not a problem) grows slowly enough that the overall timing may still be approximately $O(n^2)$. That's feasible.
As a test, I created an array of probabilities $p_i = 1/(i+1)$ for various values of $n$ up to $n = 2^{16}$, which is the size of this problem. For small values of $n$ (up to $n = 2^{12}$) the timing for the exact calculation of probabilities was in seconds and scaled quadratically, so I ventured a calculation for $n = 2^{16}$ out to three SDs above the mean (probabilities for 0, 1, ..., 22 successes). It took 80 minutes (with Mathematica 8), in line with the predicted time. (The resulting probabilities are fractions whose numerators and denominators have about 75,000 digits apiece!) This shows the calculation can be done.
An alternative is to run a long simulation (a million trials ought to do). It only has to be done once, because the $p_i$ don't change.
|
How can I efficiently model the sum of Bernoulli random variables?
|
@onestop provides good references. The Wikipedia article on the Poisson binomial distribution gives a recursive formula for computing the exact probability distribution; it requires $O(n^2)$ effort.
|
How can I efficiently model the sum of Bernoulli random variables?
@onestop provides good references. The Wikipedia article on the Poisson binomial distribution gives a recursive formula for computing the exact probability distribution; it requires $O(n^2)$ effort. Unfortunately, it's an alternating sum, so it will be numerically unstable: it's hopeless to do this computation with floating point arithmetic. Fortunately, when the $p_i$ are small, you only need to compute a small number of probabilities, so the effort is really proportional to $O(n \log(\sum_i{p_i}))$. The precision necessary to carry out the calculation with rational arithmetic (i.e., exactly, so that the numerical instability is not a problem) grows slowly enough that the overall timing may still be approximately $O(n^2)$. That's feasible.
As a test, I created an array of probabilities $p_i = 1/(i+1)$ for various values of $n$ up to $n = 2^{16}$, which is the size of this problem. For small values of $n$ (up to $n = 2^{12}$) the timing for the exact calculation of probabilities was in seconds and scaled quadratically, so I ventured a calculation for $n = 2^{16}$ out to three SDs above the mean (probabilities for 0, 1, ..., 22 successes). It took 80 minutes (with Mathematica 8), in line with the predicted time. (The resulting probabilities are fractions whose numerators and denominators have about 75,000 digits apiece!) This shows the calculation can be done.
An alternative is to run a long simulation (a million trials ought to do). It only has to be done once, because the $p_i$ don't change.
|
How can I efficiently model the sum of Bernoulli random variables?
@onestop provides good references. The Wikipedia article on the Poisson binomial distribution gives a recursive formula for computing the exact probability distribution; it requires $O(n^2)$ effort.
|
6,107
|
How can I efficiently model the sum of Bernoulli random variables?
|
With different $p_i$ your best bet I think is normal approximation. Let $B_n=\sum_{i=1}^np_i(1-p_i)$. Then
\begin{align*}
B_n^{-1/2}\left(\sum_{i=1}^nX_i-\sum_{i=1}^np_i\right)\to N(0,1),
\end{align*}
as $n\to\infty$, provided that for each $\varepsilon>0$
\begin{align*}
B_n^{-1}\sum_{i=1}^nE\left((X_i-p_i)^2\mathbf{1}\{|X_i-p_i|>\varepsilon B_n^{1/2}\}\right)\to 0,
\end{align*}
as $n\to\infty$, which for Bernoulli variables will hold if $B_n\to\infty$. This is the so-called Lindeberg condition, which is sufficient and necessary for convergence to the standard normal.
Update:
The approximation error can be calculated from the following inequality:
\begin{align*}
\sup_x|F_n(x)-\Phi(x)|\le AL_n,
\end{align*}
where
\begin{align*}
L_n=B_n^{-3/2}\sum_{i=1}^nE|X_i-p_i|^3
\end{align*}
and $F_n$ is the cdf of the scaled and centered sum of $X_i$.
As whuber pointed out, the convergence can be slow for badly behaved $p_i$. For $p_i=\frac{1}{1+i}$ we have $B_n\approx \ln n$ and $L_n\approx (\ln n)^{-1/2}$. Then taking $n=2^{16}$ we get that the maximum deviation from the standard normal cdf is a whopping 0.3.
|
How can I efficiently model the sum of Bernoulli random variables?
|
With different $p_i$ your best bet I think is normal approximation. Let $B_n=\sum_{i=1}^np_i(1-p_i)$. Then
\begin{align*}
B_n^{-1/2}\left(\sum_{i=1}^nX_i-\sum_{i=1}^np_i\right)\to N(0,1),
\end{align*}
|
How can I efficiently model the sum of Bernoulli random variables?
With different $p_i$ your best bet I think is normal approximation. Let $B_n=\sum_{i=1}^np_i(1-p_i)$. Then
\begin{align*}
B_n^{-1/2}\left(\sum_{i=1}^nX_i-\sum_{i=1}^np_i\right)\to N(0,1),
\end{align*}
as $n\to\infty$, provided that for each $\varepsilon>0$
\begin{align*}
B_n^{-1}\sum_{i=1}^nE\left((X_i-p_i)^2\mathbf{1}\{|X_i-p_i|>\varepsilon B_n^{1/2}\}\right)\to 0,
\end{align*}
as $n\to\infty$, which for Bernoulli variables will hold if $B_n\to\infty$. This is the so-called Lindeberg condition, which is sufficient and necessary for convergence to the standard normal.
Update:
The approximation error can be calculated from the following inequality:
\begin{align*}
\sup_x|F_n(x)-\Phi(x)|\le AL_n,
\end{align*}
where
\begin{align*}
L_n=B_n^{-3/2}\sum_{i=1}^nE|X_i-p_i|^3
\end{align*}
and $F_n$ is the cdf of the scaled and centered sum of $X_i$.
As whuber pointed out, the convergence can be slow for badly behaved $p_i$. For $p_i=\frac{1}{1+i}$ we have $B_n\approx \ln n$ and $L_n\approx (\ln n)^{-1/2}$. Then taking $n=2^{16}$ we get that the maximum deviation from the standard normal cdf is a whopping 0.3.
|
How can I efficiently model the sum of Bernoulli random variables?
With different $p_i$ your best bet I think is normal approximation. Let $B_n=\sum_{i=1}^np_i(1-p_i)$. Then
\begin{align*}
B_n^{-1/2}\left(\sum_{i=1}^nX_i-\sum_{i=1}^np_i\right)\to N(0,1),
\end{align*}
|
6,108
|
How can I efficiently model the sum of Bernoulli random variables?
|
Well, based on your description and the discussion in the comments it is clear that $Y$ has mean $\sum_i p_i$ and variance $\sum_i p_{i}(1-p_{i})$. The shape of $Y$'s distribution will ultimately depend on the behavior of $p_i$. For suitably "nice" $p_i$ (in the sense that not too many of them are really close to zero), the distribution of $Y$ will be approximately normal (centered right at $\sum p_i$). But as $\sum_i p_i$ starts heading toward zero the distribution will be shifted to the left and when it crowds up against the $y$-axis it will start looking a lot less normal and a lot more Poisson, as @whuber and @onestop have mentioned.
From your comment "the distribution looks Poisson" I suspect that this latter case is what's happening, but can't really be sure without some sort of visual display or summary statistics about the $p$'s. Note however, as @whuber did, that with sufficiently pathological behavior of the $p$'s you can have all sorts of spooky things happen, like limits that are mixture distributions. I doubt that is the case here, but again, it really depends on what your $p$'s are doing.
As to the original question of "how to efficiently model", I was going to suggest a hierarchical model for you but it isn't really appropriate if the $p$'s are fixed constants. In short, take a look at a histogram of the $p$'s and make a first guess based on what you see. I would recommend the answer by @mpiktas (and by extension @csgillespie) if your $p$'s aren't too crowded to the left, and I would recommend the answer by @onestop if they are crowded left-ly.
By the way, here is the R code I used while playing around with this problem: the code isn't really appropriate if your $p$'s are too small, but it should be easy to plug in different models for $p$ (including spooky-crazy ones) to see what happens to the ultimate distribution of $Y$.
set.seed(1)
M <- 5000
N <- 15000
p <- rbeta(N, shape1 = 1, shape2 = 10)
Y <- replicate(M, sum(rbinom(N, size = 1, prob = p)))
Now take a look at the results.
hist(Y)
mean(Y)
sum(p)
var(Y)
sum(p*(1 - p))
Have fun; I sure did.
|
How can I efficiently model the sum of Bernoulli random variables?
|
Well, based on your description and the discussion in the comments it is clear that $Y$ has mean $\sum_i p_i$ and variance $\sum_i p_{i}(1-p_{i})$. The shape of $Y$'s distribution will ultimately dep
|
How can I efficiently model the sum of Bernoulli random variables?
Well, based on your description and the discussion in the comments it is clear that $Y$ has mean $\sum_i p_i$ and variance $\sum_i p_{i}(1-p_{i})$. The shape of $Y$'s distribution will ultimately depend on the behavior of $p_i$. For suitably "nice" $p_i$ (in the sense that not too many of them are really close to zero), the distribution of $Y$ will be approximately normal (centered right at $\sum p_i$). But as $\sum_i p_i$ starts heading toward zero the distribution will be shifted to the left and when it crowds up against the $y$-axis it will start looking a lot less normal and a lot more Poisson, as @whuber and @onestop have mentioned.
From your comment "the distribution looks Poisson" I suspect that this latter case is what's happening, but can't really be sure without some sort of visual display or summary statistics about the $p$'s. Note however, as @whuber did, that with sufficiently pathological behavior of the $p$'s you can have all sorts of spooky things happen, like limits that are mixture distributions. I doubt that is the case here, but again, it really depends on what your $p$'s are doing.
As to the original question of "how to efficiently model", I was going to suggest a hierarchical model for you but it isn't really appropriate if the $p$'s are fixed constants. In short, take a look at a histogram of the $p$'s and make a first guess based on what you see. I would recommend the answer by @mpiktas (and by extension @csgillespie) if your $p$'s aren't too crowded to the left, and I would recommend the answer by @onestop if they are crowded left-ly.
By the way, here is the R code I used while playing around with this problem: the code isn't really appropriate if your $p$'s are too small, but it should be easy to plug in different models for $p$ (including spooky-crazy ones) to see what happens to the ultimate distribution of $Y$.
set.seed(1)
M <- 5000
N <- 15000
p <- rbeta(N, shape1 = 1, shape2 = 10)
Y <- replicate(M, sum(rbinom(N, size = 1, prob = p)))
Now take a look at the results.
hist(Y)
mean(Y)
sum(p)
var(Y)
sum(p*(1 - p))
Have fun; I sure did.
|
How can I efficiently model the sum of Bernoulli random variables?
Well, based on your description and the discussion in the comments it is clear that $Y$ has mean $\sum_i p_i$ and variance $\sum_i p_{i}(1-p_{i})$. The shape of $Y$'s distribution will ultimately dep
|
6,109
|
How can I efficiently model the sum of Bernoulli random variables?
|
I think other answers are great, but I didn't see any Bayesian ways of estimating your probability. The answer doesn't have an explicit form, but the probability can be simulated using R.
Here is the attempt:
$$ X_i | p_i \sim Ber(p_i)$$
$$ p_i \sim Beta(\alpha, \beta) $$
Using wikipedia we can get estimates of $\hat{\alpha}$ and $\hat{\beta}$ (see parameter estimation section).
Now you can generate draws for the $i^{th}$ step, generate $p_i$ from $Beta(\hat{\alpha},\hat{\beta})$ and then generate $ X_i$ from $Ber(p_i)$. After you have done this $N$ times you can get $Y = \sum X_i$. This is a single cycle for generation of Y, do this $M$(large) number of times and the histogram for $M$ Ys will be the estimate of density of Y.
$$Prob[Y \leq y] = \frac {\#Y \leq y} {M}$$
This analysis is valid only when $p_i$ are not fixed. This is not the case here. But I will leave it here, in case someone has a similar question.
|
How can I efficiently model the sum of Bernoulli random variables?
|
I think other answers are great, but I didn't see any Bayesian ways of estimating your probability. The answer doesn't have an explicit form, but the probability can be simulated using R.
Here is the
|
How can I efficiently model the sum of Bernoulli random variables?
I think other answers are great, but I didn't see any Bayesian ways of estimating your probability. The answer doesn't have an explicit form, but the probability can be simulated using R.
Here is the attempt:
$$ X_i | p_i \sim Ber(p_i)$$
$$ p_i \sim Beta(\alpha, \beta) $$
Using wikipedia we can get estimates of $\hat{\alpha}$ and $\hat{\beta}$ (see parameter estimation section).
Now you can generate draws for the $i^{th}$ step, generate $p_i$ from $Beta(\hat{\alpha},\hat{\beta})$ and then generate $ X_i$ from $Ber(p_i)$. After you have done this $N$ times you can get $Y = \sum X_i$. This is a single cycle for generation of Y, do this $M$(large) number of times and the histogram for $M$ Ys will be the estimate of density of Y.
$$Prob[Y \leq y] = \frac {\#Y \leq y} {M}$$
This analysis is valid only when $p_i$ are not fixed. This is not the case here. But I will leave it here, in case someone has a similar question.
|
How can I efficiently model the sum of Bernoulli random variables?
I think other answers are great, but I didn't see any Bayesian ways of estimating your probability. The answer doesn't have an explicit form, but the probability can be simulated using R.
Here is the
|
6,110
|
How can I efficiently model the sum of Bernoulli random variables?
|
I would suggest applying Poisson approximation. It is well known (see A. D. Barbour, L. Holst and S. Janson: Poisson Approximation) that the total variation distance between $Y$ and a r.v. $Z$ having Poisson distribution with the parameter $\sum_i p_i$ is small:
$$ \sup_A |{\bf P}(Y\in A) - {\bf P}(Z\in A)|
\le \min \left\{ 1, \frac{1}{\sum_i p_i} \right\} \sum_i p_i^2.
$$
There are also bounds in terms of information divergence (the Kullback-Leibler distance, you may see P. Harremoёs: Convergence to the Poisson Distribution in Information Divergence. Preprint no. 2, Feb. 2003, Mathematical Department, University of Copenhagen. http://www.harremoes.dk/Peter/poisprep.pdf and other publications of P.Harremoёs), chi-squared distance (see Borisov and Vorozheikin https://link.springer.com/article/10.1007%2Fs11202-008-0002-3) and some other distances.
For the accuracy of approximation
$|{\bf E}f(Y) - {\bf E}f(Z)|$ for unbounded functions $f$ you may see Borisov and Ruzankin https://projecteuclid.org/euclid.aop/1039548369 .
Besides, that paper contains a simple bound for probabilities: for all $A$, we have
$${\bf P}(Y\in A) \le \frac{1}{(1-\max_i p_i)^2} {\bf P}(Z\in A).$$
|
How can I efficiently model the sum of Bernoulli random variables?
|
I would suggest applying Poisson approximation. It is well known (see A. D. Barbour, L. Holst and S. Janson: Poisson Approximation) that the total variation distance between $Y$ and a r.v. $Z$ having
|
How can I efficiently model the sum of Bernoulli random variables?
I would suggest applying Poisson approximation. It is well known (see A. D. Barbour, L. Holst and S. Janson: Poisson Approximation) that the total variation distance between $Y$ and a r.v. $Z$ having Poisson distribution with the parameter $\sum_i p_i$ is small:
$$ \sup_A |{\bf P}(Y\in A) - {\bf P}(Z\in A)|
\le \min \left\{ 1, \frac{1}{\sum_i p_i} \right\} \sum_i p_i^2.
$$
There are also bounds in terms of information divergence (the Kullback-Leibler distance, you may see P. Harremoёs: Convergence to the Poisson Distribution in Information Divergence. Preprint no. 2, Feb. 2003, Mathematical Department, University of Copenhagen. http://www.harremoes.dk/Peter/poisprep.pdf and other publications of P.Harremoёs), chi-squared distance (see Borisov and Vorozheikin https://link.springer.com/article/10.1007%2Fs11202-008-0002-3) and some other distances.
For the accuracy of approximation
$|{\bf E}f(Y) - {\bf E}f(Z)|$ for unbounded functions $f$ you may see Borisov and Ruzankin https://projecteuclid.org/euclid.aop/1039548369 .
Besides, that paper contains a simple bound for probabilities: for all $A$, we have
$${\bf P}(Y\in A) \le \frac{1}{(1-\max_i p_i)^2} {\bf P}(Z\in A).$$
|
How can I efficiently model the sum of Bernoulli random variables?
I would suggest applying Poisson approximation. It is well known (see A. D. Barbour, L. Holst and S. Janson: Poisson Approximation) that the total variation distance between $Y$ and a r.v. $Z$ having
|
6,111
|
How can I efficiently model the sum of Bernoulli random variables?
|
As has been mentioned in other answers, the probability distribution you describe is the Poisson Binomial distribution. An efficient method for computing the CDF is given in Hong, Yili. On computing the distribution function for the Poisson binomial distribution.
The approach is to efficiently compute the DFT (discrete Fourier transform) of the characteristic function.
The characteristic function of the Poisson binomial distribution is give by
$\phi(t) = \prod_j^n [(1-p_j)+p_je^{it}]$ ($i=\sqrt{-1}$).
The algorithm is:
Let $z_j(k) = 1-p_j+p_j \text{cos}(\omega k)+ i p_j \text{sin}(\omega k)$, for $\omega=\frac{2\pi}{n+1}$.
Define $x_k=\text{exp}\{\sum_j^n log(z_j(k))\}$, define $x_0=1$.
Compute $x_k$ for $k=1,\dots,[n/2]$. Use symmetry $\bar{x}_k=x_{n+1-k}$ to get the rest.
Apply FFT to the vector $\frac{1}{n+1}<x_0,x_1,\dots,x_n>$.
Take the cumulative sum of result to get the CDF.
The algorithm is available in the poibin R package.
This approach gives much better results than the recursive formulations as they tend to lack numerical stability.
|
How can I efficiently model the sum of Bernoulli random variables?
|
As has been mentioned in other answers, the probability distribution you describe is the Poisson Binomial distribution. An efficient method for computing the CDF is given in Hong, Yili. On computing t
|
How can I efficiently model the sum of Bernoulli random variables?
As has been mentioned in other answers, the probability distribution you describe is the Poisson Binomial distribution. An efficient method for computing the CDF is given in Hong, Yili. On computing the distribution function for the Poisson binomial distribution.
The approach is to efficiently compute the DFT (discrete Fourier transform) of the characteristic function.
The characteristic function of the Poisson binomial distribution is give by
$\phi(t) = \prod_j^n [(1-p_j)+p_je^{it}]$ ($i=\sqrt{-1}$).
The algorithm is:
Let $z_j(k) = 1-p_j+p_j \text{cos}(\omega k)+ i p_j \text{sin}(\omega k)$, for $\omega=\frac{2\pi}{n+1}$.
Define $x_k=\text{exp}\{\sum_j^n log(z_j(k))\}$, define $x_0=1$.
Compute $x_k$ for $k=1,\dots,[n/2]$. Use symmetry $\bar{x}_k=x_{n+1-k}$ to get the rest.
Apply FFT to the vector $\frac{1}{n+1}<x_0,x_1,\dots,x_n>$.
Take the cumulative sum of result to get the CDF.
The algorithm is available in the poibin R package.
This approach gives much better results than the recursive formulations as they tend to lack numerical stability.
|
How can I efficiently model the sum of Bernoulli random variables?
As has been mentioned in other answers, the probability distribution you describe is the Poisson Binomial distribution. An efficient method for computing the CDF is given in Hong, Yili. On computing t
|
6,112
|
Quantile regression: Loss function
|
I understand this question as asking for insight into how one could come up with any loss function that produces a given quantile as a loss minimizer no matter what the underlying distribution might be. It would be unsatisfactory, then, just to repeat the analysis in Wikipedia or elsewhere that shows this particular loss function works.
Let's begin with something familiar and simple.
What you're talking about is finding a "location" $x^{*}$ relative to a distribution or set of data $F$. It is well known, for instance, that the mean $\bar x$ minimizes the expected squared residual; that is, it is a value for which
$$\mathcal{L}_F(\bar x)=\int_{\mathbb{R}} (x - \bar x)^2 dF(x)$$
is as small as possible. I have used this notation to remind us that $\mathcal{L}$ is derived from a loss, that it is determined by $F$, but most importantly it depends on the number $\bar x$.
The standard way to show that $x^{*}$ minimizes any function begins by demonstrating the function's value does not decrease when $x^{*}$ is changed by a little bit. Such a value is called a critical point of the function.
What kind of loss function $\Lambda$ would result in a percentile $F^{-1}(\alpha)$ being a critical point? The loss for that value would be
$$\mathcal{L}_F(F^{-1}(\alpha)) = \int_{\mathbb{R}} \Lambda(x-F^{-1}(\alpha))dF(x)=\int_0^1\Lambda\left(F^{-1}(u)-F^{-1}(\alpha)\right)du.$$
For this to be a critical point, its derivative must be zero. Since we're just trying to find some solution, we won't pause to see whether the manipulations are legitimate: we'll plan to check technical details (such as whether we really can differentiate $\Lambda$, etc.) at the end. Thus
$$\eqalign{0 &=\mathcal{L}_F^\prime(x^{*})= \mathcal{L}_F^\prime(F^{-1}(\alpha))= -\int_0^1 \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du \\
&= -\int_0^{\alpha} \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du -\int_{\alpha}^1 \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du.\tag{1}
}$$
On the left hand side, the argument of $\Lambda$ is negative, whereas on the right hand side it is positive. Other than that, we have little control over the values of these integrals because $F$ could be any distribution function. Consequently our only hope is to make $\Lambda^\prime$ depend only on the sign of its argument, and otherwise it must be constant.
This implies $\Lambda$ will be piecewise linear, potentially with different slopes to the left and right of zero. Clearly it should be decreasing as zero is approached--it is, after all, a loss and not a gain. Moreover, rescaling $\Lambda$ by a constant will not change its properties, so we may feel free to set the left hand slope to $-1$. Let $\tau \gt 0$ be the right hand slope. Then $(1)$ simplifies to
$$0 = \alpha - \tau (1 - \alpha),$$
whence the unique solution is, up to a positive multiple,
$$\Lambda(x) = \cases{-x, \ x \le 0 \\ \frac{\alpha}{1-\alpha}x, \ x \ge 0.}$$
Multiplying this (natural) solution by $1-\alpha$, to clear the denominator, produces the loss function presented in the question.
Clearly all our manipulations are mathematically legitimate when $\Lambda$ has this form.
|
Quantile regression: Loss function
|
I understand this question as asking for insight into how one could come up with any loss function that produces a given quantile as a loss minimizer no matter what the underlying distribution might b
|
Quantile regression: Loss function
I understand this question as asking for insight into how one could come up with any loss function that produces a given quantile as a loss minimizer no matter what the underlying distribution might be. It would be unsatisfactory, then, just to repeat the analysis in Wikipedia or elsewhere that shows this particular loss function works.
Let's begin with something familiar and simple.
What you're talking about is finding a "location" $x^{*}$ relative to a distribution or set of data $F$. It is well known, for instance, that the mean $\bar x$ minimizes the expected squared residual; that is, it is a value for which
$$\mathcal{L}_F(\bar x)=\int_{\mathbb{R}} (x - \bar x)^2 dF(x)$$
is as small as possible. I have used this notation to remind us that $\mathcal{L}$ is derived from a loss, that it is determined by $F$, but most importantly it depends on the number $\bar x$.
The standard way to show that $x^{*}$ minimizes any function begins by demonstrating the function's value does not decrease when $x^{*}$ is changed by a little bit. Such a value is called a critical point of the function.
What kind of loss function $\Lambda$ would result in a percentile $F^{-1}(\alpha)$ being a critical point? The loss for that value would be
$$\mathcal{L}_F(F^{-1}(\alpha)) = \int_{\mathbb{R}} \Lambda(x-F^{-1}(\alpha))dF(x)=\int_0^1\Lambda\left(F^{-1}(u)-F^{-1}(\alpha)\right)du.$$
For this to be a critical point, its derivative must be zero. Since we're just trying to find some solution, we won't pause to see whether the manipulations are legitimate: we'll plan to check technical details (such as whether we really can differentiate $\Lambda$, etc.) at the end. Thus
$$\eqalign{0 &=\mathcal{L}_F^\prime(x^{*})= \mathcal{L}_F^\prime(F^{-1}(\alpha))= -\int_0^1 \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du \\
&= -\int_0^{\alpha} \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du -\int_{\alpha}^1 \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du.\tag{1}
}$$
On the left hand side, the argument of $\Lambda$ is negative, whereas on the right hand side it is positive. Other than that, we have little control over the values of these integrals because $F$ could be any distribution function. Consequently our only hope is to make $\Lambda^\prime$ depend only on the sign of its argument, and otherwise it must be constant.
This implies $\Lambda$ will be piecewise linear, potentially with different slopes to the left and right of zero. Clearly it should be decreasing as zero is approached--it is, after all, a loss and not a gain. Moreover, rescaling $\Lambda$ by a constant will not change its properties, so we may feel free to set the left hand slope to $-1$. Let $\tau \gt 0$ be the right hand slope. Then $(1)$ simplifies to
$$0 = \alpha - \tau (1 - \alpha),$$
whence the unique solution is, up to a positive multiple,
$$\Lambda(x) = \cases{-x, \ x \le 0 \\ \frac{\alpha}{1-\alpha}x, \ x \ge 0.}$$
Multiplying this (natural) solution by $1-\alpha$, to clear the denominator, produces the loss function presented in the question.
Clearly all our manipulations are mathematically legitimate when $\Lambda$ has this form.
|
Quantile regression: Loss function
I understand this question as asking for insight into how one could come up with any loss function that produces a given quantile as a loss minimizer no matter what the underlying distribution might b
|
6,113
|
Quantile regression: Loss function
|
The way this loss function is expressed is nice and compact but I think it's easier to understand by rewriting it as
$$\rho_\tau(X-m) = (X-m)(\tau-1_{(X-m<0)}) = \begin{cases} \tau |X-m| & if \; X-m \ge 0 \\
(1 - \tau) |X-m| & if \; X-m < 0)
\end{cases}$$
If you want to get an intuitive sense of why minimizing this loss function yields the $\tau$th quantile, it's helpful to consider a simple example. Let $X$ be a uniform random variable between 0 and 1. Let's also choose a concrete value for $\tau$, say, $0.25$.
So now the question is why would this loss function be minimized at $m=0.25$? Obviously, there's three times as much mass in the uniform distribution to the right of $m$ than there is to the left. And the loss function weights the values larger than this number at only a third of the weight given to values less than it. Thus, it's sort of intuitive that the scales are balanced when the $\tau$th quantile is used as the inflection point for the loss function.
|
Quantile regression: Loss function
|
The way this loss function is expressed is nice and compact but I think it's easier to understand by rewriting it as
$$\rho_\tau(X-m) = (X-m)(\tau-1_{(X-m<0)}) = \begin{cases} \tau |X-m| & if \; X-m \
|
Quantile regression: Loss function
The way this loss function is expressed is nice and compact but I think it's easier to understand by rewriting it as
$$\rho_\tau(X-m) = (X-m)(\tau-1_{(X-m<0)}) = \begin{cases} \tau |X-m| & if \; X-m \ge 0 \\
(1 - \tau) |X-m| & if \; X-m < 0)
\end{cases}$$
If you want to get an intuitive sense of why minimizing this loss function yields the $\tau$th quantile, it's helpful to consider a simple example. Let $X$ be a uniform random variable between 0 and 1. Let's also choose a concrete value for $\tau$, say, $0.25$.
So now the question is why would this loss function be minimized at $m=0.25$? Obviously, there's three times as much mass in the uniform distribution to the right of $m$ than there is to the left. And the loss function weights the values larger than this number at only a third of the weight given to values less than it. Thus, it's sort of intuitive that the scales are balanced when the $\tau$th quantile is used as the inflection point for the loss function.
|
Quantile regression: Loss function
The way this loss function is expressed is nice and compact but I think it's easier to understand by rewriting it as
$$\rho_\tau(X-m) = (X-m)(\tau-1_{(X-m<0)}) = \begin{cases} \tau |X-m| & if \; X-m \
|
6,114
|
Why does the variance of the Random walk increase?
|
In short because it keeps adding the variance of the next increments to the variability we have in getting to where we are now.
$\text{Var}(Y_{t}) = \text{Var}(e_1+ e_2+ ... +e_t)$
$\qquad\quad\;\;= \text{Var}(e_1) + \text{Var}(e_2) +... +\text{Var}(e_t)$ (independence)
$\qquad\quad\;\;= \sigma^2 + \sigma^2 + ... + \sigma^2=t\sigma^2\,,$
and we can see that $t\sigma^2$ increases linearly with $t$.
The mean is zero at each time point; if you simulated the series many times and averaged across series for a given time, that would average to something near 0
$\quad^{\text{Figure: 500 simulated random walks with sample mean in white and }}$
$\quad^{ \pm \text{ one standard deviation in red. Standard deviation increases with } \sqrt{t}\,.}$
|
Why does the variance of the Random walk increase?
|
In short because it keeps adding the variance of the next increments to the variability we have in getting to where we are now.
$\text{Var}(Y_{t}) = \text{Var}(e_1+ e_2+ ... +e_t)$
$\qquad\quad\;\;=
|
Why does the variance of the Random walk increase?
In short because it keeps adding the variance of the next increments to the variability we have in getting to where we are now.
$\text{Var}(Y_{t}) = \text{Var}(e_1+ e_2+ ... +e_t)$
$\qquad\quad\;\;= \text{Var}(e_1) + \text{Var}(e_2) +... +\text{Var}(e_t)$ (independence)
$\qquad\quad\;\;= \sigma^2 + \sigma^2 + ... + \sigma^2=t\sigma^2\,,$
and we can see that $t\sigma^2$ increases linearly with $t$.
The mean is zero at each time point; if you simulated the series many times and averaged across series for a given time, that would average to something near 0
$\quad^{\text{Figure: 500 simulated random walks with sample mean in white and }}$
$\quad^{ \pm \text{ one standard deviation in red. Standard deviation increases with } \sqrt{t}\,.}$
|
Why does the variance of the Random walk increase?
In short because it keeps adding the variance of the next increments to the variability we have in getting to where we are now.
$\text{Var}(Y_{t}) = \text{Var}(e_1+ e_2+ ... +e_t)$
$\qquad\quad\;\;=
|
6,115
|
Why does the variance of the Random walk increase?
|
Here's a way to imagine it. To simplify things, let's replace your white noise $e_i$ with a coin flip $e_i$
$$ e_i = \left\{ \begin{array}{c} 1 \ \text{with} \ Pr = .5 \\ -1 \ \text{with} \ Pr = .5 \end{array} \right. $$
this just simplifies the visualization, there's nothing really fundamental about the switch except easing the strain on our imagination.
Now, suppose you have gathered an army of coin flippers. Their instructions are to, at your command, flip their coin, and keep a working tally of what their results were, along with a summation of all their previous results. Each individual flipper is an instance of the random walk
$$ W = e_1 + e_2 + \cdots $$
and aggregating over all of your army should give you a take on the expected behavior.
flip 1: About half of your army flips heads, and half flips tails. The expectation of the sum, taken across your whole army, is zero. The maximum value of $W$ across your whole army is $1$ and the minimum is $-1$, so the total range is $2$.
flip 2: About half flip heads, and half flips tails. The expectation of this flip is again zero, so the expectation of $W$ over all flips does not change. Some of your army has flipped $HH$, and some others have flipped $TT$, so the maximum of $W$ is $2$ and the minimum is $-2$; the total range is $4$.
...
flip n: About half flip heads, and half flips tails. The expectation of this flip is again zero, so the expectation of $W$ over all flips does not change, it is still zero. If your army is very large, some very lucky soldiers flipped $HH \cdots H$ and others $TT \cdots T$. That is, there is a few with $n$ heads, and a few with $n$ tails (though this is getting rarer and rarer as time goes on). So, at least in our imaginations, the total range is $2n$.
So here's what you can see from this thought experiment:
The expectation of the walk is zero, as each step in the walk is balanced.
The total range of the walk grows linearly with the length of the walk.
To recover intuition we had to discard the standard deviation and use in intuitive measure, the range.
|
Why does the variance of the Random walk increase?
|
Here's a way to imagine it. To simplify things, let's replace your white noise $e_i$ with a coin flip $e_i$
$$ e_i = \left\{ \begin{array}{c} 1 \ \text{with} \ Pr = .5 \\ -1 \ \text{with} \ Pr = .5 \
|
Why does the variance of the Random walk increase?
Here's a way to imagine it. To simplify things, let's replace your white noise $e_i$ with a coin flip $e_i$
$$ e_i = \left\{ \begin{array}{c} 1 \ \text{with} \ Pr = .5 \\ -1 \ \text{with} \ Pr = .5 \end{array} \right. $$
this just simplifies the visualization, there's nothing really fundamental about the switch except easing the strain on our imagination.
Now, suppose you have gathered an army of coin flippers. Their instructions are to, at your command, flip their coin, and keep a working tally of what their results were, along with a summation of all their previous results. Each individual flipper is an instance of the random walk
$$ W = e_1 + e_2 + \cdots $$
and aggregating over all of your army should give you a take on the expected behavior.
flip 1: About half of your army flips heads, and half flips tails. The expectation of the sum, taken across your whole army, is zero. The maximum value of $W$ across your whole army is $1$ and the minimum is $-1$, so the total range is $2$.
flip 2: About half flip heads, and half flips tails. The expectation of this flip is again zero, so the expectation of $W$ over all flips does not change. Some of your army has flipped $HH$, and some others have flipped $TT$, so the maximum of $W$ is $2$ and the minimum is $-2$; the total range is $4$.
...
flip n: About half flip heads, and half flips tails. The expectation of this flip is again zero, so the expectation of $W$ over all flips does not change, it is still zero. If your army is very large, some very lucky soldiers flipped $HH \cdots H$ and others $TT \cdots T$. That is, there is a few with $n$ heads, and a few with $n$ tails (though this is getting rarer and rarer as time goes on). So, at least in our imaginations, the total range is $2n$.
So here's what you can see from this thought experiment:
The expectation of the walk is zero, as each step in the walk is balanced.
The total range of the walk grows linearly with the length of the walk.
To recover intuition we had to discard the standard deviation and use in intuitive measure, the range.
|
Why does the variance of the Random walk increase?
Here's a way to imagine it. To simplify things, let's replace your white noise $e_i$ with a coin flip $e_i$
$$ e_i = \left\{ \begin{array}{c} 1 \ \text{with} \ Pr = .5 \\ -1 \ \text{with} \ Pr = .5 \
|
6,116
|
Why does the variance of the Random walk increase?
|
Does this have something to do with it's not "pure" random, since the
new position is very correlated with the previous one?
It appears that by "pure" you mean independent. In random walk only the steps are random and independent of each other. As you noted, the "positions" are random but correlated, i.e. not independent.
The expectation of the position is still zero like you wrote $E[Y_t]=0$. The reason why you observe non-zero positions is because the positions are still random, i.e. $Y_t$ are all non zero random numbers. As a matter of fact, while you increase the sample larger $Y_t$ will be observed from time to time, precisely because, as you noted, the variance is increasing with sample size.
The variance is increasing because if you unwrap the position as follows: $Y_t=Y_0+\sum_{i=0}^t\varepsilon_t$, you can see that the position is a sum of steps, obviously. The variances add up with sample size increasing.
By the way, the means of errors also add up, but in a random walk we usually assume that the means are zero, so adding all zeros will still result in zero. There's random walk with a drift: $Y_t-Y_{t-1}=\mu+\varepsilon_t$, where $Y_t$ will drift away from zero at rate $\mu t$ with sample time.
|
Why does the variance of the Random walk increase?
|
Does this have something to do with it's not "pure" random, since the
new position is very correlated with the previous one?
It appears that by "pure" you mean independent. In random walk only the
|
Why does the variance of the Random walk increase?
Does this have something to do with it's not "pure" random, since the
new position is very correlated with the previous one?
It appears that by "pure" you mean independent. In random walk only the steps are random and independent of each other. As you noted, the "positions" are random but correlated, i.e. not independent.
The expectation of the position is still zero like you wrote $E[Y_t]=0$. The reason why you observe non-zero positions is because the positions are still random, i.e. $Y_t$ are all non zero random numbers. As a matter of fact, while you increase the sample larger $Y_t$ will be observed from time to time, precisely because, as you noted, the variance is increasing with sample size.
The variance is increasing because if you unwrap the position as follows: $Y_t=Y_0+\sum_{i=0}^t\varepsilon_t$, you can see that the position is a sum of steps, obviously. The variances add up with sample size increasing.
By the way, the means of errors also add up, but in a random walk we usually assume that the means are zero, so adding all zeros will still result in zero. There's random walk with a drift: $Y_t-Y_{t-1}=\mu+\varepsilon_t$, where $Y_t$ will drift away from zero at rate $\mu t$ with sample time.
|
Why does the variance of the Random walk increase?
Does this have something to do with it's not "pure" random, since the
new position is very correlated with the previous one?
It appears that by "pure" you mean independent. In random walk only the
|
6,117
|
Why does the variance of the Random walk increase?
|
Let's take a different example for an intuitive explanation: throwing darts at a dartboard. We have a player, who tries to aim for the bullseye, which we take to be a coordinate called 0. The player throws a few times, and indeed, the mean of his throws is 0, but he's not really good, so the variance is 20 cm.
We ask the player to throw a single new dart. Do you expect it to hit bullseye?
No. Although the mean is exactly bullseye, when we sample a throw, it's quite likely not to be bullseye.
In the same way, with random walk, we don't expect a single sample at time $t$ to be anywhere near 0. That's in fact what the variance indicates: how far away do we expect a sample to be?
However, if we take a lot of samples, we'll see that it does center around 0. Just like our darts player will almost never hit bullseye (large variance), but if he throws a lot of darts, he will have them centered around the bullseye (mean).
If we extend this example to the random walk, we can see that the variance increases with time, even though the mean stays at 0. In the random walk case, it seems strange that the mean stays at 0, even though you will intuitively know that it almost never ends up at the origin exactly. However, the same goes for our darter: we can see that any single dart will almost never hit bullseye with an increasing variance, and yet the darts will form a nice cloud around the bullseye - the mean stays the same: 0.
|
Why does the variance of the Random walk increase?
|
Let's take a different example for an intuitive explanation: throwing darts at a dartboard. We have a player, who tries to aim for the bullseye, which we take to be a coordinate called 0. The player t
|
Why does the variance of the Random walk increase?
Let's take a different example for an intuitive explanation: throwing darts at a dartboard. We have a player, who tries to aim for the bullseye, which we take to be a coordinate called 0. The player throws a few times, and indeed, the mean of his throws is 0, but he's not really good, so the variance is 20 cm.
We ask the player to throw a single new dart. Do you expect it to hit bullseye?
No. Although the mean is exactly bullseye, when we sample a throw, it's quite likely not to be bullseye.
In the same way, with random walk, we don't expect a single sample at time $t$ to be anywhere near 0. That's in fact what the variance indicates: how far away do we expect a sample to be?
However, if we take a lot of samples, we'll see that it does center around 0. Just like our darts player will almost never hit bullseye (large variance), but if he throws a lot of darts, he will have them centered around the bullseye (mean).
If we extend this example to the random walk, we can see that the variance increases with time, even though the mean stays at 0. In the random walk case, it seems strange that the mean stays at 0, even though you will intuitively know that it almost never ends up at the origin exactly. However, the same goes for our darter: we can see that any single dart will almost never hit bullseye with an increasing variance, and yet the darts will form a nice cloud around the bullseye - the mean stays the same: 0.
|
Why does the variance of the Random walk increase?
Let's take a different example for an intuitive explanation: throwing darts at a dartboard. We have a player, who tries to aim for the bullseye, which we take to be a coordinate called 0. The player t
|
6,118
|
Why does the variance of the Random walk increase?
|
Here's another way to get intuition that variance increases linearly with time.
Returns increase linearly with time. $.1\%$ return per month translate into $1.2\%$ return per year - $X$ return per day generate $365X$ return per year (assuming independence).
It makes sense that the range of returns also increases linearly. If monthly return is $.1\%$ on average $\pm .05\%$, then it makes intuitive sense that per year it is $1.2\%$ on average $\pm .6\%$.
Well, if we intuitively think of variance as range, then it makes intuitive sense that variance increases in the same fashion as return through time, that is linearly.
|
Why does the variance of the Random walk increase?
|
Here's another way to get intuition that variance increases linearly with time.
Returns increase linearly with time. $.1\%$ return per month translate into $1.2\%$ return per year - $X$ return per day
|
Why does the variance of the Random walk increase?
Here's another way to get intuition that variance increases linearly with time.
Returns increase linearly with time. $.1\%$ return per month translate into $1.2\%$ return per year - $X$ return per day generate $365X$ return per year (assuming independence).
It makes sense that the range of returns also increases linearly. If monthly return is $.1\%$ on average $\pm .05\%$, then it makes intuitive sense that per year it is $1.2\%$ on average $\pm .6\%$.
Well, if we intuitively think of variance as range, then it makes intuitive sense that variance increases in the same fashion as return through time, that is linearly.
|
Why does the variance of the Random walk increase?
Here's another way to get intuition that variance increases linearly with time.
Returns increase linearly with time. $.1\%$ return per month translate into $1.2\%$ return per year - $X$ return per day
|
6,119
|
Why does the variance of the Random walk increase?
|
The answer from Glen B already sufficiently and easily/briefly shows why the variance scales linearly with time.
This answer will give an alternative viewpoint. Possibly one might find the equation for the addition of variance not so intuitive or different; and this answer might give an additional intuition from a different angle.
This alternative viewpoint will consider the probability distribution as a solution to a differential equation that relates to a diffusion process (in particular the Gaussian distribution as a limit distribution). It is related to this question from which the image below is taken.
You can view the random walk as a diffusion process. The following steps give the rough idea
Considered the probability distribution for the random walk to be at some point $x$ at time $t$. (consider an ensemble of many walks)
The probability (density) $P(x,t)$ will relate to the probability at some earlier time.
Let's for simplicity use a random walk with discrete steps in discret time. For instance, each time step the random walk takes a step $\pm 1$ with equal probability $p=0.5$. This is equivalent to taking each two time steps a step $\pm 2$ with equal probability $p=0.25$, and staying in place with probability $p=0.5$. Then
$$P(x,t) = \underbrace{\frac{1}{4} \cdot P(x-1,t-2)}_{\substack{\text{a quarter} \\ \text{from the left}}} + \underbrace{ \frac{1}{2} \cdot
P(x,t-2) }_{\substack{\text{half the value} \\ \text{from the spot $x$}}} + \underbrace{\frac{1}{4} \cdot P(x+1,t-2)}_{\substack{\text{a quarter} \\ \text{from the right}}}$$
Or in terms of differences
$$\begin{array}{rcl}{\nabla_t}\hphantom{^2} P(x,t) &=& P(x,t)-P(x,t-2) \\
{\nabla_x}^2 P(x,t) &=& [P(x+1,t)-P(x,t)]-[P(x-1,t)-P(x,t)]\end{array}$$
you get
$$\nabla_t P(x,t) = \frac{1}{4} {\nabla_x}^2 P(x,t) $$
You could view this in the limit as a diffusion process (more specifically the heat equation)
$$\frac{\partial}{\partial t} f(x,t) = D \frac{\partial^2}{\partial x^2} f(x,t)$$
The normal distribution with $\sigma \propto \sqrt{t}$ satisfies the above equation process.
|
Why does the variance of the Random walk increase?
|
The answer from Glen B already sufficiently and easily/briefly shows why the variance scales linearly with time.
This answer will give an alternative viewpoint. Possibly one might find the equation fo
|
Why does the variance of the Random walk increase?
The answer from Glen B already sufficiently and easily/briefly shows why the variance scales linearly with time.
This answer will give an alternative viewpoint. Possibly one might find the equation for the addition of variance not so intuitive or different; and this answer might give an additional intuition from a different angle.
This alternative viewpoint will consider the probability distribution as a solution to a differential equation that relates to a diffusion process (in particular the Gaussian distribution as a limit distribution). It is related to this question from which the image below is taken.
You can view the random walk as a diffusion process. The following steps give the rough idea
Considered the probability distribution for the random walk to be at some point $x$ at time $t$. (consider an ensemble of many walks)
The probability (density) $P(x,t)$ will relate to the probability at some earlier time.
Let's for simplicity use a random walk with discrete steps in discret time. For instance, each time step the random walk takes a step $\pm 1$ with equal probability $p=0.5$. This is equivalent to taking each two time steps a step $\pm 2$ with equal probability $p=0.25$, and staying in place with probability $p=0.5$. Then
$$P(x,t) = \underbrace{\frac{1}{4} \cdot P(x-1,t-2)}_{\substack{\text{a quarter} \\ \text{from the left}}} + \underbrace{ \frac{1}{2} \cdot
P(x,t-2) }_{\substack{\text{half the value} \\ \text{from the spot $x$}}} + \underbrace{\frac{1}{4} \cdot P(x+1,t-2)}_{\substack{\text{a quarter} \\ \text{from the right}}}$$
Or in terms of differences
$$\begin{array}{rcl}{\nabla_t}\hphantom{^2} P(x,t) &=& P(x,t)-P(x,t-2) \\
{\nabla_x}^2 P(x,t) &=& [P(x+1,t)-P(x,t)]-[P(x-1,t)-P(x,t)]\end{array}$$
you get
$$\nabla_t P(x,t) = \frac{1}{4} {\nabla_x}^2 P(x,t) $$
You could view this in the limit as a diffusion process (more specifically the heat equation)
$$\frac{\partial}{\partial t} f(x,t) = D \frac{\partial^2}{\partial x^2} f(x,t)$$
The normal distribution with $\sigma \propto \sqrt{t}$ satisfies the above equation process.
|
Why does the variance of the Random walk increase?
The answer from Glen B already sufficiently and easily/briefly shows why the variance scales linearly with time.
This answer will give an alternative viewpoint. Possibly one might find the equation fo
|
6,120
|
Estimate quantile of value in a vector
|
As whuber pointed out, you can use ecdf, which takes a vector and returns a function for getting the percentile of a value.
> percentile <- ecdf(1:10)
> percentile(8)
[1] 0.8
|
Estimate quantile of value in a vector
|
As whuber pointed out, you can use ecdf, which takes a vector and returns a function for getting the percentile of a value.
> percentile <- ecdf(1:10)
> percentile(8)
[1] 0.8
|
Estimate quantile of value in a vector
As whuber pointed out, you can use ecdf, which takes a vector and returns a function for getting the percentile of a value.
> percentile <- ecdf(1:10)
> percentile(8)
[1] 0.8
|
Estimate quantile of value in a vector
As whuber pointed out, you can use ecdf, which takes a vector and returns a function for getting the percentile of a value.
> percentile <- ecdf(1:10)
> percentile(8)
[1] 0.8
|
6,121
|
Estimate quantile of value in a vector
|
To expand on what whuber and cwarden stated, sometimes you want to use a function in a "classical" R way (for example, to use inside a magrittr pipe). Then you could write it yourself using ecdf():
ecdf_fun <- function(x,perc) ecdf(x)(perc)
ecdf_fun(1:10,8)
>[1] 0.8
|
Estimate quantile of value in a vector
|
To expand on what whuber and cwarden stated, sometimes you want to use a function in a "classical" R way (for example, to use inside a magrittr pipe). Then you could write it yourself using ecdf():
ec
|
Estimate quantile of value in a vector
To expand on what whuber and cwarden stated, sometimes you want to use a function in a "classical" R way (for example, to use inside a magrittr pipe). Then you could write it yourself using ecdf():
ecdf_fun <- function(x,perc) ecdf(x)(perc)
ecdf_fun(1:10,8)
>[1] 0.8
|
Estimate quantile of value in a vector
To expand on what whuber and cwarden stated, sometimes you want to use a function in a "classical" R way (for example, to use inside a magrittr pipe). Then you could write it yourself using ecdf():
ec
|
6,122
|
Estimate quantile of value in a vector
|
The other answers here do a great job of explaining computation of sample quantiles from an empirical CDF. Here I will show an alternative to using the sample quantiles, which is to use a kernel density estimator for the distribution and then compute the relevant quantile of a given probability value in the kernel density estimate of the distribution.
This can easily be done using the KDE function in the utilities package. This function produces a kernel density estimator, with corresponding probability functions that can be called directly. First we generate the KDE and load its probability functins to the global environment.
#Generate mock data
set.seed(1)
DATA <- rnorm(30)
#Generate KDE
MY_KDE <- utilities::KDE(DATA, to.environment = TRUE)
plot(MY_KDE)
MY_KDE
Kernel Density Estimator (KDE)
Computed from 30 data points in the input 'DATA'
Estimated bandwidth = 0.389054
Input degrees-of-freedom = Inf
Probability functions for the KDE are the following:
Density function: dkde *
Distribution function: pkde *
Quantile function: qkde *
Random generation function: rkde *
* This function is presently loaded in the global environment
Once the KDE has been generated, you can call the quantile function using any input probabilities you want. (This is the function qkde which is part of the produced KDE object; in the code above we have loaded the function to the global environment so it can be called directly.) In the present case we are using a KDE with a normal kernel, so the quantiles at the end-points are negative and positive infinity.
#Compute the quantile for given set of input probabilities
PROBS <- 0:20/20
qkde(PROBS)
[1] -Inf -1.91000509 -1.28454267 -0.90720648 -0.66908812
[6] -0.48063573 -0.31756724 -0.17102207 -0.03661007 0.08855345
[11] 0.20675802 0.32005203 0.43046992 0.54023777 0.65204011
[16] 0.76947215 0.89796032 1.04690298 1.23554909 1.51450168
[21] Inf
|
Estimate quantile of value in a vector
|
The other answers here do a great job of explaining computation of sample quantiles from an empirical CDF. Here I will show an alternative to using the sample quantiles, which is to use a kernel dens
|
Estimate quantile of value in a vector
The other answers here do a great job of explaining computation of sample quantiles from an empirical CDF. Here I will show an alternative to using the sample quantiles, which is to use a kernel density estimator for the distribution and then compute the relevant quantile of a given probability value in the kernel density estimate of the distribution.
This can easily be done using the KDE function in the utilities package. This function produces a kernel density estimator, with corresponding probability functions that can be called directly. First we generate the KDE and load its probability functins to the global environment.
#Generate mock data
set.seed(1)
DATA <- rnorm(30)
#Generate KDE
MY_KDE <- utilities::KDE(DATA, to.environment = TRUE)
plot(MY_KDE)
MY_KDE
Kernel Density Estimator (KDE)
Computed from 30 data points in the input 'DATA'
Estimated bandwidth = 0.389054
Input degrees-of-freedom = Inf
Probability functions for the KDE are the following:
Density function: dkde *
Distribution function: pkde *
Quantile function: qkde *
Random generation function: rkde *
* This function is presently loaded in the global environment
Once the KDE has been generated, you can call the quantile function using any input probabilities you want. (This is the function qkde which is part of the produced KDE object; in the code above we have loaded the function to the global environment so it can be called directly.) In the present case we are using a KDE with a normal kernel, so the quantiles at the end-points are negative and positive infinity.
#Compute the quantile for given set of input probabilities
PROBS <- 0:20/20
qkde(PROBS)
[1] -Inf -1.91000509 -1.28454267 -0.90720648 -0.66908812
[6] -0.48063573 -0.31756724 -0.17102207 -0.03661007 0.08855345
[11] 0.20675802 0.32005203 0.43046992 0.54023777 0.65204011
[16] 0.76947215 0.89796032 1.04690298 1.23554909 1.51450168
[21] Inf
|
Estimate quantile of value in a vector
The other answers here do a great job of explaining computation of sample quantiles from an empirical CDF. Here I will show an alternative to using the sample quantiles, which is to use a kernel dens
|
6,123
|
Estimate quantile of value in a vector
|
If using dplyr, cume_dist() returns the percentile of a value.
https://dplyr.tidyverse.org/reference/ranking.html.
A plus of cume_dist() is it can be naturally used in a pipe (%>%):
library(tidyverse)
df=1:10 %>%
as_tibble() %>%
mutate(
x = rnorm(n=n(), mean=50, sd=10),
x_percentile = cume_dist(x))
> df
# A tibble: 10 × 3
value x x_percentile
<int> <dbl> <dbl>
1 1 43.7 0.3
2 2 44.8 0.5
3 3 41.8 0.2
4 4 59.6 0.9
5 5 54.3 0.8
6 6 44.7 0.4
7 7 37.4 0.1
8 8 52.1 0.6
9 9 61.2 1
10 10 52.3 0.7
>
```
|
Estimate quantile of value in a vector
|
If using dplyr, cume_dist() returns the percentile of a value.
https://dplyr.tidyverse.org/reference/ranking.html.
A plus of cume_dist() is it can be naturally used in a pipe (%>%):
library(tidyverse)
|
Estimate quantile of value in a vector
If using dplyr, cume_dist() returns the percentile of a value.
https://dplyr.tidyverse.org/reference/ranking.html.
A plus of cume_dist() is it can be naturally used in a pipe (%>%):
library(tidyverse)
df=1:10 %>%
as_tibble() %>%
mutate(
x = rnorm(n=n(), mean=50, sd=10),
x_percentile = cume_dist(x))
> df
# A tibble: 10 × 3
value x x_percentile
<int> <dbl> <dbl>
1 1 43.7 0.3
2 2 44.8 0.5
3 3 41.8 0.2
4 4 59.6 0.9
5 5 54.3 0.8
6 6 44.7 0.4
7 7 37.4 0.1
8 8 52.1 0.6
9 9 61.2 1
10 10 52.3 0.7
>
```
|
Estimate quantile of value in a vector
If using dplyr, cume_dist() returns the percentile of a value.
https://dplyr.tidyverse.org/reference/ranking.html.
A plus of cume_dist() is it can be naturally used in a pipe (%>%):
library(tidyverse)
|
6,124
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
|
This answer explains the following:
Why perfect separation is always possible with distinct points and a Gaussian kernel (of sufficiently small bandwidth)
How this separation may be interpreted as linear, but only in an abstract feature space distinct from the space where the data lives
How the mapping from data space to feature space is "found". Spoiler: it's not found by SVM, it's implicitly defined by the kernel you choose.
Why the feature space is infinite-dimensional.
1. Achieving perfect separation
Perfect separation is always possible with a Gaussian kernel (provided no two points from different classes are ever exactly the same) because of the kernel's locality properties, which lead to an arbitrarily flexible decision boundary. For sufficiently small kernel bandwidth, the decision boundary will look like you just drew little circles around the points whenever they are needed to separate the positive and negative examples:
(Credit: Andrew Ng's online machine learning course).
So, why does this occur from a mathematical perspective?
Consider the standard setup: you have a Gaussian kernel $K(\mathbf{x},\mathbf{z}) = \exp(-||\mathbf{x}-\mathbf{z}||^2 / \sigma^2)$ and training data $(\mathbf{x}^{(1)},y^{(1)}), (\mathbf{x}^{(2)},y^{(2)}), \ldots, (\mathbf{x}^{(n)},y^{(n)})$ where the $y^{(i)}$ values are $\pm 1$. We want to learn a classifier function
$$\hat{y}(\mathbf{x}) = \sum_i w_i y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x})$$
Now how will we ever assign the weights $w_i$? Do we need infinite dimensional spaces and a quadratic programming algorithm? No, because I just want to show that I can separate the points perfectly. So I make $\sigma$ a billion times smaller than the smallest separation $||\mathbf{x}^{(i)} - \mathbf{x}^{(j)}||$ between any two training examples, and I just set $w_i = 1$. This means that all the training points are a billion sigmas apart as far as the kernel is concerned, and each point completely controls the sign of $\hat{y}$ in its neighborhood. Formally, we have
$$ \hat{y}(\mathbf{x}^{(k)})
= \sum_{i=1}^n y^{(k)} K(\mathbf{x}^{(i)},\mathbf{x}^{(k)})
= y^{(k)} K(\mathbf{x}^{(k)},\mathbf{x}^{(k)}) + \sum_{i \neq k} y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x}^{(k)})
= y^{(k)} + \epsilon$$
where $\epsilon$ is some arbitrarily tiny value. We know $\epsilon$ is tiny because $\mathbf{x}^{(k)}$ is a billion sigmas away from any other point, so for all $i \neq k$ we have
$$K(\mathbf{x}^{(i)},\mathbf{x}^{(k)}) = \exp(-||\mathbf{x}^{(i)} - \mathbf{x}^{(k)}||^2 / \sigma^2) \approx 0.$$
Since $\epsilon$ is so small, $\hat{y}(\mathbf{x}^{(k)})$ definitely has the same sign as $y^{(k)}$, and the classifier achieves perfect accuracy on the training data.
2. Kernel SVM learning as linear separation
The fact that this can be interpreted as "perfect linear separation in an infinite dimensional feature space" comes from the kernel trick, which allows you to interpret the kernel as an inner product in a (potentially infinite-dimensional) feature space:
$$K(\mathbf{x}^{(i)},\mathbf{x}^{(j)}) = \langle\Phi(\mathbf{x}^{(i)}),\Phi(\mathbf{x}^{(j)})\rangle$$
where $\Phi(\mathbf{x})$ is the mapping from the data space into the feature space. It follows immediately that the $\hat{y}(\mathbf{x})$ function as a linear function in the feature space:
$$ \hat{y}(\mathbf{x}) = \sum_i w_i y^{(i)} \langle\Phi(\mathbf{x}^{(i)}),\Phi(\mathbf{x})\rangle = L(\Phi(\mathbf{x}))$$
where the linear function $L(\mathbf{v})$ is defined on feature space vectors $\mathbf{v}$ as
$$ L(\mathbf{v}) = \sum_i w_i y^{(i)} \langle\Phi(\mathbf{x}^{(i)}),\mathbf{v}\rangle$$
This function is linear in $\mathbf{v}$ because it's just a linear combination of inner products with fixed vectors. In the feature space, the decision boundary $\hat{y}(\mathbf{x}) = 0$ is just $L(\mathbf{v}) = 0$, the level set of a linear function. This is the very definition of a hyperplane in the feature space.
3. Understanding the mapping and feature space
Note: In this section, the notation $\mathbf{x}^{(i)}$ refers to an arbitrary set of $n$ points and not the training data. This is pure math; the training data does not figure into this section at all!
Kernel methods never actually "find" or "compute" the feature space or the mapping $\Phi$ explicitly. Kernel learning methods such as SVM do not need them to work; they only need the kernel function $K$.
That said, it is possible to write down a formula for $\Phi$. The feature space that $\Phi$ maps to is kind of abstract (and potentially infinite-dimensional), but essentially, the mapping is just using the kernel to do some simple feature engineering. In terms of the final result, the model you end up learning, using kernels is no different from the traditional feature engineering popularly applied in linear regression and GLM modeling, like taking the log of a positive predictor variable before feeding it into a regression formula. The math is mostly just there to help make sure the kernel plays well with the SVM algorithm, which has its vaunted advantages of sparsity and scaling well to large datasets.
If you're still interested, here's how it works. Essentially we take the identity we want to hold, $\langle \Phi(\mathbf{x}), \Phi(\mathbf{y}) \rangle = K(\mathbf{x},\mathbf{y})$, and construct a space and inner product such that it holds by definition. To do this, we define an abstract vector space $V$ where each vector is a function from the space the data lives in, $\mathcal{X}$, to the real numbers $\mathbb{R}$. A vector $f$ in $V$ is a function formed from a finite linear combination of kernel slices:
$$f(\mathbf{x}) = \sum_{i=1}^n \alpha_i K(\mathbf{x}^{(i)},\mathbf{x})$$
It is convenient to write $f$ more compactly as
$$f = \sum_{i=1}^n \alpha_i K_{\mathbf{x}^{(i)}}$$
where $K_\mathbf{x}(\mathbf{y}) = K(\mathbf{x},\mathbf{y})$ is a function giving a "slice" of the kernel at $\mathbf{x}$.
The inner product on the space is not the ordinary dot product, but an abstract inner product based on the kernel:
$$\langle
\sum_{i=1}^n \alpha_i K_{\mathbf{x}^{(i)}},
\sum_{j=1}^n \beta_j K_{\mathbf{x}^{(j)}}
\rangle = \sum_{i,j} \alpha_i \beta_j K(\mathbf{x}^{(i)},\mathbf{x}^{(j)})$$
With the feature space defined in this way, $\Phi$ is a mapping $\mathcal{X} \rightarrow V$, taking each point $\mathbf{x}$ to the "kernel slice" at that point:
$$\Phi(\mathbf{x}) = K_\mathbf{x}, \quad \text{where} \quad K_\mathbf{x}(\mathbf{y}) = K(\mathbf{x},\mathbf{y}). $$
You can prove that $V$ is an inner product space when $K$ is a positive definite kernel. See this paper for details. (Kudos to f coppens for pointing this out!)
4. Why is the feature space infinite-dimensional?
This answer gives a nice linear algebra explanation, but here's a geometric perspective, with both intuition and proof.
Intuition
For any fixed point $\mathbf{z}$, we have a kernel slice function $K_\mathbf{z}(\mathbf{x}) = K(\mathbf{z},\mathbf{x})$. The graph of $K_\mathbf{z}$ is just a Gaussian bump centered at $\mathbf{z}$. Now, if the feature space were only finite dimensional, that would mean we could take a finite set of bumps at a fixed set of points and form any Gaussian bump anywhere else. But clearly there's no way we can do this; you can't make a new bump out of old bumps, because the new bump could be really far away from the old ones. So, no matter how many feature vectors (bumps) we have, we can always add new bumps, and in the feature space these are new independent vectors. So the feature space can't be finite dimensional; it has to be infinite.
Proof
We use induction. Suppose you have an arbitrary set of points $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(n)}$ such that the vectors $\Phi(\mathbf{x}^{(i)})$ are linearly independent in the feature space. Now find a point $\mathbf{x}^{(n+1)}$ distinct from these $n$ points, in fact a billion sigmas away from all of them. We claim that $\Phi(\mathbf{x}^{(n+1)})$ is linearly independent from the first $n$ feature vectors $\Phi(\mathbf{x}^{(i)})$.
Proof by contradiction. Suppose to the contrary that
$$\Phi(\mathbf{x}^{(n+1)}) = \sum_{i=1}^n \alpha_i \Phi(\mathbf{x}^{(i)})$$
Now take the inner product on both sides with an arbitrary $\mathbf{x}$. By the identity $\langle \Phi(\mathbf{z}), \Phi(\mathbf{x}) \rangle = K(\mathbf{z},\mathbf{x})$, we obtain
$$K(\mathbf{x}^{(n+1)},\mathbf{x})
= \sum_{i=1}^n \alpha_i K(\mathbf{x}^{(i)},\mathbf{x})$$
Here $\mathbf{x}$ is a free variable, so this equation is an identity stating that two functions are the same. In particular, it says that a Gaussian centered at $\mathbf{x}^{(n+1)}$ can be represented as a linear combination of Gaussians at other points $\mathbf{x}^{(i)}$. It is obvious geometrically that one cannot create a Gaussian bump centered at one point from a finite combination of Gaussian bumps centered at other points, especially when all those other Gaussian bumps are a billion sigmas away. So our assumption of linear dependence has led to a contradiction, as we set out to show.
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
|
This answer explains the following:
Why perfect separation is always possible with distinct points and a Gaussian kernel (of sufficiently small bandwidth)
How this separation may be interpreted as li
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
This answer explains the following:
Why perfect separation is always possible with distinct points and a Gaussian kernel (of sufficiently small bandwidth)
How this separation may be interpreted as linear, but only in an abstract feature space distinct from the space where the data lives
How the mapping from data space to feature space is "found". Spoiler: it's not found by SVM, it's implicitly defined by the kernel you choose.
Why the feature space is infinite-dimensional.
1. Achieving perfect separation
Perfect separation is always possible with a Gaussian kernel (provided no two points from different classes are ever exactly the same) because of the kernel's locality properties, which lead to an arbitrarily flexible decision boundary. For sufficiently small kernel bandwidth, the decision boundary will look like you just drew little circles around the points whenever they are needed to separate the positive and negative examples:
(Credit: Andrew Ng's online machine learning course).
So, why does this occur from a mathematical perspective?
Consider the standard setup: you have a Gaussian kernel $K(\mathbf{x},\mathbf{z}) = \exp(-||\mathbf{x}-\mathbf{z}||^2 / \sigma^2)$ and training data $(\mathbf{x}^{(1)},y^{(1)}), (\mathbf{x}^{(2)},y^{(2)}), \ldots, (\mathbf{x}^{(n)},y^{(n)})$ where the $y^{(i)}$ values are $\pm 1$. We want to learn a classifier function
$$\hat{y}(\mathbf{x}) = \sum_i w_i y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x})$$
Now how will we ever assign the weights $w_i$? Do we need infinite dimensional spaces and a quadratic programming algorithm? No, because I just want to show that I can separate the points perfectly. So I make $\sigma$ a billion times smaller than the smallest separation $||\mathbf{x}^{(i)} - \mathbf{x}^{(j)}||$ between any two training examples, and I just set $w_i = 1$. This means that all the training points are a billion sigmas apart as far as the kernel is concerned, and each point completely controls the sign of $\hat{y}$ in its neighborhood. Formally, we have
$$ \hat{y}(\mathbf{x}^{(k)})
= \sum_{i=1}^n y^{(k)} K(\mathbf{x}^{(i)},\mathbf{x}^{(k)})
= y^{(k)} K(\mathbf{x}^{(k)},\mathbf{x}^{(k)}) + \sum_{i \neq k} y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x}^{(k)})
= y^{(k)} + \epsilon$$
where $\epsilon$ is some arbitrarily tiny value. We know $\epsilon$ is tiny because $\mathbf{x}^{(k)}$ is a billion sigmas away from any other point, so for all $i \neq k$ we have
$$K(\mathbf{x}^{(i)},\mathbf{x}^{(k)}) = \exp(-||\mathbf{x}^{(i)} - \mathbf{x}^{(k)}||^2 / \sigma^2) \approx 0.$$
Since $\epsilon$ is so small, $\hat{y}(\mathbf{x}^{(k)})$ definitely has the same sign as $y^{(k)}$, and the classifier achieves perfect accuracy on the training data.
2. Kernel SVM learning as linear separation
The fact that this can be interpreted as "perfect linear separation in an infinite dimensional feature space" comes from the kernel trick, which allows you to interpret the kernel as an inner product in a (potentially infinite-dimensional) feature space:
$$K(\mathbf{x}^{(i)},\mathbf{x}^{(j)}) = \langle\Phi(\mathbf{x}^{(i)}),\Phi(\mathbf{x}^{(j)})\rangle$$
where $\Phi(\mathbf{x})$ is the mapping from the data space into the feature space. It follows immediately that the $\hat{y}(\mathbf{x})$ function as a linear function in the feature space:
$$ \hat{y}(\mathbf{x}) = \sum_i w_i y^{(i)} \langle\Phi(\mathbf{x}^{(i)}),\Phi(\mathbf{x})\rangle = L(\Phi(\mathbf{x}))$$
where the linear function $L(\mathbf{v})$ is defined on feature space vectors $\mathbf{v}$ as
$$ L(\mathbf{v}) = \sum_i w_i y^{(i)} \langle\Phi(\mathbf{x}^{(i)}),\mathbf{v}\rangle$$
This function is linear in $\mathbf{v}$ because it's just a linear combination of inner products with fixed vectors. In the feature space, the decision boundary $\hat{y}(\mathbf{x}) = 0$ is just $L(\mathbf{v}) = 0$, the level set of a linear function. This is the very definition of a hyperplane in the feature space.
3. Understanding the mapping and feature space
Note: In this section, the notation $\mathbf{x}^{(i)}$ refers to an arbitrary set of $n$ points and not the training data. This is pure math; the training data does not figure into this section at all!
Kernel methods never actually "find" or "compute" the feature space or the mapping $\Phi$ explicitly. Kernel learning methods such as SVM do not need them to work; they only need the kernel function $K$.
That said, it is possible to write down a formula for $\Phi$. The feature space that $\Phi$ maps to is kind of abstract (and potentially infinite-dimensional), but essentially, the mapping is just using the kernel to do some simple feature engineering. In terms of the final result, the model you end up learning, using kernels is no different from the traditional feature engineering popularly applied in linear regression and GLM modeling, like taking the log of a positive predictor variable before feeding it into a regression formula. The math is mostly just there to help make sure the kernel plays well with the SVM algorithm, which has its vaunted advantages of sparsity and scaling well to large datasets.
If you're still interested, here's how it works. Essentially we take the identity we want to hold, $\langle \Phi(\mathbf{x}), \Phi(\mathbf{y}) \rangle = K(\mathbf{x},\mathbf{y})$, and construct a space and inner product such that it holds by definition. To do this, we define an abstract vector space $V$ where each vector is a function from the space the data lives in, $\mathcal{X}$, to the real numbers $\mathbb{R}$. A vector $f$ in $V$ is a function formed from a finite linear combination of kernel slices:
$$f(\mathbf{x}) = \sum_{i=1}^n \alpha_i K(\mathbf{x}^{(i)},\mathbf{x})$$
It is convenient to write $f$ more compactly as
$$f = \sum_{i=1}^n \alpha_i K_{\mathbf{x}^{(i)}}$$
where $K_\mathbf{x}(\mathbf{y}) = K(\mathbf{x},\mathbf{y})$ is a function giving a "slice" of the kernel at $\mathbf{x}$.
The inner product on the space is not the ordinary dot product, but an abstract inner product based on the kernel:
$$\langle
\sum_{i=1}^n \alpha_i K_{\mathbf{x}^{(i)}},
\sum_{j=1}^n \beta_j K_{\mathbf{x}^{(j)}}
\rangle = \sum_{i,j} \alpha_i \beta_j K(\mathbf{x}^{(i)},\mathbf{x}^{(j)})$$
With the feature space defined in this way, $\Phi$ is a mapping $\mathcal{X} \rightarrow V$, taking each point $\mathbf{x}$ to the "kernel slice" at that point:
$$\Phi(\mathbf{x}) = K_\mathbf{x}, \quad \text{where} \quad K_\mathbf{x}(\mathbf{y}) = K(\mathbf{x},\mathbf{y}). $$
You can prove that $V$ is an inner product space when $K$ is a positive definite kernel. See this paper for details. (Kudos to f coppens for pointing this out!)
4. Why is the feature space infinite-dimensional?
This answer gives a nice linear algebra explanation, but here's a geometric perspective, with both intuition and proof.
Intuition
For any fixed point $\mathbf{z}$, we have a kernel slice function $K_\mathbf{z}(\mathbf{x}) = K(\mathbf{z},\mathbf{x})$. The graph of $K_\mathbf{z}$ is just a Gaussian bump centered at $\mathbf{z}$. Now, if the feature space were only finite dimensional, that would mean we could take a finite set of bumps at a fixed set of points and form any Gaussian bump anywhere else. But clearly there's no way we can do this; you can't make a new bump out of old bumps, because the new bump could be really far away from the old ones. So, no matter how many feature vectors (bumps) we have, we can always add new bumps, and in the feature space these are new independent vectors. So the feature space can't be finite dimensional; it has to be infinite.
Proof
We use induction. Suppose you have an arbitrary set of points $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(n)}$ such that the vectors $\Phi(\mathbf{x}^{(i)})$ are linearly independent in the feature space. Now find a point $\mathbf{x}^{(n+1)}$ distinct from these $n$ points, in fact a billion sigmas away from all of them. We claim that $\Phi(\mathbf{x}^{(n+1)})$ is linearly independent from the first $n$ feature vectors $\Phi(\mathbf{x}^{(i)})$.
Proof by contradiction. Suppose to the contrary that
$$\Phi(\mathbf{x}^{(n+1)}) = \sum_{i=1}^n \alpha_i \Phi(\mathbf{x}^{(i)})$$
Now take the inner product on both sides with an arbitrary $\mathbf{x}$. By the identity $\langle \Phi(\mathbf{z}), \Phi(\mathbf{x}) \rangle = K(\mathbf{z},\mathbf{x})$, we obtain
$$K(\mathbf{x}^{(n+1)},\mathbf{x})
= \sum_{i=1}^n \alpha_i K(\mathbf{x}^{(i)},\mathbf{x})$$
Here $\mathbf{x}$ is a free variable, so this equation is an identity stating that two functions are the same. In particular, it says that a Gaussian centered at $\mathbf{x}^{(n+1)}$ can be represented as a linear combination of Gaussians at other points $\mathbf{x}^{(i)}$. It is obvious geometrically that one cannot create a Gaussian bump centered at one point from a finite combination of Gaussian bumps centered at other points, especially when all those other Gaussian bumps are a billion sigmas away. So our assumption of linear dependence has led to a contradiction, as we set out to show.
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
This answer explains the following:
Why perfect separation is always possible with distinct points and a Gaussian kernel (of sufficiently small bandwidth)
How this separation may be interpreted as li
|
6,125
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
|
The kernel matrix of the Gaussian kernel has always full rank for distinct $\mathbf x_1,...,\mathbf x_m$. This means that each time you add a new example, the rank increases by $1$. The easiest way to see this if you set $\sigma$ very small. Then the kernel matrix is almost diagonal.
The fact that the rank always increases by one means that all projections $\Phi(\mathbf x)$ in feature space are linearly independent (not orthogonal, but independent). Therefore, each example adds a new dimension to the span of the projections $\Phi(\mathbf x_1),...,\Phi(\mathbf x_m)$. Since you can add uncountably infinitely many examples, the feature space must have infinite dimension. Interestingly, all projections of the input space into the feature space lie on a sphere, since $||\Phi(\mathbf x)||_{\mathcal H}^²=k(\mathbf x,\mathbf x)=1$. Nevertheless, the geometry of the sphere is flat. You can read more on that in
Burges, C. J. C. (1999). Geometry and Invariance in Kernel Based Methods. In B. Schölkopf, C. J. C. Burges, & A. J. Smola (Eds.), Advances in Kernel Methods Support Vector Learning (pp. 89–116). MIT Press.
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
|
The kernel matrix of the Gaussian kernel has always full rank for distinct $\mathbf x_1,...,\mathbf x_m$. This means that each time you add a new example, the rank increases by $1$. The easiest way to
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
The kernel matrix of the Gaussian kernel has always full rank for distinct $\mathbf x_1,...,\mathbf x_m$. This means that each time you add a new example, the rank increases by $1$. The easiest way to see this if you set $\sigma$ very small. Then the kernel matrix is almost diagonal.
The fact that the rank always increases by one means that all projections $\Phi(\mathbf x)$ in feature space are linearly independent (not orthogonal, but independent). Therefore, each example adds a new dimension to the span of the projections $\Phi(\mathbf x_1),...,\Phi(\mathbf x_m)$. Since you can add uncountably infinitely many examples, the feature space must have infinite dimension. Interestingly, all projections of the input space into the feature space lie on a sphere, since $||\Phi(\mathbf x)||_{\mathcal H}^²=k(\mathbf x,\mathbf x)=1$. Nevertheless, the geometry of the sphere is flat. You can read more on that in
Burges, C. J. C. (1999). Geometry and Invariance in Kernel Based Methods. In B. Schölkopf, C. J. C. Burges, & A. J. Smola (Eds.), Advances in Kernel Methods Support Vector Learning (pp. 89–116). MIT Press.
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
The kernel matrix of the Gaussian kernel has always full rank for distinct $\mathbf x_1,...,\mathbf x_m$. This means that each time you add a new example, the rank increases by $1$. The easiest way to
|
6,126
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
|
For the background and the notations I refer to the answer How to calculate decision boundary from support vectors?.
So the features in the 'original' space are the vectors $x_i$, the binary outcome $y_i \in \{-1, +1\}$ and the Lagrange multipliers are $\alpha_i$.
It is known that the Kernel can be written as $K(x,y)=\Phi(x) \cdot \Phi(y)$ ('$\cdot$' represents the inner product.) Where $\Phi$ is an (implicit and unknown) transformation to a new feature space.
I will try to give some 'intuitive' explanation of what this $\Phi$ looks like, so this answer is no formal proof, it just wants to give some feeling of how I think that this works. Do not hesitate to correct me if I am wrong. The basis for my explanation is section 2.2.1 of this pdf
I have to 'transform' my feature space (so my $x_i$) into some 'new' feature space in which the linear separation will be solved.
For each observation $x_i$, I define functions $\phi_i(x)=K(x_i,x)$, so I have a function $\phi_i$ for each element of my training sample. These functions $\phi_i$ span a vector space. The vector space spanned by the $\phi_i$, note it $V=span(\phi_{i, i=1,2,\dots N})$. ($N$ is the size of the training sample).
I will try to argue that this vector space $V$ is the vector space in which linear separation will be possible. By definition of the span, each vector in the vector space $V$ can be written as as a linear combination of the $\phi_i$, i.e.: $\sum_{i=1}^N \gamma_i \phi_i$, where $\gamma_i$ are real numbers. So, in fact, $V=\{v=\sum_{i=1}^N \gamma_i \phi_i|(\gamma_1,\gamma_2,\dots\gamma_N) \in \mathbb{R}^N \}$
Note that $(\gamma_1,\gamma_2,\dots\gamma_N)$ are the coordinates of vector $v$ in the vector space $V$.
$N$ is the size of the training sample and therefore the dimension of the vector space $V$ can go up to $N$, depending on whether the $\phi_i$ are linear independent. As $\phi_i(x)=K(x_i,x)$ (see supra, we defined $\phi$ in this way), this means that the dimension of $V$ depends on the kernel used and can go up to the size of the training sample.
If the kernel is 'complex enough' then the $\phi_i(x)=K(x_i, x)$ will all be independent and then the dimension of $V$ will be $N$, the size of the training sample.
The transformation, that maps my original feature space to $V$ is defined as
$\Phi: x_i \to \phi_i(x)=K(x_i, x)$.
This map $\Phi$ maps my original feature space onto a vector space that can have a dimension that goes up to the size of my training sample. So $\Phi$ maps each observation in my training sample into a vector space where the vectors are functions. The vector $x_i$ from my training sample is 'mapped' to a vector in $V$, namely the vector $\phi_i$ with coordinates all equal to zero, except the $i$-th coordinate is 1.
Obviously, this transformation (a) depends on the kernel, (b) depends on the values $x_i$ in the training sample and (c) can, depending on my kernel, have a dimension that goes up to the size of my training sample and (d) the vectors of $V$ look like $\sum_{i=1}^N \gamma_i \phi_i$, where $\gamma_i$ are real numbers.
Looking at the function $f(x)$ in How to calculate decision boundary from support vectors? it can be seen that $f(x)=\sum_i y_i \alpha_i \phi_i(x)+b$. The decision boundary found by the SVM is $f(x)=0$.
In other words, $f(x)$ is a linear combination of the $\phi_i$ and $f(x)=0$ is a linear separating hyperplane in the $V$-space : it is a particular choice of the $\gamma_i$ namely $\gamma_i=\alpha_i y_i$ !
The $y_i$ are known from our observations, the $\alpha_i$ are the Lagrange multipliers that the SVM has found. In other words SVM find, through the use of a kernel and by solving a quadratic programming problem, a linear separation in the $V$-spave.
This is my intuitive understanding of how the 'kernel trick' allows one to 'implicitly' transform the original feature space into a new feature space $V$, with a different dimension. This dimension depends on the kernel you use and for the RBF kernel this dimension can go up to the size of the training sample. As training samples may have any size this could go up to 'infinite'. Obviously, in very high dimensional spaces the risk of overfitting will increase.
So kernels are a technique that allows SVM to transform your feature space , see also What makes the Gaussian kernel so magical for PCA, and also in general?
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
|
For the background and the notations I refer to the answer How to calculate decision boundary from support vectors?.
So the features in the 'original' space are the vectors $x_i$, the binary outcome
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
For the background and the notations I refer to the answer How to calculate decision boundary from support vectors?.
So the features in the 'original' space are the vectors $x_i$, the binary outcome $y_i \in \{-1, +1\}$ and the Lagrange multipliers are $\alpha_i$.
It is known that the Kernel can be written as $K(x,y)=\Phi(x) \cdot \Phi(y)$ ('$\cdot$' represents the inner product.) Where $\Phi$ is an (implicit and unknown) transformation to a new feature space.
I will try to give some 'intuitive' explanation of what this $\Phi$ looks like, so this answer is no formal proof, it just wants to give some feeling of how I think that this works. Do not hesitate to correct me if I am wrong. The basis for my explanation is section 2.2.1 of this pdf
I have to 'transform' my feature space (so my $x_i$) into some 'new' feature space in which the linear separation will be solved.
For each observation $x_i$, I define functions $\phi_i(x)=K(x_i,x)$, so I have a function $\phi_i$ for each element of my training sample. These functions $\phi_i$ span a vector space. The vector space spanned by the $\phi_i$, note it $V=span(\phi_{i, i=1,2,\dots N})$. ($N$ is the size of the training sample).
I will try to argue that this vector space $V$ is the vector space in which linear separation will be possible. By definition of the span, each vector in the vector space $V$ can be written as as a linear combination of the $\phi_i$, i.e.: $\sum_{i=1}^N \gamma_i \phi_i$, where $\gamma_i$ are real numbers. So, in fact, $V=\{v=\sum_{i=1}^N \gamma_i \phi_i|(\gamma_1,\gamma_2,\dots\gamma_N) \in \mathbb{R}^N \}$
Note that $(\gamma_1,\gamma_2,\dots\gamma_N)$ are the coordinates of vector $v$ in the vector space $V$.
$N$ is the size of the training sample and therefore the dimension of the vector space $V$ can go up to $N$, depending on whether the $\phi_i$ are linear independent. As $\phi_i(x)=K(x_i,x)$ (see supra, we defined $\phi$ in this way), this means that the dimension of $V$ depends on the kernel used and can go up to the size of the training sample.
If the kernel is 'complex enough' then the $\phi_i(x)=K(x_i, x)$ will all be independent and then the dimension of $V$ will be $N$, the size of the training sample.
The transformation, that maps my original feature space to $V$ is defined as
$\Phi: x_i \to \phi_i(x)=K(x_i, x)$.
This map $\Phi$ maps my original feature space onto a vector space that can have a dimension that goes up to the size of my training sample. So $\Phi$ maps each observation in my training sample into a vector space where the vectors are functions. The vector $x_i$ from my training sample is 'mapped' to a vector in $V$, namely the vector $\phi_i$ with coordinates all equal to zero, except the $i$-th coordinate is 1.
Obviously, this transformation (a) depends on the kernel, (b) depends on the values $x_i$ in the training sample and (c) can, depending on my kernel, have a dimension that goes up to the size of my training sample and (d) the vectors of $V$ look like $\sum_{i=1}^N \gamma_i \phi_i$, where $\gamma_i$ are real numbers.
Looking at the function $f(x)$ in How to calculate decision boundary from support vectors? it can be seen that $f(x)=\sum_i y_i \alpha_i \phi_i(x)+b$. The decision boundary found by the SVM is $f(x)=0$.
In other words, $f(x)$ is a linear combination of the $\phi_i$ and $f(x)=0$ is a linear separating hyperplane in the $V$-space : it is a particular choice of the $\gamma_i$ namely $\gamma_i=\alpha_i y_i$ !
The $y_i$ are known from our observations, the $\alpha_i$ are the Lagrange multipliers that the SVM has found. In other words SVM find, through the use of a kernel and by solving a quadratic programming problem, a linear separation in the $V$-spave.
This is my intuitive understanding of how the 'kernel trick' allows one to 'implicitly' transform the original feature space into a new feature space $V$, with a different dimension. This dimension depends on the kernel you use and for the RBF kernel this dimension can go up to the size of the training sample. As training samples may have any size this could go up to 'infinite'. Obviously, in very high dimensional spaces the risk of overfitting will increase.
So kernels are a technique that allows SVM to transform your feature space , see also What makes the Gaussian kernel so magical for PCA, and also in general?
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
For the background and the notations I refer to the answer How to calculate decision boundary from support vectors?.
So the features in the 'original' space are the vectors $x_i$, the binary outcome
|
6,127
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
|
Unfortunately, fcop's explanation is quite incorrect. First of all he says "It is known that the Kernel can be written as... where ... is an (implicit and unknown) transformation to a new feature space." It's NOT unknown. This is in fact the space the features are mapped to and this is the space that could be infinite dimensional like in the RBF case. All the kernel does is take the inner product of that transformed feature vector with a transformed feature vector of a training example and applies some function to the result. Thus it implicitly represents this higher dimensional feature vector. Think of writing (x+y)^2 instead of x^2+2xy+y^2 for example. Now think what infinite series is represented implicitly by the exponential function... there you have your infinite feature space. This has absolutely nothing to do with the fact that your training set could be infinitely large.
The right way to think about SVMs is that you map your features to a possibly infinite dimensional feature space which happens to be implicitly representable in yet another finite dimensional "Kernel" feature space whose dimension could be as large as the training set size.
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
|
Unfortunately, fcop's explanation is quite incorrect. First of all he says "It is known that the Kernel can be written as... where ... is an (implicit and unknown) transformation to a new feature spac
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
Unfortunately, fcop's explanation is quite incorrect. First of all he says "It is known that the Kernel can be written as... where ... is an (implicit and unknown) transformation to a new feature space." It's NOT unknown. This is in fact the space the features are mapped to and this is the space that could be infinite dimensional like in the RBF case. All the kernel does is take the inner product of that transformed feature vector with a transformed feature vector of a training example and applies some function to the result. Thus it implicitly represents this higher dimensional feature vector. Think of writing (x+y)^2 instead of x^2+2xy+y^2 for example. Now think what infinite series is represented implicitly by the exponential function... there you have your infinite feature space. This has absolutely nothing to do with the fact that your training set could be infinitely large.
The right way to think about SVMs is that you map your features to a possibly infinite dimensional feature space which happens to be implicitly representable in yet another finite dimensional "Kernel" feature space whose dimension could be as large as the training set size.
|
How can SVM 'find' an infinite feature space where linear separation is always possible?
Unfortunately, fcop's explanation is quite incorrect. First of all he says "It is known that the Kernel can be written as... where ... is an (implicit and unknown) transformation to a new feature spac
|
6,128
|
Why should we use t errors instead of normal errors?
|
Because, assuming normal errors is effectively the same as assuming that large errors do not occur! The normal distribution has so light tails, that errors outside $\pm 3$ standard deviations have very low probability, errors outside of $\pm 6$ standard deviations are effectively impossible. In practice, that assumption is seldom true. When analyzing small, tidy datasets from well designed experiments, this might not matter much, if we do a good analysis of residuals. With data of lesser quality, it might matter much more.
When using likelihood-based (or bayesian) methods, the effect of this normality (as said above, effectively this is the "no large errors"-assumption!) is to make the inference very little robust. The results of the analysis are too heavily influenced by the large errors! This must be so, since assuming "no large errors" forces our methods to interpret the large errors as small errors, and that can only happen by moving the mean value parameter to make all the errors smaller. One way to avoid that is to use so-called "robust methods", see http://web.archive.org/web/20160611192739/http://www.stats.ox.ac.uk/pub/StatMeth/Robust.pdf
But Andrew Gelman will not go for this, since robust methods are usually presented in a highly non-bayesian way. Using t-distributed errors in likelihood/bayesian models is a different way to obtain robust methods, as the $t$-distribution has heavier tails than the normal, so allows for a larger proportion of large errors. The number of degrees of freedom parameter should be fixed in advance, not estimated from the data, since such estimation will destroy the robustness properties of the method (*) (it is also a very difficult problem, the likelihood function for $\nu$, the number degrees of freedom, can be unbounded, leading to very inefficient (even inconsistent) estimators).
If, for instance, you think (are afraid) that as much as 1 in ten observations might be "large errors" (above 3 sd), then you could use a $t$-distribution with 2 degrees of freedom, increasing that number if the proportion of large errors is believed to be smaller.
I should note that what I have said above is for models with independent $t$-distributed errors. There have also been proposals of multivariate $t$-distribution (which is not independent) as error distribution. That propsal is heavily criticized in the paper "The emperor's new clothes: a critique of the multivariate $t$ regression model" by T. S. Breusch, J. C. Robertson and A. H. Welsh, in Statistica Neerlandica (1997) Vol. 51, nr. 3, pp. 269-286, where they show that the multivariate $t$ error distribution is empirically indistinguishable from the normal. But that criticism do not affect the independent $t$ model.
(*) One reference stating this is Venables & Ripley's MASS---Modern Applied Statistics with S (on page 110 in 4th edition).
|
Why should we use t errors instead of normal errors?
|
Because, assuming normal errors is effectively the same as assuming that large errors do not occur! The normal distribution has so light tails, that errors outside $\pm 3$ standard deviations have ve
|
Why should we use t errors instead of normal errors?
Because, assuming normal errors is effectively the same as assuming that large errors do not occur! The normal distribution has so light tails, that errors outside $\pm 3$ standard deviations have very low probability, errors outside of $\pm 6$ standard deviations are effectively impossible. In practice, that assumption is seldom true. When analyzing small, tidy datasets from well designed experiments, this might not matter much, if we do a good analysis of residuals. With data of lesser quality, it might matter much more.
When using likelihood-based (or bayesian) methods, the effect of this normality (as said above, effectively this is the "no large errors"-assumption!) is to make the inference very little robust. The results of the analysis are too heavily influenced by the large errors! This must be so, since assuming "no large errors" forces our methods to interpret the large errors as small errors, and that can only happen by moving the mean value parameter to make all the errors smaller. One way to avoid that is to use so-called "robust methods", see http://web.archive.org/web/20160611192739/http://www.stats.ox.ac.uk/pub/StatMeth/Robust.pdf
But Andrew Gelman will not go for this, since robust methods are usually presented in a highly non-bayesian way. Using t-distributed errors in likelihood/bayesian models is a different way to obtain robust methods, as the $t$-distribution has heavier tails than the normal, so allows for a larger proportion of large errors. The number of degrees of freedom parameter should be fixed in advance, not estimated from the data, since such estimation will destroy the robustness properties of the method (*) (it is also a very difficult problem, the likelihood function for $\nu$, the number degrees of freedom, can be unbounded, leading to very inefficient (even inconsistent) estimators).
If, for instance, you think (are afraid) that as much as 1 in ten observations might be "large errors" (above 3 sd), then you could use a $t$-distribution with 2 degrees of freedom, increasing that number if the proportion of large errors is believed to be smaller.
I should note that what I have said above is for models with independent $t$-distributed errors. There have also been proposals of multivariate $t$-distribution (which is not independent) as error distribution. That propsal is heavily criticized in the paper "The emperor's new clothes: a critique of the multivariate $t$ regression model" by T. S. Breusch, J. C. Robertson and A. H. Welsh, in Statistica Neerlandica (1997) Vol. 51, nr. 3, pp. 269-286, where they show that the multivariate $t$ error distribution is empirically indistinguishable from the normal. But that criticism do not affect the independent $t$ model.
(*) One reference stating this is Venables & Ripley's MASS---Modern Applied Statistics with S (on page 110 in 4th edition).
|
Why should we use t errors instead of normal errors?
Because, assuming normal errors is effectively the same as assuming that large errors do not occur! The normal distribution has so light tails, that errors outside $\pm 3$ standard deviations have ve
|
6,129
|
Why should we use t errors instead of normal errors?
|
It is not just a matter of "heavier tails" — there are plenty of distributions that are bell shaped and have heavy tails.
The T distribution is the posterior predictive of the Gaussian model. If you make a Gaussian assumption, but have finite evidence, then the resulting model is necessarily making non-central scaled t-distributed predictions. In the limit, as the amount of evidence you have goes to infinity, you end up with Gaussian predictions since the limit of the t distribution is Gaussian.
Why does this happen? Because with a finite amount of evidence, there is uncertainty in the parameters of your model. In the case of the Gaussian model, uncertainty in the mean would merely increase the variance (i.e., the posterior predictive of a Gaussian with known variance is still Gaussian). But uncertainty about the variance is what causes the heavy tails. If the model is trained with unlimited evidence, there is no longer any uncertainty in the variance (or the mean) and you can use your model to make Gaussian predictions.
This argument applies for a Gaussian model. It also applies to a parameter that is inferred whose likelihoods are Gaussian. Given finite data, the uncertainty about the parameter is t-distributed. Wherever there are Normal assumptions (with unknown mean and variance), and finite data, there are t-distributed posterior predictives.
There are similar posterior predictive distributions for all of the Bayesian models. Gelman is suggesting that we should be using those. His concerns would be mitigated by sufficient evidence.
|
Why should we use t errors instead of normal errors?
|
It is not just a matter of "heavier tails" — there are plenty of distributions that are bell shaped and have heavy tails.
The T distribution is the posterior predictive of the Gaussian model. If you
|
Why should we use t errors instead of normal errors?
It is not just a matter of "heavier tails" — there are plenty of distributions that are bell shaped and have heavy tails.
The T distribution is the posterior predictive of the Gaussian model. If you make a Gaussian assumption, but have finite evidence, then the resulting model is necessarily making non-central scaled t-distributed predictions. In the limit, as the amount of evidence you have goes to infinity, you end up with Gaussian predictions since the limit of the t distribution is Gaussian.
Why does this happen? Because with a finite amount of evidence, there is uncertainty in the parameters of your model. In the case of the Gaussian model, uncertainty in the mean would merely increase the variance (i.e., the posterior predictive of a Gaussian with known variance is still Gaussian). But uncertainty about the variance is what causes the heavy tails. If the model is trained with unlimited evidence, there is no longer any uncertainty in the variance (or the mean) and you can use your model to make Gaussian predictions.
This argument applies for a Gaussian model. It also applies to a parameter that is inferred whose likelihoods are Gaussian. Given finite data, the uncertainty about the parameter is t-distributed. Wherever there are Normal assumptions (with unknown mean and variance), and finite data, there are t-distributed posterior predictives.
There are similar posterior predictive distributions for all of the Bayesian models. Gelman is suggesting that we should be using those. His concerns would be mitigated by sufficient evidence.
|
Why should we use t errors instead of normal errors?
It is not just a matter of "heavier tails" — there are plenty of distributions that are bell shaped and have heavy tails.
The T distribution is the posterior predictive of the Gaussian model. If you
|
6,130
|
Why KL divergence is non-negative?
|
Proof 1:
First note that $\ln a \leq a-1$ for all $a \gt 0$.
We will now show that $-D_{KL}(p||q) \leq 0$ which means that $D_{KL}(p||q) \geq 0$
\begin{align}
-D(p||q)&=-\sum_x p(x)\ln \frac{p(x)}{q(x)}\\
&= \sum_x p(x)\ln \frac{q(x)}{p(x)}\\
&\stackrel{\text{(a)}}{\leq} \sum_x p(x)\left(\frac{q(x)}{p(x)}-1\right)\\
&=\sum_x q(x) - \sum_x p(x)\\
&= 1 - 1\\
&= 0
\end{align}
For inequality (a) we used the $\ln$ inequality explained in the beginning.
Alternatively you can start with Gibbs' inequality which states:
$$
-\sum_x p(x) \log_2 p(x) \leq -\sum_x p(x)\log_2 q(x)
$$
Then if we bring the left term to the right we get:
$$
\sum_x p(x) \log_2 p(x) - \sum_x p(x)\log_2 q(x)\geq 0 \\
\sum_x p(x)\log_2 \frac{p(x)}{q(x)}\geq 0
$$
The reason I am not including this as a separate proof is because if you were to ask me to prove Gibbs' inequality, I would have to start from the non-negativity of KL divergence and do the same proof from the top.
Proof 2:
We use the Log sum inequality:
$$
\sum_{i=1}^{n} a_i \log_2 \frac{a_i}{b_i} \geq \left(\sum_{i=1}^{n} a_i\right)\log_2\frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i}
$$
Then we can show that $D_{KL}(p||q) \geq 0$:
\begin{align}
D(p||q)&=\sum_x p(x)\log_2 \frac{p(x)}{q(x)}\\
&\stackrel{\text{(b)}}{\geq} \left(\sum_x p(x)\right)\log_2\frac{\sum_x p(x)}{\sum_x q(x)}\\
&=1 \cdot \log_2 \frac{1}{1}\\
&=0
\end{align}
where we have used the Log sum inequality at (b).
Proof 3:
(Taken from the book "Elements of Information Theory" by Thomas M. Cover and Joy A. Thomas)
\begin{align}
-D(p||q)&=-\sum_x p(x)\log_2 \frac{p(x)}{q(x)}\\
&= \sum_x p(x)\log_2 \frac{q(x)}{p(x)}\\
&\stackrel{\text{(c)}}{\leq} \log_2 \sum_x p(x)\frac{q(x)}{p(x)}\\
&=\log_2 1\\
&=0
\end{align}
where at (c) we have used Jensen's inequality and the fact that $\log$ is a concave function.
|
Why KL divergence is non-negative?
|
Proof 1:
First note that $\ln a \leq a-1$ for all $a \gt 0$.
We will now show that $-D_{KL}(p||q) \leq 0$ which means that $D_{KL}(p||q) \geq 0$
\begin{align}
-D(p||q)&=-\sum_x p(x)\ln \frac{p(x)}{q(x
|
Why KL divergence is non-negative?
Proof 1:
First note that $\ln a \leq a-1$ for all $a \gt 0$.
We will now show that $-D_{KL}(p||q) \leq 0$ which means that $D_{KL}(p||q) \geq 0$
\begin{align}
-D(p||q)&=-\sum_x p(x)\ln \frac{p(x)}{q(x)}\\
&= \sum_x p(x)\ln \frac{q(x)}{p(x)}\\
&\stackrel{\text{(a)}}{\leq} \sum_x p(x)\left(\frac{q(x)}{p(x)}-1\right)\\
&=\sum_x q(x) - \sum_x p(x)\\
&= 1 - 1\\
&= 0
\end{align}
For inequality (a) we used the $\ln$ inequality explained in the beginning.
Alternatively you can start with Gibbs' inequality which states:
$$
-\sum_x p(x) \log_2 p(x) \leq -\sum_x p(x)\log_2 q(x)
$$
Then if we bring the left term to the right we get:
$$
\sum_x p(x) \log_2 p(x) - \sum_x p(x)\log_2 q(x)\geq 0 \\
\sum_x p(x)\log_2 \frac{p(x)}{q(x)}\geq 0
$$
The reason I am not including this as a separate proof is because if you were to ask me to prove Gibbs' inequality, I would have to start from the non-negativity of KL divergence and do the same proof from the top.
Proof 2:
We use the Log sum inequality:
$$
\sum_{i=1}^{n} a_i \log_2 \frac{a_i}{b_i} \geq \left(\sum_{i=1}^{n} a_i\right)\log_2\frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i}
$$
Then we can show that $D_{KL}(p||q) \geq 0$:
\begin{align}
D(p||q)&=\sum_x p(x)\log_2 \frac{p(x)}{q(x)}\\
&\stackrel{\text{(b)}}{\geq} \left(\sum_x p(x)\right)\log_2\frac{\sum_x p(x)}{\sum_x q(x)}\\
&=1 \cdot \log_2 \frac{1}{1}\\
&=0
\end{align}
where we have used the Log sum inequality at (b).
Proof 3:
(Taken from the book "Elements of Information Theory" by Thomas M. Cover and Joy A. Thomas)
\begin{align}
-D(p||q)&=-\sum_x p(x)\log_2 \frac{p(x)}{q(x)}\\
&= \sum_x p(x)\log_2 \frac{q(x)}{p(x)}\\
&\stackrel{\text{(c)}}{\leq} \log_2 \sum_x p(x)\frac{q(x)}{p(x)}\\
&=\log_2 1\\
&=0
\end{align}
where at (c) we have used Jensen's inequality and the fact that $\log$ is a concave function.
|
Why KL divergence is non-negative?
Proof 1:
First note that $\ln a \leq a-1$ for all $a \gt 0$.
We will now show that $-D_{KL}(p||q) \leq 0$ which means that $D_{KL}(p||q) \geq 0$
\begin{align}
-D(p||q)&=-\sum_x p(x)\ln \frac{p(x)}{q(x
|
6,131
|
When to use a GAM vs GLM
|
The main difference imho is that while "classical" forms of linear, or generalized linear, models assume a fixed linear or some other parametric form of the relationship between the dependent variable and the covariates, GAM do not assume a priori any specific form of this relationship, and can be used to reveal and estimate non-linear effects of the covariate on the dependent variable.
More in detail, while in (generalized) linear models the linear predictor is a weighted sum of the $n$ covariates, $\sum_{i=1}^n \beta_i x_i$, in GAMs this term is replaced by a sum of smooth function, e.g. $\sum_{i=1}^n \sum_{j=1}^q \beta_i \, s_j \left( x_i \right)$, where the $s_1(\cdot),\dots,s_q(\cdot)$ are smooth basis functions (e.g. cubic splines) and $q$ is the basis dimension. By combining the basis functions GAMs can represent a large number of functional relationship (to do so they rely on the assumption that the true relationship is likely to be smooth, rather than wiggly). They are essentially an extension of GLMs, however they are designed in a way that makes them particularly useful for uncovering nonlinear effects of numerical covariates, and for doing so in an "automatic" fashion (from Hastie and Tibshirani original article, they have 'the advantage of being completely automatic, i.e. no "detective" work is needed on the part of the statistician').
|
When to use a GAM vs GLM
|
The main difference imho is that while "classical" forms of linear, or generalized linear, models assume a fixed linear or some other parametric form of the relationship between the dependent variable
|
When to use a GAM vs GLM
The main difference imho is that while "classical" forms of linear, or generalized linear, models assume a fixed linear or some other parametric form of the relationship between the dependent variable and the covariates, GAM do not assume a priori any specific form of this relationship, and can be used to reveal and estimate non-linear effects of the covariate on the dependent variable.
More in detail, while in (generalized) linear models the linear predictor is a weighted sum of the $n$ covariates, $\sum_{i=1}^n \beta_i x_i$, in GAMs this term is replaced by a sum of smooth function, e.g. $\sum_{i=1}^n \sum_{j=1}^q \beta_i \, s_j \left( x_i \right)$, where the $s_1(\cdot),\dots,s_q(\cdot)$ are smooth basis functions (e.g. cubic splines) and $q$ is the basis dimension. By combining the basis functions GAMs can represent a large number of functional relationship (to do so they rely on the assumption that the true relationship is likely to be smooth, rather than wiggly). They are essentially an extension of GLMs, however they are designed in a way that makes them particularly useful for uncovering nonlinear effects of numerical covariates, and for doing so in an "automatic" fashion (from Hastie and Tibshirani original article, they have 'the advantage of being completely automatic, i.e. no "detective" work is needed on the part of the statistician').
|
When to use a GAM vs GLM
The main difference imho is that while "classical" forms of linear, or generalized linear, models assume a fixed linear or some other parametric form of the relationship between the dependent variable
|
6,132
|
When to use a GAM vs GLM
|
I'd emphasize that GAMs are much more flexible than GLMs, and hence need more care in their use. With greater power comes greater responsibility.
You mention their use in ecology, which I have also noticed. I was in Costa Rica and saw some kind of study in a rainforest where some grad students had thrown some data into a GAM and accepted its crazy-complex smoothers because the software said so. It was pretty depressing, except for the humorous/admirable fact that they rigorously included a footnote that documented the fact that they'd used a GAM and the high-order smoothers that resulted.
You don't have to understand exactly how GAMs work to use them, but you really need to think about your data, the problem at hand, your software's automated selection of parameters like smoother orders, your choices (what smoothers you specify, interactions, if a smoother is justified, etc), and the plausibility of your results.
Do lots of plots and look at your smoothing curves. Do they go crazy in areas with little data? What happens when you specify a low-order smoother or remove smoothing entirely? Is a degree 7 smoother realistic for that variable, is it overfitting despite assurances that it's cross-validating its choices? Do you have enough data? Is it high-quality or noisy?
I like GAMS and think they're under-appreciated for data exploration. They're just super-flexible and if you allow yourself to science without rigor, they will take you farther into the statistical wilderness than simpler models like GLMs.
|
When to use a GAM vs GLM
|
I'd emphasize that GAMs are much more flexible than GLMs, and hence need more care in their use. With greater power comes greater responsibility.
You mention their use in ecology, which I have also no
|
When to use a GAM vs GLM
I'd emphasize that GAMs are much more flexible than GLMs, and hence need more care in their use. With greater power comes greater responsibility.
You mention their use in ecology, which I have also noticed. I was in Costa Rica and saw some kind of study in a rainforest where some grad students had thrown some data into a GAM and accepted its crazy-complex smoothers because the software said so. It was pretty depressing, except for the humorous/admirable fact that they rigorously included a footnote that documented the fact that they'd used a GAM and the high-order smoothers that resulted.
You don't have to understand exactly how GAMs work to use them, but you really need to think about your data, the problem at hand, your software's automated selection of parameters like smoother orders, your choices (what smoothers you specify, interactions, if a smoother is justified, etc), and the plausibility of your results.
Do lots of plots and look at your smoothing curves. Do they go crazy in areas with little data? What happens when you specify a low-order smoother or remove smoothing entirely? Is a degree 7 smoother realistic for that variable, is it overfitting despite assurances that it's cross-validating its choices? Do you have enough data? Is it high-quality or noisy?
I like GAMS and think they're under-appreciated for data exploration. They're just super-flexible and if you allow yourself to science without rigor, they will take you farther into the statistical wilderness than simpler models like GLMs.
|
When to use a GAM vs GLM
I'd emphasize that GAMs are much more flexible than GLMs, and hence need more care in their use. With greater power comes greater responsibility.
You mention their use in ecology, which I have also no
|
6,133
|
When to use a GAM vs GLM
|
I have no reputation to simply add a comment. I do totally agree with Wayne’s comment: With greater power comes greater responsibility. GAMs can be very flexible and often we get/see crazy-complex smoothers. Then, I strongly recommend researchers to restrict degrees of freedom (number of knots) of the smooth functions and to test different model structures (interactions/no interactions etc.).
GAMs can be considered in between of model-driven approaches (although the border is fuzzy I would include GLM in that group) and data-driven approaches (e.g. Artificial Neural Networks or Random Forests who assume fully interacting non-linear variables’ effects). In accordance, I do not totally agree with Hastie and Tibshirani because GAMs still need some detective work (Hope no one kills me for saying so).
From an ecological perspective, I would recommend using the R package scam to avoid these unreliable variable crazy-complex smoothers. It was developed by Natalya Pya and Simon Wood and it allows constraining the smooth curves to desired shapes (e.g. unimodal or monotonic), even for two-way interactions. I think GLM becomes a minor alternative after constraining the shape of the smooth functions but this is only my personal opinion.
Pya, N., Wood, S.N., 2015. Shape constrained additive models. Stat. Comput. 25 (3), 543–559. 10.1007/s11222-013-9448-7
|
When to use a GAM vs GLM
|
I have no reputation to simply add a comment. I do totally agree with Wayne’s comment: With greater power comes greater responsibility. GAMs can be very flexible and often we get/see crazy-complex smo
|
When to use a GAM vs GLM
I have no reputation to simply add a comment. I do totally agree with Wayne’s comment: With greater power comes greater responsibility. GAMs can be very flexible and often we get/see crazy-complex smoothers. Then, I strongly recommend researchers to restrict degrees of freedom (number of knots) of the smooth functions and to test different model structures (interactions/no interactions etc.).
GAMs can be considered in between of model-driven approaches (although the border is fuzzy I would include GLM in that group) and data-driven approaches (e.g. Artificial Neural Networks or Random Forests who assume fully interacting non-linear variables’ effects). In accordance, I do not totally agree with Hastie and Tibshirani because GAMs still need some detective work (Hope no one kills me for saying so).
From an ecological perspective, I would recommend using the R package scam to avoid these unreliable variable crazy-complex smoothers. It was developed by Natalya Pya and Simon Wood and it allows constraining the smooth curves to desired shapes (e.g. unimodal or monotonic), even for two-way interactions. I think GLM becomes a minor alternative after constraining the shape of the smooth functions but this is only my personal opinion.
Pya, N., Wood, S.N., 2015. Shape constrained additive models. Stat. Comput. 25 (3), 543–559. 10.1007/s11222-013-9448-7
|
When to use a GAM vs GLM
I have no reputation to simply add a comment. I do totally agree with Wayne’s comment: With greater power comes greater responsibility. GAMs can be very flexible and often we get/see crazy-complex smo
|
6,134
|
Relative variable importance for Boosting
|
I'll use the sklearn code, as it is generally much cleaner than the R code.
Here's the implementation of the feature_importances property of the GradientBoostingClassifier (I removed some lines of code that get in the way of the conceptual stuff)
def feature_importances_(self):
total_sum = np.zeros((self.n_features, ), dtype=np.float64)
for stage in self.estimators_:
stage_sum = sum(tree.feature_importances_
for tree in stage) / len(stage)
total_sum += stage_sum
importances = total_sum / len(self.estimators_)
return importances
This is pretty easy to understand. self.estimators_ is an array containing the individual trees in the booster, so the for loop is iterating over the individual trees. There's one hickup with the
stage_sum = sum(tree.feature_importances_
for tree in stage) / len(stage)
this is taking care of the non-binary response case. Here we fit multiple trees in each stage in a one-vs-all way. Its simplest conceptually to focus on the binary case, where the sum has one summand, and this is just tree.feature_importances_. So in the binary case, we can rewrite this all as
def feature_importances_(self):
total_sum = np.zeros((self.n_features, ), dtype=np.float64)
for tree in self.estimators_:
total_sum += tree.feature_importances_
importances = total_sum / len(self.estimators_)
return importances
So, in words, sum up the feature importances of the individual trees, then divide by the total number of trees. It remains to see how to calculate the feature importances for a single tree.
The importance calculation of a tree is implemented at the cython level, but it's still followable. Here's a cleaned up version of the code
cpdef compute_feature_importances(self, normalize=True):
"""Computes the importance of each feature (aka variable)."""
while node != end_node:
if node.left_child != _TREE_LEAF:
# ... and node.right_child != _TREE_LEAF:
left = &nodes[node.left_child]
right = &nodes[node.right_child]
importance_data[node.feature] += (
node.weighted_n_node_samples * node.impurity -
left.weighted_n_node_samples * left.impurity -
right.weighted_n_node_samples * right.impurity)
node += 1
importances /= nodes[0].weighted_n_node_samples
return importances
This is pretty simple. Iterate through the nodes of the tree. As long as you are not at a leaf node, calculate the weighted reduction in node purity from the split at this node, and attribute it to the feature that was split on
importance_data[node.feature] += (
node.weighted_n_node_samples * node.impurity -
left.weighted_n_node_samples * left.impurity -
right.weighted_n_node_samples * right.impurity)
Then, when done, divide it all by the total weight of the data (in most cases, the number of observations)
importances /= nodes[0].weighted_n_node_samples
It's worth recalling that the impurity is a common name for the metric to use when determining what split to make when growing a tree. In that light, we are simply summing up how much splitting on each feature allowed us to reduce the impurity across all the splits in the tree.
In the context of gradient boosting, these trees are always regression trees (minimize squared error greedily) fit to the gradient of the loss function.
|
Relative variable importance for Boosting
|
I'll use the sklearn code, as it is generally much cleaner than the R code.
Here's the implementation of the feature_importances property of the GradientBoostingClassifier (I removed some lines of cod
|
Relative variable importance for Boosting
I'll use the sklearn code, as it is generally much cleaner than the R code.
Here's the implementation of the feature_importances property of the GradientBoostingClassifier (I removed some lines of code that get in the way of the conceptual stuff)
def feature_importances_(self):
total_sum = np.zeros((self.n_features, ), dtype=np.float64)
for stage in self.estimators_:
stage_sum = sum(tree.feature_importances_
for tree in stage) / len(stage)
total_sum += stage_sum
importances = total_sum / len(self.estimators_)
return importances
This is pretty easy to understand. self.estimators_ is an array containing the individual trees in the booster, so the for loop is iterating over the individual trees. There's one hickup with the
stage_sum = sum(tree.feature_importances_
for tree in stage) / len(stage)
this is taking care of the non-binary response case. Here we fit multiple trees in each stage in a one-vs-all way. Its simplest conceptually to focus on the binary case, where the sum has one summand, and this is just tree.feature_importances_. So in the binary case, we can rewrite this all as
def feature_importances_(self):
total_sum = np.zeros((self.n_features, ), dtype=np.float64)
for tree in self.estimators_:
total_sum += tree.feature_importances_
importances = total_sum / len(self.estimators_)
return importances
So, in words, sum up the feature importances of the individual trees, then divide by the total number of trees. It remains to see how to calculate the feature importances for a single tree.
The importance calculation of a tree is implemented at the cython level, but it's still followable. Here's a cleaned up version of the code
cpdef compute_feature_importances(self, normalize=True):
"""Computes the importance of each feature (aka variable)."""
while node != end_node:
if node.left_child != _TREE_LEAF:
# ... and node.right_child != _TREE_LEAF:
left = &nodes[node.left_child]
right = &nodes[node.right_child]
importance_data[node.feature] += (
node.weighted_n_node_samples * node.impurity -
left.weighted_n_node_samples * left.impurity -
right.weighted_n_node_samples * right.impurity)
node += 1
importances /= nodes[0].weighted_n_node_samples
return importances
This is pretty simple. Iterate through the nodes of the tree. As long as you are not at a leaf node, calculate the weighted reduction in node purity from the split at this node, and attribute it to the feature that was split on
importance_data[node.feature] += (
node.weighted_n_node_samples * node.impurity -
left.weighted_n_node_samples * left.impurity -
right.weighted_n_node_samples * right.impurity)
Then, when done, divide it all by the total weight of the data (in most cases, the number of observations)
importances /= nodes[0].weighted_n_node_samples
It's worth recalling that the impurity is a common name for the metric to use when determining what split to make when growing a tree. In that light, we are simply summing up how much splitting on each feature allowed us to reduce the impurity across all the splits in the tree.
In the context of gradient boosting, these trees are always regression trees (minimize squared error greedily) fit to the gradient of the loss function.
|
Relative variable importance for Boosting
I'll use the sklearn code, as it is generally much cleaner than the R code.
Here's the implementation of the feature_importances property of the GradientBoostingClassifier (I removed some lines of cod
|
6,135
|
Simulation of logistic regression power analysis - designed experiments
|
Preliminaries:
As discussed in the G*Power manual, there are several different types of power analyses, depending on what you want to solve for. (That is, $N$, the effect size $ES$, $\alpha$, and power exist in relation to each other; specifying any three of them will let you solve for the fourth.)
in your description, you want to know the appropriate $N$ to capture the response rates you specified with $\alpha=.05$, and power = 80%. This is a-priori power.
we can start with post-hoc power (determine power given $N$, response rates, & alpha) as this is conceptually simpler, and then move up
In addition to @GregSnow's excellent post, another really great guide to simulation-based power analyses on CV can be found here: Calculating statistical power. To summarize the basic ideas:
figure out the effect you want to be able to detect
generate N data from that possible world
run the analysis you intend to conduct over those faux data
store whether the results are 'significant' according to your chosen alpha
repeat many ($B$) times & use the % 'significant' as an estimate of (post-hoc) power at that $N$
to determine a-priori power, search over possible $N$'s to find the value that yields your desired power
Whether you will find significance on a particular iteration can be understood as the outcome of a Bernoulli trial with probability $p$ (where $p$ is the power). The proportion found over $B$ iterations allows us to approximate the true $p$. To get a better approximation, we can increase $B$, although this will also make the simulation take longer.
In R, the primary way to generate binary data with a given probability of 'success' is ?rbinom
E.g. to get the number of successes out of 10 Bernoulli trials with probability p, the code would be rbinom(n=10, size=1, prob=p), (you will probably want to assign the result to a variable for storage)
you can also generate such data less elegantly by using ?runif, e.g., ifelse(runif(1)<=p, 1, 0)
if you believe the results are mediated by a latent Gaussian variable, you could generate the latent variable as a function of your covariates with ?rnorm, and then convert them into probabilities with pnorm() and use those in your rbinom() code.
You state that you will "include a polynomial term Var1*Var1) to account for any curvature". There is a confusion here; polynomial terms can help us account for curvature, but this is an interaction term--it will not help us in this way. Nonetheless, your response rates require us to include both squared terms and interaction terms in our model. Specifically, your model will need to include: $var1^2$, $var1*var2$, and $var1^2*var2$, beyond the basic terms.
Although written in the context of a different question, my answer here: Difference between logit and probit models has a lot of basic information about these types of models.
Just as there are different kinds of Type I error rates when there are multiple hypotheses (e.g., per-contrast error rate, familywise error rate, & per-family error rate), so are there different kinds of power* (e.g., for a single pre-specified effect, for any effect, & for all effects). You could also seek for the power to detect a specific combination of effects, or for the power of a simultaneous test of the model as a whole. My guess from your description of your SAS code is that it is looking for the latter. However, from your description of your situation, I am assuming you want to detect the interaction effects at a minimum.
*reference: Maxwell, S.E. (2004). The persistence of underpowered studies in psychological research: causes, consequences, and remedies. Psychological Methods, 9, 2, pp. 147-163.
your effects are quite small (not to be confused with the low response rates), so we will find it difficult to achieve good power.
Note that, although these all sound fairly similar, they are very much not the same (e.g., it is very possible to get a significant model with no significant effects--discussed here: How can a regression be significant yet all predictors be non-significant?, or significant effects but where the model is not significant--discussed here: Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic), which will be illustrated below.
For a different way to think about issues related to power, see my answer here: How to report general precision in estimating correlations within a context of justifying sample size.
Simple post-hoc power for logistic regression in R:
Let's say your posited response rates represent the true situation in the world, and that you had sent out 10,000 letters. What is the power to detect those effects? (Note that I am famous for writing "comically inefficient" code, the following is intended to be easy to follow rather than optimized for efficiency; in fact, it's quite slow.)
set.seed(1)
repetitions = 1000
N = 10000
n = N/8
var1 = c( .03, .03, .03, .03, .06, .06, .09, .09)
var2 = c( 0, 0, 0, 1, 0, 1, 0, 1)
rates = c(0.0025, 0.0025, 0.0025, 0.00395, 0.003, 0.0042, 0.0035, 0.002)
var1 = rep(var1, times=n)
var2 = rep(var2, times=n)
var12 = var1**2
var1x2 = var1 *var2
var12x2 = var12*var2
significant = matrix(nrow=repetitions, ncol=7)
startT = proc.time()[3]
for(i in 1:repetitions){
responses = rbinom(n=N, size=1, prob=rates)
model = glm(responses~var1+var2+var12+var1x2+var12x2,
family=binomial(link="logit"))
significant[i,1:5] = (summary(model)$coefficients[2:6,4]<.05)
significant[i,6] = sum(significant[i,1:5])
modelDev = model$null.deviance-model$deviance
significant[i,7] = (1-pchisq(modelDev, 5))<.05
}
endT = proc.time()[3]
endT-startT
sum(significant[,1])/repetitions # pre-specified effect power for var1
[1] 0.042
sum(significant[,2])/repetitions # pre-specified effect power for var2
[1] 0.017
sum(significant[,3])/repetitions # pre-specified effect power for var12
[1] 0.035
sum(significant[,4])/repetitions # pre-specified effect power for var1X2
[1] 0.019
sum(significant[,5])/repetitions # pre-specified effect power for var12X2
[1] 0.022
sum(significant[,7])/repetitions # power for likelihood ratio test of model
[1] 0.168
sum(significant[,6]==5)/repetitions # all effects power
[1] 0.001
sum(significant[,6]>0)/repetitions # any effect power
[1] 0.065
sum(significant[,4]&significant[,5])/repetitions # power for interaction terms
[1] 0.017
So we see that 10,000 letters doesn't really achieve 80% power (of any sort) to detect these response rates. (I am not sufficiently sure about what the SAS code is doing to be able to explain the stark discrepancy between these approaches, but this code is conceptually straightforward--if slow--and I have spent some time checking it, and I think these results are reasonable.)
Simulation-based a-priori power for logistic regression:
From here the idea is simply to search over possible $N$'s until we find a value that yields the desired level of the type of power you are interested in. Any search strategy that you can code up to work with this would be fine (in theory). Given the $N$'s that are going to be required to capture such small effects, it is worth thinking about how to do this more efficiently. My typical approach is simply brute force, i.e. to assess each $N$ that I might reasonably consider. (Note however, that I would typically only consider a small range, and I'm typically working with very small $N$'s--at least compared to this.)
Instead, my strategy here was to bracket possible $N$'s to get a sense of what the range of powers would be. Thus, I picked an $N$ of 500,000 and re-ran the code (initiating the same seed, n.b. this took an hour and a half to run). Here are the results:
sum(significant[,1])/repetitions # pre-specified effect power for var1
[1] 0.115
sum(significant[,2])/repetitions # pre-specified effect power for var2
[1] 0.091
sum(significant[,3])/repetitions # pre-specified effect power for var12
[1] 0.059
sum(significant[,4])/repetitions # pre-specified effect power for var1X2
[1] 0.606
sum(significant[,5])/repetitions # pre-specified effect power for var12X2
[1] 0.913
sum(significant[,7])/repetitions # power for likelihood ratio test of model
[1] 1
sum(significant[,6]==5)/repetitions # all effects power
[1] 0.005
sum(significant[,6]>0)/repetitions # any effect power
[1] 0.96
sum(significant[,4]&significant[,5])/repetitions # power for interaction terms
[1] 0.606
We can see from this that the magnitude of your effects varies considerably, and thus your ability to detect them varies. For example, the effect of $var1^2$ is particularly difficult to detect, only being significant 6% of the time even with half a million letters. On the other hand, the model as a whole was always significantly better than the null model. The other possibilities are arrayed in between. Although most of the 'data' are thrown away on each iteration, a good bit of exploration is still possible. For example, we could use the significant matrix to assess the correlations between the probabilities of different variables being significant.
I should note in conclusion, that due to the complexity and large $N$ entailed in your situation, this was not as simple as I had suspected / claimed in my initial comment. However, you can certainly get the idea for how this can be done in general, and the issues involved in power analysis, from what I've put here. HTH.
|
Simulation of logistic regression power analysis - designed experiments
|
Preliminaries:
As discussed in the G*Power manual, there are several different types of power analyses, depending on what you want to solve for. (That is, $N$, the effect size $ES$, $\alpha$, and po
|
Simulation of logistic regression power analysis - designed experiments
Preliminaries:
As discussed in the G*Power manual, there are several different types of power analyses, depending on what you want to solve for. (That is, $N$, the effect size $ES$, $\alpha$, and power exist in relation to each other; specifying any three of them will let you solve for the fourth.)
in your description, you want to know the appropriate $N$ to capture the response rates you specified with $\alpha=.05$, and power = 80%. This is a-priori power.
we can start with post-hoc power (determine power given $N$, response rates, & alpha) as this is conceptually simpler, and then move up
In addition to @GregSnow's excellent post, another really great guide to simulation-based power analyses on CV can be found here: Calculating statistical power. To summarize the basic ideas:
figure out the effect you want to be able to detect
generate N data from that possible world
run the analysis you intend to conduct over those faux data
store whether the results are 'significant' according to your chosen alpha
repeat many ($B$) times & use the % 'significant' as an estimate of (post-hoc) power at that $N$
to determine a-priori power, search over possible $N$'s to find the value that yields your desired power
Whether you will find significance on a particular iteration can be understood as the outcome of a Bernoulli trial with probability $p$ (where $p$ is the power). The proportion found over $B$ iterations allows us to approximate the true $p$. To get a better approximation, we can increase $B$, although this will also make the simulation take longer.
In R, the primary way to generate binary data with a given probability of 'success' is ?rbinom
E.g. to get the number of successes out of 10 Bernoulli trials with probability p, the code would be rbinom(n=10, size=1, prob=p), (you will probably want to assign the result to a variable for storage)
you can also generate such data less elegantly by using ?runif, e.g., ifelse(runif(1)<=p, 1, 0)
if you believe the results are mediated by a latent Gaussian variable, you could generate the latent variable as a function of your covariates with ?rnorm, and then convert them into probabilities with pnorm() and use those in your rbinom() code.
You state that you will "include a polynomial term Var1*Var1) to account for any curvature". There is a confusion here; polynomial terms can help us account for curvature, but this is an interaction term--it will not help us in this way. Nonetheless, your response rates require us to include both squared terms and interaction terms in our model. Specifically, your model will need to include: $var1^2$, $var1*var2$, and $var1^2*var2$, beyond the basic terms.
Although written in the context of a different question, my answer here: Difference between logit and probit models has a lot of basic information about these types of models.
Just as there are different kinds of Type I error rates when there are multiple hypotheses (e.g., per-contrast error rate, familywise error rate, & per-family error rate), so are there different kinds of power* (e.g., for a single pre-specified effect, for any effect, & for all effects). You could also seek for the power to detect a specific combination of effects, or for the power of a simultaneous test of the model as a whole. My guess from your description of your SAS code is that it is looking for the latter. However, from your description of your situation, I am assuming you want to detect the interaction effects at a minimum.
*reference: Maxwell, S.E. (2004). The persistence of underpowered studies in psychological research: causes, consequences, and remedies. Psychological Methods, 9, 2, pp. 147-163.
your effects are quite small (not to be confused with the low response rates), so we will find it difficult to achieve good power.
Note that, although these all sound fairly similar, they are very much not the same (e.g., it is very possible to get a significant model with no significant effects--discussed here: How can a regression be significant yet all predictors be non-significant?, or significant effects but where the model is not significant--discussed here: Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic), which will be illustrated below.
For a different way to think about issues related to power, see my answer here: How to report general precision in estimating correlations within a context of justifying sample size.
Simple post-hoc power for logistic regression in R:
Let's say your posited response rates represent the true situation in the world, and that you had sent out 10,000 letters. What is the power to detect those effects? (Note that I am famous for writing "comically inefficient" code, the following is intended to be easy to follow rather than optimized for efficiency; in fact, it's quite slow.)
set.seed(1)
repetitions = 1000
N = 10000
n = N/8
var1 = c( .03, .03, .03, .03, .06, .06, .09, .09)
var2 = c( 0, 0, 0, 1, 0, 1, 0, 1)
rates = c(0.0025, 0.0025, 0.0025, 0.00395, 0.003, 0.0042, 0.0035, 0.002)
var1 = rep(var1, times=n)
var2 = rep(var2, times=n)
var12 = var1**2
var1x2 = var1 *var2
var12x2 = var12*var2
significant = matrix(nrow=repetitions, ncol=7)
startT = proc.time()[3]
for(i in 1:repetitions){
responses = rbinom(n=N, size=1, prob=rates)
model = glm(responses~var1+var2+var12+var1x2+var12x2,
family=binomial(link="logit"))
significant[i,1:5] = (summary(model)$coefficients[2:6,4]<.05)
significant[i,6] = sum(significant[i,1:5])
modelDev = model$null.deviance-model$deviance
significant[i,7] = (1-pchisq(modelDev, 5))<.05
}
endT = proc.time()[3]
endT-startT
sum(significant[,1])/repetitions # pre-specified effect power for var1
[1] 0.042
sum(significant[,2])/repetitions # pre-specified effect power for var2
[1] 0.017
sum(significant[,3])/repetitions # pre-specified effect power for var12
[1] 0.035
sum(significant[,4])/repetitions # pre-specified effect power for var1X2
[1] 0.019
sum(significant[,5])/repetitions # pre-specified effect power for var12X2
[1] 0.022
sum(significant[,7])/repetitions # power for likelihood ratio test of model
[1] 0.168
sum(significant[,6]==5)/repetitions # all effects power
[1] 0.001
sum(significant[,6]>0)/repetitions # any effect power
[1] 0.065
sum(significant[,4]&significant[,5])/repetitions # power for interaction terms
[1] 0.017
So we see that 10,000 letters doesn't really achieve 80% power (of any sort) to detect these response rates. (I am not sufficiently sure about what the SAS code is doing to be able to explain the stark discrepancy between these approaches, but this code is conceptually straightforward--if slow--and I have spent some time checking it, and I think these results are reasonable.)
Simulation-based a-priori power for logistic regression:
From here the idea is simply to search over possible $N$'s until we find a value that yields the desired level of the type of power you are interested in. Any search strategy that you can code up to work with this would be fine (in theory). Given the $N$'s that are going to be required to capture such small effects, it is worth thinking about how to do this more efficiently. My typical approach is simply brute force, i.e. to assess each $N$ that I might reasonably consider. (Note however, that I would typically only consider a small range, and I'm typically working with very small $N$'s--at least compared to this.)
Instead, my strategy here was to bracket possible $N$'s to get a sense of what the range of powers would be. Thus, I picked an $N$ of 500,000 and re-ran the code (initiating the same seed, n.b. this took an hour and a half to run). Here are the results:
sum(significant[,1])/repetitions # pre-specified effect power for var1
[1] 0.115
sum(significant[,2])/repetitions # pre-specified effect power for var2
[1] 0.091
sum(significant[,3])/repetitions # pre-specified effect power for var12
[1] 0.059
sum(significant[,4])/repetitions # pre-specified effect power for var1X2
[1] 0.606
sum(significant[,5])/repetitions # pre-specified effect power for var12X2
[1] 0.913
sum(significant[,7])/repetitions # power for likelihood ratio test of model
[1] 1
sum(significant[,6]==5)/repetitions # all effects power
[1] 0.005
sum(significant[,6]>0)/repetitions # any effect power
[1] 0.96
sum(significant[,4]&significant[,5])/repetitions # power for interaction terms
[1] 0.606
We can see from this that the magnitude of your effects varies considerably, and thus your ability to detect them varies. For example, the effect of $var1^2$ is particularly difficult to detect, only being significant 6% of the time even with half a million letters. On the other hand, the model as a whole was always significantly better than the null model. The other possibilities are arrayed in between. Although most of the 'data' are thrown away on each iteration, a good bit of exploration is still possible. For example, we could use the significant matrix to assess the correlations between the probabilities of different variables being significant.
I should note in conclusion, that due to the complexity and large $N$ entailed in your situation, this was not as simple as I had suspected / claimed in my initial comment. However, you can certainly get the idea for how this can be done in general, and the issues involved in power analysis, from what I've put here. HTH.
|
Simulation of logistic regression power analysis - designed experiments
Preliminaries:
As discussed in the G*Power manual, there are several different types of power analyses, depending on what you want to solve for. (That is, $N$, the effect size $ES$, $\alpha$, and po
|
6,136
|
Simulation of logistic regression power analysis - designed experiments
|
@Gung's answer is great for understanding. Here is the approach that I would use:
mydat <- data.frame( v1 = rep( c(3,6,9), each=2 ),
v2 = rep( 0:1, 3 ),
resp=c(0.0025, 0.00395, 0.003, 0.0042, 0.0035, 0.002) )
fit0 <- glm( resp ~ poly(v1, 2, raw=TRUE)*v2, data=mydat,
weight=rep(100000,6), family=binomial)
b0 <- coef(fit0)
simfunc <- function( beta=b0, n=10000 ) {
w <- sample(1:6, n, replace=TRUE, prob=c(3, rep(1,5)))
mydat2 <- mydat[w, 1:2]
eta <- with(mydat2, cbind( 1, v1,
v1^2, v2,
v1*v2,
v1^2*v2 ) %*% beta )
p <- exp(eta)/(1+exp(eta))
mydat2$resp <- rbinom(n, 1, p)
fit1 <- glm( resp ~ poly(v1, 2)*v2, data=mydat2,
family=binomial)
fit2 <- update(fit1, .~ poly(v1,2) )
anova(fit1,fit2, test='Chisq')[2,5]
}
out <- replicate(100, simfunc(b0, 10000))
mean( out <= 0.05 )
hist(out)
abline(v=0.05, col='lightgrey')
This function tests the overall effect of v2, the models can be changed to look at other types of tests. I like writing it as a function so that when I want to test something different I can just change the function arguments. You could also add a progress bar, or use the parallel package to speed things up.
Here I just did 100 replications, I usually start around that level to find the approximate sample size, then up the itterations when I am in the right ball park (no need to waste the time on 10,000 iterations when you have 20% power).
|
Simulation of logistic regression power analysis - designed experiments
|
@Gung's answer is great for understanding. Here is the approach that I would use:
mydat <- data.frame( v1 = rep( c(3,6,9), each=2 ),
v2 = rep( 0:1, 3 ),
resp=c(0.0025, 0.00395, 0.003, 0.0042
|
Simulation of logistic regression power analysis - designed experiments
@Gung's answer is great for understanding. Here is the approach that I would use:
mydat <- data.frame( v1 = rep( c(3,6,9), each=2 ),
v2 = rep( 0:1, 3 ),
resp=c(0.0025, 0.00395, 0.003, 0.0042, 0.0035, 0.002) )
fit0 <- glm( resp ~ poly(v1, 2, raw=TRUE)*v2, data=mydat,
weight=rep(100000,6), family=binomial)
b0 <- coef(fit0)
simfunc <- function( beta=b0, n=10000 ) {
w <- sample(1:6, n, replace=TRUE, prob=c(3, rep(1,5)))
mydat2 <- mydat[w, 1:2]
eta <- with(mydat2, cbind( 1, v1,
v1^2, v2,
v1*v2,
v1^2*v2 ) %*% beta )
p <- exp(eta)/(1+exp(eta))
mydat2$resp <- rbinom(n, 1, p)
fit1 <- glm( resp ~ poly(v1, 2)*v2, data=mydat2,
family=binomial)
fit2 <- update(fit1, .~ poly(v1,2) )
anova(fit1,fit2, test='Chisq')[2,5]
}
out <- replicate(100, simfunc(b0, 10000))
mean( out <= 0.05 )
hist(out)
abline(v=0.05, col='lightgrey')
This function tests the overall effect of v2, the models can be changed to look at other types of tests. I like writing it as a function so that when I want to test something different I can just change the function arguments. You could also add a progress bar, or use the parallel package to speed things up.
Here I just did 100 replications, I usually start around that level to find the approximate sample size, then up the itterations when I am in the right ball park (no need to waste the time on 10,000 iterations when you have 20% power).
|
Simulation of logistic regression power analysis - designed experiments
@Gung's answer is great for understanding. Here is the approach that I would use:
mydat <- data.frame( v1 = rep( c(3,6,9), each=2 ),
v2 = rep( 0:1, 3 ),
resp=c(0.0025, 0.00395, 0.003, 0.0042
|
6,137
|
Relation between confidence interval and testing statistical hypothesis for t-test
|
Yes, there are some simple relationships between confidence interval comparisons and hypothesis tests in a wide range of practical settings. However, in addition to verifying the CI procedures and t-test are appropriate for our data, we must check that the sample sizes are not too different and that the two sets have similar standard deviations. We also should not attempt to derive highly precise p-values from comparing two confidence intervals, but should be glad to develop effective approximations.
In trying to reconcile the two replies already given (by @John and @Brett), it helps to be mathematically explicit. A formula for a symmetric two-sided confidence interval appropriate for the setting of this question is
$$\text{CI} = m \pm \frac{t_\alpha(n) s}{\sqrt{n}}$$
where $m$ is the sample mean of $n$ independent observations, $s$ is the sample standard deviation, $2\alpha$ is the desired test size (maximum false positive rate), and $t_\alpha(n)$ is the upper $1-\alpha$ percentile of the Student t distribution with $n-1$ degrees of freedom. (This slight deviation from conventional notation simplifies the exposition by obviating any need to fuss over the $n$ vs $n-1$ distinction, which will be inconsequential anyway.)
Using subscripts $1$ and $2$ to distinguish two independent sets of data for comparison, with $1$ corresponding to the larger of the two means, a non-overlap of confidence intervals is expressed by the inequality (lower confidence limit 1) $\gt$ (upper confidence limit 2); viz.,
$$m_1 - \frac{t_\alpha(n_1) s_1}{\sqrt{n_1}} \gt m_2 + \frac{t_\alpha(n_2) s_2}{\sqrt{n_2}}.$$
This can be made to look like the t-statistic of the corresponding hypothesis test (to compare the two means) with simple algebraic manipulations, yielding
$$\frac{m_1-m_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}} \gt \frac{s_1\sqrt{n_2}t_\alpha(n_1) + s_2\sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 s_2^2 + n_2 s_1^2}}.$$
The left hand side is the statistic used in the hypothesis test; it is usually compared to a percentile of a Student t distribution with $n_1+n_2$ degrees of freedom: that is, to $t_\alpha(n_1+n_2)$. The right hand side is a biased weighted average of the original t distribution percentiles.
The analysis so far justifies the reply by @Brett: there appears to be no simple relationship available. However, let's probe further. I am inspired to do so because, intuitively, a non-overlap of confidence intervals ought to say something!
First, notice that this form of the hypothesis test is valid only when we expect $s_1$ and $s_2$ to be at least approximately equal. (Otherwise we face the notorious Behrens-Fisher problem and its complexities.) Upon checking the approximate equality of the $s_i$, we could then create an approximate simplification in the form
$$\frac{m_1-m_2}{s\sqrt{1/n_1 + 1/n_2}} \gt \frac{\sqrt{n_2}t_\alpha(n_1) + \sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 + n_2}}.$$
Here, $s \approx s_1 \approx s_2$. Realistically, we should not expect this informal comparison of confidence limits to have the same size as $\alpha$. Our question then is whether there exists an $\alpha'$ such that the right hand side is (at least approximately) equal to the correct t statistic. Namely, for what $\alpha'$ is it the case that
$$t_{\alpha'}(n_1+n_2) = \frac{\sqrt{n_2}t_\alpha(n_1) + \sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 + n_2}}\text{?}$$
It turns out that for equal sample sizes, $\alpha$ and $\alpha'$ are connected (to pretty high accuracy) by a power law. For instance, here is a log-log plot of the two for the cases $n_1=n_2=2$ (lowest blue line), $n_1=n_2=5$ (middle red line), $n_1=n_2=\infty$ (highest gold line). The middle green dashed line is an approximation described below. The straightness of these curves belies a power law. It varies with $n=n_1=n_2$, but not much.
The answer does depend on the set $\{n_1, n_2\}$, but it is natural to wonder how much it really varies with changes in the sample sizes. In particular, we could hope that for moderate to large sample sizes (maybe $n_1 \ge 10, n_2 \ge 10$ or thereabouts) the sample size makes little difference. In this case, we could develop a quantitative way to relate $\alpha'$ to $\alpha$.
This approach turns out to work provided the sample sizes are not too different from each other. In the spirit of simplicity, I will report an omnibus formula for computing the test size $\alpha'$ corresponding to the confidence interval size $\alpha$. It is
$$\alpha' \approx e \alpha^{1.91};$$
that is,
$$\alpha' \approx \exp(1 + 1.91\log(\alpha)).$$
This formula works reasonably well in these common situations:
Both sample sizes are close to each other, $n_1 \approx n_2$, and $\alpha$ is not too extreme ($\alpha \gt .001$ or so).
One sample size is within about three times the other and the smallest isn't too small (roughly, greater than $10$) and again $\alpha$ is not too extreme.
One sample size is within three times the other and $\alpha \gt .02$ or so.
The relative error (correct value divided by the approximation) in the first situation is plotted here, with the lower (blue) line showing the case $n_1=n_2=2$, the middle (red) line the case $n_1=n_2=5$, and the upper (gold) line the case $n_1=n_2=\infty$. Interpolating between the latter two, we see that the approximation is excellent for a wide range of practical values of $\alpha$ when sample sizes are moderate (around 5-50) and otherwise is reasonably good.
This is more than good enough for eyeballing a bunch of confidence intervals.
To summarize, the failure of two $2\alpha$-size confidence intervals of means to overlap is significant evidence of a difference in means at a level equal to $2e \alpha^{1.91}$, provided the two samples have approximately equal standard deviations and are approximately the same size.
I'll end with a tabulation of the approximation for common values of $2\alpha$. In the left hand column is the nominal size $2\alpha$ of the original confidence interval; in the right hand column is the actual size $2\alpha^\prime$ of the comparison of two such intervals:
$$\begin{array}{ll}
2\alpha & 2\alpha^\prime \\ \hline
0.1 &0.02\\
0.05 &0.005\\
0.01 &0.0002\\
0.005 &0.00006\\
\end{array}$$
For example, when a pair of two-sided 95% CIs ($2\alpha=.05$) for samples of approximately equal sizes do not overlap, we should take the means to be significantly different, $p \lt .005$. The correct p-value (for equal sample sizes $n$) actually lies between $.0037$ ($n=2$) and $.0056$ ($n=\infty$).
This result justifies (and I hope improves upon) the reply by @John. Thus, although the previous replies appear to be in conflict, both are (in their own ways) correct.
|
Relation between confidence interval and testing statistical hypothesis for t-test
|
Yes, there are some simple relationships between confidence interval comparisons and hypothesis tests in a wide range of practical settings. However, in addition to verifying the CI procedures and t-
|
Relation between confidence interval and testing statistical hypothesis for t-test
Yes, there are some simple relationships between confidence interval comparisons and hypothesis tests in a wide range of practical settings. However, in addition to verifying the CI procedures and t-test are appropriate for our data, we must check that the sample sizes are not too different and that the two sets have similar standard deviations. We also should not attempt to derive highly precise p-values from comparing two confidence intervals, but should be glad to develop effective approximations.
In trying to reconcile the two replies already given (by @John and @Brett), it helps to be mathematically explicit. A formula for a symmetric two-sided confidence interval appropriate for the setting of this question is
$$\text{CI} = m \pm \frac{t_\alpha(n) s}{\sqrt{n}}$$
where $m$ is the sample mean of $n$ independent observations, $s$ is the sample standard deviation, $2\alpha$ is the desired test size (maximum false positive rate), and $t_\alpha(n)$ is the upper $1-\alpha$ percentile of the Student t distribution with $n-1$ degrees of freedom. (This slight deviation from conventional notation simplifies the exposition by obviating any need to fuss over the $n$ vs $n-1$ distinction, which will be inconsequential anyway.)
Using subscripts $1$ and $2$ to distinguish two independent sets of data for comparison, with $1$ corresponding to the larger of the two means, a non-overlap of confidence intervals is expressed by the inequality (lower confidence limit 1) $\gt$ (upper confidence limit 2); viz.,
$$m_1 - \frac{t_\alpha(n_1) s_1}{\sqrt{n_1}} \gt m_2 + \frac{t_\alpha(n_2) s_2}{\sqrt{n_2}}.$$
This can be made to look like the t-statistic of the corresponding hypothesis test (to compare the two means) with simple algebraic manipulations, yielding
$$\frac{m_1-m_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}} \gt \frac{s_1\sqrt{n_2}t_\alpha(n_1) + s_2\sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 s_2^2 + n_2 s_1^2}}.$$
The left hand side is the statistic used in the hypothesis test; it is usually compared to a percentile of a Student t distribution with $n_1+n_2$ degrees of freedom: that is, to $t_\alpha(n_1+n_2)$. The right hand side is a biased weighted average of the original t distribution percentiles.
The analysis so far justifies the reply by @Brett: there appears to be no simple relationship available. However, let's probe further. I am inspired to do so because, intuitively, a non-overlap of confidence intervals ought to say something!
First, notice that this form of the hypothesis test is valid only when we expect $s_1$ and $s_2$ to be at least approximately equal. (Otherwise we face the notorious Behrens-Fisher problem and its complexities.) Upon checking the approximate equality of the $s_i$, we could then create an approximate simplification in the form
$$\frac{m_1-m_2}{s\sqrt{1/n_1 + 1/n_2}} \gt \frac{\sqrt{n_2}t_\alpha(n_1) + \sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 + n_2}}.$$
Here, $s \approx s_1 \approx s_2$. Realistically, we should not expect this informal comparison of confidence limits to have the same size as $\alpha$. Our question then is whether there exists an $\alpha'$ such that the right hand side is (at least approximately) equal to the correct t statistic. Namely, for what $\alpha'$ is it the case that
$$t_{\alpha'}(n_1+n_2) = \frac{\sqrt{n_2}t_\alpha(n_1) + \sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 + n_2}}\text{?}$$
It turns out that for equal sample sizes, $\alpha$ and $\alpha'$ are connected (to pretty high accuracy) by a power law. For instance, here is a log-log plot of the two for the cases $n_1=n_2=2$ (lowest blue line), $n_1=n_2=5$ (middle red line), $n_1=n_2=\infty$ (highest gold line). The middle green dashed line is an approximation described below. The straightness of these curves belies a power law. It varies with $n=n_1=n_2$, but not much.
The answer does depend on the set $\{n_1, n_2\}$, but it is natural to wonder how much it really varies with changes in the sample sizes. In particular, we could hope that for moderate to large sample sizes (maybe $n_1 \ge 10, n_2 \ge 10$ or thereabouts) the sample size makes little difference. In this case, we could develop a quantitative way to relate $\alpha'$ to $\alpha$.
This approach turns out to work provided the sample sizes are not too different from each other. In the spirit of simplicity, I will report an omnibus formula for computing the test size $\alpha'$ corresponding to the confidence interval size $\alpha$. It is
$$\alpha' \approx e \alpha^{1.91};$$
that is,
$$\alpha' \approx \exp(1 + 1.91\log(\alpha)).$$
This formula works reasonably well in these common situations:
Both sample sizes are close to each other, $n_1 \approx n_2$, and $\alpha$ is not too extreme ($\alpha \gt .001$ or so).
One sample size is within about three times the other and the smallest isn't too small (roughly, greater than $10$) and again $\alpha$ is not too extreme.
One sample size is within three times the other and $\alpha \gt .02$ or so.
The relative error (correct value divided by the approximation) in the first situation is plotted here, with the lower (blue) line showing the case $n_1=n_2=2$, the middle (red) line the case $n_1=n_2=5$, and the upper (gold) line the case $n_1=n_2=\infty$. Interpolating between the latter two, we see that the approximation is excellent for a wide range of practical values of $\alpha$ when sample sizes are moderate (around 5-50) and otherwise is reasonably good.
This is more than good enough for eyeballing a bunch of confidence intervals.
To summarize, the failure of two $2\alpha$-size confidence intervals of means to overlap is significant evidence of a difference in means at a level equal to $2e \alpha^{1.91}$, provided the two samples have approximately equal standard deviations and are approximately the same size.
I'll end with a tabulation of the approximation for common values of $2\alpha$. In the left hand column is the nominal size $2\alpha$ of the original confidence interval; in the right hand column is the actual size $2\alpha^\prime$ of the comparison of two such intervals:
$$\begin{array}{ll}
2\alpha & 2\alpha^\prime \\ \hline
0.1 &0.02\\
0.05 &0.005\\
0.01 &0.0002\\
0.005 &0.00006\\
\end{array}$$
For example, when a pair of two-sided 95% CIs ($2\alpha=.05$) for samples of approximately equal sizes do not overlap, we should take the means to be significantly different, $p \lt .005$. The correct p-value (for equal sample sizes $n$) actually lies between $.0037$ ($n=2$) and $.0056$ ($n=\infty$).
This result justifies (and I hope improves upon) the reply by @John. Thus, although the previous replies appear to be in conflict, both are (in their own ways) correct.
|
Relation between confidence interval and testing statistical hypothesis for t-test
Yes, there are some simple relationships between confidence interval comparisons and hypothesis tests in a wide range of practical settings. However, in addition to verifying the CI procedures and t-
|
6,138
|
Relation between confidence interval and testing statistical hypothesis for t-test
|
Under typical assumptions of equal variance, yes, there is a relationship. If the bars overlap by less than the length of one bar * sqrt(2) then a t-test would find them to be significantly different at alpha = 0.05. If the ends of the bars just barely touch then a difference would be found at 0.01. If the confidence intervals for the groups are not equal one typically takes the average and applies the same rule.
Alternatively, if the width of a confidence interval around one of the means is w then the least significant difference between two values is w * sqrt(2). This is simple when you think of the denominator in the independent groups t-test, sqrt(2*MSE/n), and the factor for the CI which, sqrt(MSE/n).
(95% CIs assumed)
There's a simple paper on making inferences from confidence intervals around independent means here. It will answer this question and many other related ones you may have.
Cumming, G., & Finch, S. (2005, March). Inference by eye: confidence intervals, and how to read pictures of data. American Psychologist, 60(2), 170-180.
|
Relation between confidence interval and testing statistical hypothesis for t-test
|
Under typical assumptions of equal variance, yes, there is a relationship. If the bars overlap by less than the length of one bar * sqrt(2) then a t-test would find them to be significantly different
|
Relation between confidence interval and testing statistical hypothesis for t-test
Under typical assumptions of equal variance, yes, there is a relationship. If the bars overlap by less than the length of one bar * sqrt(2) then a t-test would find them to be significantly different at alpha = 0.05. If the ends of the bars just barely touch then a difference would be found at 0.01. If the confidence intervals for the groups are not equal one typically takes the average and applies the same rule.
Alternatively, if the width of a confidence interval around one of the means is w then the least significant difference between two values is w * sqrt(2). This is simple when you think of the denominator in the independent groups t-test, sqrt(2*MSE/n), and the factor for the CI which, sqrt(MSE/n).
(95% CIs assumed)
There's a simple paper on making inferences from confidence intervals around independent means here. It will answer this question and many other related ones you may have.
Cumming, G., & Finch, S. (2005, March). Inference by eye: confidence intervals, and how to read pictures of data. American Psychologist, 60(2), 170-180.
|
Relation between confidence interval and testing statistical hypothesis for t-test
Under typical assumptions of equal variance, yes, there is a relationship. If the bars overlap by less than the length of one bar * sqrt(2) then a t-test would find them to be significantly different
|
6,139
|
Relation between confidence interval and testing statistical hypothesis for t-test
|
No, not a simple one at least.
There is, however, an exact correspondence between the t-test of difference between two means and the confidence interval for the difference between the two means.
If the confidence interval for the difference between two means contains zero, a t-test for that difference would fail to reject null at the same level of confidence. Likewise if the confidence interval does not contain 0, the t-test would reject the null.
This is not the same as overlap between confidence intervals for each of the two means.
|
Relation between confidence interval and testing statistical hypothesis for t-test
|
No, not a simple one at least.
There is, however, an exact correspondence between the t-test of difference between two means and the confidence interval for the difference between the two means.
If th
|
Relation between confidence interval and testing statistical hypothesis for t-test
No, not a simple one at least.
There is, however, an exact correspondence between the t-test of difference between two means and the confidence interval for the difference between the two means.
If the confidence interval for the difference between two means contains zero, a t-test for that difference would fail to reject null at the same level of confidence. Likewise if the confidence interval does not contain 0, the t-test would reject the null.
This is not the same as overlap between confidence intervals for each of the two means.
|
Relation between confidence interval and testing statistical hypothesis for t-test
No, not a simple one at least.
There is, however, an exact correspondence between the t-test of difference between two means and the confidence interval for the difference between the two means.
If th
|
6,140
|
Neural network references (textbooks, online courses) for beginners
|
You're in luck! There are an amazing number of resources available at the moment. In particular, you could look at:
a Coursera course starting soon
a recently published online textbook by some of the leaders in the field (Goodfellow, Bengio and Courville)
these lecture notes, and this overview, which are more oriented towards natural language processing
a set of blog posts with beautiful visualizations by Chris Olah
two well-supported toolkits with python interfaces and online tutorials: Tensorflow and Theano
|
Neural network references (textbooks, online courses) for beginners
|
You're in luck! There are an amazing number of resources available at the moment. In particular, you could look at:
a Coursera course starting soon
a recently published online textbook by some of the
|
Neural network references (textbooks, online courses) for beginners
You're in luck! There are an amazing number of resources available at the moment. In particular, you could look at:
a Coursera course starting soon
a recently published online textbook by some of the leaders in the field (Goodfellow, Bengio and Courville)
these lecture notes, and this overview, which are more oriented towards natural language processing
a set of blog posts with beautiful visualizations by Chris Olah
two well-supported toolkits with python interfaces and online tutorials: Tensorflow and Theano
|
Neural network references (textbooks, online courses) for beginners
You're in luck! There are an amazing number of resources available at the moment. In particular, you could look at:
a Coursera course starting soon
a recently published online textbook by some of the
|
6,141
|
Neural network references (textbooks, online courses) for beginners
|
Main references:
Courses on deep learning:
Andrew Ng's course on machine learning has a nice introductory section on neural networks.
Geoffrey Hinton's course: Coursera Neural Networks for Machine Learning (fall 2012)
Michael Nielsen's free book Neural Networks and Deep Learning
Yoshua Bengio, Ian Goodfellow and Aaron Courville wrote a book on deep learning (2016)
Hugo Larochelle's course (videos + slides) at Université de Sherbrooke
Stanford's tutorial (Andrew Ng et al.) on Unsupervised Feature Learning and Deep Learning
Oxford's ML 2014-2015 course
NVIDIA Deep learning course (summer 2015)
Google's Deep Learning course on Udacity (January 2016)
NLP-oriented:
Stanford CS224d: Deep Learning for Natural Language Processing (spring 2015) by Richard Socher
Tutorial given at NAACL HLT 2013: Deep Learning for Natural Language Processing (without Magic) (videos + slides)
Vision-oriented:
CS231n Convolutional Neural Networks for Visual Recognition by Andrej Karpathy (a previous version, shorter and less polished: Hacker's guide to Neural Networks).
Toolkit-specific tutorials:
DL4J (Java): http://deeplearning4j.org/documentation.html
Theano (Python, Y. Bengio): http://deeplearning.net/
Machine Learning with Torch7 (Lua, LeCun): http://code.madbits.com/wiki/doku.php
H2O Deep Learning (Java): http://0xdata.com/product/deep-learning/
Caffee (C++, UCB): http://caffe.berkeleyvision.org/
Nervana’s Deep Learning Course
|
Neural network references (textbooks, online courses) for beginners
|
Main references:
Courses on deep learning:
Andrew Ng's course on machine learning has a nice introductory section on neural networks.
Geoffrey Hinton's course: Coursera Neural Networks for Machine Le
|
Neural network references (textbooks, online courses) for beginners
Main references:
Courses on deep learning:
Andrew Ng's course on machine learning has a nice introductory section on neural networks.
Geoffrey Hinton's course: Coursera Neural Networks for Machine Learning (fall 2012)
Michael Nielsen's free book Neural Networks and Deep Learning
Yoshua Bengio, Ian Goodfellow and Aaron Courville wrote a book on deep learning (2016)
Hugo Larochelle's course (videos + slides) at Université de Sherbrooke
Stanford's tutorial (Andrew Ng et al.) on Unsupervised Feature Learning and Deep Learning
Oxford's ML 2014-2015 course
NVIDIA Deep learning course (summer 2015)
Google's Deep Learning course on Udacity (January 2016)
NLP-oriented:
Stanford CS224d: Deep Learning for Natural Language Processing (spring 2015) by Richard Socher
Tutorial given at NAACL HLT 2013: Deep Learning for Natural Language Processing (without Magic) (videos + slides)
Vision-oriented:
CS231n Convolutional Neural Networks for Visual Recognition by Andrej Karpathy (a previous version, shorter and less polished: Hacker's guide to Neural Networks).
Toolkit-specific tutorials:
DL4J (Java): http://deeplearning4j.org/documentation.html
Theano (Python, Y. Bengio): http://deeplearning.net/
Machine Learning with Torch7 (Lua, LeCun): http://code.madbits.com/wiki/doku.php
H2O Deep Learning (Java): http://0xdata.com/product/deep-learning/
Caffee (C++, UCB): http://caffe.berkeleyvision.org/
Nervana’s Deep Learning Course
|
Neural network references (textbooks, online courses) for beginners
Main references:
Courses on deep learning:
Andrew Ng's course on machine learning has a nice introductory section on neural networks.
Geoffrey Hinton's course: Coursera Neural Networks for Machine Le
|
6,142
|
Neural network references (textbooks, online courses) for beginners
|
http://www.kdnuggets.com/2015/11/seven-steps-machine-learning-python.html
http://neuralnetworksanddeeplearning.com/
This has been my favorite resources. Started with the Stanford machine learning course, but prefer reading over lectures. Especially because the readings are example-based.
|
Neural network references (textbooks, online courses) for beginners
|
http://www.kdnuggets.com/2015/11/seven-steps-machine-learning-python.html
http://neuralnetworksanddeeplearning.com/
This has been my favorite resources. Started with the Stanford machine learning cour
|
Neural network references (textbooks, online courses) for beginners
http://www.kdnuggets.com/2015/11/seven-steps-machine-learning-python.html
http://neuralnetworksanddeeplearning.com/
This has been my favorite resources. Started with the Stanford machine learning course, but prefer reading over lectures. Especially because the readings are example-based.
|
Neural network references (textbooks, online courses) for beginners
http://www.kdnuggets.com/2015/11/seven-steps-machine-learning-python.html
http://neuralnetworksanddeeplearning.com/
This has been my favorite resources. Started with the Stanford machine learning cour
|
6,143
|
Neural network references (textbooks, online courses) for beginners
|
Neural Networks and Deep Learning is an approachable starting-point.
Neural Networks and Deep Learning is a free online book. The book will teach you about:
Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data
Deep learning, a powerful set of techniques for learning in neural networks
Neural networks and deep learning currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing. This book will teach you many of the core concepts behind neural networks and deep learning.
|
Neural network references (textbooks, online courses) for beginners
|
Neural Networks and Deep Learning is an approachable starting-point.
Neural Networks and Deep Learning is a free online book. The book will teach you about:
Neural networks, a beautiful biologically-
|
Neural network references (textbooks, online courses) for beginners
Neural Networks and Deep Learning is an approachable starting-point.
Neural Networks and Deep Learning is a free online book. The book will teach you about:
Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data
Deep learning, a powerful set of techniques for learning in neural networks
Neural networks and deep learning currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing. This book will teach you many of the core concepts behind neural networks and deep learning.
|
Neural network references (textbooks, online courses) for beginners
Neural Networks and Deep Learning is an approachable starting-point.
Neural Networks and Deep Learning is a free online book. The book will teach you about:
Neural networks, a beautiful biologically-
|
6,144
|
Neural network references (textbooks, online courses) for beginners
|
For fast learning I would choose:
This Deep Learning lecture from the great teacher-researcher Nando de Freitas:
https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/
For practical programming theory understanding in Python this material from Andrej Karpathy:
http://cs231n.github.io/
And for NLP:
https://arxiv.org/abs/1510.00726
|
Neural network references (textbooks, online courses) for beginners
|
For fast learning I would choose:
This Deep Learning lecture from the great teacher-researcher Nando de Freitas:
https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/
For practical programmi
|
Neural network references (textbooks, online courses) for beginners
For fast learning I would choose:
This Deep Learning lecture from the great teacher-researcher Nando de Freitas:
https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/
For practical programming theory understanding in Python this material from Andrej Karpathy:
http://cs231n.github.io/
And for NLP:
https://arxiv.org/abs/1510.00726
|
Neural network references (textbooks, online courses) for beginners
For fast learning I would choose:
This Deep Learning lecture from the great teacher-researcher Nando de Freitas:
https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/
For practical programmi
|
6,145
|
Understanding the parameters inside the Negative Binomial Distribution
|
You should look further down the Wikipedia article on the NB, where it says "gamma-Poisson mixture". While the definition you cite (which I call the "coin-flipping" definition since I usually define it for classes as "suppose you want to flip a coin until you get $k$ heads") is easier to derive and makes more sense in an introductory probability or mathematical statistics context, the gamma-Poisson mixture is (in my experience) a much more generally useful way to think about the distribution in applied contexts. (In particular, this definition allows non-integer values of the dispersion/size parameter.) In this context, your dispersion parameter describes the distribution of a hypothetical Gamma distribution that underlies your data and describes unobserved variation among individuals in their intrinsic level of contact. In particular, it is the shape parameter of the Gamma, and it may be helpful in thinking about this to know that the coefficient of variation of a Gamma distribution with shape parameter $\theta$ is $1/\sqrt{\theta}$; as $\theta$ becomes large the latent variability disappears and the distribution approaches the Poisson.
|
Understanding the parameters inside the Negative Binomial Distribution
|
You should look further down the Wikipedia article on the NB, where it says "gamma-Poisson mixture". While the definition you cite (which I call the "coin-flipping" definition since I usually define i
|
Understanding the parameters inside the Negative Binomial Distribution
You should look further down the Wikipedia article on the NB, where it says "gamma-Poisson mixture". While the definition you cite (which I call the "coin-flipping" definition since I usually define it for classes as "suppose you want to flip a coin until you get $k$ heads") is easier to derive and makes more sense in an introductory probability or mathematical statistics context, the gamma-Poisson mixture is (in my experience) a much more generally useful way to think about the distribution in applied contexts. (In particular, this definition allows non-integer values of the dispersion/size parameter.) In this context, your dispersion parameter describes the distribution of a hypothetical Gamma distribution that underlies your data and describes unobserved variation among individuals in their intrinsic level of contact. In particular, it is the shape parameter of the Gamma, and it may be helpful in thinking about this to know that the coefficient of variation of a Gamma distribution with shape parameter $\theta$ is $1/\sqrt{\theta}$; as $\theta$ becomes large the latent variability disappears and the distribution approaches the Poisson.
|
Understanding the parameters inside the Negative Binomial Distribution
You should look further down the Wikipedia article on the NB, where it says "gamma-Poisson mixture". While the definition you cite (which I call the "coin-flipping" definition since I usually define i
|
6,146
|
Understanding the parameters inside the Negative Binomial Distribution
|
As I mentioned in my earlier post to you, I'm working on getting my head around fitting a distribution to count data also. Here's among what I've learned:
When the variance is greater than the mean, overdispersion is evident and thus the negative binomial distribution is likely appropriate. If the variance and mean are the same, the Poisson distribution is suggested, and when the variance is less than the mean, it's the binomial distribution that's recommended.
With the count data you're working on, you're using the "ecological" parameterization of the Negative Binomial function in R. Section 4.5.1.3 (Page 165) of the following freely-available book speaks to this specifically (in the context of R, no less!) and, I hope, might address some of your questions:
http://www.math.mcmaster.ca/~bolker/emdbook/book.pdf
If you come to conclude that your data are zero-truncated (i.e., the probability of 0 observations is 0), then you might want to check out the zero-truncated flavor of the NBD that's in the R VGAM package.
Here's an example of its application:
library(VGAM)
someCounts = data.frame(n = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16),
freq = c(182479,76986,44859,24315,16487,15308,5736,
2843,1370,1115,1127,49,100,490,106,2))
fit = vglm(n ~ 1, posnegbinomial, control = vglm.control(maxit = 1000), weights=freq,
data=someCounts)
Coef(fit)
pdf2 = dposnegbin(x=with(someCounts, n), munb=0.8344248, size=0.4086801)
print( with(someCounts, cbind(n, freq, fitted=pdf2*sum(freq))), dig=9)
I hope this is helpful.
|
Understanding the parameters inside the Negative Binomial Distribution
|
As I mentioned in my earlier post to you, I'm working on getting my head around fitting a distribution to count data also. Here's among what I've learned:
When the variance is greater than the mean, o
|
Understanding the parameters inside the Negative Binomial Distribution
As I mentioned in my earlier post to you, I'm working on getting my head around fitting a distribution to count data also. Here's among what I've learned:
When the variance is greater than the mean, overdispersion is evident and thus the negative binomial distribution is likely appropriate. If the variance and mean are the same, the Poisson distribution is suggested, and when the variance is less than the mean, it's the binomial distribution that's recommended.
With the count data you're working on, you're using the "ecological" parameterization of the Negative Binomial function in R. Section 4.5.1.3 (Page 165) of the following freely-available book speaks to this specifically (in the context of R, no less!) and, I hope, might address some of your questions:
http://www.math.mcmaster.ca/~bolker/emdbook/book.pdf
If you come to conclude that your data are zero-truncated (i.e., the probability of 0 observations is 0), then you might want to check out the zero-truncated flavor of the NBD that's in the R VGAM package.
Here's an example of its application:
library(VGAM)
someCounts = data.frame(n = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16),
freq = c(182479,76986,44859,24315,16487,15308,5736,
2843,1370,1115,1127,49,100,490,106,2))
fit = vglm(n ~ 1, posnegbinomial, control = vglm.control(maxit = 1000), weights=freq,
data=someCounts)
Coef(fit)
pdf2 = dposnegbin(x=with(someCounts, n), munb=0.8344248, size=0.4086801)
print( with(someCounts, cbind(n, freq, fitted=pdf2*sum(freq))), dig=9)
I hope this is helpful.
|
Understanding the parameters inside the Negative Binomial Distribution
As I mentioned in my earlier post to you, I'm working on getting my head around fitting a distribution to count data also. Here's among what I've learned:
When the variance is greater than the mean, o
|
6,147
|
Understanding input_shape parameter in LSTM with Keras
|
LSTM shapes are tough so don't feel bad, I had to spend a couple days battling them myself:
If you will be feeding data 1 character at a time your input shape should be (31,1) since your input has 31 timesteps, 1 character each. You will need to reshape your x_train from (1085420, 31) to (1085420, 31,1) which is easily done with this command :
x_train=x_train.reshape(x_train.shape[0],x_train.shape[1],1))
|
Understanding input_shape parameter in LSTM with Keras
|
LSTM shapes are tough so don't feel bad, I had to spend a couple days battling them myself:
If you will be feeding data 1 character at a time your input shape should be (31,1) since your input has 31
|
Understanding input_shape parameter in LSTM with Keras
LSTM shapes are tough so don't feel bad, I had to spend a couple days battling them myself:
If you will be feeding data 1 character at a time your input shape should be (31,1) since your input has 31 timesteps, 1 character each. You will need to reshape your x_train from (1085420, 31) to (1085420, 31,1) which is easily done with this command :
x_train=x_train.reshape(x_train.shape[0],x_train.shape[1],1))
|
Understanding input_shape parameter in LSTM with Keras
LSTM shapes are tough so don't feel bad, I had to spend a couple days battling them myself:
If you will be feeding data 1 character at a time your input shape should be (31,1) since your input has 31
|
6,148
|
Understanding input_shape parameter in LSTM with Keras
|
Check this git repository LSTM Keras summary diagram and i believe you should get everything crystal clear.
This git repo includes a Keras LSTM summary diagram that shows:
the use of parameters like return_sequences, batch_size, time_step...
the real structure of lstm layers
the concept of these layers in keras
how to manipulate your input and output data to match your model requirements how to stack LSTM's layers
And more
|
Understanding input_shape parameter in LSTM with Keras
|
Check this git repository LSTM Keras summary diagram and i believe you should get everything crystal clear.
This git repo includes a Keras LSTM summary diagram that shows:
the use of parameters like
|
Understanding input_shape parameter in LSTM with Keras
Check this git repository LSTM Keras summary diagram and i believe you should get everything crystal clear.
This git repo includes a Keras LSTM summary diagram that shows:
the use of parameters like return_sequences, batch_size, time_step...
the real structure of lstm layers
the concept of these layers in keras
how to manipulate your input and output data to match your model requirements how to stack LSTM's layers
And more
|
Understanding input_shape parameter in LSTM with Keras
Check this git repository LSTM Keras summary diagram and i believe you should get everything crystal clear.
This git repo includes a Keras LSTM summary diagram that shows:
the use of parameters like
|
6,149
|
Understanding input_shape parameter in LSTM with Keras
|
I know it is not direct answer to your question. This is a simplified example with just one LSTM cell, helping me understand the reshape operation for the input data.
from keras.models import Model
from keras.layers import Input
from keras.layers import LSTM
import numpy as np
# define model
inputs1 = Input(shape=(2, 3))
lstm1, state_h, state_c = LSTM(1, return_sequences=True, return_state=True)(inputs1)
model = Model(inputs=inputs1, outputs=[lstm1, state_h, state_c])
# define input data
data = np.random.rand(2, 3)
data = data.reshape((1,2,3))
# make and show prediction
print(model.predict(data))
This would be an example of the LSTM network with just a single LSTM cell and with the input data of specific shape.
As it turns out, we are just predicting in here, training is not present for simplicity, but look how we needed to reshape the data (to add additional dimension) before the predict method.
|
Understanding input_shape parameter in LSTM with Keras
|
I know it is not direct answer to your question. This is a simplified example with just one LSTM cell, helping me understand the reshape operation for the input data.
from keras.models import Model
fr
|
Understanding input_shape parameter in LSTM with Keras
I know it is not direct answer to your question. This is a simplified example with just one LSTM cell, helping me understand the reshape operation for the input data.
from keras.models import Model
from keras.layers import Input
from keras.layers import LSTM
import numpy as np
# define model
inputs1 = Input(shape=(2, 3))
lstm1, state_h, state_c = LSTM(1, return_sequences=True, return_state=True)(inputs1)
model = Model(inputs=inputs1, outputs=[lstm1, state_h, state_c])
# define input data
data = np.random.rand(2, 3)
data = data.reshape((1,2,3))
# make and show prediction
print(model.predict(data))
This would be an example of the LSTM network with just a single LSTM cell and with the input data of specific shape.
As it turns out, we are just predicting in here, training is not present for simplicity, but look how we needed to reshape the data (to add additional dimension) before the predict method.
|
Understanding input_shape parameter in LSTM with Keras
I know it is not direct answer to your question. This is a simplified example with just one LSTM cell, helping me understand the reshape operation for the input data.
from keras.models import Model
fr
|
6,150
|
Finding Quartiles in R
|
Your textbook is confused. Very few people or software define quartiles this way. (It tends to make the first quartile too small and the third quartile too large.)
The quantile function in R implements nine different ways to compute quantiles! To see which of them, if any, correspond to this method, let's start by implementing it. From the description we can write an algorithm, first mathematically and then in R:
Order the data $x_1 \le x_2 \le \cdots \le x_n$.
For any set of data the median is its middle value when there are an odd number of values; otherwise it is the average of the two middle values when there are an even number of values. R's median function calculates this.
The index of the middle value is $m = (n+1)/2$. When it is not an integer, $(x_l + x_u)/2$ is the median, where $l$ and $u$ are $m$ rounded down and up. Otherwise when $m$ is an integer, $x_m$ is the median. In that case take $l=m-1$ and $u=m+1$. In either case $l$ is the index of the data value immediately to the left of the median and $u$ is the index of the data value immediately to the right of the median.
The "first quartile" is the median of all $x_i$ for which $i \le l$. The "third quartile" is the median of $(x_i)$ for which $i \ge u$.
Here is an implementation. It can help you do your exercises in this textbook.
quart <- function(x) {
x <- sort(x)
n <- length(x)
m <- (n+1)/2
if (floor(m) != m) {
l <- m-1/2; u <- m+1/2
} else {
l <- m-1; u <- m+1
}
c(Q1=median(x[1:l]), Q3=median(x[u:n]))
}
For instance, the output of quart(c(6,7,8,9,10,15,16,16,20,20,23,33,50,58,104)) agrees with the text:
Q1 Q3
9 33
Let's compute quartiles for some small datasets using all ten methods: the nine in R and the textbook's:
y <- matrix(NA, 2, 10)
rownames(y) <- c("Q1", "Q3")
colnames(y) <- c(1:9, "Quart")
for (n in 3:5) {
j <- 1
for (i in 1:9) {
y[, i] <- quantile(1:n, probs=c(1/4, 3/4), type=i)
}
y[, 10] <- quart(1:n)
cat("\n", n, ":\n")
print(y, digits=2)
}
When you run this and check, you will find that the textbook values do not agree with any of the R output for all three sample sizes. (The pattern of disagreements continues in cycles of period three, showing that the problem persists no matter how large the sample may be.)
The textbook might have misconstrued John Tukey's method of computing "hinges" (aka "fourths"). The difference is that when splitting the dataset around the median, he includes the median in both halves. That would produce $9.5$ and $28$ for the example dataset.
|
Finding Quartiles in R
|
Your textbook is confused. Very few people or software define quartiles this way. (It tends to make the first quartile too small and the third quartile too large.)
The quantile function in R impleme
|
Finding Quartiles in R
Your textbook is confused. Very few people or software define quartiles this way. (It tends to make the first quartile too small and the third quartile too large.)
The quantile function in R implements nine different ways to compute quantiles! To see which of them, if any, correspond to this method, let's start by implementing it. From the description we can write an algorithm, first mathematically and then in R:
Order the data $x_1 \le x_2 \le \cdots \le x_n$.
For any set of data the median is its middle value when there are an odd number of values; otherwise it is the average of the two middle values when there are an even number of values. R's median function calculates this.
The index of the middle value is $m = (n+1)/2$. When it is not an integer, $(x_l + x_u)/2$ is the median, where $l$ and $u$ are $m$ rounded down and up. Otherwise when $m$ is an integer, $x_m$ is the median. In that case take $l=m-1$ and $u=m+1$. In either case $l$ is the index of the data value immediately to the left of the median and $u$ is the index of the data value immediately to the right of the median.
The "first quartile" is the median of all $x_i$ for which $i \le l$. The "third quartile" is the median of $(x_i)$ for which $i \ge u$.
Here is an implementation. It can help you do your exercises in this textbook.
quart <- function(x) {
x <- sort(x)
n <- length(x)
m <- (n+1)/2
if (floor(m) != m) {
l <- m-1/2; u <- m+1/2
} else {
l <- m-1; u <- m+1
}
c(Q1=median(x[1:l]), Q3=median(x[u:n]))
}
For instance, the output of quart(c(6,7,8,9,10,15,16,16,20,20,23,33,50,58,104)) agrees with the text:
Q1 Q3
9 33
Let's compute quartiles for some small datasets using all ten methods: the nine in R and the textbook's:
y <- matrix(NA, 2, 10)
rownames(y) <- c("Q1", "Q3")
colnames(y) <- c(1:9, "Quart")
for (n in 3:5) {
j <- 1
for (i in 1:9) {
y[, i] <- quantile(1:n, probs=c(1/4, 3/4), type=i)
}
y[, 10] <- quart(1:n)
cat("\n", n, ":\n")
print(y, digits=2)
}
When you run this and check, you will find that the textbook values do not agree with any of the R output for all three sample sizes. (The pattern of disagreements continues in cycles of period three, showing that the problem persists no matter how large the sample may be.)
The textbook might have misconstrued John Tukey's method of computing "hinges" (aka "fourths"). The difference is that when splitting the dataset around the median, he includes the median in both halves. That would produce $9.5$ and $28$ for the example dataset.
|
Finding Quartiles in R
Your textbook is confused. Very few people or software define quartiles this way. (It tends to make the first quartile too small and the third quartile too large.)
The quantile function in R impleme
|
6,151
|
Finding Quartiles in R
|
Within the field of statistics (which I teach, but in which I am not a researcher), quartile calculations are particularly ambiguous (in a way that is not necessarily true of quantiles, more generally). This has a lot of history behind it, in part because of the use (and perhaps abuse) of inter-quartile range (IQR), which is insensitive to outliers, as a check or alternative to standard deviation. It remains an open contest, with three distinctive methods for computing Q1 and Q3 being co-canonical.
As is often the case, the Wikipedia article has a reasonable summary:
https://en.m.wikipedia.org/wiki/Quartile
The Larson and Farber text, like most elementary statistics texts, uses what's described in the Wikipedia article as "Method 1." If I follow the descriptions above, r uses "Method 3". You'll have to decide for yourself which is canonically appropriate in your own field.
|
Finding Quartiles in R
|
Within the field of statistics (which I teach, but in which I am not a researcher), quartile calculations are particularly ambiguous (in a way that is not necessarily true of quantiles, more generally
|
Finding Quartiles in R
Within the field of statistics (which I teach, but in which I am not a researcher), quartile calculations are particularly ambiguous (in a way that is not necessarily true of quantiles, more generally). This has a lot of history behind it, in part because of the use (and perhaps abuse) of inter-quartile range (IQR), which is insensitive to outliers, as a check or alternative to standard deviation. It remains an open contest, with three distinctive methods for computing Q1 and Q3 being co-canonical.
As is often the case, the Wikipedia article has a reasonable summary:
https://en.m.wikipedia.org/wiki/Quartile
The Larson and Farber text, like most elementary statistics texts, uses what's described in the Wikipedia article as "Method 1." If I follow the descriptions above, r uses "Method 3". You'll have to decide for yourself which is canonically appropriate in your own field.
|
Finding Quartiles in R
Within the field of statistics (which I teach, but in which I am not a researcher), quartile calculations are particularly ambiguous (in a way that is not necessarily true of quantiles, more generally
|
6,152
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
|
The main reason that $t$-SNE is not used in classification models is that it does not learn a function from the original space to the new (lower) dimensional one. As such, when we would try to use our classifier on new / unseen data we will not be able to map / pre-process these new data according to the previous $t$-SNE results.
There is work on training a deep neural network to approximate $t$-SNE results (e.g., the "parametric" $t$-SNE paper) but this work has been superseded in part by the existence of (deep) autoencoders. Autoencoders are starting to be used as input / pre-processors to classifiers (especially DNN) exactly because they get very good performance in training as well as generalise naturally to new data.
$t$-SNE can be potentially used if we use a non-distance based clustering techniques like FMM (Finite Mixture Models) or DBSCAN (Density-based Models). As you correctly note, in such cases, the $t$-SNE output can quite helpful. The issue in these use cases is that some people might try to read into the cluster placement and not only the cluster membership. As the global distances are lost, drawing conclusions from cluster placement can lead to bogus insights. Notice that just saying: "hey, we found all the 1s cluster together" does not offer great value if cannot say what they are far from. If we just wanted to find the 1's we might as well have used classification to begin with (which bring us back to the use of autoencoders).
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
|
The main reason that $t$-SNE is not used in classification models is that it does not learn a function from the original space to the new (lower) dimensional one. As such, when we would try to use our
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
The main reason that $t$-SNE is not used in classification models is that it does not learn a function from the original space to the new (lower) dimensional one. As such, when we would try to use our classifier on new / unseen data we will not be able to map / pre-process these new data according to the previous $t$-SNE results.
There is work on training a deep neural network to approximate $t$-SNE results (e.g., the "parametric" $t$-SNE paper) but this work has been superseded in part by the existence of (deep) autoencoders. Autoencoders are starting to be used as input / pre-processors to classifiers (especially DNN) exactly because they get very good performance in training as well as generalise naturally to new data.
$t$-SNE can be potentially used if we use a non-distance based clustering techniques like FMM (Finite Mixture Models) or DBSCAN (Density-based Models). As you correctly note, in such cases, the $t$-SNE output can quite helpful. The issue in these use cases is that some people might try to read into the cluster placement and not only the cluster membership. As the global distances are lost, drawing conclusions from cluster placement can lead to bogus insights. Notice that just saying: "hey, we found all the 1s cluster together" does not offer great value if cannot say what they are far from. If we just wanted to find the 1's we might as well have used classification to begin with (which bring us back to the use of autoencoders).
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
The main reason that $t$-SNE is not used in classification models is that it does not learn a function from the original space to the new (lower) dimensional one. As such, when we would try to use our
|
6,153
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
|
t-SNE does not preserve distances, but it basically estimates probability distributions. In theory, the t-SNE algorithms maps the input to a map space of 2 or 3 dimensions. The input space is assumed to be a Gaussian distribution and the map space a t-distribution. The loss function used is the KL Divergence between the two distributions which is minimized using gradient descent.
According to Laurens van der Maaten who is a co-author of t-SNE
t-SNE does not retain distances but probabilities, so measuring some
error between the Euclidean distances in high-D and low-D is useless.
Reference:
https://lvdmaaten.github.io/tsne/
https://www.oreilly.com/learning/an-illustrated-introduction-to-the-t-sne-algorithm
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
|
t-SNE does not preserve distances, but it basically estimates probability distributions. In theory, the t-SNE algorithms maps the input to a map space of 2 or 3 dimensions. The input space is assumed
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
t-SNE does not preserve distances, but it basically estimates probability distributions. In theory, the t-SNE algorithms maps the input to a map space of 2 or 3 dimensions. The input space is assumed to be a Gaussian distribution and the map space a t-distribution. The loss function used is the KL Divergence between the two distributions which is minimized using gradient descent.
According to Laurens van der Maaten who is a co-author of t-SNE
t-SNE does not retain distances but probabilities, so measuring some
error between the Euclidean distances in high-D and low-D is useless.
Reference:
https://lvdmaaten.github.io/tsne/
https://www.oreilly.com/learning/an-illustrated-introduction-to-the-t-sne-algorithm
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
t-SNE does not preserve distances, but it basically estimates probability distributions. In theory, the t-SNE algorithms maps the input to a map space of 2 or 3 dimensions. The input space is assumed
|
6,154
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
|
As a general statement: given a sufficiently powerful (/suitable) classifier, or cluster-er, one would never apply any dimensionality reduction.
Dimensionality reduction loses information.
Since such a cluster-er or classifier (esp classifiers, less so clusterers),
internally incorperates some form of projection to a meaningful space already.
And Dimensionality reduction is also projection to a (hopefuly) meaningful space.
But dimensionality reduction has to do so in a uninformed way -- it does not know what task you are reducing for.
This is especially true for classification, where you have outright supervised information.
But it also applies to clustering, where the space one would want to project to for clustering is better defined (for this algorithm) than just "have less dimensions). @usεr11852's answer talks about this.
As I said dimensionality reduction does not know what task you are reducing for -- you inform it in your choice of which dimensionality reduction algorithm you to use.
So often rather than adding a dimensionality reduction step as preprocessing before clustering/classification, one is better to use a different classifier/cluster-er that incorperates a useful projection.
One thing dimentionality reduction does have going for it in this though is its unsupervised nature in creating the projection to the (hopefully) meaningful space. Which is useful if you have little label data.
But there are often other methods that are closely linked to your classifier (e.g. for neural networks, using autoencoder e.g. deep belief network pretraining) that are going to work better, because they are designed with that final task in mind.
Not the more general task of dimensionality reduction.
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
|
As a general statement: given a sufficiently powerful (/suitable) classifier, or cluster-er, one would never apply any dimensionality reduction.
Dimensionality reduction loses information.
Since such
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
As a general statement: given a sufficiently powerful (/suitable) classifier, or cluster-er, one would never apply any dimensionality reduction.
Dimensionality reduction loses information.
Since such a cluster-er or classifier (esp classifiers, less so clusterers),
internally incorperates some form of projection to a meaningful space already.
And Dimensionality reduction is also projection to a (hopefuly) meaningful space.
But dimensionality reduction has to do so in a uninformed way -- it does not know what task you are reducing for.
This is especially true for classification, where you have outright supervised information.
But it also applies to clustering, where the space one would want to project to for clustering is better defined (for this algorithm) than just "have less dimensions). @usεr11852's answer talks about this.
As I said dimensionality reduction does not know what task you are reducing for -- you inform it in your choice of which dimensionality reduction algorithm you to use.
So often rather than adding a dimensionality reduction step as preprocessing before clustering/classification, one is better to use a different classifier/cluster-er that incorperates a useful projection.
One thing dimentionality reduction does have going for it in this though is its unsupervised nature in creating the projection to the (hopefully) meaningful space. Which is useful if you have little label data.
But there are often other methods that are closely linked to your classifier (e.g. for neural networks, using autoencoder e.g. deep belief network pretraining) that are going to work better, because they are designed with that final task in mind.
Not the more general task of dimensionality reduction.
|
Why is t-SNE not used as a dimensionality reduction technique for clustering or classification?
As a general statement: given a sufficiently powerful (/suitable) classifier, or cluster-er, one would never apply any dimensionality reduction.
Dimensionality reduction loses information.
Since such
|
6,155
|
What are the measure for accuracy of multilabel data?
|
(1) gives a nice overview:
The Wikipedia page n multi-label classification contains a section on the evaluation metrics as well.
I would add a warning that in the multilabel setting, accuracy is ambiguous: it might either refer to the exact match ratio or the Hamming score (see this post). Unfortunately, many papers use the term "accuracy".
(1) Sorower, Mohammad S. "A literature survey on algorithms for multi-label learning." Oregon State University, Corvallis (2010).
|
What are the measure for accuracy of multilabel data?
|
(1) gives a nice overview:
The Wikipedia page n multi-label classification contains a section on the evaluation metrics as well.
I would add a warning that in the multilabel setting, accuracy is amb
|
What are the measure for accuracy of multilabel data?
(1) gives a nice overview:
The Wikipedia page n multi-label classification contains a section on the evaluation metrics as well.
I would add a warning that in the multilabel setting, accuracy is ambiguous: it might either refer to the exact match ratio or the Hamming score (see this post). Unfortunately, many papers use the term "accuracy".
(1) Sorower, Mohammad S. "A literature survey on algorithms for multi-label learning." Oregon State University, Corvallis (2010).
|
What are the measure for accuracy of multilabel data?
(1) gives a nice overview:
The Wikipedia page n multi-label classification contains a section on the evaluation metrics as well.
I would add a warning that in the multilabel setting, accuracy is amb
|
6,156
|
What are the measure for accuracy of multilabel data?
|
The Hamming Loss is probably the most widely used loss function in multi-label classification.
Have a look at Empirical Studies on Multi-label Classification and Multi-Label Classification: An Overview, both of which discuss this.
|
What are the measure for accuracy of multilabel data?
|
The Hamming Loss is probably the most widely used loss function in multi-label classification.
Have a look at Empirical Studies on Multi-label Classification and Multi-Label Classification: An Overvi
|
What are the measure for accuracy of multilabel data?
The Hamming Loss is probably the most widely used loss function in multi-label classification.
Have a look at Empirical Studies on Multi-label Classification and Multi-Label Classification: An Overview, both of which discuss this.
|
What are the measure for accuracy of multilabel data?
The Hamming Loss is probably the most widely used loss function in multi-label classification.
Have a look at Empirical Studies on Multi-label Classification and Multi-Label Classification: An Overvi
|
6,157
|
What are the measure for accuracy of multilabel data?
|
Correctly Predicted is the intersection between the set of suggested labels and the set expected one. Total Instances is the union of the sets above (no duplicate count).
So given a single example where you predict classes A, G, E and the test case has E, A, H, P as the correct ones you end up with Accuracy = Intersection{(A,G,E), (E,A,H,P)} / Union{(A,G,E), (E,A,H,P)} = 2 / 5
|
What are the measure for accuracy of multilabel data?
|
Correctly Predicted is the intersection between the set of suggested labels and the set expected one. Total Instances is the union of the sets above (no duplicate count).
So given a single example whe
|
What are the measure for accuracy of multilabel data?
Correctly Predicted is the intersection between the set of suggested labels and the set expected one. Total Instances is the union of the sets above (no duplicate count).
So given a single example where you predict classes A, G, E and the test case has E, A, H, P as the correct ones you end up with Accuracy = Intersection{(A,G,E), (E,A,H,P)} / Union{(A,G,E), (E,A,H,P)} = 2 / 5
|
What are the measure for accuracy of multilabel data?
Correctly Predicted is the intersection between the set of suggested labels and the set expected one. Total Instances is the union of the sets above (no duplicate count).
So given a single example whe
|
6,158
|
Guideline to select the hyperparameters in Deep Learning
|
There are basically four methods:
Manual Search: Using knowledge you have about the problem guess parameters and observe the result. Based on that result tweak the parameters. Repeat this process until you find parameters that work well or you run out of time.
Grid Search: Using knowledge you have about the problem identify ranges for the hyperparameters. Then select several points from those ranges, usually uniformly distributed. Train your network using every combination of parameters and select the combination that performs best. Alternatively you can repeat your search on a more narrow domain centered around the parameters that perform the best.
Random Search: Like grid search you use knowledge of the problem to identify ranges for the hyperparameters. However instead of picking values from those ranges in a methodical manner you instead select them at random. Repeat this process until you find parameters that work well or use what you learn to narrow your search. In the paper Random Search for Hyper-Parameter Optimization Dr. Bengio proposes this be the baseline method against which all other methods should be compared and shows that it tends to work better than the other methods.
Bayesian Optimization: More recent work has been focus on improving upon these other approaches by using the information gained from any given experiment to decide how to adjust the hyper parameters for the next experiment. An example of this work would be Practical Bayesian Optimization of Machine Learning Algorithms by Adams et al.
|
Guideline to select the hyperparameters in Deep Learning
|
There are basically four methods:
Manual Search: Using knowledge you have about the problem guess parameters and observe the result. Based on that result tweak the parameters. Repeat this process unt
|
Guideline to select the hyperparameters in Deep Learning
There are basically four methods:
Manual Search: Using knowledge you have about the problem guess parameters and observe the result. Based on that result tweak the parameters. Repeat this process until you find parameters that work well or you run out of time.
Grid Search: Using knowledge you have about the problem identify ranges for the hyperparameters. Then select several points from those ranges, usually uniformly distributed. Train your network using every combination of parameters and select the combination that performs best. Alternatively you can repeat your search on a more narrow domain centered around the parameters that perform the best.
Random Search: Like grid search you use knowledge of the problem to identify ranges for the hyperparameters. However instead of picking values from those ranges in a methodical manner you instead select them at random. Repeat this process until you find parameters that work well or use what you learn to narrow your search. In the paper Random Search for Hyper-Parameter Optimization Dr. Bengio proposes this be the baseline method against which all other methods should be compared and shows that it tends to work better than the other methods.
Bayesian Optimization: More recent work has been focus on improving upon these other approaches by using the information gained from any given experiment to decide how to adjust the hyper parameters for the next experiment. An example of this work would be Practical Bayesian Optimization of Machine Learning Algorithms by Adams et al.
|
Guideline to select the hyperparameters in Deep Learning
There are basically four methods:
Manual Search: Using knowledge you have about the problem guess parameters and observe the result. Based on that result tweak the parameters. Repeat this process unt
|
6,159
|
Guideline to select the hyperparameters in Deep Learning
|
A wide variety of methods exist. They can be largely partitioned in random/undirected search methods (like grid search or random search) and direct methods. Be aware, though, that they all require testing a considerable amount of hyperparameter settings unless you get lucky (hundreds at least, depends on the number of parameters).
In the class of direct methods, several distinct approaches can be identified:
derivative free methods, for example the Nelder-Mead simplex or DIRECT
evolutionary methods, such as CMA-ES and particle swarms
model-based approaches, e.g. EGO and sequential Kriging
You may want to look into Optunity, a Python package which offers a variety of solvers for hyperparameter tuning (everything I mentioned except EGO and Kriging, for now). Optunity will be available for MATLAB and R soon. Disclaimer: I am the main developer of this package.
Based on my personal experience, evolutionary methods are very powerful for these types of problems.
|
Guideline to select the hyperparameters in Deep Learning
|
A wide variety of methods exist. They can be largely partitioned in random/undirected search methods (like grid search or random search) and direct methods. Be aware, though, that they all require tes
|
Guideline to select the hyperparameters in Deep Learning
A wide variety of methods exist. They can be largely partitioned in random/undirected search methods (like grid search or random search) and direct methods. Be aware, though, that they all require testing a considerable amount of hyperparameter settings unless you get lucky (hundreds at least, depends on the number of parameters).
In the class of direct methods, several distinct approaches can be identified:
derivative free methods, for example the Nelder-Mead simplex or DIRECT
evolutionary methods, such as CMA-ES and particle swarms
model-based approaches, e.g. EGO and sequential Kriging
You may want to look into Optunity, a Python package which offers a variety of solvers for hyperparameter tuning (everything I mentioned except EGO and Kriging, for now). Optunity will be available for MATLAB and R soon. Disclaimer: I am the main developer of this package.
Based on my personal experience, evolutionary methods are very powerful for these types of problems.
|
Guideline to select the hyperparameters in Deep Learning
A wide variety of methods exist. They can be largely partitioned in random/undirected search methods (like grid search or random search) and direct methods. Be aware, though, that they all require tes
|
6,160
|
Guideline to select the hyperparameters in Deep Learning
|
Look no further! Yoshua Bengio published one of my favorite applied papers, one that I recommend to all new machine learning engineers when they start training neural nets: Practical recommendations for gradient-based training of deep architectures. To get his perspective on hyperparameter turning: including learning rate, learning rate schedule, early stopping, minibatch size, number of hidden layers, etc., see Section 3.
|
Guideline to select the hyperparameters in Deep Learning
|
Look no further! Yoshua Bengio published one of my favorite applied papers, one that I recommend to all new machine learning engineers when they start training neural nets: Practical recommendations f
|
Guideline to select the hyperparameters in Deep Learning
Look no further! Yoshua Bengio published one of my favorite applied papers, one that I recommend to all new machine learning engineers when they start training neural nets: Practical recommendations for gradient-based training of deep architectures. To get his perspective on hyperparameter turning: including learning rate, learning rate schedule, early stopping, minibatch size, number of hidden layers, etc., see Section 3.
|
Guideline to select the hyperparameters in Deep Learning
Look no further! Yoshua Bengio published one of my favorite applied papers, one that I recommend to all new machine learning engineers when they start training neural nets: Practical recommendations f
|
6,161
|
Who invented stochastic gradient descent?
|
Stochastic Gradient Descent is preceded by Stochastic Approximation as first described by Robbins and Monro in their paper, A Stochastic Approximation Method. Kiefer and Wolfowitz subsequently published their paper, *Stochastic Estimation of the Maximum of a Regression Function* which is more recognizable to people familiar with the ML variant of Stochastic Approximation (i.e Stochastic Gradient Descent), as pointed out by Mark Stone in the comments. The 60's saw plenty of research along that vein -- Dvoretzky, Powell, Blum all published results that we take for granted today. It is a relatively minor leap to get from the Robbins and Monro method to the Kiefer Wolfowitz method, and merely a reframing of the problem to then get to Stochastic Gradient Descent (for regression problems). The above papers are widely cited as being the antecedents of Stochastic Gradient Descent, as mentioned in this review paper by Nocedal, Bottou, and Curtis, which provides a brief historical perspective from a Machine Learning point of view.
I believe that Kushner and Yin in their book Stochastic Approximation and Recursive Algorithms and Applications suggest that the notion had been used in control theory as far back as the 40's, but I don't recall if they had a citation for that or if it was anecdotal, nor do I have access to their book to confirm this.
Herbert Robbins and Sutton Monro A Stochastic Approximation Method
The Annals of Mathematical Statistics, Vol. 22, No. 3. (Sep., 1951), pp. 400-407, DOI: 10.1214/aoms/1177729586
J. Kiefer and J. Wolfowitz Stochastic Estimation of the Maximum of a Regression Function Ann. Math. Statist. Volume 23, Number 3 (1952), 462-466, DOI: 10.1214/aoms/1177729392
Leon Bottou and Frank E. Curtis and Jorge Nocedal Optimization Methods for Large-Scale Machine Learning, Technical Report, arXiv:1606.04838
|
Who invented stochastic gradient descent?
|
Stochastic Gradient Descent is preceded by Stochastic Approximation as first described by Robbins and Monro in their paper, A Stochastic Approximation Method. Kiefer and Wolfowitz subsequently publish
|
Who invented stochastic gradient descent?
Stochastic Gradient Descent is preceded by Stochastic Approximation as first described by Robbins and Monro in their paper, A Stochastic Approximation Method. Kiefer and Wolfowitz subsequently published their paper, *Stochastic Estimation of the Maximum of a Regression Function* which is more recognizable to people familiar with the ML variant of Stochastic Approximation (i.e Stochastic Gradient Descent), as pointed out by Mark Stone in the comments. The 60's saw plenty of research along that vein -- Dvoretzky, Powell, Blum all published results that we take for granted today. It is a relatively minor leap to get from the Robbins and Monro method to the Kiefer Wolfowitz method, and merely a reframing of the problem to then get to Stochastic Gradient Descent (for regression problems). The above papers are widely cited as being the antecedents of Stochastic Gradient Descent, as mentioned in this review paper by Nocedal, Bottou, and Curtis, which provides a brief historical perspective from a Machine Learning point of view.
I believe that Kushner and Yin in their book Stochastic Approximation and Recursive Algorithms and Applications suggest that the notion had been used in control theory as far back as the 40's, but I don't recall if they had a citation for that or if it was anecdotal, nor do I have access to their book to confirm this.
Herbert Robbins and Sutton Monro A Stochastic Approximation Method
The Annals of Mathematical Statistics, Vol. 22, No. 3. (Sep., 1951), pp. 400-407, DOI: 10.1214/aoms/1177729586
J. Kiefer and J. Wolfowitz Stochastic Estimation of the Maximum of a Regression Function Ann. Math. Statist. Volume 23, Number 3 (1952), 462-466, DOI: 10.1214/aoms/1177729392
Leon Bottou and Frank E. Curtis and Jorge Nocedal Optimization Methods for Large-Scale Machine Learning, Technical Report, arXiv:1606.04838
|
Who invented stochastic gradient descent?
Stochastic Gradient Descent is preceded by Stochastic Approximation as first described by Robbins and Monro in their paper, A Stochastic Approximation Method. Kiefer and Wolfowitz subsequently publish
|
6,162
|
Who invented stochastic gradient descent?
|
See
Rosenblatt F. The perceptron: A probabilistic model for information
storage and organization in the brain. Psychological review. 1958
Nov;65(6):386.
I am not sure if SGD was invented before this in optimization literature—probably was—but here I believe he describes an application of SGD to train a perceptron.
If the system is under a state of positive reinforcement, then a
positive AV is added to the values of all active A-units in the
source-sets of "on" responses, while a negative A V is added to the
active units in the source- sets of "off" responses.
He calls these "two types of reinforcement".
He also references a book with more on these "bivalent systems".
Rosenblatt F. The perceptron: a theory of statistical separability in
cognitive systems (Project Para). Cornell Aeronautical Laboratory;
1958.
|
Who invented stochastic gradient descent?
|
See
Rosenblatt F. The perceptron: A probabilistic model for information
storage and organization in the brain. Psychological review. 1958
Nov;65(6):386.
I am not sure if SGD was invented before
|
Who invented stochastic gradient descent?
See
Rosenblatt F. The perceptron: A probabilistic model for information
storage and organization in the brain. Psychological review. 1958
Nov;65(6):386.
I am not sure if SGD was invented before this in optimization literature—probably was—but here I believe he describes an application of SGD to train a perceptron.
If the system is under a state of positive reinforcement, then a
positive AV is added to the values of all active A-units in the
source-sets of "on" responses, while a negative A V is added to the
active units in the source- sets of "off" responses.
He calls these "two types of reinforcement".
He also references a book with more on these "bivalent systems".
Rosenblatt F. The perceptron: a theory of statistical separability in
cognitive systems (Project Para). Cornell Aeronautical Laboratory;
1958.
|
Who invented stochastic gradient descent?
See
Rosenblatt F. The perceptron: A probabilistic model for information
storage and organization in the brain. Psychological review. 1958
Nov;65(6):386.
I am not sure if SGD was invented before
|
6,163
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
Because a "completely analytical approach" has been requested, here is an exact solution. It also provides an alternative approach to solving the question at Probability to draw a black ball in a set of black and white balls with mixed replacement conditions.
The number of moves in the game, $X$, can be modeled as the sum of six independent realizations of Geometric$(p)$ variables with probabilities $p=1, 5/6, 4/6, 3/6, 2/6, 1/6$, each of them shifted by $1$ (because a geometric variable counts only the rolls preceding a success and we must also count the rolls on which successes were observed). By computing with the geometric distribution, we will therefore obtain answers that are $6$ less than the desired ones and therefore must be sure to add $6$ back at the end.
The probability generating function (pgf) of such a geometric variable with parameter $p$ is
$$f(z, p) = \frac{p}{1-(1-p)z}.$$
Therefore the pgf for the sum of these six variables is
$$g(z) = \prod_{i=1}^6 f(z, i/6) = 6^{-z-4} \left(-5\ 2^{z+5}+10\ 3^{z+4}-5\ 4^{z+4}+5^{z+4}+5\right).$$
(The product can be computed in this closed form by separating it into five terms via partial fractions.)
The cumulative distribution function (CDF) is obtained from the partial sums of $g$ (as a power series in $z$), which amounts to summing geometric series, and is given by
$$F(z) = 6^{-z-4} \left(-(1)\ 1^{z+4} + (5)\ 2^{z+4}-(10)\
3^{z+4}+(10)\ 4^{z+4}-(5)\ 5^{z+4}+(1)\ 6^{z+4}\right).$$
(I have written this expression in a form that suggests an alternate derivation via the Principle of Inclusion-Exclusion.)
From this we obtain the expected number of moves in the game (answering the first question) as
$$\mathbb{E}(6+X) = 6+\sum_{i=1}^\infty \left(1-F(i)\right) = \frac{147}{10}.$$
The CDF of the maximum of $m$ independent versions of $X$ is $F(z)^m$ (and from this we can, in principle, answer any probability questions about the maximum we like, such as what is its variance, what is its 99th percentile, and so on). With $m=4$ we obtain an expectation of
$$ 6+\sum_{i=1}^\infty \left(1-F(i)^4\right) \approx 21.4820363\ldots.$$
(The value is a rational fraction which, in reduced form, has a 71-digit denominator.) The standard deviation is $6.77108\ldots.$ Here is a plot of the probability mass function of the maximum for four players (it has been shifted by $6$ already):
As one would expect, it is positively skewed. The mode is at $18$ rolls. It is rare that the last person to finish will take more than $50$ rolls (it is about $0.3\%$).
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
Because a "completely analytical approach" has been requested, here is an exact solution. It also provides an alternative approach to solving the question at Probability to draw a black ball in a set
|
How often do you have to roll a 6-sided die to obtain every number at least once?
Because a "completely analytical approach" has been requested, here is an exact solution. It also provides an alternative approach to solving the question at Probability to draw a black ball in a set of black and white balls with mixed replacement conditions.
The number of moves in the game, $X$, can be modeled as the sum of six independent realizations of Geometric$(p)$ variables with probabilities $p=1, 5/6, 4/6, 3/6, 2/6, 1/6$, each of them shifted by $1$ (because a geometric variable counts only the rolls preceding a success and we must also count the rolls on which successes were observed). By computing with the geometric distribution, we will therefore obtain answers that are $6$ less than the desired ones and therefore must be sure to add $6$ back at the end.
The probability generating function (pgf) of such a geometric variable with parameter $p$ is
$$f(z, p) = \frac{p}{1-(1-p)z}.$$
Therefore the pgf for the sum of these six variables is
$$g(z) = \prod_{i=1}^6 f(z, i/6) = 6^{-z-4} \left(-5\ 2^{z+5}+10\ 3^{z+4}-5\ 4^{z+4}+5^{z+4}+5\right).$$
(The product can be computed in this closed form by separating it into five terms via partial fractions.)
The cumulative distribution function (CDF) is obtained from the partial sums of $g$ (as a power series in $z$), which amounts to summing geometric series, and is given by
$$F(z) = 6^{-z-4} \left(-(1)\ 1^{z+4} + (5)\ 2^{z+4}-(10)\
3^{z+4}+(10)\ 4^{z+4}-(5)\ 5^{z+4}+(1)\ 6^{z+4}\right).$$
(I have written this expression in a form that suggests an alternate derivation via the Principle of Inclusion-Exclusion.)
From this we obtain the expected number of moves in the game (answering the first question) as
$$\mathbb{E}(6+X) = 6+\sum_{i=1}^\infty \left(1-F(i)\right) = \frac{147}{10}.$$
The CDF of the maximum of $m$ independent versions of $X$ is $F(z)^m$ (and from this we can, in principle, answer any probability questions about the maximum we like, such as what is its variance, what is its 99th percentile, and so on). With $m=4$ we obtain an expectation of
$$ 6+\sum_{i=1}^\infty \left(1-F(i)^4\right) \approx 21.4820363\ldots.$$
(The value is a rational fraction which, in reduced form, has a 71-digit denominator.) The standard deviation is $6.77108\ldots.$ Here is a plot of the probability mass function of the maximum for four players (it has been shifted by $6$ already):
As one would expect, it is positively skewed. The mode is at $18$ rolls. It is rare that the last person to finish will take more than $50$ rolls (it is about $0.3\%$).
|
How often do you have to roll a 6-sided die to obtain every number at least once?
Because a "completely analytical approach" has been requested, here is an exact solution. It also provides an alternative approach to solving the question at Probability to draw a black ball in a set
|
6,164
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
ThePawn has the right idea to attack the problem with a recurrence relationship. Consider a Markov chain with states $\{0, \dotsc, 6\}$ corresponding to the count of the number of distinct dice rolls that have happened. State 0 is the start state, and state 6 is the finish state. Then, the probability of transition from state $i$ to itself is $\frac{i}{6}$. The probability of transition from state $i$ to state $i+1$ is $\frac{6-i}{6}$. Therefore the hitting time of the finish state is
\begin{align}
\sum_{i=0}^5 \frac{6}{6-i} = 14.7
\end{align}
For the maximum of four trials, consider states that are quadruples. You want to find the expected hitting time for the target state $(6,6,6,6)$. The expected hitting time of any state $j$ is the weighted average for each source state $i$ of the expected hitting time $T_i$ plus the time to go from $i$ to $j$, weighted by $p_ip_{ij}$, the probability of arriving at state $i$ and moving to $j$. You can discover the hitting times and probabilities by dynamic programming. It's not so hard since there is a traversal order to fill in the hitting times and probabilities. For example, for two die: first calculate T and p for (0,0), then for (1,0), then (1, 1), (2, 0), then (2, 1), etc.
In Python:
import numpy as np
import itertools as it
from tools.decorator import memoized # A standard memoization decorator
SIDES = 6
@memoized
def get_t_and_p(state):
if all(s == 0 for s in state):
return 0, 1.0
n = len(state)
choices = [[s - 1, s] if s > 0 else [s]
for s in state]
ts = []
ps = []
for last_state in it.product(*choices):
if last_state == state:
continue
last_t, last_p = get_t_and_p(tuple(sorted(last_state)))
if last_p == 0.0:
continue
transition_p = 1.0
stay_p = 1.0
for ls, s in zip(last_state, state):
if ls < s:
transition_p *= (SIDES - ls) / SIDES
else:
transition_p *= ls / SIDES
stay_p *= ls / SIDES
if transition_p == 0.0:
continue
transition_time = 1 / (1 - stay_p)
ts.append(last_t + transition_time)
ps.append(last_p * transition_p / (1 - stay_p))
if len(ts) == 0:
return 0, 0.0
t = np.average(ts, weights=ps)
p = sum(ps)
return t, p
print(get_t_and_p((SIDES,) * 4)[0])
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
ThePawn has the right idea to attack the problem with a recurrence relationship. Consider a Markov chain with states $\{0, \dotsc, 6\}$ corresponding to the count of the number of distinct dice rolls
|
How often do you have to roll a 6-sided die to obtain every number at least once?
ThePawn has the right idea to attack the problem with a recurrence relationship. Consider a Markov chain with states $\{0, \dotsc, 6\}$ corresponding to the count of the number of distinct dice rolls that have happened. State 0 is the start state, and state 6 is the finish state. Then, the probability of transition from state $i$ to itself is $\frac{i}{6}$. The probability of transition from state $i$ to state $i+1$ is $\frac{6-i}{6}$. Therefore the hitting time of the finish state is
\begin{align}
\sum_{i=0}^5 \frac{6}{6-i} = 14.7
\end{align}
For the maximum of four trials, consider states that are quadruples. You want to find the expected hitting time for the target state $(6,6,6,6)$. The expected hitting time of any state $j$ is the weighted average for each source state $i$ of the expected hitting time $T_i$ plus the time to go from $i$ to $j$, weighted by $p_ip_{ij}$, the probability of arriving at state $i$ and moving to $j$. You can discover the hitting times and probabilities by dynamic programming. It's not so hard since there is a traversal order to fill in the hitting times and probabilities. For example, for two die: first calculate T and p for (0,0), then for (1,0), then (1, 1), (2, 0), then (2, 1), etc.
In Python:
import numpy as np
import itertools as it
from tools.decorator import memoized # A standard memoization decorator
SIDES = 6
@memoized
def get_t_and_p(state):
if all(s == 0 for s in state):
return 0, 1.0
n = len(state)
choices = [[s - 1, s] if s > 0 else [s]
for s in state]
ts = []
ps = []
for last_state in it.product(*choices):
if last_state == state:
continue
last_t, last_p = get_t_and_p(tuple(sorted(last_state)))
if last_p == 0.0:
continue
transition_p = 1.0
stay_p = 1.0
for ls, s in zip(last_state, state):
if ls < s:
transition_p *= (SIDES - ls) / SIDES
else:
transition_p *= ls / SIDES
stay_p *= ls / SIDES
if transition_p == 0.0:
continue
transition_time = 1 / (1 - stay_p)
ts.append(last_t + transition_time)
ps.append(last_p * transition_p / (1 - stay_p))
if len(ts) == 0:
return 0, 0.0
t = np.average(ts, weights=ps)
p = sum(ps)
return t, p
print(get_t_and_p((SIDES,) * 4)[0])
|
How often do you have to roll a 6-sided die to obtain every number at least once?
ThePawn has the right idea to attack the problem with a recurrence relationship. Consider a Markov chain with states $\{0, \dotsc, 6\}$ corresponding to the count of the number of distinct dice rolls
|
6,165
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
Quick and dirty Monte Carlo estimate in R of the length of a game for 1 player:
N = 1e5
sample_length = function(n) { # random game length
x = numeric(0)
while(length(unique(x)) < n) x[length(x)+1] = sample(1:n,1)
return(length(x))
}
game_lengths = replicate(N, sample_length(6))
Results: $\hat{\mu}=14.684$, $\hat{\sigma} = 6.24$, so a 95% confidence interval for the mean is $[14.645,14.722]$.
To determine the length of a four-player game, we can group the samples into fours and take the average minimum length over each group (you asked about the maximum, but I assume you meant the minimum since, the way I read it, the game ends when someone succeeds at getting all the numbers):
grouped_lengths = matrix(game_lengths, ncol=4)
min_lengths = apply(grouped_lengths, 1, min)
Results: $\hat{\mu}=9.44$, $\hat{\sigma} = 2.26$, so a 95% confidence interval for the mean is $[9.411,9.468]$.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
Quick and dirty Monte Carlo estimate in R of the length of a game for 1 player:
N = 1e5
sample_length = function(n) { # random game length
x = numeric(0)
while(length(unique(x)) < n) x[length(
|
How often do you have to roll a 6-sided die to obtain every number at least once?
Quick and dirty Monte Carlo estimate in R of the length of a game for 1 player:
N = 1e5
sample_length = function(n) { # random game length
x = numeric(0)
while(length(unique(x)) < n) x[length(x)+1] = sample(1:n,1)
return(length(x))
}
game_lengths = replicate(N, sample_length(6))
Results: $\hat{\mu}=14.684$, $\hat{\sigma} = 6.24$, so a 95% confidence interval for the mean is $[14.645,14.722]$.
To determine the length of a four-player game, we can group the samples into fours and take the average minimum length over each group (you asked about the maximum, but I assume you meant the minimum since, the way I read it, the game ends when someone succeeds at getting all the numbers):
grouped_lengths = matrix(game_lengths, ncol=4)
min_lengths = apply(grouped_lengths, 1, min)
Results: $\hat{\mu}=9.44$, $\hat{\sigma} = 2.26$, so a 95% confidence interval for the mean is $[9.411,9.468]$.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
Quick and dirty Monte Carlo estimate in R of the length of a game for 1 player:
N = 1e5
sample_length = function(n) { # random game length
x = numeric(0)
while(length(unique(x)) < n) x[length(
|
6,166
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
How about a recursive relation with respect to the remaining number $m$ of sides you have to obtain in order to win.
$$T_{1} = 6$$
$$T_{m} = 1 + \frac{6 - m}{6}T_{m} + \frac{m}{6}T_{m-1}$$
Basically, the last relation is saying that the number of time to roll the $m$ remaining different numbers is equal to $1$ plus:
$T_{m}$ if you roll the one of the $6 - m$ numbers already rolled (probability $\frac{6 - m}{6}$)
$T_{m-1}$ if you roll one of the $m$ remaining numbers (probability $\frac{m}{6}$)
Numerical application of this relation gives $14.7$.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
How about a recursive relation with respect to the remaining number $m$ of sides you have to obtain in order to win.
$$T_{1} = 6$$
$$T_{m} = 1 + \frac{6 - m}{6}T_{m} + \frac{m}{6}T_{m-1}$$
Basically,
|
How often do you have to roll a 6-sided die to obtain every number at least once?
How about a recursive relation with respect to the remaining number $m$ of sides you have to obtain in order to win.
$$T_{1} = 6$$
$$T_{m} = 1 + \frac{6 - m}{6}T_{m} + \frac{m}{6}T_{m-1}$$
Basically, the last relation is saying that the number of time to roll the $m$ remaining different numbers is equal to $1$ plus:
$T_{m}$ if you roll the one of the $6 - m$ numbers already rolled (probability $\frac{6 - m}{6}$)
$T_{m-1}$ if you roll one of the $m$ remaining numbers (probability $\frac{m}{6}$)
Numerical application of this relation gives $14.7$.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
How about a recursive relation with respect to the remaining number $m$ of sides you have to obtain in order to win.
$$T_{1} = 6$$
$$T_{m} = 1 + \frac{6 - m}{6}T_{m} + \frac{m}{6}T_{m-1}$$
Basically,
|
6,167
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
A simple and intuitive explanation to the first question:
You first need to roll any number. This is easy, it'll always take exactly 1 roll.
You then need to roll any number other than the first one. The chance of this happening is $\frac{5}{6}$, so it'll take $\frac{6}{5}$ (1.2) rolls on average.
You then need to roll any number other than the first two. The chance of this happening is $\frac{4}{6}$, so it'll take $\frac{6}{4}$ (1.5) rolls on average.
You then need to roll any number other than the first three. The chance of this happening is $\frac{3}{6}$, so it'll take $\frac{6}{3}$ (2) rolls on average.
And so on until we successfully complete our 6th roll:
$\frac{6}{6} + \frac{6}{5} + \frac{6}{4} + \frac{6}{3} + \frac{6}{2} + \frac{6}{1} = 14.7\ rolls$
This answer is similar to Neil G's answer, only, without the markov chain.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
A simple and intuitive explanation to the first question:
You first need to roll any number. This is easy, it'll always take exactly 1 roll.
You then need to roll any number other than the first one.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
A simple and intuitive explanation to the first question:
You first need to roll any number. This is easy, it'll always take exactly 1 roll.
You then need to roll any number other than the first one. The chance of this happening is $\frac{5}{6}$, so it'll take $\frac{6}{5}$ (1.2) rolls on average.
You then need to roll any number other than the first two. The chance of this happening is $\frac{4}{6}$, so it'll take $\frac{6}{4}$ (1.5) rolls on average.
You then need to roll any number other than the first three. The chance of this happening is $\frac{3}{6}$, so it'll take $\frac{6}{3}$ (2) rolls on average.
And so on until we successfully complete our 6th roll:
$\frac{6}{6} + \frac{6}{5} + \frac{6}{4} + \frac{6}{3} + \frac{6}{2} + \frac{6}{1} = 14.7\ rolls$
This answer is similar to Neil G's answer, only, without the markov chain.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
A simple and intuitive explanation to the first question:
You first need to roll any number. This is easy, it'll always take exactly 1 roll.
You then need to roll any number other than the first one.
|
6,168
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
the probability density function (or discrete equivalent) for getting the next new number is:
f = sum( p * ( 1 - p )^( i - 1 ) , i = 1 .. inf )
where p is the probability per roll, 1 when no numbers have been rolled, 5/6 after 1, 4/6 .. down to 1/6 for the last number
the expected value, mu = sum( i * p * ( 1 - p )^( i - 1 ), i = 1 .. inf )
letting n = i - 1, and bringing p outside the summation,
mu = p * sum( ( n + 1 ) * ( 1 - p )^n, n = 0 .. inf )
mu = p * sum( n(1-p)^n, n = 0 .. inf ) + p * sum( (1-p)^n, n = 0 .. inf )
mu = p * (1-p) / (1-p-1)^2 + p * 1/ (1-(1-p))
mu = p * ( 1 - p ) / p^2 + p/p
mu = ( 1 - p ) / p + p/p
mu = ( 1 - p + p ) / p
mu = 1 / p
The sum of the expected values (mus) for ps of 1, 5/6, 4/6, 3/6, 2/6, and 1/6 is 14.7 as previously reported, but 1/p per required number is general regardless of die size
similarly, we can calculate the standard deviation analytically
sigma^2 = sum( ( i - mu )^2 * p * ( 1 - p )^( i - 1 ), i = 1 .. inf )
I will spare you the algebra here, but sigma^2 = (1-p)/p^2
In the case of 6, the sum of sigma^2 for each step is 38.99 for a standard deviation of about 6.24, again, as simulated
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
the probability density function (or discrete equivalent) for getting the next new number is:
f = sum( p * ( 1 - p )^( i - 1 ) , i = 1 .. inf )
where p is the probability per roll, 1 when no numbers h
|
How often do you have to roll a 6-sided die to obtain every number at least once?
the probability density function (or discrete equivalent) for getting the next new number is:
f = sum( p * ( 1 - p )^( i - 1 ) , i = 1 .. inf )
where p is the probability per roll, 1 when no numbers have been rolled, 5/6 after 1, 4/6 .. down to 1/6 for the last number
the expected value, mu = sum( i * p * ( 1 - p )^( i - 1 ), i = 1 .. inf )
letting n = i - 1, and bringing p outside the summation,
mu = p * sum( ( n + 1 ) * ( 1 - p )^n, n = 0 .. inf )
mu = p * sum( n(1-p)^n, n = 0 .. inf ) + p * sum( (1-p)^n, n = 0 .. inf )
mu = p * (1-p) / (1-p-1)^2 + p * 1/ (1-(1-p))
mu = p * ( 1 - p ) / p^2 + p/p
mu = ( 1 - p ) / p + p/p
mu = ( 1 - p + p ) / p
mu = 1 / p
The sum of the expected values (mus) for ps of 1, 5/6, 4/6, 3/6, 2/6, and 1/6 is 14.7 as previously reported, but 1/p per required number is general regardless of die size
similarly, we can calculate the standard deviation analytically
sigma^2 = sum( ( i - mu )^2 * p * ( 1 - p )^( i - 1 ), i = 1 .. inf )
I will spare you the algebra here, but sigma^2 = (1-p)/p^2
In the case of 6, the sum of sigma^2 for each step is 38.99 for a standard deviation of about 6.24, again, as simulated
|
How often do you have to roll a 6-sided die to obtain every number at least once?
the probability density function (or discrete equivalent) for getting the next new number is:
f = sum( p * ( 1 - p )^( i - 1 ) , i = 1 .. inf )
where p is the probability per roll, 1 when no numbers h
|
6,169
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
Question 1 was:
How many times do you have to roll a six-sided dice until you get every number at least once?
Obviously, the correct answer must be 'infinite'.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
|
Question 1 was:
How many times do you have to roll a six-sided dice until you get every number at least once?
Obviously, the correct answer must be 'infinite'.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
Question 1 was:
How many times do you have to roll a six-sided dice until you get every number at least once?
Obviously, the correct answer must be 'infinite'.
|
How often do you have to roll a 6-sided die to obtain every number at least once?
Question 1 was:
How many times do you have to roll a six-sided dice until you get every number at least once?
Obviously, the correct answer must be 'infinite'.
|
6,170
|
What is the difference between "coefficient of determination" and "mean squared error"?
|
$R^2=1-\frac{SSE}{SST}$, where $SSE$ is the sum of squared error (residuals or deviations from the regression line) and $SST$ is the sum of squared deviations from the dependent's $Y$ mean.
$MSE=\frac{SSE}{n-m}$, where $n$ is the sample size and $m$ is the number of parameters in the model (including intercept, if any).
$R^2$ is a standardized measure of degree of predictedness, or fit, in the sample. $MSE$ is the estimate of variance of residuals, or non-fit, in the population. The two measures are clearly related, as seen in the most usual formula for adjusted $R^2$ (the estimate of $R^2$ for population):
$R_{adj}^2=1-(1-R^2)\frac{n-1}{n-m}=1-\frac{SSE/(n-m)}{SST/(n-1)}=1-\frac{MSE}{\sigma_y^2}$.
|
What is the difference between "coefficient of determination" and "mean squared error"?
|
$R^2=1-\frac{SSE}{SST}$, where $SSE$ is the sum of squared error (residuals or deviations from the regression line) and $SST$ is the sum of squared deviations from the dependent's $Y$ mean.
$MSE=\frac
|
What is the difference between "coefficient of determination" and "mean squared error"?
$R^2=1-\frac{SSE}{SST}$, where $SSE$ is the sum of squared error (residuals or deviations from the regression line) and $SST$ is the sum of squared deviations from the dependent's $Y$ mean.
$MSE=\frac{SSE}{n-m}$, where $n$ is the sample size and $m$ is the number of parameters in the model (including intercept, if any).
$R^2$ is a standardized measure of degree of predictedness, or fit, in the sample. $MSE$ is the estimate of variance of residuals, or non-fit, in the population. The two measures are clearly related, as seen in the most usual formula for adjusted $R^2$ (the estimate of $R^2$ for population):
$R_{adj}^2=1-(1-R^2)\frac{n-1}{n-m}=1-\frac{SSE/(n-m)}{SST/(n-1)}=1-\frac{MSE}{\sigma_y^2}$.
|
What is the difference between "coefficient of determination" and "mean squared error"?
$R^2=1-\frac{SSE}{SST}$, where $SSE$ is the sum of squared error (residuals or deviations from the regression line) and $SST$ is the sum of squared deviations from the dependent's $Y$ mean.
$MSE=\frac
|
6,171
|
Doing principal component analysis or factor analysis on binary data
|
The question of dichotomous or binary variables in PCA or Factor analysis is eternal. There are polar opinions from "it is illegal" to "it is alright", through something like "you may do it but you'll get too many factors". My own current opinion is as follows. First, I deem that binary observed variable is descrete and that it is improper to treat it in any way as continuous. Can this discrete variable give rise to factor or principal component?
Factor analysis (FA). Factor by definition is a continuous latent
that load observable variables (1, 2). Consequently, the latter cannot be
but continuous (or interval, more practically speaking) when enough
loaded by factor. Also, FA, due to its linear regressional nature,
assumes that the rest - not loaded - part, called uniqness, is
continuous either, and so it comes that observable variables should
be continuous even when loaded slightly. Thus, binary variables
cannot legislate themselves in FA. However, there are at least two ways round: (A) Assume the dichotomies as roughened continues
underlying variables and do FA with tetrachoric - rather than Pearson -
correlations; (B) Assume that factor loads a dichotomous variable not
linealrly but logistically and do Latent Trait Analysis (aka Item
Response Theory) instead of linear FA. Read more.
Principal Component Analysis (PCA). While having much in common
with FA, PCA is not a modeling but only a summarizing method.
Components do not load variables in the same conceptual sense as
factors load variables. In PCA, components load variables and
variables load components. This symmetry is because PCA per se is
merely a rotation of variables-axes in space. Binary variables won't
provide true continuity for a component by their own selves - since they are not continuous, but the
pseudocontinuity can be provided by the angle of PCA-rotation which can appear any.
Thus in PCA, and in contrast with FA, you can get seemingly continuous
dimensions (rotated axes) with purely binary variables (unrotated
axes) - angle is the cause of continuity$^1$.
It is debatable whether it is legal to compute mean for binary variables (if you take them as truly categorical features). Usually PCA is performed on covariances or correlations, which implies putting the pivot point of PCA-rotation in the (1) centroid (arithmetic mean). For binary data, it makes sense to consider, besides that, other and more natural for binary data locations for such pivot point, or origin: (2) no-attribute point (0,0) (if you treat your variables as "ordinal" binary), (3) L1 or Manhattan medoid point, (4) multivariate mode point$^2$.
Some related questions about FA or PCA of binary data: 1, 2, 3, 4, 5, 6. Answers there potentially may express opinions different from mine.
$^1$ Component scores computed in PCA of binary data, like object scores computed in MCA (multiple correspondence analysis) of nominal data, are just fractional coordinates for the granular data in a smooth Euclidean space mapping: these do not permit us to conclude that the categorical data have acquired authentic scale measurement through plain PCA. To have truly scale values, variables must be scale nature from the beginning, at input, or they must be specially quantified or assumed to have been binned (see). But in classic PCA or MCA the room for "continuity" emerges later on the level of summary statistics (such as association or frequency matrices) due to that countability is akin to measurability, both are "quantitative". And for that level entities - for variables as points or categories as points - their coordinates in the principal axes space are legitimately scale values indeed. But not for data points (data cases) of binary data, - their "scores" are pseudocontinuous values: not intrinsic measure, just some overlay coordinates.
$^2$ Demonstration of various versions of PCA with binary data depending on the location of the origin of rotation. Linear PCA can be applied to any SSCP-type association matrix; it is your choice where to put the origin and whether scale the magnitudes (the matrix diagonal elements) to same value (say, $1$) or not. PCA assumes the matrix is SSCP-type and maximizes, by principal components, SS deviations from the origin. Of course, for binary data (which are bounded) SS deviations depend merely on the frequency observed in this or that direction beyond the origin; yet it also depend on where we locate the origin.
Example of binary data (just a simple case of two variables):
Scatterplots below display the data points a bit jittered (to render frequency) and show principal component axes as diagonal lines bearing component scores on them [those scores, according to my claim are pseudocontinuous values]. The left plot on every picture demonstrates PCA based on "raw" deviations from the origin, while the right plot demonstrates PCA based on scaled (diagonal = unit) deviations from it.
1) Traditional PCA puts the (0,0) origin into data mean (centroid). For binary data, mean is not a possible data value. It is, however, physical centre of gravity. PCA maximizes variability about it.
(Do not forget, too, that in a binary varible mean and variance are strictly tied together, they are, so to speak, "one thing". Standardizing/scaling binary variables, that is, doing PCA based on correlations not covariances, in the current instance, will mean that you impede more balanced variables - having greater variance - to influence the PCA greater than more skewed variables do.)
2) You may do PCA in noncentered data, i.e. let the origin (0,0) go to location (0,0). It is PCA on MSCP (X'X/n) matrix or on cosine similarity matrix. PCA maximizes protuberability from the no-attribute state.
3) You may let the origin (0,0) lie at the data point of the smallest sum of Manhattan distances from it to all the other data points - L1 medoid. Medoid, generally, is understood as the most "representative" or "typical" data point. Hence, PCA will maximize atypicality (in addition to frequency). In our data, L1 medoid fell on (1,0) original coordinates.
4) Or put the origin (0,0) at the data coordinates where the frequency is the highest - multivariate mode. It is the (1,1) data cell in our example. PCA will maximize (be driven by) junior modes.
5) In the answer's body it was mentioned that tetrachoric correlations is a sound matter to perform factor analysis on, for binary variables. Same could be said about PCA: you may do PCA based on tetrachoric correlations. However, that means you are supposing an underlying continuous variable within a binary variable.
|
Doing principal component analysis or factor analysis on binary data
|
The question of dichotomous or binary variables in PCA or Factor analysis is eternal. There are polar opinions from "it is illegal" to "it is alright", through something like "you may do it but you'll
|
Doing principal component analysis or factor analysis on binary data
The question of dichotomous or binary variables in PCA or Factor analysis is eternal. There are polar opinions from "it is illegal" to "it is alright", through something like "you may do it but you'll get too many factors". My own current opinion is as follows. First, I deem that binary observed variable is descrete and that it is improper to treat it in any way as continuous. Can this discrete variable give rise to factor or principal component?
Factor analysis (FA). Factor by definition is a continuous latent
that load observable variables (1, 2). Consequently, the latter cannot be
but continuous (or interval, more practically speaking) when enough
loaded by factor. Also, FA, due to its linear regressional nature,
assumes that the rest - not loaded - part, called uniqness, is
continuous either, and so it comes that observable variables should
be continuous even when loaded slightly. Thus, binary variables
cannot legislate themselves in FA. However, there are at least two ways round: (A) Assume the dichotomies as roughened continues
underlying variables and do FA with tetrachoric - rather than Pearson -
correlations; (B) Assume that factor loads a dichotomous variable not
linealrly but logistically and do Latent Trait Analysis (aka Item
Response Theory) instead of linear FA. Read more.
Principal Component Analysis (PCA). While having much in common
with FA, PCA is not a modeling but only a summarizing method.
Components do not load variables in the same conceptual sense as
factors load variables. In PCA, components load variables and
variables load components. This symmetry is because PCA per se is
merely a rotation of variables-axes in space. Binary variables won't
provide true continuity for a component by their own selves - since they are not continuous, but the
pseudocontinuity can be provided by the angle of PCA-rotation which can appear any.
Thus in PCA, and in contrast with FA, you can get seemingly continuous
dimensions (rotated axes) with purely binary variables (unrotated
axes) - angle is the cause of continuity$^1$.
It is debatable whether it is legal to compute mean for binary variables (if you take them as truly categorical features). Usually PCA is performed on covariances or correlations, which implies putting the pivot point of PCA-rotation in the (1) centroid (arithmetic mean). For binary data, it makes sense to consider, besides that, other and more natural for binary data locations for such pivot point, or origin: (2) no-attribute point (0,0) (if you treat your variables as "ordinal" binary), (3) L1 or Manhattan medoid point, (4) multivariate mode point$^2$.
Some related questions about FA or PCA of binary data: 1, 2, 3, 4, 5, 6. Answers there potentially may express opinions different from mine.
$^1$ Component scores computed in PCA of binary data, like object scores computed in MCA (multiple correspondence analysis) of nominal data, are just fractional coordinates for the granular data in a smooth Euclidean space mapping: these do not permit us to conclude that the categorical data have acquired authentic scale measurement through plain PCA. To have truly scale values, variables must be scale nature from the beginning, at input, or they must be specially quantified or assumed to have been binned (see). But in classic PCA or MCA the room for "continuity" emerges later on the level of summary statistics (such as association or frequency matrices) due to that countability is akin to measurability, both are "quantitative". And for that level entities - for variables as points or categories as points - their coordinates in the principal axes space are legitimately scale values indeed. But not for data points (data cases) of binary data, - their "scores" are pseudocontinuous values: not intrinsic measure, just some overlay coordinates.
$^2$ Demonstration of various versions of PCA with binary data depending on the location of the origin of rotation. Linear PCA can be applied to any SSCP-type association matrix; it is your choice where to put the origin and whether scale the magnitudes (the matrix diagonal elements) to same value (say, $1$) or not. PCA assumes the matrix is SSCP-type and maximizes, by principal components, SS deviations from the origin. Of course, for binary data (which are bounded) SS deviations depend merely on the frequency observed in this or that direction beyond the origin; yet it also depend on where we locate the origin.
Example of binary data (just a simple case of two variables):
Scatterplots below display the data points a bit jittered (to render frequency) and show principal component axes as diagonal lines bearing component scores on them [those scores, according to my claim are pseudocontinuous values]. The left plot on every picture demonstrates PCA based on "raw" deviations from the origin, while the right plot demonstrates PCA based on scaled (diagonal = unit) deviations from it.
1) Traditional PCA puts the (0,0) origin into data mean (centroid). For binary data, mean is not a possible data value. It is, however, physical centre of gravity. PCA maximizes variability about it.
(Do not forget, too, that in a binary varible mean and variance are strictly tied together, they are, so to speak, "one thing". Standardizing/scaling binary variables, that is, doing PCA based on correlations not covariances, in the current instance, will mean that you impede more balanced variables - having greater variance - to influence the PCA greater than more skewed variables do.)
2) You may do PCA in noncentered data, i.e. let the origin (0,0) go to location (0,0). It is PCA on MSCP (X'X/n) matrix or on cosine similarity matrix. PCA maximizes protuberability from the no-attribute state.
3) You may let the origin (0,0) lie at the data point of the smallest sum of Manhattan distances from it to all the other data points - L1 medoid. Medoid, generally, is understood as the most "representative" or "typical" data point. Hence, PCA will maximize atypicality (in addition to frequency). In our data, L1 medoid fell on (1,0) original coordinates.
4) Or put the origin (0,0) at the data coordinates where the frequency is the highest - multivariate mode. It is the (1,1) data cell in our example. PCA will maximize (be driven by) junior modes.
5) In the answer's body it was mentioned that tetrachoric correlations is a sound matter to perform factor analysis on, for binary variables. Same could be said about PCA: you may do PCA based on tetrachoric correlations. However, that means you are supposing an underlying continuous variable within a binary variable.
|
Doing principal component analysis or factor analysis on binary data
The question of dichotomous or binary variables in PCA or Factor analysis is eternal. There are polar opinions from "it is illegal" to "it is alright", through something like "you may do it but you'll
|
6,172
|
Are mixed models useful as predictive models?
|
It depends on the nature of the data, but in general I would expect the mixed model to outperform the fixed-effects only models.
Let's take an example: modelling the relationship between sunshine and the height of wheat stalks. We have a number of measurements of individual stalks, but many of the stalks are measured at the same sites (which are similar in soil, water and other things that may affect height). Here are some possible models:
1) height ~ sunshine
2) height ~ sunshine + site
3) height ~ sunshine + (1|site)
We want to use these models to predict the height of new wheat stalks given some estimate of the sunshine they will experience. I'm going to ignore the parameter penalty you would pay for having many sites in a fixed-effects only model, and just consider the relative predictive power of the models.
The most relevant question here is whether these new data points you are trying to predict are from one of the sites you have measured; you say this is rare in the real world, but it does happen.
A) New data are from a site you have measured
If so, models #2 and #3 will outperform #1. They both use more relevant information (mean site effect) to make predictions.
B) New data are from an unmeasured site
I would still expect model #3 to outperform #1 and #2, for the following reasons.
(i) Model #3 vs #1:
Model #1 will produce estimates that are biased in favour of overrepresented sites. If you have similar numbers of points from each site and a reasonably representative sample of sites, you should get similar results from both.
(ii) Model #3 vs. #2:
Why would model #3 be better that model #2 in this case? Because random effects take advantage of shrinkage - the site effects will be 'shrunk' towards zero. In other words, you will tend to find less extreme values for site effects when it is specified as a random effect than when it is specified as a fixed effect. This is useful and improves your predictive ability when the population means can reasonably be thought of as being drawn from a normal distribution (see Stein's Paradox in Statistics). If the population means are not expected to follow a normal distribution, this might be a problem, but it's usually a very reasonable assumption and the method is robust to small deviations.
[Side note: by default, when fitting model #2, most software would use one of the sites as a reference and estimate coefficients for the other sites that represent their the deviation from the reference. So it may appear as though there is no way to calculate an overall 'population effect'. But you can calculate this by averaging across predictions for all of the individual sites, or more simply by changing the coding of the model so that coefficients are calculated for every site.]
|
Are mixed models useful as predictive models?
|
It depends on the nature of the data, but in general I would expect the mixed model to outperform the fixed-effects only models.
Let's take an example: modelling the relationship between sunshine and
|
Are mixed models useful as predictive models?
It depends on the nature of the data, but in general I would expect the mixed model to outperform the fixed-effects only models.
Let's take an example: modelling the relationship between sunshine and the height of wheat stalks. We have a number of measurements of individual stalks, but many of the stalks are measured at the same sites (which are similar in soil, water and other things that may affect height). Here are some possible models:
1) height ~ sunshine
2) height ~ sunshine + site
3) height ~ sunshine + (1|site)
We want to use these models to predict the height of new wheat stalks given some estimate of the sunshine they will experience. I'm going to ignore the parameter penalty you would pay for having many sites in a fixed-effects only model, and just consider the relative predictive power of the models.
The most relevant question here is whether these new data points you are trying to predict are from one of the sites you have measured; you say this is rare in the real world, but it does happen.
A) New data are from a site you have measured
If so, models #2 and #3 will outperform #1. They both use more relevant information (mean site effect) to make predictions.
B) New data are from an unmeasured site
I would still expect model #3 to outperform #1 and #2, for the following reasons.
(i) Model #3 vs #1:
Model #1 will produce estimates that are biased in favour of overrepresented sites. If you have similar numbers of points from each site and a reasonably representative sample of sites, you should get similar results from both.
(ii) Model #3 vs. #2:
Why would model #3 be better that model #2 in this case? Because random effects take advantage of shrinkage - the site effects will be 'shrunk' towards zero. In other words, you will tend to find less extreme values for site effects when it is specified as a random effect than when it is specified as a fixed effect. This is useful and improves your predictive ability when the population means can reasonably be thought of as being drawn from a normal distribution (see Stein's Paradox in Statistics). If the population means are not expected to follow a normal distribution, this might be a problem, but it's usually a very reasonable assumption and the method is robust to small deviations.
[Side note: by default, when fitting model #2, most software would use one of the sites as a reference and estimate coefficients for the other sites that represent their the deviation from the reference. So it may appear as though there is no way to calculate an overall 'population effect'. But you can calculate this by averaging across predictions for all of the individual sites, or more simply by changing the coding of the model so that coefficients are calculated for every site.]
|
Are mixed models useful as predictive models?
It depends on the nature of the data, but in general I would expect the mixed model to outperform the fixed-effects only models.
Let's take an example: modelling the relationship between sunshine and
|
6,173
|
Are mixed models useful as predictive models?
|
Following up on mkt's excellent response: From my own personal experience developing predictive models in the health insurance field, incorporating random effects into predictive models (including machine learning models) has a number of advantages.
I'm often asked to build models predicting future claims outcomes for (e.g., future health expenses, length of stay, etc.) based on an individual's historical claims data. Frequently there are multiple claims per individual with correlated outcomes. Ignoring the fact that many claims are shared by the same patient would be throwing out valuable information in a predictive model.
One solution would be to create fixed effect indicator variables for each member in the dataset and use a penalized regression to shrink each of the member-level fixed effects separately. However, if there are thousands or millions of members in your data, a more efficient solution from both computational and predictive standpoints may be to represent the multiple member-level fixed effects as a single random effect term with a normal distribution.
|
Are mixed models useful as predictive models?
|
Following up on mkt's excellent response: From my own personal experience developing predictive models in the health insurance field, incorporating random effects into predictive models (including mac
|
Are mixed models useful as predictive models?
Following up on mkt's excellent response: From my own personal experience developing predictive models in the health insurance field, incorporating random effects into predictive models (including machine learning models) has a number of advantages.
I'm often asked to build models predicting future claims outcomes for (e.g., future health expenses, length of stay, etc.) based on an individual's historical claims data. Frequently there are multiple claims per individual with correlated outcomes. Ignoring the fact that many claims are shared by the same patient would be throwing out valuable information in a predictive model.
One solution would be to create fixed effect indicator variables for each member in the dataset and use a penalized regression to shrink each of the member-level fixed effects separately. However, if there are thousands or millions of members in your data, a more efficient solution from both computational and predictive standpoints may be to represent the multiple member-level fixed effects as a single random effect term with a normal distribution.
|
Are mixed models useful as predictive models?
Following up on mkt's excellent response: From my own personal experience developing predictive models in the health insurance field, incorporating random effects into predictive models (including mac
|
6,174
|
What is the intuitive reason behind doing rotations in Factor Analysis/PCA & how to select appropriate rotation?
|
Reason for rotation. Rotations are done for the sake of interpretation of the extracted factors in factor analysis (or components in PCA, if you venture to use PCA as a factor analytic technique). You are right when you describe your understanding. Rotation is done in the pursuit of some structure of the loading matrix, which may be called simple structure. It is when different factors tend to load different variables $^1$. [I believe it is more correct to say that "a factor loads a variable" than "a variable loads a factor", because it is the factor that is "in" or "behind" variables to make them correlate, but you may say as you like.] In a sense, typical simple structure is where "clusters" of correlated variables show up. You then interpret a factor as the meaning which lies on the intersection of the meaning of the variables which are loaded enough by the factor; thus, to receive different meaning, factors should load variables differentially. A rule of thumb is that a factor should load decently at least 3 variables.
Consequences. Rotation does not change the position of variables relative to each other in the space of the factors, i.e. correlations between variables are being preserved. What are changed are the coordinates of the variable vectors' end-points onto the factor axes - the loadings (please search this site for "loading plot" and "biplot", for more)$^2$. After an orthogonal rotation of the loading matrix, factor variances get changed, but factors remain uncorrelated and variable communalities are preserved.
In an oblique rotation factors are allowed to lose their uncorrelatedness if that will produce a clearer "simple structure". However, interpretation of correlated factors is a more difficult art because you have to derive meaning from one factor so that it does not contaminate the meaning of another one that it correlates with. That implies that you have to interpret factors, let us say, in parallel, and not one by one. Oblique rotation leaves you with two matrices of loadings instead of one: pattern matrix $\bf P$ and structure matrix $\bf S$. ($\bf S=PC$, where $\bf C$ is the matrix of correlations between the factors; $\bf C=Q'Q$, where $\bf Q$ is the matrix of oblique rotation: $\bf S=AQ$, where $\bf A$ was the loading matrix prior any rotation.) The pattern matrix is the matrix of regressional weights by which factors predict variables, while the structure matrix is the correlations (or covariances) between factors and variables. Most of the time we interpret factors by pattern loadings because these coefficients represent the unique individual investment of the factor in a variable. Oblique rotation preserves variable communalities, but the communalities are no longer equal to the row sums of squares in $\bf P$ or in $\bf S$. Moreover, because factors correlate, their variances partly superimpose$^3$.
Both orthogonal and oblique rotations, of course, affect factor/component scores which you might want to compute (please search "factor scores" on this site). Rotation, in effect, gives you other factors than those factors you had just after the extraction$^4$. They inherit their predictive power (for the variables and their correlations) but they will get different substantial meaning from you. After rotation, you may not say "this factor is more important than that one" because they were rotated vis-a-vis each other (to be honest, in FA, unlike PCA, you may hardly say it even after the extraction because factors are modelled as already "important").
Choice. There are many forms of orthogonal and oblique rotations. Why? First, because the concept of "simple structure" is not univocal and can be formulated somewhat differently. For example, varimax - the most popular orthogonal method - tries to maximize variance among the squared values of loadings of each factor; the sometimes used orthogonal method quartimax minimizes the number of factors needed to explain a variable, and often produces the so called "general factor". Second, different rotations aim at different side objectives apart from simple structure. I won't go into the details of these complex topics, but you might want to read about them for yourself.
Should one prefer orthogonal or oblique rotation? Well, orthogonal factors are easier to interpret and the whole factor model is statistically simpler (orthogonal predictors, of course). But there you impose orthogonality on the latent traits you want to discover; are you sure they should be uncorrelated in the field you study? What if they are not? Oblique rotation methods$^5$ (albeit each having their own inclinations) allow, but don't force, factors to correlate, and are thus less restrictive. If oblique rotation shows that factors are only weakly correlated, you may be confident that "in reality" it is so, and then you may turn to orthogonal rotation with good conscience. If factors, on the other hand, are very much correlated, it looks unnatural (for conceptually distinct latent traits, especially if you are developing an inventory in psychology or such, - recall that a factor is itself a univariate trait, not a batch of phenomena), and you might want to extract less factors, or alternatively to use the oblique results as the batch source to extract the so-called second-order factors.
$^1$ Thurstone brought forward five ideal conditions of simple structure. The three most important are: (1) each variable must have at least one near-zero loading; (2) each factor must have near-zero loadings for at least m variables (m is the number of factors); (3) for each pair of factors, there are at least m variables with loadings near zero for one of them, and far enough from zero for the other. Consequently, for each pair of factors their loading plot should ideally look something like:
This is for purely exploratory FA, while if you are doing and redoing FA to develop a questionnaire, you eventually will want to drop all points except blue ones, provided you have only two factors. If there are more than two factors, you will want the red points to become blue for some of the other factors' loading plots.
$^2$
$^3$ The variance of a factor (or component) is the sum of its squared structure loadings $\bf S$, since they are covariances/correlations between variables and (unit-scaled) factors. After oblique rotation, factors can get correlated, and so their variances intersect. Consequently, the sum of their variances, SS in $\bf S$, exceeds the overall communality explained, SS in $\bf A$. If you want to reckon after factor i only the unique "clean" portion of its variance, multiply the variance by $1-R_i^2$ of the factor's dependence on the other factors, the quantity known as anti-image. It is the reciprocal of the i-th diagonal element of $\bf C^{-1}$. The sum of the "clean" portions of the variances will be less than the overall communality explained.
$^4$ You may not say "the 1st factor/component changed in rotation in this or that way" because the 1st factor/component in the rotated loading matrix is a different factor/component than the 1st one in the unrotated loading matrix. The same ordinal number ("1st") is misleading.
$^5$ Two most important oblique methods are promax and oblimin. Promax is the oblique enhancement of varimax: the varimax-based structure is then loosed in order to meet "simple structure" to a greater degree. It is often used in confirmatory FA. Oblimin is very flexible due to its parameter gamma which, when set to 0, makes oblimin the quartimin method yielding most oblique solutions. A gamma of 1 yields the least oblique solutions, the covarimin, which is yet another varimax-based oblique method alternative to promax. All oblique methods can be direct (=primary) and indirect (=secondary) versions - see the literature. All rotations, both orthogonal and oblique, can be done with Kaiser normalization (usually) or without it. The normalization makes all variables equally important at the rotation.
Some threads for further reading:
Can there be reason not to rotate factors at all? (Check this too.)
Which matrix to interpret after oblique rotation - pattern or structure?
What do the names of factor rotation techniques (varimax, etc.) mean? (A detailed answer with formulae and peseudocode for the orthogonal factor rotation methods)
Is PCA with components rotated still PCA or is a factor analysis?
|
What is the intuitive reason behind doing rotations in Factor Analysis/PCA & how to select appropria
|
Reason for rotation. Rotations are done for the sake of interpretation of the extracted factors in factor analysis (or components in PCA, if you venture to use PCA as a factor analytic technique). You
|
What is the intuitive reason behind doing rotations in Factor Analysis/PCA & how to select appropriate rotation?
Reason for rotation. Rotations are done for the sake of interpretation of the extracted factors in factor analysis (or components in PCA, if you venture to use PCA as a factor analytic technique). You are right when you describe your understanding. Rotation is done in the pursuit of some structure of the loading matrix, which may be called simple structure. It is when different factors tend to load different variables $^1$. [I believe it is more correct to say that "a factor loads a variable" than "a variable loads a factor", because it is the factor that is "in" or "behind" variables to make them correlate, but you may say as you like.] In a sense, typical simple structure is where "clusters" of correlated variables show up. You then interpret a factor as the meaning which lies on the intersection of the meaning of the variables which are loaded enough by the factor; thus, to receive different meaning, factors should load variables differentially. A rule of thumb is that a factor should load decently at least 3 variables.
Consequences. Rotation does not change the position of variables relative to each other in the space of the factors, i.e. correlations between variables are being preserved. What are changed are the coordinates of the variable vectors' end-points onto the factor axes - the loadings (please search this site for "loading plot" and "biplot", for more)$^2$. After an orthogonal rotation of the loading matrix, factor variances get changed, but factors remain uncorrelated and variable communalities are preserved.
In an oblique rotation factors are allowed to lose their uncorrelatedness if that will produce a clearer "simple structure". However, interpretation of correlated factors is a more difficult art because you have to derive meaning from one factor so that it does not contaminate the meaning of another one that it correlates with. That implies that you have to interpret factors, let us say, in parallel, and not one by one. Oblique rotation leaves you with two matrices of loadings instead of one: pattern matrix $\bf P$ and structure matrix $\bf S$. ($\bf S=PC$, where $\bf C$ is the matrix of correlations between the factors; $\bf C=Q'Q$, where $\bf Q$ is the matrix of oblique rotation: $\bf S=AQ$, where $\bf A$ was the loading matrix prior any rotation.) The pattern matrix is the matrix of regressional weights by which factors predict variables, while the structure matrix is the correlations (or covariances) between factors and variables. Most of the time we interpret factors by pattern loadings because these coefficients represent the unique individual investment of the factor in a variable. Oblique rotation preserves variable communalities, but the communalities are no longer equal to the row sums of squares in $\bf P$ or in $\bf S$. Moreover, because factors correlate, their variances partly superimpose$^3$.
Both orthogonal and oblique rotations, of course, affect factor/component scores which you might want to compute (please search "factor scores" on this site). Rotation, in effect, gives you other factors than those factors you had just after the extraction$^4$. They inherit their predictive power (for the variables and their correlations) but they will get different substantial meaning from you. After rotation, you may not say "this factor is more important than that one" because they were rotated vis-a-vis each other (to be honest, in FA, unlike PCA, you may hardly say it even after the extraction because factors are modelled as already "important").
Choice. There are many forms of orthogonal and oblique rotations. Why? First, because the concept of "simple structure" is not univocal and can be formulated somewhat differently. For example, varimax - the most popular orthogonal method - tries to maximize variance among the squared values of loadings of each factor; the sometimes used orthogonal method quartimax minimizes the number of factors needed to explain a variable, and often produces the so called "general factor". Second, different rotations aim at different side objectives apart from simple structure. I won't go into the details of these complex topics, but you might want to read about them for yourself.
Should one prefer orthogonal or oblique rotation? Well, orthogonal factors are easier to interpret and the whole factor model is statistically simpler (orthogonal predictors, of course). But there you impose orthogonality on the latent traits you want to discover; are you sure they should be uncorrelated in the field you study? What if they are not? Oblique rotation methods$^5$ (albeit each having their own inclinations) allow, but don't force, factors to correlate, and are thus less restrictive. If oblique rotation shows that factors are only weakly correlated, you may be confident that "in reality" it is so, and then you may turn to orthogonal rotation with good conscience. If factors, on the other hand, are very much correlated, it looks unnatural (for conceptually distinct latent traits, especially if you are developing an inventory in psychology or such, - recall that a factor is itself a univariate trait, not a batch of phenomena), and you might want to extract less factors, or alternatively to use the oblique results as the batch source to extract the so-called second-order factors.
$^1$ Thurstone brought forward five ideal conditions of simple structure. The three most important are: (1) each variable must have at least one near-zero loading; (2) each factor must have near-zero loadings for at least m variables (m is the number of factors); (3) for each pair of factors, there are at least m variables with loadings near zero for one of them, and far enough from zero for the other. Consequently, for each pair of factors their loading plot should ideally look something like:
This is for purely exploratory FA, while if you are doing and redoing FA to develop a questionnaire, you eventually will want to drop all points except blue ones, provided you have only two factors. If there are more than two factors, you will want the red points to become blue for some of the other factors' loading plots.
$^2$
$^3$ The variance of a factor (or component) is the sum of its squared structure loadings $\bf S$, since they are covariances/correlations between variables and (unit-scaled) factors. After oblique rotation, factors can get correlated, and so their variances intersect. Consequently, the sum of their variances, SS in $\bf S$, exceeds the overall communality explained, SS in $\bf A$. If you want to reckon after factor i only the unique "clean" portion of its variance, multiply the variance by $1-R_i^2$ of the factor's dependence on the other factors, the quantity known as anti-image. It is the reciprocal of the i-th diagonal element of $\bf C^{-1}$. The sum of the "clean" portions of the variances will be less than the overall communality explained.
$^4$ You may not say "the 1st factor/component changed in rotation in this or that way" because the 1st factor/component in the rotated loading matrix is a different factor/component than the 1st one in the unrotated loading matrix. The same ordinal number ("1st") is misleading.
$^5$ Two most important oblique methods are promax and oblimin. Promax is the oblique enhancement of varimax: the varimax-based structure is then loosed in order to meet "simple structure" to a greater degree. It is often used in confirmatory FA. Oblimin is very flexible due to its parameter gamma which, when set to 0, makes oblimin the quartimin method yielding most oblique solutions. A gamma of 1 yields the least oblique solutions, the covarimin, which is yet another varimax-based oblique method alternative to promax. All oblique methods can be direct (=primary) and indirect (=secondary) versions - see the literature. All rotations, both orthogonal and oblique, can be done with Kaiser normalization (usually) or without it. The normalization makes all variables equally important at the rotation.
Some threads for further reading:
Can there be reason not to rotate factors at all? (Check this too.)
Which matrix to interpret after oblique rotation - pattern or structure?
What do the names of factor rotation techniques (varimax, etc.) mean? (A detailed answer with formulae and peseudocode for the orthogonal factor rotation methods)
Is PCA with components rotated still PCA or is a factor analysis?
|
What is the intuitive reason behind doing rotations in Factor Analysis/PCA & how to select appropria
Reason for rotation. Rotations are done for the sake of interpretation of the extracted factors in factor analysis (or components in PCA, if you venture to use PCA as a factor analytic technique). You
|
6,175
|
How do I get people to take better care of data?
|
It's worth considering ideas from the software world. In particular you might think of setting up: a version control repository and a central database server.
Version control probably helps you out with otherwise free floating files, such as Excel and text files, etc. But this could also include files associated with data, such as R, SAS, etc. The idea is that there's a system which tracks changes to your files allowing you to know what happened when and rollback to a point in the past if needed.
Where you already have SQL databases, the best thing you can do is set up a central server and hire a capable DBA. The DBA is the person tasked with ensuring and mantaining the integrity of the data. Part of the job description involves things like backups and tuning. But another part is more relevant here -- controlling how data enters the system, ensuring that constraints are met, access policies are in place to prevent harm to the data, setting up views to expose custom or simplified data formats, etc. In short, implementing a methodology around the data process. Even if you don't hire an actual DBA (the good ones are very hard to recruit), having a central server still enables you to start thinking about instituting some kind of methodology around data.
|
How do I get people to take better care of data?
|
It's worth considering ideas from the software world. In particular you might think of setting up: a version control repository and a central database server.
Version control probably helps you out
|
How do I get people to take better care of data?
It's worth considering ideas from the software world. In particular you might think of setting up: a version control repository and a central database server.
Version control probably helps you out with otherwise free floating files, such as Excel and text files, etc. But this could also include files associated with data, such as R, SAS, etc. The idea is that there's a system which tracks changes to your files allowing you to know what happened when and rollback to a point in the past if needed.
Where you already have SQL databases, the best thing you can do is set up a central server and hire a capable DBA. The DBA is the person tasked with ensuring and mantaining the integrity of the data. Part of the job description involves things like backups and tuning. But another part is more relevant here -- controlling how data enters the system, ensuring that constraints are met, access policies are in place to prevent harm to the data, setting up views to expose custom or simplified data formats, etc. In short, implementing a methodology around the data process. Even if you don't hire an actual DBA (the good ones are very hard to recruit), having a central server still enables you to start thinking about instituting some kind of methodology around data.
|
How do I get people to take better care of data?
It's worth considering ideas from the software world. In particular you might think of setting up: a version control repository and a central database server.
Version control probably helps you out
|
6,176
|
How do I get people to take better care of data?
|
One free online resource is the set of Statistical Good Practice Guidelines from the Statistical Services Centre at the University of Reading.
In particular:
Data Management Guidelines for Experimental Projects
The Role of a Database Package in Managing Research Data
|
How do I get people to take better care of data?
|
One free online resource is the set of Statistical Good Practice Guidelines from the Statistical Services Centre at the University of Reading.
In particular:
Data Management Guidelines for Experiment
|
How do I get people to take better care of data?
One free online resource is the set of Statistical Good Practice Guidelines from the Statistical Services Centre at the University of Reading.
In particular:
Data Management Guidelines for Experimental Projects
The Role of a Database Package in Managing Research Data
|
How do I get people to take better care of data?
One free online resource is the set of Statistical Good Practice Guidelines from the Statistical Services Centre at the University of Reading.
In particular:
Data Management Guidelines for Experiment
|
6,177
|
How do I get people to take better care of data?
|
I think first of all you have to ask yourself: why do people use Excel to do tasks Excel was not made for?
1) They already know how to use it
2) It works. Maybe in a clumsy way but it works and that's what they want
I copy a series of numbers in, press a button and I have a plot. As easy as that.
So, make them understand what advantages they can have by using centralized datasets, proper databases (note that Access is NOT one of those) and so on. But remember the two points above: you need to set up a system that works and it's easy to use.
I've seen too many times badly made systems that made me want to go back not to Excel but to pen and paper!
Just as an example, we have an horrible ordering system where I work.
We used to have to fill in an order form which was an Excel spreadsheet where you would input the name of the product, the quantity, the cost etc. It would add everything up, add TVA etc etc, you printed it, gave it to the secretary who would make the order and that was it. Unefficient, but it worked.
Now we have an online ordering system, with a centralized DB and everything. It's an horror.
It should not take me 10 minutes to fill in a damn form because of the unituitive keyboard shortcuts and the various oddities of the software. And note that I am quite informatics-savvy, so imagine what happens to people who don't like computers...
|
How do I get people to take better care of data?
|
I think first of all you have to ask yourself: why do people use Excel to do tasks Excel was not made for?
1) They already know how to use it
2) It works. Maybe in a clumsy way but it works and that's
|
How do I get people to take better care of data?
I think first of all you have to ask yourself: why do people use Excel to do tasks Excel was not made for?
1) They already know how to use it
2) It works. Maybe in a clumsy way but it works and that's what they want
I copy a series of numbers in, press a button and I have a plot. As easy as that.
So, make them understand what advantages they can have by using centralized datasets, proper databases (note that Access is NOT one of those) and so on. But remember the two points above: you need to set up a system that works and it's easy to use.
I've seen too many times badly made systems that made me want to go back not to Excel but to pen and paper!
Just as an example, we have an horrible ordering system where I work.
We used to have to fill in an order form which was an Excel spreadsheet where you would input the name of the product, the quantity, the cost etc. It would add everything up, add TVA etc etc, you printed it, gave it to the secretary who would make the order and that was it. Unefficient, but it worked.
Now we have an online ordering system, with a centralized DB and everything. It's an horror.
It should not take me 10 minutes to fill in a damn form because of the unituitive keyboard shortcuts and the various oddities of the software. And note that I am quite informatics-savvy, so imagine what happens to people who don't like computers...
|
How do I get people to take better care of data?
I think first of all you have to ask yourself: why do people use Excel to do tasks Excel was not made for?
1) They already know how to use it
2) It works. Maybe in a clumsy way but it works and that's
|
6,178
|
How do I get people to take better care of data?
|
I underline all answers given already, but let's call a cat a cat: in many workspaces it is hardly impossible to convince management that investment in "exotic" softwaretools (exotic to them, that is) is necessary, let alone hiring somebody that could set it up and maintain it. I have told quite some clients that they would benefit greatly from hiring a statistician with a thorough background on software and databases, but "no can do" is the general response.
So as long as that ain't going to happen, there are some simple things you can do with Excel that will make life easier. And the first of this is without doubt version control. More info on version control with Excel can be found here.
Some things about using excel
People using EXCEL very often like the formula features of EXCEL. Yet, this is the most important source of errors within EXCEL sheets, and of problems when trying to read in EXCEL files as far as my experience goes. I refuse to work with sheets containing formulas.
I also force everybody I work with to deliver the EXCEL sheets in a plain format, meaning that:
The first row contains the names of the different variables
The spreadsheet starts in cell A1
All data is put in columns, without interruptions and without formatting.
If possible, the data is saved in .csv format as well. It's not difficult to write a VBA script that will extract the data, reformat it and put it in a .csv file. This also allows for better version control, as you can make a .csv dump of the data every day.
If there is a general structure the data always has, then it might be good to develop a template with underlying VB macros to add data and generate the dataset for analysis. This in general will avoid that every employee comes up with his own "genius" system of data storage, and it allows you to write your code in function of this.
This said, if you can convince everybody to use SQL (and a front end for entering data), you can link R directly to that one. This will greatly increase performance.
Data structure and management
As a general rule, the data stored in databases (or EXCEL sheets if they insist) should be the absolute minimum, meaning that any variable that can be calculated from some other variables should not be contained in the database. Mind you, sometimes it can be beneficial to store those derived or transformed variables as well, if the calculations are tedious and take a long time. But these should be stored in a seperate database, if necessary linked to the original one.
Thought should be given as well to what is considered as one case (and hence one row). As as an example, people tend to produce time series by making a new variable for each time point. While this makes sense in an EXCEL, reading in these data demands quite some flipping around of the data matrix. Same for comparing groups: There should be one group indicator and one response variable, not a response variable for each group. This way data structures can be standardized as well.
A last thing I run into frequently, is the use of different metrics. Lengths are given in meters or centimeters, temperatures in Celcius, Kelvin or Farenheit, ... One should indicate in any front end or any template what the unit is in which the variable is measured.
And even after all these things, you still want to have a data control step before you actually start with the analysis. Again, this can be any script that runs daily (e.g. overnight) on new entries, and that flags problems immediately (out of range, wrong type, missing fields, ...) so they can be corrected as fast as possible. If you have to return to an entry that was made 2 months ago to find out what is wrong and why, you better get some good "Sherlock-skills" to correct it.
my 2 cents
|
How do I get people to take better care of data?
|
I underline all answers given already, but let's call a cat a cat: in many workspaces it is hardly impossible to convince management that investment in "exotic" softwaretools (exotic to them, that is)
|
How do I get people to take better care of data?
I underline all answers given already, but let's call a cat a cat: in many workspaces it is hardly impossible to convince management that investment in "exotic" softwaretools (exotic to them, that is) is necessary, let alone hiring somebody that could set it up and maintain it. I have told quite some clients that they would benefit greatly from hiring a statistician with a thorough background on software and databases, but "no can do" is the general response.
So as long as that ain't going to happen, there are some simple things you can do with Excel that will make life easier. And the first of this is without doubt version control. More info on version control with Excel can be found here.
Some things about using excel
People using EXCEL very often like the formula features of EXCEL. Yet, this is the most important source of errors within EXCEL sheets, and of problems when trying to read in EXCEL files as far as my experience goes. I refuse to work with sheets containing formulas.
I also force everybody I work with to deliver the EXCEL sheets in a plain format, meaning that:
The first row contains the names of the different variables
The spreadsheet starts in cell A1
All data is put in columns, without interruptions and without formatting.
If possible, the data is saved in .csv format as well. It's not difficult to write a VBA script that will extract the data, reformat it and put it in a .csv file. This also allows for better version control, as you can make a .csv dump of the data every day.
If there is a general structure the data always has, then it might be good to develop a template with underlying VB macros to add data and generate the dataset for analysis. This in general will avoid that every employee comes up with his own "genius" system of data storage, and it allows you to write your code in function of this.
This said, if you can convince everybody to use SQL (and a front end for entering data), you can link R directly to that one. This will greatly increase performance.
Data structure and management
As a general rule, the data stored in databases (or EXCEL sheets if they insist) should be the absolute minimum, meaning that any variable that can be calculated from some other variables should not be contained in the database. Mind you, sometimes it can be beneficial to store those derived or transformed variables as well, if the calculations are tedious and take a long time. But these should be stored in a seperate database, if necessary linked to the original one.
Thought should be given as well to what is considered as one case (and hence one row). As as an example, people tend to produce time series by making a new variable for each time point. While this makes sense in an EXCEL, reading in these data demands quite some flipping around of the data matrix. Same for comparing groups: There should be one group indicator and one response variable, not a response variable for each group. This way data structures can be standardized as well.
A last thing I run into frequently, is the use of different metrics. Lengths are given in meters or centimeters, temperatures in Celcius, Kelvin or Farenheit, ... One should indicate in any front end or any template what the unit is in which the variable is measured.
And even after all these things, you still want to have a data control step before you actually start with the analysis. Again, this can be any script that runs daily (e.g. overnight) on new entries, and that flags problems immediately (out of range, wrong type, missing fields, ...) so they can be corrected as fast as possible. If you have to return to an entry that was made 2 months ago to find out what is wrong and why, you better get some good "Sherlock-skills" to correct it.
my 2 cents
|
How do I get people to take better care of data?
I underline all answers given already, but let's call a cat a cat: in many workspaces it is hardly impossible to convince management that investment in "exotic" softwaretools (exotic to them, that is)
|
6,179
|
How do I get people to take better care of data?
|
VisTrails: A Python-Based Scientific Workflow and Provenance System.
This talk given at PyCon 2010 has some good ideas. Worth listening to even if you are not interested in using VisTrails or python. In the end I think if you would be able to require that there be a clear document way to reproduce the data. And require some validation that they can.
Quoting:
"In this talk, we will give an overview of VisTrails (http://www.vistrails.org), a python-based open-source scientific workflow that transparently captures provenance (i.e., lineage) of both data products and the processes used to derive these products. We will show how VisTrails can be used to streamline data exploration and visualization. Using real examples, we will demonstrate key features of the system, including the ability to visually create information processing pipelines that combine multiple tools and Iibraries such as VTK, pylab, and matplotlib. We will also show how VisTrails leverages provenance information not only to support result reproducibility, but also to simplify the creation and refinement of pipelines."
|
How do I get people to take better care of data?
|
VisTrails: A Python-Based Scientific Workflow and Provenance System.
This talk given at PyCon 2010 has some good ideas. Worth listening to even if you are not interested in using VisTrails or python.
|
How do I get people to take better care of data?
VisTrails: A Python-Based Scientific Workflow and Provenance System.
This talk given at PyCon 2010 has some good ideas. Worth listening to even if you are not interested in using VisTrails or python. In the end I think if you would be able to require that there be a clear document way to reproduce the data. And require some validation that they can.
Quoting:
"In this talk, we will give an overview of VisTrails (http://www.vistrails.org), a python-based open-source scientific workflow that transparently captures provenance (i.e., lineage) of both data products and the processes used to derive these products. We will show how VisTrails can be used to streamline data exploration and visualization. Using real examples, we will demonstrate key features of the system, including the ability to visually create information processing pipelines that combine multiple tools and Iibraries such as VTK, pylab, and matplotlib. We will also show how VisTrails leverages provenance information not only to support result reproducibility, but also to simplify the creation and refinement of pipelines."
|
How do I get people to take better care of data?
VisTrails: A Python-Based Scientific Workflow and Provenance System.
This talk given at PyCon 2010 has some good ideas. Worth listening to even if you are not interested in using VisTrails or python.
|
6,180
|
How do I get people to take better care of data?
|
I just came across this webpage hosted by ICPSR on data management plans. Although I think the goals of ICPSR will be somewhat different than your business (e.g. they are heavily interested in making the data readily able to be disseminated without violating confidentiality), I imagine they have useful information to businesses. Particularly advice on creating metadata seems to me to be universal.
|
How do I get people to take better care of data?
|
I just came across this webpage hosted by ICPSR on data management plans. Although I think the goals of ICPSR will be somewhat different than your business (e.g. they are heavily interested in making
|
How do I get people to take better care of data?
I just came across this webpage hosted by ICPSR on data management plans. Although I think the goals of ICPSR will be somewhat different than your business (e.g. they are heavily interested in making the data readily able to be disseminated without violating confidentiality), I imagine they have useful information to businesses. Particularly advice on creating metadata seems to me to be universal.
|
How do I get people to take better care of data?
I just came across this webpage hosted by ICPSR on data management plans. Although I think the goals of ICPSR will be somewhat different than your business (e.g. they are heavily interested in making
|
6,181
|
How do I get people to take better care of data?
|
In the case of a much smaller scales, I experienced using dropbox fora sharing/syncing a copy of the data files (and scripts and results) with other researchers/collaborators (I wrote about it here).
The other tool I have used is google docs for collecting and sharing data (about which I wrote here)
|
How do I get people to take better care of data?
|
In the case of a much smaller scales, I experienced using dropbox fora sharing/syncing a copy of the data files (and scripts and results) with other researchers/collaborators (I wrote about it here).
|
How do I get people to take better care of data?
In the case of a much smaller scales, I experienced using dropbox fora sharing/syncing a copy of the data files (and scripts and results) with other researchers/collaborators (I wrote about it here).
The other tool I have used is google docs for collecting and sharing data (about which I wrote here)
|
How do I get people to take better care of data?
In the case of a much smaller scales, I experienced using dropbox fora sharing/syncing a copy of the data files (and scripts and results) with other researchers/collaborators (I wrote about it here).
|
6,182
|
How do I get people to take better care of data?
|
Dropbox + packrat is nice for sharing files with backup/versioning.
Then you load those files (after automated canonicalization/massage) into a database and do the analyses off of the cleaned-up data. Put the scripts to automate the Extract-Transform-Load cycle under version control (or at least a separate dropbox folder with the packrat option...).
When your database server eventually crashes (or needs to be sharded or whatever) you have a pipeline for moving data from people-friendly (Excel, web forms, etc) to analysis-friendly (typically normalized and constrained, always cleaned up).
That "E-T-L" phase is from data warehousing. And if you're not building an online transaction processing system, you're probably building a data warehouse. So embrace it and take advantage of what people have learned from building those for the past 30 years.
Have fun.
|
How do I get people to take better care of data?
|
Dropbox + packrat is nice for sharing files with backup/versioning.
Then you load those files (after automated canonicalization/massage) into a database and do the analyses off of the cleaned-up data.
|
How do I get people to take better care of data?
Dropbox + packrat is nice for sharing files with backup/versioning.
Then you load those files (after automated canonicalization/massage) into a database and do the analyses off of the cleaned-up data. Put the scripts to automate the Extract-Transform-Load cycle under version control (or at least a separate dropbox folder with the packrat option...).
When your database server eventually crashes (or needs to be sharded or whatever) you have a pipeline for moving data from people-friendly (Excel, web forms, etc) to analysis-friendly (typically normalized and constrained, always cleaned up).
That "E-T-L" phase is from data warehousing. And if you're not building an online transaction processing system, you're probably building a data warehouse. So embrace it and take advantage of what people have learned from building those for the past 30 years.
Have fun.
|
How do I get people to take better care of data?
Dropbox + packrat is nice for sharing files with backup/versioning.
Then you load those files (after automated canonicalization/massage) into a database and do the analyses off of the cleaned-up data.
|
6,183
|
How to determine the quality of a multiclass classifier
|
Like binary classification, you can use the empirical error rate for estimating the quality of your classifier. Let $g$ be a classifier, and $x_i$ and $y_i$ be respectively an example in your data base and its class.
$$err(g) = \frac{1}{n} \sum_{i \leq n} \mathbb{1}_{g(x_i) \neq y_i}$$
As you said, when the classes are unbalanced, the baseline is not 50% but the proportion of the bigger class. You could add a weight on each class to balance the error. Let $W_y$ be the weight of the class $y$. Set the weights such that $\frac{1}{W_y} \sim \frac{1}{n}\sum_{i \leq n} \mathbb{1}_{y_i = y}$ and define the weighted empirical error
$$err_W(g) = \frac{1}{n} \sum_{i \leq n} W_{y_i} \mathbb{1}_{g(x_i) \neq y_i}$$
As Steffen said, the confusion matrix could be a good way to estimate the quality of a classifier. In the binary case, you can derive some measure from this matrix such as sensitivity and specificity, estimating the capability of a classifier to detect a particular class.
The source of error of a classifier might be in a particular way. For example a classifier could be too much confident when predicting a 1, but never say wrong when predicting a 0. Many classifiers can be parametrized to control this rate (false positives vs false negatives), and you are then interested in the quality of the whole family of classifier, not just one.
From this you can plot the ROC curve, and measuring the area under the ROC curve give you the quality of those classifiers.
ROC curves can be extended for your multiclass problem. I suggest you to read the answer of this thread.
|
How to determine the quality of a multiclass classifier
|
Like binary classification, you can use the empirical error rate for estimating the quality of your classifier. Let $g$ be a classifier, and $x_i$ and $y_i$ be respectively an example in your data bas
|
How to determine the quality of a multiclass classifier
Like binary classification, you can use the empirical error rate for estimating the quality of your classifier. Let $g$ be a classifier, and $x_i$ and $y_i$ be respectively an example in your data base and its class.
$$err(g) = \frac{1}{n} \sum_{i \leq n} \mathbb{1}_{g(x_i) \neq y_i}$$
As you said, when the classes are unbalanced, the baseline is not 50% but the proportion of the bigger class. You could add a weight on each class to balance the error. Let $W_y$ be the weight of the class $y$. Set the weights such that $\frac{1}{W_y} \sim \frac{1}{n}\sum_{i \leq n} \mathbb{1}_{y_i = y}$ and define the weighted empirical error
$$err_W(g) = \frac{1}{n} \sum_{i \leq n} W_{y_i} \mathbb{1}_{g(x_i) \neq y_i}$$
As Steffen said, the confusion matrix could be a good way to estimate the quality of a classifier. In the binary case, you can derive some measure from this matrix such as sensitivity and specificity, estimating the capability of a classifier to detect a particular class.
The source of error of a classifier might be in a particular way. For example a classifier could be too much confident when predicting a 1, but never say wrong when predicting a 0. Many classifiers can be parametrized to control this rate (false positives vs false negatives), and you are then interested in the quality of the whole family of classifier, not just one.
From this you can plot the ROC curve, and measuring the area under the ROC curve give you the quality of those classifiers.
ROC curves can be extended for your multiclass problem. I suggest you to read the answer of this thread.
|
How to determine the quality of a multiclass classifier
Like binary classification, you can use the empirical error rate for estimating the quality of your classifier. Let $g$ be a classifier, and $x_i$ and $y_i$ be respectively an example in your data bas
|
6,184
|
How to determine the quality of a multiclass classifier
|
To evaluate multi-way text classification systems, I use micro- and macro-averaged F1 (F-measure). The F-measure is essentially a weighted combination of precision and recall that. For binary classification, the micro and macro approaches are the same, but, for the multi-way case, I think they might help you out. You can think of Micro F1 as a weighted combination of precision and recall that gives equal weight to every document, while Macro F1 gives equal weight to every class. For each, the F-measure equation is the same, but you calculate precision and recall differently:
$$F = \frac{(\beta^{2} + 1)PR}{\beta^{2}P+R},$$
where $\beta$ is typically set to 1. Then,
$$P_{micro}=\frac{\sum^{|C|}_{i=1}TP_{i}}{\sum^{|C|}_{i=1}TP_{i}+FP_{i}}, R_{micro}=\frac{\sum^{|C|}_{i=1}TP_{i}}{\sum^{|C|}_{i=1}TP_{i}+FN_{i}}$$
$$P_{macro}=\frac{1}{|C|}\sum^{|C|}_{i=1}\frac{TP_{i}}{TP_{i}+FP_{i}},
R_{macro}=\frac{1}{|C|}\sum^{|C|}_{i=1}\frac{TP_{i}}{TP_{i}+FN_{i}}$$
where $TP$ is True Positive, $FP$ is False Positive, $FN$ is False Negative, and $C$ is class.
|
How to determine the quality of a multiclass classifier
|
To evaluate multi-way text classification systems, I use micro- and macro-averaged F1 (F-measure). The F-measure is essentially a weighted combination of precision and recall that. For binary classifi
|
How to determine the quality of a multiclass classifier
To evaluate multi-way text classification systems, I use micro- and macro-averaged F1 (F-measure). The F-measure is essentially a weighted combination of precision and recall that. For binary classification, the micro and macro approaches are the same, but, for the multi-way case, I think they might help you out. You can think of Micro F1 as a weighted combination of precision and recall that gives equal weight to every document, while Macro F1 gives equal weight to every class. For each, the F-measure equation is the same, but you calculate precision and recall differently:
$$F = \frac{(\beta^{2} + 1)PR}{\beta^{2}P+R},$$
where $\beta$ is typically set to 1. Then,
$$P_{micro}=\frac{\sum^{|C|}_{i=1}TP_{i}}{\sum^{|C|}_{i=1}TP_{i}+FP_{i}}, R_{micro}=\frac{\sum^{|C|}_{i=1}TP_{i}}{\sum^{|C|}_{i=1}TP_{i}+FN_{i}}$$
$$P_{macro}=\frac{1}{|C|}\sum^{|C|}_{i=1}\frac{TP_{i}}{TP_{i}+FP_{i}},
R_{macro}=\frac{1}{|C|}\sum^{|C|}_{i=1}\frac{TP_{i}}{TP_{i}+FN_{i}}$$
where $TP$ is True Positive, $FP$ is False Positive, $FN$ is False Negative, and $C$ is class.
|
How to determine the quality of a multiclass classifier
To evaluate multi-way text classification systems, I use micro- and macro-averaged F1 (F-measure). The F-measure is essentially a weighted combination of precision and recall that. For binary classifi
|
6,185
|
How to determine the quality of a multiclass classifier
|
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
# Function in R, using precision, recall and F statistics
check.model.accuracy <- function(predicted.class, actual.class){
result.tbl <- as.data.frame(table(predicted.class,actual.class ) )
result.tbl$Var1 <- as.character(result.tbl$predicted.class)
result.tbl$Var2 <- as.character(result.tbl$actual.class)
colnames(result.tbl)[1:2] <- c("Pred","Act")
cntr <- 0
for (pred.class in unique(result.tbl$Pred) ){
cntr <- cntr+ 1
tp <- sum(result.tbl[result.tbl$Pred==pred.class & result.tbl$Act==pred.class, "Freq"])
tp.fp <- sum(result.tbl[result.tbl$Pred == pred.class , "Freq" ])
tp.fn <- sum(result.tbl[result.tbl$Act == pred.class , "Freq" ])
presi <- tp/tp.fp
rec <- tp/tp.fn
F.score <- 2*presi*rec/(presi+rec)
if (cntr == 1 ) F.score.row <- cbind(pred.class, presi,rec,F.score)
if (cntr > 1 ) F.score.row <- rbind(F.score.row,cbind(pred.class,presi,rec,F.score))
}
F.score.row <- as.data.frame(F.score.row)
return(F.score.row)
}
check.model.accuracy(predicted.df,actual.df)
# For multiclass, average across all classes
|
How to determine the quality of a multiclass classifier
|
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
How to determine the quality of a multiclass classifier
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
# Function in R, using precision, recall and F statistics
check.model.accuracy <- function(predicted.class, actual.class){
result.tbl <- as.data.frame(table(predicted.class,actual.class ) )
result.tbl$Var1 <- as.character(result.tbl$predicted.class)
result.tbl$Var2 <- as.character(result.tbl$actual.class)
colnames(result.tbl)[1:2] <- c("Pred","Act")
cntr <- 0
for (pred.class in unique(result.tbl$Pred) ){
cntr <- cntr+ 1
tp <- sum(result.tbl[result.tbl$Pred==pred.class & result.tbl$Act==pred.class, "Freq"])
tp.fp <- sum(result.tbl[result.tbl$Pred == pred.class , "Freq" ])
tp.fn <- sum(result.tbl[result.tbl$Act == pred.class , "Freq" ])
presi <- tp/tp.fp
rec <- tp/tp.fn
F.score <- 2*presi*rec/(presi+rec)
if (cntr == 1 ) F.score.row <- cbind(pred.class, presi,rec,F.score)
if (cntr > 1 ) F.score.row <- rbind(F.score.row,cbind(pred.class,presi,rec,F.score))
}
F.score.row <- as.data.frame(F.score.row)
return(F.score.row)
}
check.model.accuracy(predicted.df,actual.df)
# For multiclass, average across all classes
|
How to determine the quality of a multiclass classifier
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
6,186
|
LDA vs word2vec
|
An answer to Topic models and word co-occurrence methods covers the difference (skip-gram word2vec is compression of pointwise mutual information (PMI)).
So:
neither method is a generalization of another,
word2vec allows us to use vector geometry (like word analogy, e.g. $v_{king} - v_{man} + v_{woman} \approx v_{queen}$, I wrote an overview of word2vec)
LDA sees higher correlations than two-element,
LDA gives interpretable topics.
Some difference is discussed in the slides word2vec, LDA, and introducing a new hybrid algorithm: lda2vec - Christopher Moody.
|
LDA vs word2vec
|
An answer to Topic models and word co-occurrence methods covers the difference (skip-gram word2vec is compression of pointwise mutual information (PMI)).
So:
neither method is a generalization of ano
|
LDA vs word2vec
An answer to Topic models and word co-occurrence methods covers the difference (skip-gram word2vec is compression of pointwise mutual information (PMI)).
So:
neither method is a generalization of another,
word2vec allows us to use vector geometry (like word analogy, e.g. $v_{king} - v_{man} + v_{woman} \approx v_{queen}$, I wrote an overview of word2vec)
LDA sees higher correlations than two-element,
LDA gives interpretable topics.
Some difference is discussed in the slides word2vec, LDA, and introducing a new hybrid algorithm: lda2vec - Christopher Moody.
|
LDA vs word2vec
An answer to Topic models and word co-occurrence methods covers the difference (skip-gram word2vec is compression of pointwise mutual information (PMI)).
So:
neither method is a generalization of ano
|
6,187
|
LDA vs word2vec
|
The two algorithms differ quite a bit in their purpose.
LDA is aimed mostly at describing documents and document collections by assigning topic distributions to them, which in turn have word distributions assigned, as you mention.
word2vec looks to embed words in a latent factor vector space, an idea originating from the distributed representations of Bengio et al.
It can also be used to describe documents, but is not really designed for the task.
|
LDA vs word2vec
|
The two algorithms differ quite a bit in their purpose.
LDA is aimed mostly at describing documents and document collections by assigning topic distributions to them, which in turn have word distribut
|
LDA vs word2vec
The two algorithms differ quite a bit in their purpose.
LDA is aimed mostly at describing documents and document collections by assigning topic distributions to them, which in turn have word distributions assigned, as you mention.
word2vec looks to embed words in a latent factor vector space, an idea originating from the distributed representations of Bengio et al.
It can also be used to describe documents, but is not really designed for the task.
|
LDA vs word2vec
The two algorithms differ quite a bit in their purpose.
LDA is aimed mostly at describing documents and document collections by assigning topic distributions to them, which in turn have word distribut
|
6,188
|
LDA vs word2vec
|
There is a relation between LDA and $\bf {Topic2Vec}$, a model used for learning Distributed Topic Representations $\bf together\ with$ Word Representations. LDA is used to construct a log-likelihood for CBOW and Skip-gram. The following explanation is inside the section 3 of the work Topic2Vec: Learning Distributed Representations of Topics:
"When training, given a word-topic sequence of a document $D=\{w_1 : z_1, ...,w_M : z_M \}$, where $z_i$ is the word $w_i$'s topic inferred from LDA, the learning objective functions can be defined to maximize the following log-likelihoods, based on CBOW and Skip-gram, respectively."
$$\mathcal{L}_{CBOW}(D) = \frac1M \sum^{M}_{i=1}(\log p(w_i|w_{ext}) + \log p(z_i|w_{ext}))$$
$$\mathcal{L}_{Skip-gram}(D)= \frac1M \sum^{M}_{i=1}\sum_{-k\le c\le k,c\neq0}(\log p(w_{i+c}|w_i) + \log p(w_{i+c}|z_i))$$
In section 4.2, the authors explain: " topics and words are equally represented as the low-dimensional vectors, we can IMMEDIATELY CALCULATE THE $\bf {COSINE\ SIMILARITY}$ between words and topics. For each topic, we select higher similarity words".
Moreover, you wil find inside that work some phrases like:
"probability is not the best choice for feature representation"
and
"LDA prefers to describe the statistical relationship of occurrences rather than real semantic information embedded in words, topics and documents"
which will help you understanding better the different models.
|
LDA vs word2vec
|
There is a relation between LDA and $\bf {Topic2Vec}$, a model used for learning Distributed Topic Representations $\bf together\ with$ Word Representations. LDA is used to construct a log-likelihood
|
LDA vs word2vec
There is a relation between LDA and $\bf {Topic2Vec}$, a model used for learning Distributed Topic Representations $\bf together\ with$ Word Representations. LDA is used to construct a log-likelihood for CBOW and Skip-gram. The following explanation is inside the section 3 of the work Topic2Vec: Learning Distributed Representations of Topics:
"When training, given a word-topic sequence of a document $D=\{w_1 : z_1, ...,w_M : z_M \}$, where $z_i$ is the word $w_i$'s topic inferred from LDA, the learning objective functions can be defined to maximize the following log-likelihoods, based on CBOW and Skip-gram, respectively."
$$\mathcal{L}_{CBOW}(D) = \frac1M \sum^{M}_{i=1}(\log p(w_i|w_{ext}) + \log p(z_i|w_{ext}))$$
$$\mathcal{L}_{Skip-gram}(D)= \frac1M \sum^{M}_{i=1}\sum_{-k\le c\le k,c\neq0}(\log p(w_{i+c}|w_i) + \log p(w_{i+c}|z_i))$$
In section 4.2, the authors explain: " topics and words are equally represented as the low-dimensional vectors, we can IMMEDIATELY CALCULATE THE $\bf {COSINE\ SIMILARITY}$ between words and topics. For each topic, we select higher similarity words".
Moreover, you wil find inside that work some phrases like:
"probability is not the best choice for feature representation"
and
"LDA prefers to describe the statistical relationship of occurrences rather than real semantic information embedded in words, topics and documents"
which will help you understanding better the different models.
|
LDA vs word2vec
There is a relation between LDA and $\bf {Topic2Vec}$, a model used for learning Distributed Topic Representations $\bf together\ with$ Word Representations. LDA is used to construct a log-likelihood
|
6,189
|
LDA vs word2vec
|
Other answers here cover the technical differences between those two algorithms, however I think the core difference is their purpose: Those two algorithms were designed to do different things:
word2vec ultimately yields a mapping between words and a fixed length vector. If we were to compare it with another well known approach, it would make more sense to do so using another tool that was designed for the same intend, like the Bag of Words (BOW model). This one does the same but lacks some desired features of word2vec like using the order of words and assigning semantic meaning to the distances between word representations.
LDA on the other hand creates a mapping from a varied length document to a vector. This document can be a sentence, paragraph or full text file but it is not a single word. It would make more sense to compare it with doc2vec that does the same job and is introduced by Tomas Mikolov here (the author uses the term paragraph vectors). Or with LSI for that matter.
So to directly answer your two questions:
None of them is a generalization or variation of the other
Use LDA to map a document to a fixed length vector. You can then use this vector in a traditional ML algorithm like a classifier that accepts a document and predicts a sentimental label for example.
Use word2vec to map a word to a fixed length vector. You can similarly use these vectors to feed ML models were the input are words, for example when developing an auto-completer that feeds on previous words and attempts to predict the next one.
|
LDA vs word2vec
|
Other answers here cover the technical differences between those two algorithms, however I think the core difference is their purpose: Those two algorithms were designed to do different things:
word2v
|
LDA vs word2vec
Other answers here cover the technical differences between those two algorithms, however I think the core difference is their purpose: Those two algorithms were designed to do different things:
word2vec ultimately yields a mapping between words and a fixed length vector. If we were to compare it with another well known approach, it would make more sense to do so using another tool that was designed for the same intend, like the Bag of Words (BOW model). This one does the same but lacks some desired features of word2vec like using the order of words and assigning semantic meaning to the distances between word representations.
LDA on the other hand creates a mapping from a varied length document to a vector. This document can be a sentence, paragraph or full text file but it is not a single word. It would make more sense to compare it with doc2vec that does the same job and is introduced by Tomas Mikolov here (the author uses the term paragraph vectors). Or with LSI for that matter.
So to directly answer your two questions:
None of them is a generalization or variation of the other
Use LDA to map a document to a fixed length vector. You can then use this vector in a traditional ML algorithm like a classifier that accepts a document and predicts a sentimental label for example.
Use word2vec to map a word to a fixed length vector. You can similarly use these vectors to feed ML models were the input are words, for example when developing an auto-completer that feeds on previous words and attempts to predict the next one.
|
LDA vs word2vec
Other answers here cover the technical differences between those two algorithms, however I think the core difference is their purpose: Those two algorithms were designed to do different things:
word2v
|
6,190
|
LDA vs word2vec
|
From a practical standpoint...
LDA starts with a bag-of-words input which considers what words co-occur in documents, but does not pay attention to the immediate context of words. This means the words can appear anywhere in the document and in any order, which strips out a certain level of information. By contrast word2vec is all about the context in which a word is used -- though perhaps not exact order.
LDA's "topics" are a mathematical construct and you shouldn't confuse them with actual human topics. You can end up with topics that have no human interpretation -- they're more like artifacts of the process than actual topics -- and you can end up with topics at different levels of abstraction, including topics that basically cover the same human topic. It's a bit like reading tea leaves.
I've found LDA useful to explore data, but not so useful for providing a solution, but your mileage may vary.
Word2vec doesn't create topics directly at all. It projects words into a high-dimensional space based on similar usage, so it can have its own surprises in terms of words that you think of as distinct -- or even opposite -- may be near each other in space.
You can use either to determine if words are "similar". With LDA: do the words have similar weights in the same topics. With word2vec: are they close (by some measure) in the embedding space.
You can use either to determine if documents are similar. With LDA, you would look for a similar mixture of topics, and with word2vec you would do something like adding up the vectors of the words of the document. ("Document" could be a sentence, paragraph, page, or an entire document.) Doc2vec is a modified version of word2vec that allows the direct comparison of documents.
While LDA throws away some contextual information with its bag-of-words approach, it does have topics (or "topics"), which word2vec doesn't have. So it's straightforward to use doc2vec to say, "Show me documents that are similar to this one", while with LDA it's straightforward to say, "Show me documents where topic A is prominent." (Again, knowing that "topic A" emerges from a mathematical process on your documents and you then figure out what human topic(s) it mostly corresponds to.)
|
LDA vs word2vec
|
From a practical standpoint...
LDA starts with a bag-of-words input which considers what words co-occur in documents, but does not pay attention to the immediate context of words. This means the words
|
LDA vs word2vec
From a practical standpoint...
LDA starts with a bag-of-words input which considers what words co-occur in documents, but does not pay attention to the immediate context of words. This means the words can appear anywhere in the document and in any order, which strips out a certain level of information. By contrast word2vec is all about the context in which a word is used -- though perhaps not exact order.
LDA's "topics" are a mathematical construct and you shouldn't confuse them with actual human topics. You can end up with topics that have no human interpretation -- they're more like artifacts of the process than actual topics -- and you can end up with topics at different levels of abstraction, including topics that basically cover the same human topic. It's a bit like reading tea leaves.
I've found LDA useful to explore data, but not so useful for providing a solution, but your mileage may vary.
Word2vec doesn't create topics directly at all. It projects words into a high-dimensional space based on similar usage, so it can have its own surprises in terms of words that you think of as distinct -- or even opposite -- may be near each other in space.
You can use either to determine if words are "similar". With LDA: do the words have similar weights in the same topics. With word2vec: are they close (by some measure) in the embedding space.
You can use either to determine if documents are similar. With LDA, you would look for a similar mixture of topics, and with word2vec you would do something like adding up the vectors of the words of the document. ("Document" could be a sentence, paragraph, page, or an entire document.) Doc2vec is a modified version of word2vec that allows the direct comparison of documents.
While LDA throws away some contextual information with its bag-of-words approach, it does have topics (or "topics"), which word2vec doesn't have. So it's straightforward to use doc2vec to say, "Show me documents that are similar to this one", while with LDA it's straightforward to say, "Show me documents where topic A is prominent." (Again, knowing that "topic A" emerges from a mathematical process on your documents and you then figure out what human topic(s) it mostly corresponds to.)
|
LDA vs word2vec
From a practical standpoint...
LDA starts with a bag-of-words input which considers what words co-occur in documents, but does not pay attention to the immediate context of words. This means the words
|
6,191
|
How to deal with hierarchical / nested data in machine learning
|
I have been thinking about this problem for a while, with inspirations from the following questions on this site.
How can I include random effects into a randomForest?
Random forest on grouped data
Random Forests / adaboost in panel regression setting
Random forest for binary panel data
Modelling clustered data using boosted regression trees
Let me first introduce the mixed-effects models for hierarchical/nested data and start from a simple two-level model (samples nested within cities). For the $j$-th sample in the $i$-th city, we write the outcome $y_{ij}$ as a function of covariates $\boldsymbol x_{ij}$ (a list of variables including gender and age),
$$ y_{ij}=f(\boldsymbol x_{ij})+{u_i}+\epsilon_{ij},$$
where ${u_i}$ is the random intercept for each city, $j=1,\ldots,n_i$. If we assume $u_i$ and $\epsilon_{ij}$ follow normal distributions with mean 0 and variances $\sigma^2_u$ and $\sigma^2$, the empirical Bayesian (EB) estimate of $u_i$ is $$\hat{u}_i=\frac{\sigma^2_u}{\sigma^2_u+\sigma^2/n_i}(\bar{\mathbf{y}}_{i.}-f(\bar{\boldsymbol x}_{i.})),$$ where $\bar{\mathbf{y}}_{i.}=\frac{1}{n_i}\sum_i^{n_i}y_{ij}$, $f(\bar{\boldsymbol x}_{i.})=\frac{1}{n_i}\sum_i^{n_i}f(\boldsymbol x_{ij}).$ If we treat $(\bar{\mathbf{y}}_{i.}-f(\bar{\boldsymbol x}_{i.}))$ as the OLS (ordinary least square) estimate of $u_i$, then the EB estimate is a weighted sum of 0 and the OLS estimate, and the weight is an increasing function of the sample size $n_i$. The final prediction is $$\hat{f}(\boldsymbol x_{ij})+\hat{u}_{i},$$ where $\hat{f}(\boldsymbol x_{ij})$ is the estimate of the fixed effect from linear regression or machine learning method like random forest. This can be easily extended to any level of data, say samples nested in cities and then regions and then countries. Other than the tree-based methods, there is a method based on SVM.
For random-forest-based method, you can try MixRF() in our R package MixRF on CRAN.
|
How to deal with hierarchical / nested data in machine learning
|
I have been thinking about this problem for a while, with inspirations from the following questions on this site.
How can I include random effects into a randomForest?
Random forest on grouped data
R
|
How to deal with hierarchical / nested data in machine learning
I have been thinking about this problem for a while, with inspirations from the following questions on this site.
How can I include random effects into a randomForest?
Random forest on grouped data
Random Forests / adaboost in panel regression setting
Random forest for binary panel data
Modelling clustered data using boosted regression trees
Let me first introduce the mixed-effects models for hierarchical/nested data and start from a simple two-level model (samples nested within cities). For the $j$-th sample in the $i$-th city, we write the outcome $y_{ij}$ as a function of covariates $\boldsymbol x_{ij}$ (a list of variables including gender and age),
$$ y_{ij}=f(\boldsymbol x_{ij})+{u_i}+\epsilon_{ij},$$
where ${u_i}$ is the random intercept for each city, $j=1,\ldots,n_i$. If we assume $u_i$ and $\epsilon_{ij}$ follow normal distributions with mean 0 and variances $\sigma^2_u$ and $\sigma^2$, the empirical Bayesian (EB) estimate of $u_i$ is $$\hat{u}_i=\frac{\sigma^2_u}{\sigma^2_u+\sigma^2/n_i}(\bar{\mathbf{y}}_{i.}-f(\bar{\boldsymbol x}_{i.})),$$ where $\bar{\mathbf{y}}_{i.}=\frac{1}{n_i}\sum_i^{n_i}y_{ij}$, $f(\bar{\boldsymbol x}_{i.})=\frac{1}{n_i}\sum_i^{n_i}f(\boldsymbol x_{ij}).$ If we treat $(\bar{\mathbf{y}}_{i.}-f(\bar{\boldsymbol x}_{i.}))$ as the OLS (ordinary least square) estimate of $u_i$, then the EB estimate is a weighted sum of 0 and the OLS estimate, and the weight is an increasing function of the sample size $n_i$. The final prediction is $$\hat{f}(\boldsymbol x_{ij})+\hat{u}_{i},$$ where $\hat{f}(\boldsymbol x_{ij})$ is the estimate of the fixed effect from linear regression or machine learning method like random forest. This can be easily extended to any level of data, say samples nested in cities and then regions and then countries. Other than the tree-based methods, there is a method based on SVM.
For random-forest-based method, you can try MixRF() in our R package MixRF on CRAN.
|
How to deal with hierarchical / nested data in machine learning
I have been thinking about this problem for a while, with inspirations from the following questions on this site.
How can I include random effects into a randomForest?
Random forest on grouped data
R
|
6,192
|
How to deal with hierarchical / nested data in machine learning
|
Given that you only have two variables and straightforward nesting, I would echo the comments of others mentioning a hierarchical Bayes model. You mention a preference for tree-based methods, but is there a particular reason for this? With a minimal number of predictors, I find that linearity is often a valid assumption that works well, and any model mis-specification could easily be checked via residual plots.
If you did have a large number of predictors, the RF example based on the EM approach mentioned by @Randel would certainly be an option. One other option I haven't seen yet is to use model-based boosting (available via the mboost package in R). Essentially, this approach allows you to estimate the functional form of your fixed-effects using various base learners (linear and non-linear), and the random effects estimates are approximated using a ridge-based penalty for all levels in that particular factor. This paper is a pretty nice tutorial (random effects base learners are discussed on page 11).
I took a look at your sample data, but it looks like it only has the random effects variables of City, Region, and Country. In this case, it would only be useful to calculate the Empirical Bayes estimates for those factors, independent of any predictors. That might actually be a good exercise to start with in general, as maybe the higher levels (Country, for example), have minimal variance explained in the outcome, and so it probably wouldn't be worthwhile to add them in your model.
|
How to deal with hierarchical / nested data in machine learning
|
Given that you only have two variables and straightforward nesting, I would echo the comments of others mentioning a hierarchical Bayes model. You mention a preference for tree-based methods, but is t
|
How to deal with hierarchical / nested data in machine learning
Given that you only have two variables and straightforward nesting, I would echo the comments of others mentioning a hierarchical Bayes model. You mention a preference for tree-based methods, but is there a particular reason for this? With a minimal number of predictors, I find that linearity is often a valid assumption that works well, and any model mis-specification could easily be checked via residual plots.
If you did have a large number of predictors, the RF example based on the EM approach mentioned by @Randel would certainly be an option. One other option I haven't seen yet is to use model-based boosting (available via the mboost package in R). Essentially, this approach allows you to estimate the functional form of your fixed-effects using various base learners (linear and non-linear), and the random effects estimates are approximated using a ridge-based penalty for all levels in that particular factor. This paper is a pretty nice tutorial (random effects base learners are discussed on page 11).
I took a look at your sample data, but it looks like it only has the random effects variables of City, Region, and Country. In this case, it would only be useful to calculate the Empirical Bayes estimates for those factors, independent of any predictors. That might actually be a good exercise to start with in general, as maybe the higher levels (Country, for example), have minimal variance explained in the outcome, and so it probably wouldn't be worthwhile to add them in your model.
|
How to deal with hierarchical / nested data in machine learning
Given that you only have two variables and straightforward nesting, I would echo the comments of others mentioning a hierarchical Bayes model. You mention a preference for tree-based methods, but is t
|
6,193
|
How to deal with hierarchical / nested data in machine learning
|
This is more of a comment or suggestion rather than an answer, but I think you ask an important question here. As someone who works exclusively with multilevel data, I can say that I have found very little about machine learning with multilevel data. However, Dan Martin, a recent PhD graduate in quantitative psychology at the University of Virginia, did his dissertation on the use of regression trees with multilevel data. Below is a link to an R package he wrote for some of these purposes:
https://github.com/dpmartin42/mleda/blob/master/README.md
Also, you can find his dissertation here:
http://dpmartin42.github.io/about.html
|
How to deal with hierarchical / nested data in machine learning
|
This is more of a comment or suggestion rather than an answer, but I think you ask an important question here. As someone who works exclusively with multilevel data, I can say that I have found very l
|
How to deal with hierarchical / nested data in machine learning
This is more of a comment or suggestion rather than an answer, but I think you ask an important question here. As someone who works exclusively with multilevel data, I can say that I have found very little about machine learning with multilevel data. However, Dan Martin, a recent PhD graduate in quantitative psychology at the University of Virginia, did his dissertation on the use of regression trees with multilevel data. Below is a link to an R package he wrote for some of these purposes:
https://github.com/dpmartin42/mleda/blob/master/README.md
Also, you can find his dissertation here:
http://dpmartin42.github.io/about.html
|
How to deal with hierarchical / nested data in machine learning
This is more of a comment or suggestion rather than an answer, but I think you ask an important question here. As someone who works exclusively with multilevel data, I can say that I have found very l
|
6,194
|
How to deal with hierarchical / nested data in machine learning
|
You can use a mixed effect model that models the ID variables as random effects. By doing so, you allow for information pooling: you use both data from the global average and from the group averages, and the less data you have per group, the more weight is given to the global average. If you want to use a machine learning model for the fixed effects part, you can, for instance, use tree-boosting.
The GPBoost library with Python and R packages builds on LightGBM and allows for combining tree-boosting and mixed effects models. Simply speaking it is an extension of linear mixed effects models where the fixed-effects are learned using tree-boosting. See this blog post and Sigrist (2020) for further information. Below is some sample R code on how to run the analysis in your case. Note that currently the R package is not yet on CRAN (hopefully soon) but the the Python package is available on PyPI.
Disclaimer: I am the author of the GPBoost library.
library(gpboost)
train <- data.frame(CountryID=c(1,1,1,1, 2,2,2,2, 3,3,3,3),
RegionID=c(1,1,1,2, 3,3,4,4, 5,5,5,5),
CityID=c(1,1,2,3, 4,5,6,6, 7,7,7,8),
Age=c(23,48,62,63, 25,41,45,19, 37,41,31,50),
Gender=factor(c("M","F","M","F", "M","F","M","F", "F","F","F","M")),
Income=c(31,42,71,65, 50,51,101,38, 47,50,55,23))
# Prepare data
X <- as.matrix(cbind(Gender=train[,"Gender"],Age=train[,"Age"]))# fixed effects data
group_data <- train[,c("CountryID","RegionID","CityID")]# grouping data for random effects
y <- train[,c("Income")]# response variable
# Define a random effects model
gp_model <- GPModel(group_data = group_data)
# Run boosting algorithm (this will not give meaningfull results as the data is too small)
bst <- gpboost(data = X, label = y, gp_model = gp_model, verbose = -1,
objective = "regression_l2", nrounds=10, learning_rate=0.1)
# Show estimated variance parameters
summary(gp_model)
# A linear mixed effects model also has problems with this small data and some of the variances are 0
gp_model <- fitGPModel(group_data=group_data, y=y, X=cbind(Intercept = rep(1,length(y)),X))
summary(gp_model)
# Or the same thing using the lme4 package
library(lme4)
mod <- lmer(Income ~ Age + Gender + (1|CountryID) + (1|RegionID) + (1|CityID), data=train, REML=FALSE)
summary(mod)
|
How to deal with hierarchical / nested data in machine learning
|
You can use a mixed effect model that models the ID variables as random effects. By doing so, you allow for information pooling: you use both data from the global average and from the group averages,
|
How to deal with hierarchical / nested data in machine learning
You can use a mixed effect model that models the ID variables as random effects. By doing so, you allow for information pooling: you use both data from the global average and from the group averages, and the less data you have per group, the more weight is given to the global average. If you want to use a machine learning model for the fixed effects part, you can, for instance, use tree-boosting.
The GPBoost library with Python and R packages builds on LightGBM and allows for combining tree-boosting and mixed effects models. Simply speaking it is an extension of linear mixed effects models where the fixed-effects are learned using tree-boosting. See this blog post and Sigrist (2020) for further information. Below is some sample R code on how to run the analysis in your case. Note that currently the R package is not yet on CRAN (hopefully soon) but the the Python package is available on PyPI.
Disclaimer: I am the author of the GPBoost library.
library(gpboost)
train <- data.frame(CountryID=c(1,1,1,1, 2,2,2,2, 3,3,3,3),
RegionID=c(1,1,1,2, 3,3,4,4, 5,5,5,5),
CityID=c(1,1,2,3, 4,5,6,6, 7,7,7,8),
Age=c(23,48,62,63, 25,41,45,19, 37,41,31,50),
Gender=factor(c("M","F","M","F", "M","F","M","F", "F","F","F","M")),
Income=c(31,42,71,65, 50,51,101,38, 47,50,55,23))
# Prepare data
X <- as.matrix(cbind(Gender=train[,"Gender"],Age=train[,"Age"]))# fixed effects data
group_data <- train[,c("CountryID","RegionID","CityID")]# grouping data for random effects
y <- train[,c("Income")]# response variable
# Define a random effects model
gp_model <- GPModel(group_data = group_data)
# Run boosting algorithm (this will not give meaningfull results as the data is too small)
bst <- gpboost(data = X, label = y, gp_model = gp_model, verbose = -1,
objective = "regression_l2", nrounds=10, learning_rate=0.1)
# Show estimated variance parameters
summary(gp_model)
# A linear mixed effects model also has problems with this small data and some of the variances are 0
gp_model <- fitGPModel(group_data=group_data, y=y, X=cbind(Intercept = rep(1,length(y)),X))
summary(gp_model)
# Or the same thing using the lme4 package
library(lme4)
mod <- lmer(Income ~ Age + Gender + (1|CountryID) + (1|RegionID) + (1|CityID), data=train, REML=FALSE)
summary(mod)
|
How to deal with hierarchical / nested data in machine learning
You can use a mixed effect model that models the ID variables as random effects. By doing so, you allow for information pooling: you use both data from the global average and from the group averages,
|
6,195
|
How to deal with hierarchical / nested data in machine learning
|
The function RFcluster() from the gamclass package for R "adapts random forests to work (albeit clumsily and inefficiently) with clustered categorical outcome data". The following example is from the help page for RFcluster:
library(randomForest)
library(gamclass)
data(mlbench::Vowel)
RFcluster(formula=Class ~., id = V1, data = Vowel, nfold = 15,
tree=500, progress=TRUE, printit = TRUE, seed = 29)
This returns an OOB accuracy (where the "bags" are bags of speakers, not bags of individual speaker samples), that my machine gives as 0.57 .
|
How to deal with hierarchical / nested data in machine learning
|
The function RFcluster() from the gamclass package for R "adapts random forests to work (albeit clumsily and inefficiently) with clustered categorical outcome data". The following example is from the
|
How to deal with hierarchical / nested data in machine learning
The function RFcluster() from the gamclass package for R "adapts random forests to work (albeit clumsily and inefficiently) with clustered categorical outcome data". The following example is from the help page for RFcluster:
library(randomForest)
library(gamclass)
data(mlbench::Vowel)
RFcluster(formula=Class ~., id = V1, data = Vowel, nfold = 15,
tree=500, progress=TRUE, printit = TRUE, seed = 29)
This returns an OOB accuracy (where the "bags" are bags of speakers, not bags of individual speaker samples), that my machine gives as 0.57 .
|
How to deal with hierarchical / nested data in machine learning
The function RFcluster() from the gamclass package for R "adapts random forests to work (albeit clumsily and inefficiently) with clustered categorical outcome data". The following example is from the
|
6,196
|
How to deal with hierarchical / nested data in machine learning
|
R package glmertree allows for fitting decision trees to multilevel and longitudinal data (which would otherwise be modeled with a mixed-effects model).
It allows for specifying a random effects structure, and partitions the dataset into subgroups using level-I, or higher-level predictors.
For further reference, see the package vignette (tutorial): https://cran.r-project.org/web/packages/glmertree/vignettes/glmertree.pdf.
Fokkema, M., Smits, N., Zeileis, A., Hothorn, T., & Kelderman, H. (2018). Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees. Behavior research methods, 50(5), 2016-2034. https://doi.org/10.3758/s13428-017-0971-x
Fokkema, M., Edbrooke-Childs, J., & Wolpert, M. (2021). Generalized linear mixed-model (GLMM) trees: A flexible decision-tree method for multilevel and longitudinal data. Psychotherapy Research, 31(3), 329-341. https://doi.org/10.1080/10503307.2020.1785037
|
How to deal with hierarchical / nested data in machine learning
|
R package glmertree allows for fitting decision trees to multilevel and longitudinal data (which would otherwise be modeled with a mixed-effects model).
It allows for specifying a random effects struc
|
How to deal with hierarchical / nested data in machine learning
R package glmertree allows for fitting decision trees to multilevel and longitudinal data (which would otherwise be modeled with a mixed-effects model).
It allows for specifying a random effects structure, and partitions the dataset into subgroups using level-I, or higher-level predictors.
For further reference, see the package vignette (tutorial): https://cran.r-project.org/web/packages/glmertree/vignettes/glmertree.pdf.
Fokkema, M., Smits, N., Zeileis, A., Hothorn, T., & Kelderman, H. (2018). Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees. Behavior research methods, 50(5), 2016-2034. https://doi.org/10.3758/s13428-017-0971-x
Fokkema, M., Edbrooke-Childs, J., & Wolpert, M. (2021). Generalized linear mixed-model (GLMM) trees: A flexible decision-tree method for multilevel and longitudinal data. Psychotherapy Research, 31(3), 329-341. https://doi.org/10.1080/10503307.2020.1785037
|
How to deal with hierarchical / nested data in machine learning
R package glmertree allows for fitting decision trees to multilevel and longitudinal data (which would otherwise be modeled with a mixed-effects model).
It allows for specifying a random effects struc
|
6,197
|
How to deal with hierarchical / nested data in machine learning
|
You may want to have a look at metboost: Miller PJ et al. metboost: Exploratory regression analysis with hierarchically clustered data.arXiv:1702.03994
Quote from the abstract: We propose an extension to boosted decision decision trees called metboost for hierarchically clustered data. It works by constraining the structure of each tree to be the same across groups, but allowing the terminal node means to differ. This allows predictors and split
points to lead to different predictions within each group, and approximates nonlinear group specific effects. Importantly, metboost remains computationally
feasible for thousands of observations and hundreds of predictors that may contain missing values.
It is implemented in the R package mvtboost
|
How to deal with hierarchical / nested data in machine learning
|
You may want to have a look at metboost: Miller PJ et al. metboost: Exploratory regression analysis with hierarchically clustered data.arXiv:1702.03994
Quote from the abstract: We propose an extension
|
How to deal with hierarchical / nested data in machine learning
You may want to have a look at metboost: Miller PJ et al. metboost: Exploratory regression analysis with hierarchically clustered data.arXiv:1702.03994
Quote from the abstract: We propose an extension to boosted decision decision trees called metboost for hierarchically clustered data. It works by constraining the structure of each tree to be the same across groups, but allowing the terminal node means to differ. This allows predictors and split
points to lead to different predictions within each group, and approximates nonlinear group specific effects. Importantly, metboost remains computationally
feasible for thousands of observations and hundreds of predictors that may contain missing values.
It is implemented in the R package mvtboost
|
How to deal with hierarchical / nested data in machine learning
You may want to have a look at metboost: Miller PJ et al. metboost: Exploratory regression analysis with hierarchically clustered data.arXiv:1702.03994
Quote from the abstract: We propose an extension
|
6,198
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
|
If the bootstrapping procedure and the formation of the confidence interval were performed correctly, it means the same as any other confidence interval. From a frequentist perspective, a 95% CI implies that if the entire study were repeated identically ad infinitum, 95% of such confidence intervals formed in this manner will include the true value. Of course, in your study, or in any given individual study, the confidence interval either will include the true value or not, but you won't know which. To understand these ideas further, it may help you to read my answer here: Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Regarding your further questions, the 'true value' refers to the actual parameter of the relevant population. (Samples don't have parameters, they have statistics; e.g., the sample mean, $\bar x$, is a sample statistic, but the population mean, $\mu$, is a population parameter.) As to how we know this, in practice we don't. You are correct that we are relying on some assumptions--we always are. If those assumptions are correct, it can be proven that the properties hold. This was the point of Efron's work back in the late 1970's and early 1980's, but the math is difficult for most people to follow. For a somewhat mathematical explanation of the bootstrap, see @StasK's answer here: Explaining to laypeople why bootstrapping works . For a quick demonstration short of the math, consider the following simulation using R:
# a function to perform bootstrapping
boot.mean.sampling.distribution = function(raw.data, B=1000){
# this function will take 1,000 (by default) bootsamples calculate the mean of
# each one, store it, & return the bootstrapped sampling distribution of the mean
boot.dist = vector(length=B) # this will store the means
N = length(raw.data) # this is the N from your data
for(i in 1:B){
boot.sample = sample(x=raw.data, size=N, replace=TRUE)
boot.dist[i] = mean(boot.sample)
}
boot.dist = sort(boot.dist)
return(boot.dist)
}
# simulate bootstrapped CI from a population w/ true mean = 0 on each pass through
# the loop, we will get a sample of data from the population, get the bootstrapped
# sampling distribution of the mean, & see if the population mean is included in the
# 95% confidence interval implied by that sampling distribution
set.seed(00) # this makes the simulation reproducible
includes = vector(length=1000) # this will store our results
for(i in 1:1000){
sim.data = rnorm(100, mean=0, sd=1)
boot.dist = boot.mean.sampling.distribution(raw.data=sim.data)
includes[i] = boot.dist[25]<0 & 0<boot.dist[976]
}
mean(includes) # this tells us the % of CIs that included the true mean
[1] 0.952
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
|
If the bootstrapping procedure and the formation of the confidence interval were performed correctly, it means the same as any other confidence interval. From a frequentist perspective, a 95% CI impl
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
If the bootstrapping procedure and the formation of the confidence interval were performed correctly, it means the same as any other confidence interval. From a frequentist perspective, a 95% CI implies that if the entire study were repeated identically ad infinitum, 95% of such confidence intervals formed in this manner will include the true value. Of course, in your study, or in any given individual study, the confidence interval either will include the true value or not, but you won't know which. To understand these ideas further, it may help you to read my answer here: Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Regarding your further questions, the 'true value' refers to the actual parameter of the relevant population. (Samples don't have parameters, they have statistics; e.g., the sample mean, $\bar x$, is a sample statistic, but the population mean, $\mu$, is a population parameter.) As to how we know this, in practice we don't. You are correct that we are relying on some assumptions--we always are. If those assumptions are correct, it can be proven that the properties hold. This was the point of Efron's work back in the late 1970's and early 1980's, but the math is difficult for most people to follow. For a somewhat mathematical explanation of the bootstrap, see @StasK's answer here: Explaining to laypeople why bootstrapping works . For a quick demonstration short of the math, consider the following simulation using R:
# a function to perform bootstrapping
boot.mean.sampling.distribution = function(raw.data, B=1000){
# this function will take 1,000 (by default) bootsamples calculate the mean of
# each one, store it, & return the bootstrapped sampling distribution of the mean
boot.dist = vector(length=B) # this will store the means
N = length(raw.data) # this is the N from your data
for(i in 1:B){
boot.sample = sample(x=raw.data, size=N, replace=TRUE)
boot.dist[i] = mean(boot.sample)
}
boot.dist = sort(boot.dist)
return(boot.dist)
}
# simulate bootstrapped CI from a population w/ true mean = 0 on each pass through
# the loop, we will get a sample of data from the population, get the bootstrapped
# sampling distribution of the mean, & see if the population mean is included in the
# 95% confidence interval implied by that sampling distribution
set.seed(00) # this makes the simulation reproducible
includes = vector(length=1000) # this will store our results
for(i in 1:1000){
sim.data = rnorm(100, mean=0, sd=1)
boot.dist = boot.mean.sampling.distribution(raw.data=sim.data)
includes[i] = boot.dist[25]<0 & 0<boot.dist[976]
}
mean(includes) # this tells us the % of CIs that included the true mean
[1] 0.952
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
If the bootstrapping procedure and the formation of the confidence interval were performed correctly, it means the same as any other confidence interval. From a frequentist perspective, a 95% CI impl
|
6,199
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
|
What you are saying is that there is no need to find confidence interval from bootstrapped resamples.
If you are satisfied with the statistic (sample mean or sample proportion) obtained from bootstrapped resamples, do not find any confidence interval and so, no question of interpretation.
But if you are not satisfied with the statistic obtained from bootstrapped resamples or satisfied but still want to find the confidence interval, then the interpretation for such confidence interval is same as any other confidence interval.
It's because when your bootstrapped resamples are exactly representing (or assumed to be so) the original population, then where is the need of confidence interval? The statistic from the bootstrapped resamples is the original population parameter itself but when you do not consider the statistic as the original population parameter, then there is a need to find the confidence interval. So, it's all about how you consider.
Let's say you calculated 95% confidence interval from bootstrapped resamples. Now the interpretation is: "95% of the times, this bootstrap method accurately results in a confidence interval containing the true population parameter".
(This is what I think. Correct me if there are any mistakes).
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
|
What you are saying is that there is no need to find confidence interval from bootstrapped resamples.
If you are satisfied with the statistic (sample mean or sample proportion) obtained from bootstrap
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
What you are saying is that there is no need to find confidence interval from bootstrapped resamples.
If you are satisfied with the statistic (sample mean or sample proportion) obtained from bootstrapped resamples, do not find any confidence interval and so, no question of interpretation.
But if you are not satisfied with the statistic obtained from bootstrapped resamples or satisfied but still want to find the confidence interval, then the interpretation for such confidence interval is same as any other confidence interval.
It's because when your bootstrapped resamples are exactly representing (or assumed to be so) the original population, then where is the need of confidence interval? The statistic from the bootstrapped resamples is the original population parameter itself but when you do not consider the statistic as the original population parameter, then there is a need to find the confidence interval. So, it's all about how you consider.
Let's say you calculated 95% confidence interval from bootstrapped resamples. Now the interpretation is: "95% of the times, this bootstrap method accurately results in a confidence interval containing the true population parameter".
(This is what I think. Correct me if there are any mistakes).
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
What you are saying is that there is no need to find confidence interval from bootstrapped resamples.
If you are satisfied with the statistic (sample mean or sample proportion) obtained from bootstrap
|
6,200
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
|
We are referring to the true parameter of the original population. It is possible to do this assuming that the data were drawn randomly from the original population -- in that case, there are mathematical arguments showing that the bootstrap procedures will give a valid confidence interval, at least as the size of the dataset becomes sufficiently large.
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
|
We are referring to the true parameter of the original population. It is possible to do this assuming that the data were drawn randomly from the original population -- in that case, there are mathemat
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
We are referring to the true parameter of the original population. It is possible to do this assuming that the data were drawn randomly from the original population -- in that case, there are mathematical arguments showing that the bootstrap procedures will give a valid confidence interval, at least as the size of the dataset becomes sufficiently large.
|
What is the meaning of a confidence interval taken from bootstrapped resamples?
We are referring to the true parameter of the original population. It is possible to do this assuming that the data were drawn randomly from the original population -- in that case, there are mathemat
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.