The relationship between the gamma distribution and the normal distribution
Let us address the question posed: "This is all somewhat mysterious to me. Is the normal distribution fundamental to the derivation of the gamma distribution...?" No mystery really; it is simply that the normal distribution and the gamma distribution are members, among others, of the exponential family of distributions, a family defined by the ability to convert between equational forms by substitution of parameters and/or variables. As a consequence, there are many conversions by substitution between distributions, a few of which are summarized in the figure below.
Leemis, Lawrence M.; McQueston, Jacquelyn T. (February 2008). "Univariate Distribution Relationships" (PDF). The American Statistician. 62 (1): 45–53. doi:10.1198/000313008x270448.
Here are two normal and gamma distribution relationships in greater detail (among an unknown number of others, e.g., via the chi-squared and beta distributions).
First, a more direct relationship between the gamma distribution (GD) and the normal distribution (ND) with mean zero follows. Simply put, the GD becomes normal in shape as its shape parameter is allowed to increase. Proving that this is the case is more difficult. For the GD,
$$\text{GD}(z;a,b)=\begin{cases}
\dfrac{b^{-a} z^{a-1} e^{-z/b}}{\Gamma (a)} & z>0 \\
0 & \text{otherwise} \\
\end{cases}\,.$$
As the GD shape parameter $a\rightarrow \infty$, the GD becomes more symmetric and normal in shape. However, because the mean increases with increasing $a$, we have to left-shift the GD by $(a-1) \sqrt{\dfrac{1}{a}}\,k$ to hold it stationary. Finally, if we wish to maintain the same standard deviation for our shifted GD, we have to decrease the scale parameter $b$ in proportion to $\sqrt{\dfrac{1}{a}}$.
To wit, to transform a GD to a limiting-case ND, we set the standard deviation to be a constant $k$ by letting $b=\sqrt{\dfrac{1}{a}}\,k$ and shift the GD to the left to have a mode of zero by substituting $z=(a-1) \sqrt{\dfrac{1}{a}}\,k+x\,.$ Then
$$\text{GD}\left((a-1) \sqrt{\frac{1}{a}} k+x;\ a,\ \sqrt{\frac{1}{a}} k\right)=\begin{cases}
\dfrac{\left(\dfrac{k}{\sqrt{a}}\right)^{-a} e^{-\dfrac{\sqrt{a} x}{k}-a+1} \left(\dfrac{(a-1) k}{\sqrt{a}}+x\right)^{a-1}}{\Gamma (a)} & x>\dfrac{k(1-a)}{\sqrt{a}} \\
0 & \text{otherwise} \\
\end{cases}\,.$$
Note that, in the limit as $a\rightarrow\infty$, the most negative value of $x$ for which this GD is nonzero tends to $-\infty$; that is, the semi-infinite GD support becomes infinite. Taking the limit as $a\rightarrow \infty$ of the reparameterized GD, we find
$$\lim_{a\to \infty } \, \frac{\left(\frac{k}{\sqrt{a}}\right)^{-a} e^{-\frac{\sqrt{a} x}{k}-a+1} \left(\frac{(a-1) k}{\sqrt{a}}+x\right)^{a-1}}{\Gamma (a)}=\dfrac{e^{-\dfrac{x^2}{2 k^2}}}{\sqrt{2 \pi } k}=\text{ND}\left(x;0,k^2\right)$$
Graphically, for $k=2$ and $a=1,2,4,8,16,32,64$, the GD is plotted in blue and the limiting $\text{ND}\left(x;0,\ 2^2\right)$ in orange below.
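The limit can also be checked numerically. The sketch below (assuming SciPy is available; the values of $a$ and the tolerance are illustrative) evaluates the shifted, rescaled GD against $\text{ND}(x;0,k^2)$ and shows the maximum discrepancy shrinking as $a$ grows:

```python
import numpy as np
from scipy.stats import gamma, norm

k = 2.0
x = np.linspace(-4.0, 4.0, 201)

def shifted_gd(x, a, k):
    # GD with scale b = k/sqrt(a), left-shifted by its mode (a - 1)*b
    b = k / np.sqrt(a)
    return gamma.pdf((a - 1) * b + x, a, scale=b)

errs = [np.max(np.abs(shifted_gd(x, a, k) - norm.pdf(x, 0, k)))
        for a in (8, 64, 4096)]
print(errs)  # the discrepancy decreases as a grows
```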
Second, let us make the point that, due to the similarity of form between these distributions, one can pretty much develop relationships between the gamma and normal distributions by pulling them out of thin air. To wit, we next develop an "unfolded" gamma-distribution generalization of the normal distribution.
Note first that it is the semi-infinite support of the gamma distribution that impedes a more direct relationship with the normal distribution. However, that impediment can be removed when considering the half-normal distribution, which also has semi-infinite support. Thus, one can generalize the normal distribution (ND) by first folding it to be half-normal (HND), relating that to the generalized gamma distribution (GD), and then, for our tour de force, "unfolding" both (HND and GD) to make a generalized ND (a GND), as follows.
The generalized gamma distribution
$$\text{GD}\left(x;\alpha ,\beta ,\gamma ,\mu \right)=\begin{cases}
\dfrac{\gamma e^{-\left(\dfrac{x-\mu }{\beta }\right)^{\gamma }} \left(\dfrac{x-\mu }{\beta }\right)^{\alpha \gamma -1}}{\beta \,\Gamma (\alpha )} & x>\mu \\
0 & \text{otherwise} \\
\end{cases}\,,$$
can be reparameterized as the half-normal distribution,
$$\text{GD}\left(x;\frac{1}{2},\frac{\sqrt{\pi }}{\theta },2,0 \right)=\begin{cases}
\dfrac{2 \theta e^{-\dfrac{\theta ^2 x^2}{\pi }}}{\pi } & x>0 \\
0 & \text{otherwise} \\
\end{cases}=\text{HND}(x;\theta)\,.$$
Note that $\theta=\frac{\sqrt{\pi}}{\sigma\sqrt{2}}.$ Thus, $$\text{ND}\left(x;0,\sigma^2\right)=\frac{1}{2}\text{HND}(x;\theta)+\frac{1}{2}\text{HND}(-x;\theta)=\frac{1}{2}\text{GD}\left(x;\frac{1}{2},\frac{\sqrt{\pi }}{\theta },2,0 \right)+\frac{1}{2}\text{GD}\left(-x;\frac{1}{2},\frac{\sqrt{\pi }}{\theta },2,0 \right)\,,$$
which implies that
$$
\begin{align}
\text{GND}(x;\mu,\alpha,\beta) &=
\frac{1}{2}\text{GD}\left(x;\frac{1}{\beta},\alpha,\beta,\mu \right)+\frac{1}{2}\text{GD}\left(2\mu-x;\frac{1}{\beta},\alpha,\beta,\mu \right)\\
&=
\frac{\beta e^{-\left(\dfrac{\left|x-\mu\right|}{\alpha }\right)^{\beta}}}{2 \alpha \Gamma \left(\dfrac{1}{\beta }\right)}
\end{align}
\,,$$
where the reflection is taken about the location $\mu$ (for $\mu=0$ this is just $-x$, as above),
is a generalization of the normal distribution, where $\mu$ is the location, $\alpha>0$ is the scale, and $\beta>0$ is the shape; $\beta=2$ yields the normal distribution. It includes the Laplace distribution when $\beta=1$. As $\beta\rightarrow\infty$, the density converges pointwise to a uniform density on $(\mu-\alpha,\mu+\alpha)$. Below, the generalized normal distribution is plotted for $\alpha =\frac{\sqrt{\pi} }{2}$ and $\beta=1/2,1,4$ in blue, with the normal case $\alpha =\frac{\sqrt{\pi} }{2},\,\beta=2$ in orange.
The above can be seen as the generalized normal distribution, Version 1; in different parameterizations it is known as the exponential power distribution or the generalized error distribution, and it is one of several generalized normal distributions.
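These identities are easy to sanity-check with SciPy, whose `gengamma` and `gennorm` distributions use the same parameterizations as the GD and GND above (a sketch; the values of $\sigma$, $\mu$, and $\alpha$ are arbitrary):

```python
import numpy as np
from scipy.stats import gengamma, gennorm, norm, laplace

x = np.linspace(-5.0, 5.0, 100)   # grid chosen so x = 0 is not hit exactly

# Unfolding: ND(x; 0, sigma^2) = (1/2) HND(x) + (1/2) HND(-x),
# with HND(x; theta) = GD(x; 1/2, sqrt(pi)/theta, 2, 0)
sigma = 1.5
theta = np.sqrt(np.pi) / (sigma * np.sqrt(2))
hnd = gengamma(a=0.5, c=2, scale=np.sqrt(np.pi) / theta)
nd = 0.5 * hnd.pdf(x) + 0.5 * hnd.pdf(-x)
assert np.allclose(nd, norm.pdf(x, 0, sigma))

# GND special cases: beta = 2 is normal, beta = 1 is Laplace
mu, alpha = 0.0, 1.3
assert np.allclose(gennorm.pdf(x, 2, loc=mu, scale=alpha),
                   norm.pdf(x, mu, alpha / np.sqrt(2)))
assert np.allclose(gennorm.pdf(x, 1, loc=mu, scale=alpha),
                   laplace.pdf(x, loc=mu, scale=alpha))
```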
The relationship between the gamma distribution and the normal distribution
The derivation of the chi-squared distribution from the normal distribution is closely analogous to the derivation of the gamma distribution from the exponential distribution.
We should be able to generalize this:
If the $X_i$ are independent variables from a generalized normal distribution with power coefficient $m$, then $Y = \sum_{i=1}^n {X_i}^m$ can be related to some scaled chi-squared distribution (with "degrees of freedom" equal to $n/m$).
The analogy is as follows:
Normal and Chi-squared distributions relate to the sum of squares
The joint density of multiple independent standard normal variables depends on $\sum x_i^2$:
$f(x_1, x_2, ... ,x_n) = \frac{\exp \left( {-0.5\sum_{i=1}^{n}{x_i}^2}\right)}{(2\pi)^{n/2}}$
If $X_i \sim N(0,1)$
then $\sum_{i=1}^n {X_i}^2 \sim \chi^2(n)$
Exponential and gamma distributions relate to the regular sum
The joint density of multiple independent exponentially distributed variables depends on $\sum x_i$:
$f(x_1, x_2, ... ,x_n) = \frac{\exp \left( -\lambda\sum_{i=1}^{n}{x_i} \right)}{\lambda^{-n}}$
If $X_i \sim Exp(\lambda)$
then $\sum_{i=1}^n X_i \sim \text{Gamma}(n,\lambda)$
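Both analogies can be checked by simulation (a sketch assuming SciPy; the sample size, seed, and rate are arbitrary). The Kolmogorov–Smirnov distance between each simulated sum and its claimed distribution should be tiny:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, lam = 5, 100_000, 2.0

# sum of n squared standard normals vs. chi-squared with n degrees of freedom
s_sq = (rng.standard_normal((reps, n)) ** 2).sum(axis=1)
ks_chi2 = stats.kstest(s_sq, stats.chi2(df=n).cdf).statistic

# sum of n Exp(lam) variables vs. Gamma(n, lam)
s_exp = rng.exponential(scale=1 / lam, size=(reps, n)).sum(axis=1)
ks_gamma = stats.kstest(s_exp, stats.gamma(a=n, scale=1 / lam).cdf).statistic

print(ks_chi2, ks_gamma)  # both Kolmogorov-Smirnov distances are tiny
```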
The derivation can be done by a change of variables, integrating not over all of $x_1,x_2,...,x_n$ but instead only over the summed term (this is what Pearson did in 1900). It unfolds very similarly in both cases.
For the $\chi^2$ distribution:
$$\begin{array}{rcl}
f_{\chi^2(n)}(s) ds &=& \frac{e^{-s/2}}{\left( 2\pi \right)^{n/2}} \frac{dV}{ds} ds\\
&=& \frac{e^{-s/2}}{\left( 2\pi \right)^{n/2}} \frac{\pi^{n/2}}{\Gamma(n/2)}s^{n/2-1} ds \\
&=& \frac{1}{2^{n/2}\Gamma(n/2)}s^{n/2-1}e^{-s/2} ds \\
\end{array}$$
where $V(s) = \frac{\pi^{n/2}}{\Gamma (n/2+1)}s^{n/2}$ is the $n$-dimensional volume of an $n$-ball with squared radius $s$.
For the gamma distribution:
$$\begin{array}{rcl}
f_{G(n,\lambda)}(s) ds &=& \frac{e^{-\lambda s}}{\lambda^{-n}} \frac{dV}{ds} ds\\
&=& \frac{e^{-\lambda s}}{\lambda^{-n}} n \frac{s^{n-1}}{n!}ds \\
&=& \frac{\lambda^{n}}{ \Gamma(n)} s^{n-1} e^{-\lambda s} ds \\
\end{array}$$
where $V(s) = \frac{s^n}{n!}$ is the $n$-dimensional volume of the simplex with $x_i > 0$ and $\sum x_i < s$.
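One can confirm numerically that both volume-derivative expressions reproduce the standard densities (a sketch assuming SciPy; $n$ and $\lambda$ are arbitrary):

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.stats import chi2, gamma

n, lam = 5, 2.0
s = np.linspace(0.1, 10.0, 50)

# chi-squared: f(s) = e^{-s/2}/(2 pi)^{n/2} * dV/ds,
# with dV/ds = pi^{n/2} s^{n/2-1} / Gamma(n/2)  (n-ball)
dVds_ball = np.pi ** (n / 2) * s ** (n / 2 - 1) / Gamma(n / 2)
f_chi2 = np.exp(-s / 2) / (2 * np.pi) ** (n / 2) * dVds_ball
assert np.allclose(f_chi2, chi2.pdf(s, df=n))

# gamma: f(s) = lam^n e^{-lam s} * dV/ds,
# with dV/ds = n s^{n-1}/n! = s^{n-1}/Gamma(n)  (simplex)
dVds_simplex = s ** (n - 1) / Gamma(n)
f_gamma = np.exp(-lam * s) * lam ** n * dVds_simplex
assert np.allclose(f_gamma, gamma.pdf(s, a=n, scale=1 / lam))
```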
The gamma distribution can be seen as the waiting time $Y$ for the $n$-th event in a Poisson process, which is distributed as the sum of $n$ exponentially distributed variables.
As Alecos Papadopoulos already noted, there is no deeper connection that makes sums of squared normal variables 'a good model for waiting time'. The gamma distribution is the distribution of a sum of generalized-normal-distributed variables. That is how the two come together.
But the type of sum and the type of variables may differ. While the gamma distribution, when derived from the exponential distribution ($m=1$), inherits the interpretation of the exponential distribution (waiting time), you cannot go in reverse back to a sum of squared Gaussian variables and use that same interpretation.
The density for a waiting time falls off exponentially, and the density for a Gaussian error falls off exponentially (with a square). That is another way to see the two connected.
What is the difference between dropout and drop connect?
DropOut and DropConnect are both methods intended to prevent "co-adaptation" of units in a neural network. In other words, we want units to independently extract features from their inputs instead of relying on other neurons to do so.
Suppose we have a multilayered feedforward network like this one (the topology doesn't really matter). We're worried about the yellow hidden units in the middle layer co-adapting.
DropOut
To apply DropOut, we randomly select a subset of the units and clamp their output to zero, regardless of the input; this effectively removes those units from the model. A different subset of units is randomly selected every time we present a training example.
Below are two possible network configurations. On the first presentation (left), the 1st and 3rd units are disabled, but the 2nd and 3rd units have been randomly selected on a subsequent presentation. At test time, we use the complete network but rescale the weights to compensate for the fact that all of them can now become active (e.g., if you drop half of the nodes, the weights should also be halved).
DropConnect
DropConnect works similarly, except that we disable individual weights (i.e., set them to zero), instead of nodes, so a node can remain partially active. Schematically, it looks like this:
Comparison
These methods both work because they effectively let you train several models at the same time, then average across them for testing. For example, the yellow layer has four nodes, and thus 16 possible DropOut states (all enabled, #1 disabled, #1 and #2 disabled, etc).
DropConnect is a generalization of DropOut because it produces even more possible models, since there are almost always more connections than units. However, you can get similar outcomes on an individual trial. For example, the DropConnect network on the right has effectively dropped Unit #2 since all of the incoming connections have been removed.
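The difference is easy to see in code. A minimal NumPy sketch (the layer sizes and keep probability are arbitrary): DropOut masks a unit's entire output, while DropConnect masks individual entries of the weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
keep = 0.5                          # keep probability
h = rng.standard_normal(4)          # outputs of the 4 hidden units
W = rng.standard_normal((4, 3))     # weights into the next layer

# DropOut: clamp whole units to zero, then propagate
unit_mask = rng.random(4) < keep
y_dropout = (h * unit_mask) @ W

# DropConnect: zero individual weights; a unit can stay partially active
weight_mask = rng.random((4, 3)) < keep
y_dropconnect = h @ (W * weight_mask)

# At test time (DropOut), use the full network with rescaled weights, e.g. W * keep
```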
Further Reading
The original papers are pretty accessible and contain more details and empirical results.
DropOut: Hinton et al., 2012; Srivastava et al., 2014 (JMLR)
DropConnect: Wan et al., 2013
What is the difference between dropout and drop connect?
Yes, but they are slightly different in terms of how the weights are dropped.
These are the formulas of DropConnect (left) and dropout (right).
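The figure itself does not survive here; as given in the DropConnect paper (Wan et al., 2013), with activation function $a(\cdot)$, weights $W$, input $v$, and Bernoulli masks $M$ and $m$, the two rules are

$$r = a\big((M \star W)\,v\big) \quad \text{(DropConnect)}, \qquad r = m \star a(W v) \quad \text{(dropout)},$$

where $\star$ denotes element-wise multiplication.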
So dropout applies a mask to the activations, while DropConnect applies a mask to the weights.
The DropConnect paper says that it is a generalization of dropout in the sense that
DropConnect is the generalization of Dropout in which each connection, instead of each output unit as in Dropout, can be dropped with probability $p$.
What is the difference between dropout and drop connect?
Based on what I saw in TensorFlow 2.5: following the call path from tf.keras.layers.Dropout to the functions dropout and dropout_v2 in tf.python.ops.nn_ops.py (lines 5059–5241), the code does not adjust the shape of the input layer; instead it multiplies by a mask of 0s and 1s, which preserves the speed of a contiguous vector/matrix/tensor and is compatible with DropConnect. In other words, although this is not a complete guarantee, the real implementation of Dropout is shaped to fit a real-world concern: training speed.
By the way, the idea of Dropout is just a general scheme (removing neuron units of a fully connected layer). Matt Krause has already explained it well.
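For reference, a minimal NumPy sketch of the masked-multiply scheme that tf.nn.dropout implements (inverted dropout: survivors are rescaled by $1/(1-\text{rate})$ at training time, so no rescaling is needed at test time); the function name and shapes here are illustrative:

```python
import numpy as np

def inverted_dropout(x, rate, rng):
    """Mask with 0/1 and rescale survivors; the input shape is unchanged."""
    keep = 1.0 - rate
    mask = (rng.random(x.shape) < keep).astype(x.dtype)
    return x * mask / keep

rng = np.random.default_rng(0)
x = np.ones((4, 5))
y = inverted_dropout(x, rate=0.5, rng=rng)
# y has the same shape as x; kept entries are scaled up, dropped entries are 0
```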
Is whitening always good?
Pre-whitening is a generalization of feature normalization: it makes the input components independent by transforming them with a transform of the input covariance matrix. I can't see why this might be a bad thing.
However, a quick search revealed "The Feasibility of Data Whitening to Improve Performance of Weather Radar" (pdf) which reads:
In particular, whitening worked well in the case of the exponential ACF (which is in agreement with Monakov’s results) but less well in the case of the Gaussian one. After numerical experimentation, we found that the Gaussian case is numerically ill conditioned in the sense that the condition number (ratio of maximal to minimal eigenvalue) is extremely large for the Gaussian covariance matrix.
I'm not educated enough to comment on this. Maybe the answer to your question is that whitening is always good, but there are certain gotchas (e.g., with random data it won't work well if done via a Gaussian autocorrelation function).
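The ill-conditioning the paper reports is easy to reproduce. A sketch (assuming NumPy; the series length and correlation scale are arbitrary) builds Toeplitz covariance matrices from an exponential and a Gaussian ACF and compares their condition numbers:

```python
import numpy as np

n, ell = 50, 5.0
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

cov_exp = np.exp(-lags / ell)             # exponential ACF
cov_gauss = np.exp(-((lags / ell) ** 2))  # Gaussian ACF

print(np.linalg.cond(cov_exp))    # modest
print(np.linalg.cond(cov_gauss))  # astronomically large: near-singular
```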
Is whitening always good?
Firstly, I think that de-correlating and whitening are two separate procedures.
In order to de-correlate the data, we need to transform it so that the transformed data will
have a diagonal covariance matrix. This transform can be found by solving the eigenvalue
problem. We find the eigenvectors and associated eigenvalues of the covariance matrix ${\bf \Sigma} = {\bf X}{\bf X}'$ by solving
$${\bf \Sigma}{\bf \Phi} = {\bf \Phi} {\bf \Lambda}$$
where ${\bf \Lambda}$ is a diagonal matrix having the eigenvalues as its diagonal elements.
The matrix ${\bf \Phi}$ thus diagonalizes the covariance matrix of ${\bf X}$. The columns of ${\bf \Phi}$ are the eigenvectors of the covariance matrix.
We can also write the diagonalized covariance as:
$${\bf \Phi}' {\bf \Sigma} {\bf \Phi} = {\bf \Lambda} \tag{1}$$
So to de-correlate a single vector ${\bf x}_i$, we do:
$${\bf x}_i^* = {\bf \Phi}' {\bf x}_i \tag{2}$$
The diagonal elements (eigenvalues) in ${\bf \Lambda}$ may be the same or different. If we make them all the same, then this is called whitening the data. Since each eigenvalue determines the length of its associated eigenvector, the covariance will correspond to an ellipse when the data is not whitened, and to a sphere (having all dimensions the same length, or uniform) when the data is whitened. Whitening is performed as follows:
$${\bf \Lambda}^{-1/2} {\bf \Lambda} {\bf \Lambda}^{-1/2} = {\bf I}$$
Equivalently, substituting in $(1)$, we write:
$${\bf \Lambda}^{-1/2} {\bf \Phi}' {\bf \Sigma} {\bf \Phi} {\bf \Lambda}^{-1/2} = {\bf I}$$
Thus, to apply this whitening transform to ${\bf x}_i^*$ we simply multiply it by this scale factor, obtaining the whitened data point ${\bf x}_i^\dagger$:
$${\bf x}_i^{\dagger} = {\bf \Lambda}^{-1/2} {\bf x}_i^* = {\bf \Lambda}^{-1/2}{\bf \Phi}'{\bf x}_i \tag 3$$
Now the covariance of ${\bf x}_i^\dagger$ is not only diagonal, but also uniform (white), since the covariance of ${\bf x}_i^\dagger$, ${\bf E}({\bf x}_i^\dagger {{\bf x}_i^\dagger}') = {\bf I}$.
Following on from this, I can see two cases where this might not be useful. The first is rather trivial: it could happen that the scaling of data examples is somehow important in the inference problem you are looking at. Of course you could keep the eigenvalues as an additional set of features to get around this. The second is a computational issue: firstly, you have to compute the covariance matrix ${\bf \Sigma}$, which may be too large to fit in memory (if you have thousands of features) or take too long to compute; secondly, the eigenvalue decomposition is $O(n^3)$ in practice, which again is pretty horrible with a large number of features.
And finally, there is a common "gotcha" that people should be careful of. You must calculate the scaling factors on the training data only, and then use equations (2) and (3) to apply those same scaling factors to the test data; otherwise you are at risk of overfitting (you would be using information from the test set in the training process).
Source: http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf
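To make the recipe concrete, here is a NumPy sketch of equations (1)–(3) on toy data. One assumption to note: rows are treated as observations here, so the covariance is computed as ${\bf X}'{\bf X}/n$ for centred ${\bf X}$, the transpose of the ${\bf X}{\bf X}'$ convention above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated toy data: 500 observations (rows) of 3 features (columns).
X = rng.multivariate_normal(mean=[0.0, 0.0, 0.0],
                            cov=[[4.0, 1.5, 0.5],
                                 [1.5, 2.0, 0.3],
                                 [0.5, 0.3, 1.0]],
                            size=500)
Xc = X - X.mean(axis=0)                       # centre the data first

Sigma = Xc.T @ Xc / len(Xc)                   # covariance matrix
evals, Phi = np.linalg.eigh(Sigma)            # solves Sigma Phi = Phi Lambda

X_decorr = Xc @ Phi                           # equation (2): diagonal covariance
X_white = X_decorr / np.sqrt(evals)           # equation (3): identity covariance

cov_white = X_white.T @ X_white / len(X_white)
print(np.allclose(cov_white, np.eye(3)))      # True: whitened covariance is I
```

Per the "gotcha" above, the same ${\bf \Phi}$ and ${\bf \Lambda}^{-1/2}$ fitted on training data would then be reused unchanged on test data.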
|
7,708
|
Is whitening always good?
|
From http://cs231n.github.io/neural-networks-2/
One weakness of this transformation is that it can greatly exaggerate the noise in the data, since it stretches all dimensions (including the irrelevant dimensions of tiny variance that are mostly noise) to be of equal size in the input. This can in practice be mitigated by stronger smoothing...
Unfortunately I'm not educated enough to comment further on this.
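Still, the quoted weakness is easy to demonstrate. Here is a toy sketch (the feature scales are arbitrary, chosen only to make the effect obvious): one informative high-variance feature and one tiny-variance noise feature.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 10.0, size=(1000, 1))   # informative, high variance
noise = rng.normal(0.0, 1e-3, size=(1000, 1))    # pure noise, tiny variance
X = np.hstack([signal, noise])

evals, Phi = np.linalg.eigh(np.cov(X.T))         # eigendecomposition of covariance
Xw = (X - X.mean(axis=0)) @ Phi / np.sqrt(evals) # whitening transform

# Both columns now have unit variance: the noise direction has been
# stretched by roughly four orders of magnitude and looks as
# "important" as the signal.
print(Xw.std(axis=0, ddof=1))
```

This is the sense in which whitening "greatly exaggerates the noise": after the transform, nothing in the data distinguishes the noise direction from the informative one.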
|
7,709
|
What's the difference between "deep learning" and multilevel/hierarchical modeling?
|
Similarity
Fundamentally both types of algorithms were developed to answer one general question in machine learning applications:
Given predictors (factors) $x_1, x_2, \ldots, x_p$ - how to incorporate the interactions between these factors in order to increase the performance?
One way is to simply introduce new predictors: $x_{p+1} = x_1x_2, x_{p+2} = x_1x_3, \ldots$ But this proves to be a bad idea due to the huge number of parameters and the very specific type of interactions captured.
Both Multilevel modelling and Deep Learning algorithms answer this question by introducing a much smarter model of interactions. From this point of view they are very similar.
Difference
Now let me try to give my understanding on what is the great conceptual difference between them. In order to give some explanation, let's see the assumptions that we make in each of the models:
Multilevel modelling:$^1$ layers that reflect the data structure can be represented as a Bayesian Hierarchical Network. This network is fixed and usually comes from domain applications.
Deep Learning:$^2$ the data were generated by the interactions of many factors. The structure of interactions is not known, but can be represented as a layered factorisation: higher-level interactions are obtained by transforming lower-level representations.
The fundamental difference comes from the phrase "the structure of interactions is not known" in Deep Learning. We can assume some priors on the type of interaction, but the algorithm defines all the interactions during the learning procedure. On the other hand, we have to define the structure of interactions for Multilevel modelling (we only learn the parameters of the model afterwards).
Examples
For example, let's assume we are given three factors $x_1, x_2, x_3$ and we define $\{x_1\}$ and $\{x_2, x_3\}$ as different layers.
In the Multilevel modelling regression, for example, we will get the interactions $x_1 x_2$ and $x_1 x_3$, but we will never get the interaction $x_2 x_3$. Of course, partly the results will be affected by the correlation of the errors, but this is not that important for the example.
In Deep Learning, for example in multilayered Restricted Boltzmann machines (RBMs) with two hidden layers and a linear activation function, we will have all the possible polynomial interactions of degree less than or equal to three.
Common advantages and disadvantages
Multilevel modelling
(-) need to define the structure of interactions
(+) results are usually easier to interpret
(+) can apply statistics methods (evaluate confidence intervals, check hypotheses)
Deep learning
(-) requires a huge amount of data to train (and time for training as well)
(-) results are usually impossible to interpret (provided as a black box)
(+) no expert knowledge required
(+) once well-trained, usually outperforms most other general methods (not application specific)
Hope it will help!
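To make the "huge number of parameters" point concrete, here is a small sketch (the `pairwise_interactions` helper is hypothetical, purely for illustration) of the naive approach of adding every pairwise product as a predictor; with $p$ predictors it adds $\binom{p}{2}$ new columns, e.g. 4950 extra columns for $p = 100$.

```python
from itertools import combinations
import numpy as np

def pairwise_interactions(X):
    """Append every product x_i * x_j (i < j) as a new predictor column."""
    cols = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + cols)

X = np.arange(12.0).reshape(4, 3)        # 4 samples, p = 3 predictors
Xi = pairwise_interactions(X)
print(X.shape, Xi.shape)                 # (4, 3) (4, 6): 3 originals + C(3,2) products
```

Both multilevel models and deep networks avoid this blow-up by imposing structure on which interactions exist (fixed by the analyst in the former, learned in the latter).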
|
7,710
|
What's the difference between "deep learning" and multilevel/hierarchical modeling?
|
While this question/answer has been out there for a bit, I thought it might be helpful to clarify a few points in the answer. First, the phrase raised as a major distinction between hierarchical methods and deep neural networks, 'This network is fixed', is incorrect. Hierarchical methods are no more 'fixed' than the alternative, neural networks. See, for example, the paper Deep Learning with Hierarchical Convolutional Factor Analysis, Chen et al. I think you will also find that the requirement to define interactions is no longer a distinguishing point either. A couple of advantages not listed for hierarchical modeling are, from my experience, the significantly reduced problem of overfitting and the ability to handle both very large and very small training sets. A nitpick: when Bayesian hierarchical methods are used, confidence intervals and hypothesis testing are generally not statistical methods that would be applied.
|
7,711
|
Performing a statistical test after visualizing data - data dredging?
|
Briefly disagreeing with/giving a counterpoint to @ingolifs's answer: yes, visualizing your data is essential. But visualizing before deciding on the analysis leads you into Gelman and Loken's garden of forking paths. This is not the same as data-dredging or p-hacking, partly through intent (the GoFP is typically well-meaning) and partly because you may not run more than one analysis. But it is a form of snooping: because your analysis is data-dependent, it can lead you to false or overconfident conclusions.
You should in some way determine what your intended analysis is (e.g. "high quality houses should be higher in price") and write it down (or even officially preregister it) before looking at your data. (It's OK to look at your predictor variables in advance, just not the response variable(s); although if you really have no a priori ideas, then you don't even know which variables might be predictors and which might be responses.) If your data suggest some different or additional analyses, then your write-up can state both what you meant to do initially and what you ended up doing (and why).
If you are really doing pure exploration (i.e., you have no a priori hypotheses, you just want to see what's in the data):
your thoughts about holding out a sample for confirmation are good.
In my world (I don't work with huge data sets), the loss of resolution due to having a lower sample size would be agonizing.
you need to be a bit careful in selecting your holdout sample if your data are structured in any way (geographically, time series, etc. etc.). Subsampling as though the data are iid leads to overconfidence (see Wenger and Olden Methods in Ecology and Evolution 2012), so you might want to pick out geographic units to hold out (see DJ Harris Methods in Ecology and Evolution 2015 for an example)
you can admit that you're being purely exploratory. Ideally you would eschew p-values entirely in this case, but at least telling your audience that you are wandering in the GoFP lets them know that they can take the p-values with enormous grains of salt.
My favorite reference for "safe statistical practices" is Harrell's Regression Modeling Strategies (Springer); he lays out best practices for inference vs. prediction vs. exploration, in a rigorous but practical way.
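As a minimal sketch of the holdout-for-structured-data point (toy data; the unit labels and split sizes are made up): instead of sampling rows i.i.d., hold out whole geographic units so that spatial structure cannot leak between the exploration and confirmation sets.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 1000 observations, each belonging to one of 20 geographic units.
unit = rng.integers(0, 20, size=1000)

# Hold out whole units rather than i.i.d. rows, so spatial structure
# cannot leak between the exploration and confirmation sets.
held_out_units = rng.choice(20, size=5, replace=False)
confirm = np.isin(unit, held_out_units)
explore = ~confirm

print(explore.sum() + confirm.sum())                       # 1000: every row in exactly one set
print(np.intersect1d(unit[explore], unit[confirm]).size)   # 0: no unit appears in both
```

The same idea applies to time series (hold out whole time blocks) or any other grouping structure in the data.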
|
7,712
|
Performing a statistical test after visualizing data - data dredging?
|
Visualising the data is an indispensable part of analysis and one of the first things you should do with an unfamiliar data set. A quick eyeball of the data can inform the steps to take next. Indeed, it should be fairly obvious by looking at the graph that the means are different, and I'm not sure why a T-test was necessary to confirm this - the means are sufficiently separated that the graph itself is all the evidence I would require.
Data dredging, as far as I can tell from a quick wikipedia-ing, is a deliberate process of mucking around with the data to force certain levels of fit. Examples would be: comparing a data set to some random numbers but regenerating the random numbers until you get a favourable set, or trying out a large number of different forms of regression and choosing the one with the best $R^2$ regardless of whether the assumptions are appropriate. Data dredging doesn't appear to be something you can easily do by accident.
I think there's a deeper question in here though. How do you maintain a zen-like neutrality and avoid bias when dealing with data in a scientific way?
The answer is, you don't. Or rather, you don't have to. Forming hunches and hypotheses and building a mental narrative of what the data means, is all perfectly natural and acceptable, provided you are aware that you are doing so, and are mentally prepared to reconsider all these hypotheses when confronted with conflicting data.
|
7,713
|
Does a sample version of the one-sided Chebyshev inequality exist?
|
Yes, we can get an analogous result using the sample mean and variance, with perhaps, a couple slight surprises emerging in the process.
First, we need to refine the question statement just a little bit and set out a few assumptions. Importantly, it should be clear that we cannot hope to replace the population variance with the sample variance on the right hand side since the latter is random! So, we refocus our attention on the equivalent inequality
$$
\mathbb P\left( X - \mathbb E X \geq t \sigma \right) \leq \frac{1}{1+t^2} \>.
$$
In case it is not clear that these are equivalent, note that we've simply replaced $t$ with $t \sigma$ in the original inequality without any loss in generality.
Second, we assume that we have a random sample $X_1,\ldots,X_n$ and we are interested in an upper bound for the analogous quantity
$
\mathbb P(X_1 - \bar X \geq t S)
$,
where $\bar X$ is the sample mean and $S$ is the sample standard deviation.
A half-step forward
Note that already by applying the original one-sided Chebyshev inequality to $X_1 - \bar X$, we get that
$$
\mathbb P( X_1 - \bar X \geq t\sigma ) \leq \frac{1}{1 + \frac{n}{n-1}t^2}
$$
where $\sigma^2 = \mathrm{Var}(X_1)$, which is smaller than the right-hand side of the original version. This makes sense! Any particular realization of a random variable from a sample will tend to be (slightly) closer to the sample mean to which it contributes than to the population mean. As we shall see below, we'll get to replace $\sigma$ by $S$ under even more general assumptions.
A sample version of one-sided Chebyshev
Claim: Let $X_1,\ldots,X_n$ be a random sample such that $\mathbb P(S = 0) = 0$. Then, $$ \mathbb P(X_1 - \bar X \geq t S) \leq \frac{1}{1 + \frac{n}{n-1} t^2}\>. $$ In particular, the
sample version of the bound is tighter than the original population
version.
Note: We do not assume that the $X_i$ have either finite mean or variance!
Proof. The idea is to adapt the proof of the original one-sided Chebyshev inequality and employ symmetry in the process. First, set $Y_i = X_i - \bar X$ for notational convenience. Then, observe that
$$
\mathbb P( Y_1 \geq t S ) = \frac{1}{n} \sum_{i=1}^n \mathbb P( Y_i \geq t S ) = \mathbb E \frac{1}{n} \sum_{i=1}^n \mathbf 1_{(Y_i \geq t S)} \>.
$$
Now, for any $c > 0$, on $\{S > 0\}$,
$$\newcommand{I}[1]{\mathbf{1}_{(#1)}}
\I{Y_i \geq t S} = \I{Y_i + t c S \geq t S (1+c)} \leq \I{(Y_i + t c S)^2 \geq t^2 (1+c)^2 S^2} \leq \frac{(Y_i + t c S)^2}{t^2(1+c)^2 S^2}\>.
$$
Then,
$$
\frac{1}{n} \sum_i \I{Y_i \geq t S} \leq \frac{1}{n} \sum_i \frac{(Y_i + t c S)^2}{t^2(1+c)^2 S^2} = \frac{(n-1)S^2 + n t^2 c^2 S^2}{n t^2 (1+c)^2 S^2} = \frac{(n-1) + n t^2 c^2}{n t^2 (1+c)^2} \>,
$$
since $\bar Y = 0$ and $\sum_i Y_i^2 = (n-1)S^2$.
The right-hand side is a constant (!), so taking expectations on both sides yields,
$$
\mathbb P(X_1 - \bar X \geq t S) \leq \frac{(n-1) + n t^2 c^2}{n t^2 (1+c)^2} \>.
$$
Finally, minimizing over $c$ yields $c = \frac{n-1}{n t^2}$, which after a little algebra establishes the result.
That pesky technical condition
Note that we had to assume $\mathbb P(S = 0) = 0$ in order to be able to divide by $S^2$ in the analysis. This is no problem for absolutely continuous distributions, but poses an inconvenience for discrete ones. For a discrete distribution, there is some probability that all observations are equal, in which case $Y_i = t S = 0$ for all $i$ and $t > 0$, so the event $\{Y_i \geq t S\}$ occurs.
We can wiggle our way out by setting $q = \mathbb P(S = 0)$. Then, a careful accounting of the argument shows that everything goes through virtually unchanged and we get
Corollary 1. For the case $q = \mathbb P(S = 0) > 0$, we have $$ \mathbb P(X_1 - \bar X \geq t S) \leq (1-q) \frac{1}{1 + \frac{n}{n-1} t^2} + q \>. $$
Proof. Split on the events $\{S > 0\}$ and $\{S = 0\}$. The previous proof goes through for $\{S > 0\}$ and the case $\{S = 0\}$ is trivial.
A slightly cleaner inequality results if we replace the nonstrict inequality in the probability statement with a strict version.
Corollary 2. Let $q = \mathbb P(S = 0)$ (possibly zero). Then, $$ \mathbb P(X_1 - \bar X > t S) \leq (1-q) \frac{1}{1 + \frac{n}{n-1} t^2} \>. $$
Final remark: The sample version of the inequality required no assumptions on $X$ (other than that it not be almost-surely constant in the nonstrict inequality case, which the original version also tacitly assumes), in essence, because the sample mean and sample variance always exist whether or not their population analogs do.
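As a sanity check, here is a quick Monte Carlo experiment (the sample size, threshold, and number of replicates are arbitrary choices; the Cauchy distribution is used precisely because it has no finite mean or variance, exercising the "no assumptions" remark above):

```python
import numpy as np

rng = np.random.default_rng(42)
n, t, reps = 10, 1.5, 20_000

hits = 0
for _ in range(reps):
    x = rng.standard_cauchy(n)     # heavy tails: no finite mean or variance
    s = x.std(ddof=1)              # sample standard deviation S
    hits += x[0] - x.mean() >= t * s

empirical = hits / reps
bound = 1.0 / (1.0 + n / (n - 1) * t**2)
# The empirical frequency should fall comfortably below the bound.
print(empirical, bound)
```

With continuous data $\mathbb P(S = 0) = 0$, so this exercises the main claim rather than the corollaries.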
|
Does a sample version of the one-sided Chebyshev inequality exist?
|
Yes, we can get an analogous result using the sample mean and variance, with perhaps, a couple slight surprises emerging in the process.
First, we need to refine the question statement just a little b
|
Does a sample version of the one-sided Chebyshev inequality exist?
Yes, we can get an analogous result using the sample mean and variance, with perhaps, a couple slight surprises emerging in the process.
First, we need to refine the question statement just a little bit and set out a few assumptions. Importantly, it should be clear that we cannot hope to replace the population variance with the sample variance on the right hand side since the latter is random! So, we refocus our attention on the equivalent inequality
$$
\mathbb P\left( X - \mathbb E X \geq t \sigma \right) \leq \frac{1}{1+t^2} \>.
$$
In case it is not clear that these are equivalent, note that we've simply replaced $t$ with $t \sigma$ in the original inequality without any loss in generality.
Second, we assume that we have a random sample $X_1,\ldots,X_n$ and we are interested in an upper bound for the analogous quantity
$
\mathbb P(X_1 - \bar X \geq t S)
$,
where $\bar X$ is the sample mean and $S$ is the sample standard deviation.
A half-step forward
Note that already by applying the original one-sided Chebyshev inequality to $X_1 - \bar X$, we get that
$$
\mathbb P( X_1 - \bar X \geq t\sigma ) \leq \frac{1}{1 + \frac{n}{n-1}t^2}
$$
where $\sigma^2 = \mathrm{Var}(X_1)$, which is smaller than the right-hand side of the original version. This makes sense! Any particular realization of a random variable from a sample will tend to be (slightly) closer to the sample mean to which it contributes than to the population mean. As we shall see below, we'll get to replace $\sigma$ by $S$ under even more general assumptions.
A sample version of one-sided Chebyshev
Claim: Let $X_1,\ldots,X_n$ be a random sample such that $\mathbb P(S = 0) = 0$. Then, $$ \mathbb P(X_1 - \bar X \geq t S) \leq \frac{1}{1 + \frac{n}{n-1} t^2}\>. $$ In particular, the
sample version of the bound is tighter than the original population
version.
Note: We do not assume that the $X_i$ have either finite mean or variance!
Proof. The idea is to adapt the proof of the original one-sided Chebyshev inequality and employ symmetry in the process. First, set $Y_i = X_i - \bar X$ for notational convenience. Then, observe that
$$
\mathbb P( Y_1 \geq t S ) = \frac{1}{n} \sum_{i=1}^n \mathbb P( Y_i \geq t S ) = \mathbb E \frac{1}{n} \sum_{i=1}^n \mathbf 1_{(Y_i \geq t S)} \>.
$$
Now, for any $c > 0$, on $\{S > 0\}$,
$$\newcommand{I}[1]{\mathbf{1}_{(#1)}}
\I{Y_i \geq t S} = \I{Y_i + t c S \geq t S (1+c)} \leq \I{(Y_i + t c S)^2 \geq t^2 (1+c)^2 S^2} \leq \frac{(Y_i + t c S)^2}{t^2(1+c)^2 S^2}\>.
$$
Then,
$$
\frac{1}{n} \sum_i \I{Y_i \geq t S} \leq \frac{1}{n} \sum_i \frac{(Y_i + t c S)^2}{t^2(1+c)^2 S^2} = \frac{(n-1)S^2 + n t^2 c^2 S^2}{n t^2 (1+c)^2 S^2} = \frac{(n-1) + n t^2 c^2}{n t^2 (1+c)^2} \>,
$$
since $\bar Y = 0$ and $\sum_i Y_i^2 = (n-1)S^2$.
The right-hand side is a constant (!), so taking expectations on both sides yields,
$$
\mathbb P(X_1 - \bar X \geq t S) \leq \frac{(n-1) + n t^2 c^2}{n t^2 (1+c)^2} \>.
$$
Finally, minimizing over $c$ yields $c = \frac{n-1}{n t^2}$, which after a little algebra establishes the result.
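To illustrate the "no moments required" point, here is a hypothetical simulation sketch using Cauchy data, which has no finite mean or variance; the empirical frequency should still stay below the sample bound:

```python
import numpy as np

# Check P(X1 - Xbar >= t*S) <= 1 / (1 + (n/(n-1)) t^2)
# on Cauchy samples, which have no finite mean or variance.
rng = np.random.default_rng(1)
n, t, reps = 10, 2.0, 200_000
x = rng.standard_cauchy((reps, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)                  # sample standard deviation
freq = np.mean(x[:, 0] - xbar >= t * s)    # empirical frequency
bound = 1.0 / (1.0 + (n / (n - 1)) * t**2)
print(freq, bound)
```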
That pesky technical condition
Note that we had to assume $\mathbb P(S = 0) = 0$ in order to be able to divide by $S^2$ in the analysis. This is no problem for absolutely continuous distributions, but poses an inconvenience for discrete ones. For a discrete distribution, there is positive probability that all observations are equal, in which case $Y_i = t S = 0$ for all $i$ and all $t > 0$.
We can wiggle our way out by setting $q = \mathbb P(S = 0)$. Then, a careful accounting of the argument shows that everything goes through virtually unchanged and we get
Corollary 1. For the case $q = \mathbb P(S = 0) > 0$, we have $$ \mathbb P(X_1 - \bar X \geq t S) \leq (1-q) \frac{1}{1 + \frac{n}{n-1} t^2} + q \>. $$
Proof. Split on the events $\{S > 0\}$ and $\{S = 0\}$. The previous proof goes through for $\{S > 0\}$ and the case $\{S = 0\}$ is trivial.
A slightly cleaner inequality results if we replace the nonstrict inequality in the probability statement with a strict version.
Corollary 2. Let $q = \mathbb P(S = 0)$ (possibly zero). Then, $$ \mathbb P(X_1 - \bar X > t S) \leq (1-q) \frac{1}{1 + \frac{n}{n-1} t^2} \>. $$
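A hypothetical numerical sketch of Corollary 2 with Bernoulli data, where $q = \mathbb P(S = 0) = p^n + (1-p)^n$ is strictly positive (parameter values are illustrative):

```python
import numpy as np

# Check P(X1 - Xbar > t*S) <= (1 - q) / (1 + (n/(n-1)) t^2)
# for Bernoulli(p) samples, where all-equal samples give S = 0.
rng = np.random.default_rng(2)
n, p, t, reps = 5, 0.3, 1.0, 200_000
x = rng.binomial(1, p, size=(reps, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)
freq = np.mean(x[:, 0] - xbar > t * s)     # strict inequality version
q = p**n + (1 - p)**n                      # P(S = 0)
bound = (1 - q) / (1.0 + (n / (n - 1)) * t**2)
print(freq, bound)
```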
Final remark: The sample version of the inequality required no assumptions on $X$ (other than that it not be almost-surely constant in the nonstrict inequality case, which the original version also tacitly assumes), in essence, because the sample mean and sample variance always exist whether or not their population analogs do.
|
7,714
|
Does a sample version of the one-sided Chebyshev inequality exist?
|
This is just a complement to @cardinal's ingenious answer. Samuelson's inequality states that, for a sample of size $n$, when we have at least three distinct values among the realized $x_i$'s, it holds that
$$x_i-\bar x < s\sqrt{n-1},\;\; i=1,...n$$
where $s$ is calculated without the bias correction, $s= \left (\frac 1n\sum_{i=1}^n(x_i-\bar x)^2\right)^{1/2}$.
Then, using the notation of Cardinal's answer we can state that
$$\mathbb P\left(X_1-\bar X \ge S\sqrt{n-1}\right) =0 \;\;a.s. \qquad [1]$$
Since we require three distinct values, we will have $S\neq 0$ by assumption. So, setting $t=\sqrt{n-1}$ in Cardinal's Inequality (the initial version), we obtain
$$\mathbb P\left (X_1 - \bar X \geq S\sqrt{n-1}\right) \leq \frac{1}{1 + n}, \;\; \qquad [2]$$
Eq. $[2]$ is of course compatible with eq. $[1]$. The combination of the two tells us that Cardinal's Inequality is useful as a probabilistic statement for $0< t < \sqrt{n-1}$.
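A hypothetical numerical sketch of Samuelson's bound with the uncorrected $s$ (random heavy-tailed data; names are illustrative) — every deviation from the sample mean stays strictly below $s\sqrt{n-1}$:

```python
import numpy as np

# Check Samuelson's inequality: x_i - xbar < s*sqrt(n-1) for all i,
# with s the uncorrected (divide-by-n) standard deviation.
rng = np.random.default_rng(3)
n = 12
x = rng.standard_cauchy(n)       # heavy tails, still bounded
xbar = x.mean()
s = x.std(ddof=0)                # no bias correction
print(np.max(x - xbar), s * np.sqrt(n - 1))
```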
If $S$ in Cardinal's Inequality is instead calculated with the bias correction (call this $\tilde S$), then the equations become
$$\mathbb P\left(X_1-\bar X \ge \tilde S\frac{n-1}{\sqrt{n}}\right) =0 \;\;a.s. \qquad [1a]$$
and we choose $ t= \frac{n-1}{\sqrt{n}}$ to obtain through Cardinal's Inequality
$$\mathbb P\left (X_1 - \bar X \geq \tilde S\frac{n-1}{\sqrt{n}}\right) \leq \frac{1}{ n}, \;\; \qquad [2a]$$
and the probabilistically meaningful interval for $t$ is $0< t < \frac{n-1}{\sqrt{n}}.$
|
7,715
|
Does a sample version of the one-sided Chebyshev inequality exist?
|
I have attempted to apply @cardinal's equation to my permutation test to determine an upper bound for its p-value. I have 1 unpermuted dataset $y$ and $n$ permuted datasets $x_i$. I define some testing function $F$ and apply it to my datasets as follows: $F^{orig} = F(y)$ and $F^{perm}_i = F(x_i)$. Then I apply the above inequality to the set of values $\{F^{orig}\}\cup \{ F^{perm}_i \}$ s.t. $X_1 = F^{orig}$. I also compute the z-score of permuted vs original data for comparison: $z = \frac{F^{orig} - \bar{F}^{perm}}{s^{perm}}$.
The numbers in the legend correspond to different quantities of sample points.
Rule of thumb: If you wish to use the above inequality to prove that $X_1$ is an outlier with a p-value of 1% or lower, you need at least 1000 samples, and the outlier must be at least $10\sigma$ away from the mean of the rest of the points. This is kind of brutal compared to slightly less than $3\sigma$ for a Gaussian distribution, but, I guess, that's the price of having no assumptions.
Self-check: For large $n$ and small $p$, we have $t^{min} \approx p^{-0.5} \approx 10$ for $p=0.01$, which matches with z-score being $10\sigma$
If you are wondering, the p-value improves with the number of samples because when there are more points, the outlier has less effect on the sample mean and variance.
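For reference, a hypothetical sketch of where the rule of thumb comes from: solving $\frac{1}{1 + \frac{n}{n-1}t^2} = p$ for $t$ gives the minimal z-score needed to certify a p-value of $p$ via the sample bound (function name is illustrative):

```python
import numpy as np

# Minimal t solving 1/(1 + (n/(n-1)) t^2) = p for given n and p.
def t_min(n, p):
    return np.sqrt((1 / p - 1) * (n - 1) / n)

print(t_min(1000, 0.01))   # close to 10, matching the 10-sigma rule
```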
|
7,716
|
Why could centering independent variables change the main effects with moderation?
|
In models with no interaction terms (that is, with no terms that are constructed as the product of other terms), each variable's regression coefficient is the slope of the regression surface in the direction of that variable. It is constant, regardless of the values of the variables, and therefore can be said to measure the overall effect of that variable.
In models with interactions, this interpretation can be made without further qualification only for those variables that are not involved in any interactions. For a variable that is involved in interactions, the "main-effect" regression coefficient -- that is, the regression coefficient of the variable by itself -- is the slope of the regression surface in the direction of that variable when all other variables that interact with that variable have values of zero, and the significance test of the coefficient refers to the slope of the regression surface only in that region of the predictor space. Since there is no requirement that there actually be data in that region of the space, the main-effect coefficient may bear little resemblance to the slope of the regression surface in the region of the predictor space where data were actually observed.
In anova terms, the main-effect coefficient is analogous to a simple main effect, not an overall main effect. Moreover, it may refer to what in an anova design would be empty cells in which the data were supplied by extrapolating from cells with data.
For a measure of the overall effect of the variable that is analogous to an overall main effect in anova and does not extrapolate beyond the region in which data were observed, we must look at the average slope of the regression surface in the direction of the variable, where the averaging is over the N cases that were actually observed. This average slope can be expressed as a weighted sum of the regression coefficients of all the terms in the model that involve the variable in question.
The weights are awkward to describe but easy to get. A variable's main-effect coefficient always gets a weight of 1. For each other coefficient of a term involving that variable, the weight is the mean of the product of the other variables in that term. For example, if we have five "raw" variables x1, x2, x3, x4, x5, plus four two-way interactions (x1,x2), (x1,x3), (x2,x3), (x4,x5), and one three-way interaction (x1,x2,x3), then the model is
y = b0 + b1*x1 + b2*x2 + b3*x3 + b4*x4 + b5*x5 +
b12*x1*x2 + b13*x1*x3 + b23*x2*x3 + b45*x4*x5 +
b123*x1*x2*x3 + e
and the overall main effects are
B1 = b1 + b12*M[x2] + b13*M[x3] + b123*M[x2*x3],
B2 = b2 + b12*M[x1] + b23*M[x3] + b123*M[x1*x3],
B3 = b3 + b13*M[x1] + b23*M[x2] + b123*M[x1*x2],
B4 = b4 + b45*M[x5],
B5 = b5 + b45*M[x4],
where M[.] denotes the sample mean of the quantity inside the brackets. All the product terms inside the brackets are among those that were constructed in order to do the regression, so a regression program should already know about them and should be able to print their means on request.
In models that have only main effects and two-way interactions, there is a simpler way to get the overall effects: center[1] the raw variables at their means. This is to be done prior to computing the product terms, and is not to be done to the products. Then all the M[.] expressions will become 0, and the regression coefficients will be interpretable as overall effects. The values of the b's will change; the values of the B's will not. Only the variables that are involved in interactions need to be centered, but there is usually no harm in centering other measured variables. The general effect of centering a variable is that, in addition to changing the intercept, it changes only the coefficients of other variables that interact with the centered variable. In particular, it does not change the coefficients of any terms that involve the centered variable. In the example given above, centering x1 would change b0, b2, b3, and b23.
[1 -- "Centering" is used by different people in ways that differ just enough to cause confusion. As used here, "centering a variable at #" means subtracting # from all the scores on the variable, converting the original scores to deviations from #.]
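Here is a hypothetical sketch (simulated data, illustrative names) of the two-way case: the overall effect $B_1 = b_1 + b_{12}\,M[x_2]$ computed from the uncentered fit matches the main-effect coefficient from the fit in which both variables are mean-centered before forming the product:

```python
import numpy as np

# Two predictors plus their interaction; compare the overall effect
# from the raw fit with the main effect from the centered fit.
rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(2.0, 1.0, n)
x2 = rng.normal(-1.0, 1.0, n)
y = 1 + 0.5*x1 - 0.3*x2 + 0.8*x1*x2 + rng.normal(0, 0.1, n)

def fit(a, b):
    X = np.column_stack([np.ones(n), a, b, a * b])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b0, b1, b2, b12 = fit(x1, x2)                 # raw (uncentered) fit
B1 = b1 + b12 * x2.mean()                     # overall effect of x1
c0, c1, c2, c12 = fit(x1 - x1.mean(), x2 - x2.mean())  # centered fit
print(B1, c1)                                 # equal up to rounding
```

Note that the interaction coefficient itself is unchanged by the centering, as the answer states.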
So why not always center at the means, routinely? Three reasons. First, the main-effect coefficients of the uncentered variables may themselves be of interest. Centering in such cases would be counter-productive, since it changes the main-effect coefficients of other variables.
Second, centering will make all the M[.] expressions 0, and thus convert simple effects to overall effects, only in models with no three-way or higher interactions. If the model contains such interactions then the b -> B computations must still be done, even if all the variables are centered at their means.
Third, centering at a value such as the mean, that is defined by the distribution of the predictors as opposed to being chosen rationally, means that all coefficients that are affected by centering will be specific to your particular sample. If you center at the mean then someone attempting to replicate your study must center at your mean, not their own mean, if they want to get the same coefficients that you got. The solution to this problem is to center each variable at a rationally chosen central value of that variable that depends on the meaning of the scores and does not depend on the distribution of the scores. However, the b -> B computations still remain necessary.
The significance of the overall effects may be tested by the usual procedures for testing linear combinations of regression coefficients. However, the results must be interpreted with care because the overall effects are not structural parameters but are design-dependent. The structural parameters -- the regression coefficients (uncentered, or with rational centering) and the error variance -- may be expected to remain invariant under changes in the distribution of the predictors, but the overall effects will generally change. The overall effects are specific to the particular sample and should not be expected to carry over to other samples with different distributions on the predictors. If an overall effect is significant in one study and not in another, it may reflect nothing more than a difference in the distribution of the predictors. In particular, it should not be taken as evidence that the relation of the dependent variable to the predictors is different in the two studies.
|
7,717
|
Why could centering independent variables change the main effects with moderation?
|
That is because in any regression involving more than one predictor, the $\beta$s are partial coefficients; they are interpreted as the predicted change in the dependent variable for each 1-unit increase in a predictor, holding all other predictors constant.
In a regression involving interaction terms, for example $y=\beta_1x_1+\beta_2x_2+\beta_3x_1x_2+\epsilon$, $\beta_1$ is the expected increase in the dependent variable for each 1-unit increase in $x_1$, holding all the other terms constant. This is a problem for the term $\beta_3x_1x_2$, as it will vary as $x_1$ varies. The only way to hold the interaction term constant for a 1-unit increase on either $x_1$ or $x_2$ (the two variables involved in the interaction) is to set the other variable to 0. Therefore, when a variable is also part of an interaction term, the interpretation of the $\beta$ for this variable is conditional on the other variable being 0—not merely being held constant.
For this reason, the interpretation of the $\beta$s will change depending on where the 0 is on the other variable involved in the interaction; where the 0 is on the variable of interest does not actually change the interpretation of its coefficient. In this case, for example, $\beta_1$ is the predicted increase in $y$ for each 1-unit increase in $x_1$ when $x_2=0$. If the relationship between $x_1$ and $y$ changes as a function of $x_2$ (as you hypothesize it does when you include an interaction term), then the significance of $\beta_1$ will change as a function of the centering of $x_2$.
Also, note that if the value of your $\beta$s change considerably as a function of centering, then your interaction term is probably significant; and if it is, interpreting the "main effects" can be misleading, because this means that the relationship between $x_1$ and $y$ depends on the value of $x_2$, and vice versa. A typical way to deal with this is to plot predicted values for $y$ as a function of $x_1$, for a few values of $x_2$ (say, 3; for example, 0 and ±1 SD).
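A hypothetical sketch (simulated data) showing the point: shifting the zero of $x_2$ changes the estimate of $\beta_1$ (the slope of $x_1$ at $x_2 = 0$) while leaving the interaction coefficient, and the fitted surface, unchanged:

```python
import numpy as np

# Fit y ~ x1 + x2 + x1*x2 with x2 on its raw scale and mean-centered;
# beta1 differs between the two fits, the interaction term does not.
rng = np.random.default_rng(5)
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(5.0, 2.0, n)
y = 2 + 1.0*x1 + 0.5*x2 + 0.7*x1*x2 + rng.normal(0, 0.1, n)

def coefs(shift):
    z = x2 - shift
    X = np.column_stack([np.ones(n), x1, z, x1 * z])
    return np.linalg.lstsq(X, y, rcond=None)[0]

raw = coefs(0.0)           # x2 uncentered
cen = coefs(x2.mean())     # x2 mean-centered
print(raw[1], cen[1])      # beta1 differs between the two fits
```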
|
7,718
|
Why could centering independent variables change the main effects with moderation?
|
I have been going crazy with the same question, but I finally found the solution to your and my problem.
IT IS ALL ABOUT HOW YOU CALCULATE YOUR CENTERED VARIABLES. Two options are available:
1. MEAN - INDIVIDUAL VARIABLES
2. INDIVIDUAL VARIABLES - MEAN
You probably calculated your centered variables as (individual variable - mean value), therefore those with low values would get negative scores, and those with high values would get positive scores.
I’ll explain with an example to make it easier to understand.
I want to see how muscle strength affects bone mass, and I want to take into account gender to see if it acts differently in girls and boys. The idea is that the higher the muscle strength, the higher the bone mass. I therefore have:
Dependent variable: Bone mass
Independent variables: Sex, muscle strength, interaction_SEX_MUSCLEstrength.
As I found multicollinearity (you usually do when you have an interaction term), I centred musclestrength (MEAN – INDIVIDUAL VARIABLE) and created the new interaction term with the new centred variable. My coefficients were
Constant: 0.902
Gender: -0.010 (Boys = 0; Girls =1)
Centred muscle: -0.023
Interaction: 0.0002
Therefore, if you wanted to estimate a boy's bone mass, you would have the following equation:
Bone mass = $0.902 - (0 * 0.010) - (0.023 * \text{muscle centred value}) + (0.0002 * \text{interaction term})$
Looking at this you might think that muscle is affecting bone negatively, but you have to think of your centred variables, not your original variables.
Let’s say the mean muscle strength of the group was of 30 KG. And you want to estimate the bone mass of a boy (WEAKBOY) that performed 20 KG and another that performed 40KG (STRONGBOY). The centred values of WEAKBOY will be (MEAN GROUP VALUE – INDIVIDUAL VALUE; 30 – 20 = 10), and for STRONGBOY will be -10. Applying these values to the equation:
WEAKBOY bone mass = 0.902 – 0 – (0.023 * 10) + ... = 0.672
STRONGBOY bone mass = 0.902 – (0.023 * (-10)) + ... = 1.132
As you can see, STRONGBOY does indeed have a higher bone mass.
If you had centred your variables the other way round (INDIVIDUAL – MEAN), all the coefficient magnitudes would be the same, but the signs of the coefficients involving the centred variable would flip. This is because when you apply the centred variable, WEAKBOY will be (-10) and STRONGBOY will be (+10). Therefore the final results will be exactly the same.
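A hypothetical sketch (simulated data, illustrative names) confirming this: fitting with (individual – mean) versus (mean – individual) flips the signs of the affected coefficients but gives identical predictions:

```python
import numpy as np

# Fit bone ~ sex + centred_muscle + sex*centred_muscle under both
# centering conventions and compare coefficients and fitted values.
rng = np.random.default_rng(6)
n = 100
sex = rng.integers(0, 2, n)
muscle = rng.normal(30.0, 5.0, n)
bone = 0.9 - 0.01*sex + 0.02*muscle + 0.001*sex*muscle \
       + rng.normal(0, 0.05, n)

def fit(c):  # c is the centred muscle variable
    X = np.column_stack([np.ones(n), sex, c, sex * c])
    beta = np.linalg.lstsq(X, bone, rcond=None)[0]
    return beta, X @ beta

b_a, yhat_a = fit(muscle - muscle.mean())   # individual - mean
b_b, yhat_b = fit(muscle.mean() - muscle)   # mean - individual
print(b_a[2], b_b[2])                       # opposite signs
```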
It all makes sense once you understand it.
Hope the example is clear enough.
|
7,719
|
How to plot decision boundary of a k-nearest neighbor classifier from Elements of Statistical Learning?
To reproduce this figure, you need to have the ElemStatLearn package installed on your system. The artificial dataset was generated with mixture.example() as pointed out by @StasK.
library(ElemStatLearn)
require(class)
x <- mixture.example$x
g <- mixture.example$y
xnew <- mixture.example$xnew
mod15 <- knn(x, xnew, g, k=15, prob=TRUE)
prob <- attr(mod15, "prob")
prob <- ifelse(mod15=="1", prob, 1-prob)
px1 <- mixture.example$px1
px2 <- mixture.example$px2
prob15 <- matrix(prob, length(px1), length(px2))
par(mar=rep(2,4))
contour(px1, px2, prob15, levels=0.5, labels="", xlab="", ylab="", main=
"15-nearest neighbour", axes=FALSE)
points(x, col=ifelse(g==1, "coral", "cornflowerblue"))
gd <- expand.grid(x=px1, y=px2)
points(gd, pch=".", cex=1.2, col=ifelse(prob15>0.5, "coral", "cornflowerblue"))
box()
All but the last three commands come from the on-line help for mixture.example. Note that we used the fact that expand.grid arranges its output by varying x first, which allows us to index (by column) the colors in the prob15 matrix (of dimension 69x99), which holds the proportion of votes for the winning class at each lattice coordinate (px1, px2).
How to plot decision boundary of a k-nearest neighbor classifier from Elements of Statistical Learning?
I'm self-learning ESL and trying to work through all examples provided in the book. I just did this and you can check the R code below:
library(MASS)
# set the seed to reproduce data generation in the future
seed <- 123456
set.seed(seed)
# generate two classes means
Sigma <- matrix(c(1,0,0,1),nrow = 2, ncol = 2)
means_1 <- mvrnorm(n = 10, mu = c(1,0), Sigma)
means_2 <- mvrnorm(n = 10, mu = c(0,1), Sigma)
# pick an m_k at random with probability 1/10
# function to generate observations
genObs <- function(classMean, classSigma, size, ...)
{
# check input
if(!is.matrix(classMean)) stop("classMean should be a matrix")
nc <- ncol(classMean)
nr <- nrow(classMean)
if(nc != 2) stop("classMean should be a matrix with 2 columns")
if(ncol(classSigma) != 2) stop("the dimension of classSigma is wrong")
# mean for each obs
# pick an m_k at random
meanObs <- classMean[sample(1:nr, size = size, replace = TRUE),]
obs <- t(apply(meanObs, 1, function(x) mvrnorm(n = 1, mu = x, Sigma = classSigma )) )
colnames(obs) <- c('x1','x2')
return(obs)
}
obs100_1 <- genObs(classMean = means_1, classSigma = Sigma/5, size = 100)
obs100_2 <- genObs(classMean = means_2, classSigma = Sigma/5, size = 100)
# generate label
y <- rep(c(0,1), each = 100)
# training data matrix
trainMat <- as.data.frame(cbind(y, rbind(obs100_1, obs100_2)))
# plot them
library(lattice)
with(trainMat, xyplot(x2 ~ x1,groups = y, col=c('blue', 'orange')))
# now fit two models
# model 1: linear regression
lmfits <- lm(y ~ x1 + x2 , data = trainMat)
# get the slope and intercept for the decision boundary
intercept <- -(lmfits$coef[1] - 0.5) / lmfits$coef[3]
slope <- - lmfits$coef[2] / lmfits$coef[3]
# Figure 2.1
xyplot(x2 ~ x1, groups = y, col = c('blue', 'orange'), data = trainMat,
panel = function(...)
{
panel.xyplot(...)
panel.abline(intercept, slope)
},
main = 'Linear Regression of 0/1 Response')
# model2: k nearest-neighbor methods
library(class)
# get the range of x1 and x2
rx1 <- range(trainMat$x1)
rx2 <- range(trainMat$x2)
# get lattice points in predictor space
px1 <- seq(from = rx1[1], to = rx1[2], by = 0.1 )
px2 <- seq(from = rx2[1], to = rx2[2], by = 0.1 )
xnew <- expand.grid(x1 = px1, x2 = px2)
# get the contour map
knn15 <- knn(train = trainMat[,2:3], test = xnew, cl = trainMat[,1], k = 15, prob = TRUE)
prob <- attr(knn15, "prob")
prob <- ifelse(knn15=="1", prob, 1-prob)
prob15 <- matrix(prob, nrow = length(px1), ncol = length(px2))
# Figure 2.2
par(mar = rep(2,4))
contour(px1, px2, prob15, levels=0.5, labels="", xlab="", ylab="", main=
"15-nearest neighbour", axes=FALSE)
points(trainMat[,2:3], col=ifelse(trainMat[,1]==1, "coral", "cornflowerblue"))
points(xnew, pch=".", cex=1.2, col=ifelse(prob15>0.5, "coral", "cornflowerblue"))
box()
How to derive the likelihood function for binomial distribution for parameter estimation?
In maximum likelihood estimation, you are trying to maximize $nC_x~p^x(1-p)^{n-x}$; however, maximizing this is equivalent to maximizing $p^x(1-p)^{n-x}$ for a fixed $x$.
Actually, the likelihoods for the Gaussian and Poisson also do not involve their leading constants, so this case is just like those as well.
Addressing the OP's comment
Here is a bit more detail:
First, $x$ is the total number of successes whereas $x_i$ is a single trial (0 or 1). Therefore:
$$\prod_{i=1}^np^{x_i}(1-p)^{1-x_i} = p^{\sum_1^n x_i}(1-p)^{\sum_1^n1-x_i} = p^{x}(1-p)^{n-x}$$
That shows how you get the factors in the likelihood (by running the above steps backwards).
Why does the constant go away? Informally, and what most people do (including me), is just notice that the leading constant does not affect the value of $p$ that maximizes the likelihood, so we just ignore it (effectively set it to 1).
We can derive this by taking the log of the likelihood function and finding where its derivative is zero:
$$\ln\left(nC_x~p^x(1-p)^{n-x}\right) = \ln(nC_x)+x\ln(p)+(n-x)\ln(1-p)$$
Take derivative wrt $p$ and set to $0$:
$$\frac{d}{dp}\left[\ln(nC_x)+x\ln(p)+(n-x)\ln(1-p)\right] = \frac{x}{p}- \frac{n-x}{1-p} = 0$$
$$\implies \frac{n}{x} = \frac{1}{p} \implies p = \frac{x}{n}$$
Notice that the leading constant dropped out of the calculation of the MLE.
More philosophically, a likelihood is only meaningful for inference up to a multiplying constant, such that if we have two likelihood functions $L_1,L_2$ and $L_1=kL_2$, then they are inferentially equivalent. This is called the Law of Likelihood. Therefore, if we are comparing different values of $p$ using the same likelihood function, the leading term becomes irrelevant.
At a practical level, inference using the likelihood function is actually based on the likelihood ratio, not the absolute value of the likelihood. This is due to the asymptotic theory of likelihood ratios (which are asymptotically chi-square -- subject to certain regularity conditions that are often appropriate). Likelihood ratio tests are favored due to the Neyman-Pearson Lemma. Therefore, when we attempt to test two simple hypotheses, we will take the ratio and the common leading factor will cancel.
NOTE: This will not happen if you were comparing two different models, say a binomial and a poisson. In that case, the constants are important.
Of the above reasons, the first (irrelevance to finding the maximizer of L) most directly answers your question.
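As a quick numerical sanity check of that first reason (hypothetical values $n=20$, $x=7$; Python/NumPy, purely for illustration), maximizing the log-likelihood over a fine grid with and without the $\ln(nC_x)$ term gives the same maximizer, namely $x/n$:

```python
import numpy as np
from math import comb, log

n, x = 20, 7                                  # hypothetical data
p = np.linspace(0.001, 0.999, 9999)           # grid of candidate p values
kernel = x * np.log(p) + (n - x) * np.log(1 - p)
full = log(comb(n, x)) + kernel               # a constant shifts the curve, never moves its peak
p_hat_full = p[np.argmax(full)]
p_hat_kernel = p[np.argmax(kernel)]
```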
How to derive the likelihood function for binomial distribution for parameter estimation?
$x_i$ in the product refers to each individual trial. For each individual trial $x_i$ can be 0 or 1 and $n$ is always equal to 1. Therefore the binomial coefficient for each trial is trivially equal to 1. Hence, in the product formula for the likelihood, the product of the binomial coefficients is 1, and that is why there is no $nC_x$ in the formula.
Realised this while working it out step by step :)
(Sorry about the formatting, not used to answering with mathematical expressions in answers...yet :) )
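The same point can be checked concretely (a throwaway Python sketch with arbitrary values): the product of per-trial Bernoulli likelihoods, each carrying its $C(1,x_i)$ coefficient, equals the $p^x(1-p)^{n-x}$ kernel with no extra factor, because every $C(1,x_i)=1$:

```python
from math import comb, prod

p = 0.4
xs = [1, 0, 1, 1, 0]                 # five individual trials, each with n = 1
per_trial = prod(comb(1, xi) * p**xi * (1 - p)**(1 - xi) for xi in xs)
kernel = p**sum(xs) * (1 - p)**(len(xs) - sum(xs))
# comb(1, 0) == comb(1, 1) == 1, so the coefficients contribute nothing
```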
How to derive the likelihood function for binomial distribution for parameter estimation?
It might help to remember that likelihoods are not probabilities. In other words, there is no need to have them sum to 1 over the sample space. Therefore, to make the math happen more quickly we can remove anything that is not a function of the data or the parameter(s) from the definition of the likelihood function.
How to derive the likelihood function for binomial distribution for parameter estimation?
For each factor in the likelihood (i.e. for each individual), $n = 1$ and $x = 0$ or $1$. With $n=1$ we always have $nC_x = 1$ for each of the factors making up the likelihood. So the normalization IS there; it is just $1$.
In general, a good check that one has written down the likelihood correctly and completely (i.e. including all of the factors, even if they do not affect an MLE calculation) is that if you sum the likelihood over all possible realizations of the data you get $1$. It is easy to see that Miller and Freund's formula is normalized to $1$ this way: just sum over $x_i = 0$ and $x_i = 1$ for each $i$; one gets $(1-p) + p = 1$ for each factor.
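That normalization check is easy to brute-force (a small Python sketch with arbitrary $p$ and $n$): summing the product likelihood over all $2^n$ possible 0/1 data vectors returns 1, exactly as the $(1-p)+p=1$ argument predicts:

```python
from itertools import product
from math import prod

p, n = 0.3, 4                  # arbitrary success probability and sample size
total = sum(
    prod(p**xi * (1 - p)**(1 - xi) for xi in xs)
    for xs in product([0, 1], repeat=n)
)
# total collapses to ((1 - p) + p)**n == 1
```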
PCA on correlation or covariance: does PCA on correlation ever make sense? [closed]
I hope these responses to your two questions will calm your concern:
A correlation matrix is a covariance matrix of the standardized (i.e. not just centered but also rescaled) data; that is, a covariance matrix (as if) of another, different dataset. So it is natural and it shouldn't bother you that the results differ.
Yes it makes sense to find the directions of maximal variance with standardized data - they are the directions of - so to speak - "correlatedness," not "covariatedness"; that is, after the effect of unequal variances - of the original variables - on the shape of the multivariate data cloud was taken off.
Next text and pictures added by @whuber (I thank him. Also, see my comment below)
Here is a two-dimensional example showing why it still makes sense to locate the principal axes of standardized data (shown on the right). Note that in the right hand plot the cloud still has a "shape" even though the variances along the coordinate axes are now exactly equal (to 1.0). Similarly, in higher dimensions the standardized point cloud will have a non-spherical shape even though the variances along all axes are exactly equal (to 1.0). The principal axes (with their corresponding eigenvalues) describe that shape. Another way to understand this is to note that all the rescaling and shifting that goes on when standardizing the variables occurs only in the directions of the coordinate axes and not in the principal directions themselves.
What is happening here is geometrically so intuitive and clear that it would be a stretch to characterize this as a "black-box operation": on the contrary, standardization and PCA are some of the most basic and routine things we do with data in order to understand them.
Continued by @ttnphns
When would one prefer to do PCA (or factor analysis or other similar type of analysis) on correlations (i.e. on z-standardized variables) instead of doing it on covariances (i.e. on centered variables)?
When the variables are in different units of measurement. That's clear.
When one wants the analysis to reflect only linear associations. Pearson $r$ is not just the covariance between the unit-scaled (variance = 1) variables; it is thereby a measure of the strength of the linear relationship, whereas the usual covariance coefficient is receptive to both linear and monotonic relationships.
When one wants the associations to reflect relative co-deviatedness (from the mean) rather than raw co-deviatedness. The correlation is based on distributions, their spreads, while the covariance is based on the original measurement scale. If I were to factor-analyze patients' psychopathological profiles as assessed by psychiatrists on some clinical questionnaire consisting of Likert-type items, I'd prefer covariances, because the professionals are not expected to distort the rating scale intrapsychically. If, on the other hand, I were to analyze the patients' self-portraits on that same questionnaire, I'd probably choose correlations, because a layman's assessment is expected to be relative to "other people", "the majority", a "permissible deviation" or some similar implicit das Man loupe which "shrinks" or "stretches" the rating scale for each person.
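Point 1 above is easy to verify numerically (a Python/NumPy sketch on simulated data; the scales are arbitrary): the eigenvalues of the correlation matrix of the raw data coincide with those of the covariance matrix of the z-standardized data, so correlation-PCA really is covariance-PCA of a rescaled dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([1.0, 10.0, 100.0])  # wildly different scales
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)              # z-standardize

evals_corr = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
evals_cov_std = np.linalg.eigvalsh(np.cov(Z, rowvar=False))
# same spectrum: PCA on correlations == PCA on covariances of standardized data
```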
PCA on correlation or covariance: does PCA on correlation ever make sense? [closed]
Speaking from a practical viewpoint - possibly unpopular here - if you have data measured on different scales, then go with correlation ('UV scaling' if you are a chemometrician), but if the variables are on the same scale and the size of them matters (e.g. with spectroscopic data), then covariance (centering the data only) makes more sense. PCA is a scale-dependent method and also log transformation can help with highly skewed data.
In my humble opinion, based on 20 years of practical application of chemometrics, you have to experiment a bit and see what works best for your type of data. At the end of the day you need to be able to reproduce your results and try to prove the predictability of your conclusions. How you get there is often a case of trial and error, but the thing that matters is that what you do is documented and reproducible.
PCA on correlation or covariance: does PCA on correlation ever make sense? [closed]
I have no time to go into a fuller description of detailed & technical aspects of the experiment I described, and clarifications on wordings (recommending, performance, optimum) would again divert us away from the real issue, which is about what type of input data the PCA can(not) / should (not) be taking. PCA operates by taking linear combinations of numbers (values of variables). Mathematically, of course, one can add any two (real or complex) numbers. But if they have been re-scaled before PCA transformation, is their linear combination (and hence to process of maximization) still meaningful to operate on?
If each variable $x_i$ has same variance $s^2$, then clearly yes, because $(x_1/s_1)+(x_2/s_2)=(x_1+x_2)/s$ is still proportional and comparable to the physical superposition of data $x_1+x_2$ itself. But if $s_1\not =s_2$, then the linear combination of standardized quantities distorts the data of the input variables to different degrees. There seems little point then to maximize the variance of their linear combination.
In that case, PCA gives a solution for a different set of data, whereby each variable is scaled differently. If you then unstandardize afterwards (when using corr_PCA), that may be OK and necessary; but if you just take the raw corr_PCA solution as-is and stop there, you would obtain a mathematical solution, but not one related to the physical data. As unstandardization afterwards then seems mandatory as a minimum (i.e., 'unstretching' the axes by the inverse standard deviations), cov_PCA could have been used to begin with.
If you are still reading by now, I am impressed! For now, I finish by quoting from Jolliffe's book, p. 42, which is the part that concerns me: 'It must not be forgotten, however, that correlation matrix PCs, when re-expressed in terms of the original variables, are still linear functions of x that maximize variance with respect to the standardized variables and not with respect to the original variables.'
If you think I am interpreting this or its implications wrongly, this excerpt may be a good focus point for further discussion.
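As a quick numerical sanity check of the standardization point, the sketch below (illustrative Python, with made-up two-variable data on very different scales) shows that cov_PCA is dominated by the high-variance variable, and that corr_PCA is exactly cov_PCA applied to the standardized data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two independent variables on very different scales (e.g. metres vs millimetres).
X = rng.normal(size=(200, 2)) * np.array([1.0, 100.0])
X = X - X.mean(axis=0)                      # centring only (cov_PCA)

cov = np.cov(X, rowvar=False)
corr = np.corrcoef(X, rowvar=False)

def first_pc(M):
    # Leading eigenvector of a symmetric matrix = first principal axis.
    w, V = np.linalg.eigh(M)
    return V[:, np.argmax(w)]

pc_cov = first_pc(cov)
pc_corr = first_pc(corr)

# cov_PCA is dominated by the high-variance variable...
print(np.abs(pc_cov))                       # ~ [0, 1]

# ...while corr_PCA is the same as cov_PCA on data rescaled by 1/s_i.
Z = X / X.std(axis=0, ddof=1)               # standardized data
pc_z = first_pc(np.cov(Z, rowvar=False))
print(np.allclose(np.abs(pc_z), np.abs(pc_corr)))  # True
```

This is exactly the 'stretching of the axes' discussed above: the corr_PCA loadings describe the standardized variables, not the original ones.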
|
7,728
|
What is the difference between EM and Gradient Ascent?
|
From:
Xu L and Jordan MI (1996). On Convergence Properties of the EM Algorithm for
Gaussian Mixtures. Neural Computation 8(1): 129-151.
Abstract:
We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix.
Page 2
In particular we show that the EM step can be obtained by pre-multiplying the gradient by a positive definite matrix. We provide an explicit expression for the matrix ...
Page 3
That is, the EM algorithm can be viewed as a variable metric gradient ascent algorithm ...
That is, the paper provides explicit transformations of the EM algorithm into gradient-ascent, Newton, and quasi-Newton forms.
From wikipedia
There are other methods for finding maximum likelihood estimates, such as gradient descent, conjugate gradient or variations of the Gauss–Newton method. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
|
7,729
|
What is the difference between EM and Gradient Ascent?
|
No, they are not equivalent. In particular, EM convergence is much slower.
If you are interested in an optimization point-of-view on EM, in this paper you will see that the EM algorithm is a special case of a wider class of algorithms (proximal point algorithms).
|
7,730
|
What is the difference between EM and Gradient Ascent?
|
I wanted to follow up (even though this is some years later) on the OP's second question:
Is there any condition under which they are equivalent?
In fact there is a condition under which they're equivalent.
The first order EM algorithm is gradient descent on the marginal likelihood function.
To parse the implications of this statement you need the precise definitions and the derivation, which is pretty straightforward so I'll sketch it here:
The statement above is literally, $\nabla_\theta Q_n(\theta | \theta^t)|_{\theta =\theta^t} = \nabla_{\theta}l(\theta).$
Define,
$$
Q_n(\theta | \theta^t) = \frac{1}{n}\sum_{i=1}^n \left\{\int_z k_{\theta^t}(z|y_i)\log f_\theta(y_i, z)dz \right\}.
$$
Here $z$ is the unobserved or "latent" variable, $k_{\theta^t}(z|y_i)$ its conditional distribution given $y_i$, $y_i$ are observed data, $\theta^t$ is the parameter value at iteration $t$, and $\theta$ is the parameter you are optimizing over in the EM algorithm. Further
$$
l(\theta) = \frac{1}{n}\sum_{i=1}^n\log\left(\int_z f_\theta(y_i, z)dz\right)
$$
Now consider,
$$
\nabla_\theta Q_n(\theta | \theta^t) = \frac{1}{n}\sum_{i=1}^n \left\{\int_z k_{\theta^t}(z|y_i)\nabla_\theta \log f_\theta(y_i, z)dz \right\}.
$$
Applying the identity $\nabla_\theta \log f_\theta = \nabla_\theta f_\theta / f_\theta$, the right-hand side becomes:
$$
\frac{1}{n}\sum_{i=1}^n \left\{\int_z k_{\theta^t}(z|y_i)\nabla_\theta \log f_\theta(y_i, z)\,dz \right\} = \frac{1}{n}\sum_{i=1}^n \left\{\int_z k_{\theta^t}(z|y_i)\frac{\nabla_\theta f_\theta(y_i, z)}{f_\theta(y_i, z)}\,dz\right\}.
$$
Next write out the definition of the conditional distribution,
$$
\frac{1}{n}\sum_{i=1}^n \left\{\int_z k_{\theta^t}(z|y_i)\frac{\nabla_\theta f_\theta(y_i, z)}{f_\theta(y_i, z)}\,dz\right\}=
\frac{1}{n}\sum_{i=1}^n \left\{\int_z \frac{f_\theta(y_i, z)}{f_\theta(y_i)}\frac{\nabla_\theta f_\theta(y_i, z)}{f_\theta(y_i, z)}\,dz\right\}.
$$
Now you cancel the $f_\theta(y_i, z)$ terms
$$
\frac{1}{n}\sum_{i=1}^n \left\{\int_z \frac{\nabla_\theta f_\theta(y_i, z)}{f_\theta(y_i)}\,dz\right\}.
$$
Now switch the order of the integral and derivative to obtain
$$
\frac{1}{n}\sum_{i=1}^n \left\{ \frac{\nabla_\theta f_\theta(y_i) }{f_\theta(y_i)}\right\} = \frac{1}{n}\sum_{i=1}^n \left\{\nabla_\theta \log f_\theta(y_i)\right\},
$$
and it is easy to see that this is the same as
$$
\nabla_\theta l(\theta),
$$
which shows the claim:
The first order EM algorithm is gradient descent on the marginal likelihood function.
Of course this makes the usual assumptions about interchange of derivative and integral, so if those assumptions are not valid, then the claim will not be valid. Those types of cases occur most frequently when a parameter is on the boundary of the support of the distribution and the derivative w.r.t. the parameter becomes a Dirac delta function which does not allow interchange of derivative and integral.
The claim is made at the bottom of page 82 of the following paper:
Statistical guarantees for the EM algorithm: From population to sample-based analysis. Sivaraman Balakrishnan, Martin J. Wainwright, Bin Yu. Ann. Statist. 45(1): 77-120 (February 2017). DOI: 10.1214/16-AOS1435.
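The cancellation above is easy to sanity-check numerically. The sketch below (illustrative model and parameter values, not taken from the paper) uses a two-component Gaussian mixture with known weights $1/2$ and unit variances, with $\theta = (\mu_0, \mu_1)$, and compares the closed-form $\nabla_\theta Q_n(\theta \,|\, \theta^t)|_{\theta=\theta^t}$ against a finite-difference gradient of the marginal log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)
# Data from a two-component Gaussian mixture (weights 1/2, unit variances).
y = np.concatenate([rng.normal(-2, 1, 100), rng.normal(3, 1, 100)])

def phi(x):
    """Standard normal density."""
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def marginal_ll(mu):
    """l(theta): average marginal log-likelihood over the data."""
    m0, m1 = mu
    return np.mean(np.log(0.5 * phi(y - m0) + 0.5 * phi(y - m1)))

mu_t = np.array([-1.0, 2.0])          # current iterate theta^t

# E-step responsibilities k_{theta^t}(z | y_i)
p0 = 0.5 * phi(y - mu_t[0])
p1 = 0.5 * phi(y - mu_t[1])
r0 = p0 / (p0 + p1)
r1 = 1.0 - r0

# Gradient of Q(theta | theta^t) at theta = theta^t (closed form for unit variances)
grad_Q = np.array([np.mean(r0 * (y - mu_t[0])),
                   np.mean(r1 * (y - mu_t[1]))])

# Gradient of the marginal log-likelihood by central finite differences
eps = 1e-6
grad_l = np.array([(marginal_ll(mu_t + eps * e) - marginal_ll(mu_t - eps * e)) / (2 * eps)
                   for e in np.eye(2)])

print(np.allclose(grad_Q, grad_l, atol=1e-6))  # True: the two gradients coincide
```

The two gradients agree at $\theta = \theta^t$, as the derivation predicts; away from $\theta^t$ they generally differ, which is why EM and plain gradient ascent take different paths.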
|
7,731
|
In caret what is the real difference between cv and repeatedcv?
|
According to the caret manual (see "reference manual"), the parameter repeats only applies when the method is set to repeatedcv, so no repetition is performed when the method is set to cv. So the difference between the two methods is indeed that repeatedcv repeats and cv does not.
Aside: repeating a cross-validation with exactly the same splitting will yield exactly the same result for every repetition (assuming that the model is trained in a deterministic manner), which is not only inefficient, but also dangerous when it comes to comparing the validation results of different model algorithms in a statistical manner. So be aware of this if you ever have to program a validation yourself.
|
7,732
|
In caret what is the real difference between cv and repeatedcv?
|
Admittedly, this is a VERY old post, but based on the code snippets provided by user3466398, the difference is that repeatedcv does exactly that: it repeatedly performs X-fold cross-validation on the training data, i.e. if you specify 5 repeats of 10-fold cross-validation, it will perform 10-fold cross-validation on the training data 5 times, using a different set of folds for each cross-validation.
The rationale for doing this, I presume, is to obtain a more accurate and robust estimate of the cross-validation accuracy, i.e. one can report the average CV accuracy across repeats.
|
7,733
|
In caret what is the real difference between cv and repeatedcv?
|
The actual code behind these parameters can be found in the selectByFilter.R and createDataPartition.R (formerly createFolds.R) source files in the `caret/R/' folder of the package.
See these files for e.g. here and here (beware these permalinks may eventually point to older versions of the code). For convenience, the relevant snippets (as of version 6.0-78, c. Nov 2017) are shown below.
In selectByFilter.R c. line 157
sbf <- function (x, ...) UseMethod("sbf")
...
"sbf.default" <-
function(x, y,
sbfControl = sbfControl(), ...)
{
...
if(is.null(sbfControl$index)) sbfControl$index <- switch(
tolower(sbfControl$method),
cv = createFolds(y, sbfControl$number, returnTrain = TRUE),
repeatedcv = createMultiFolds(y, sbfControl$number, sbfControl$repeats),
loocv = createFolds(y, length(y), returnTrain = TRUE),
boot =, boot632 = createResample(y, sbfControl$number),
test = createDataPartition(y, 1, sbfControl$p),
lgocv = createDataPartition(y, sbfControl$number, sbfControl$p))
...
In createDataPartition.R c. line 227
createMultiFolds <- function(y, k = 10, times = 5) {
if(class(y)[1] == "Surv") y <- y[,"time"]
prettyNums <- paste("Rep", gsub(" ", "0", format(1:times)), sep = "")
for(i in 1:times) {
tmp <- createFolds(y, k = k, list = TRUE, returnTrain = TRUE)
names(tmp) <- paste("Fold",
gsub(" ", "0", format(seq(along = tmp))),
".",
prettyNums[i],
sep = "")
out <- if(i == 1) tmp else c(out, tmp)
}
out
}
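For readers who don't use R, the effect of createMultiFolds can be sketched in Python. This is a simplified analogue (hypothetical helper; it does plain index splitting with no stratification on y, unlike caret's createFolds): repeatedcv simply generates `times` independent k-fold assignments, each with its own shuffle.

```python
import random

def create_multi_folds(n, k=10, times=5, seed=0):
    """Minimal sketch of caret's createMultiFolds: `times` independent
    k-fold assignments over n samples, returning TRAINING indices per
    fold (returnTrain = TRUE). No stratification, unlike caret."""
    rng = random.Random(seed)
    all_folds = []
    for rep in range(times):
        idx = list(range(n))
        rng.shuffle(idx)                       # fresh shuffle per repeat
        held_out_sets = [set(idx[f::k]) for f in range(k)]
        all_folds += [[i for i in range(n) if i not in held]
                      for held in held_out_sets]
    return all_folds

folds = create_multi_folds(n=30, k=3, times=5)
print(len(folds))   # 15 = k * times training sets, not just k
```

With method = "cv" you would get only the first k entries; repeatedcv concatenates k folds per repeat, so downstream resampling statistics are averaged over k * times model fits.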
|
7,734
|
Why is PCA sensitive to outliers?
|
One of the reasons is that PCA can be thought of as a low-rank decomposition of the data that minimizes the sum of $L_2$ norms of the residuals of the decomposition. I.e. if $Y$ is your data ($m$ vectors of $n$ dimensions), and $X$ is the PCA basis ($k$ vectors of $n$ dimensions), then the decomposition will strictly minimize
$$\lVert Y-XA \rVert^2_F = \sum_{j=1}^{m} \lVert Y_j - X A_{j.} \rVert^2 $$
Here $A$ is the matrix of coefficients of the PCA decomposition and $\lVert \cdot \rVert_F$ is the Frobenius norm of the matrix.
Because PCA minimizes the $L_2$ norms (i.e. quadratic norms), it has the same issue as least-squares or fitting a Gaussian: it is sensitive to outliers. Because of the squaring of deviations, the outliers will dominate the total norm and therefore will drive the PCA components.
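The effect is easy to demonstrate with a small Python sketch (illustrative data): a single extreme point is enough to rotate the leading component away from the bulk of the data and onto the outlier.

```python
import numpy as np

# Clean data lying along the x-axis: the first PC should be ~[1, 0].
X = np.column_stack([np.linspace(-1, 1, 50), np.zeros(50)])

def first_pc(X):
    """First principal axis via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

pc_clean = first_pc(X)

# One extreme outlier far off the true direction.
X_out = np.vstack([X, [[0.0, 100.0]]])
pc_out = first_pc(X_out)

# The squared residual of the outlier dominates the L2 objective,
# dragging the leading component toward it.
print(abs(pc_clean @ np.array([1.0, 0.0])))   # ~1.0: aligned with the data
print(abs(pc_out @ np.array([0.0, 1.0])))     # ~1.0: aligned with the outlier
```

This is the quadratic-penalty effect in action: a point 100 units away contributes 10,000 to the objective, so the variance-maximizing direction follows it.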
|
7,735
|
Difference between Bayes network, neural network, decision tree and Petri nets
|
Wow, what a big question! The short version of the answer is that just because you can represent two models using diagrammatically similar visual representations, doesn't mean they are even remotely related structurally, functionally, or philosophically. I'm not familiar with FCM or NF, but I can speak to the other ones a bit.
Bayesian Network
In a Bayesian network, the graph represents the conditional dependencies of different variables in the model. Each node represents a variable, and each directed edge represents a conditional relationship. Essentially, the graphical model is a visualization of the chain rule.
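The "visualization of the chain rule" point can be made concrete with a toy network A → B → C (all probabilities below are made-up values, purely for illustration): the joint factorizes along the edges as P(a, b, c) = P(a) P(b|a) P(c|b).

```python
import itertools

# Conditional probability tables for a tiny network A -> B -> C
# over binary variables (hypothetical values, for illustration only).
P_a = {0: 0.6, 1: 0.4}
P_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # P(b | a)
P_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}   # P(c | b)

def joint(a, b, c):
    # Chain-rule factorization along the graph edges.
    return P_a[a] * P_b_given_a[a][b] * P_c_given_b[b][c]

total = sum(joint(a, b, c)
            for a, b, c in itertools.product([0, 1], repeat=3))
print(round(total, 10))   # 1.0: the factorization defines a valid joint
```

The graph encodes which conditioning terms may be dropped (here C depends on A only through B), which is exactly the conditional-independence structure the diagram is meant to convey.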
Neural Network
In a neural network, each node is a simulated "neuron". The neuron is essentially on or off, and its activation is determined by a linear combination of the outputs of the nodes in the preceding "layer" of the network.
Decision Tree
Let's say we are using a decision tree for classification. The tree essentially provides us with a flowchart describing how we should classify an observation. We start at the root of the tree, and the leaf where we end up determines the classification we predict.
As you can see, these three models really have basically nothing at all to do with each other besides being representable with boxes and arrows.
|
7,736
|
Difference between Bayes network, neural network, decision tree and Petri nets
|
It is easy to show (see Daphne Koller's course) that Logistic Regression is a restricted version of Conditional Random Fields, which are undirected graphical models, while Bayesian Networks are directed graphical models. Then, Logistic Regression could also be viewed as a single layer perceptron. This is the only link (which is very loose) that I think could be drawn between Bayesian Networks and Neural Networks.
I have yet to find a link between the other concepts you asked about.
|
7,737
|
Difference between Bayes network, neural network, decision tree and Petri nets
|
First we attempt to state the nature of the problem these methods try to solve. If a problem is straightforward, polynomial, or NP-complete, we have ready-to-plug algorithms that can provide a deterministic answer by simple recombination of the axioms along logical rules. However, if that is not the case, we have to rely on a method of reasoning, wherein we treat the problem as heterogeneous and plug it into a network, the nodes being evaluations and the edges being pathways between the components.
In any kind of network-based reasoning, we do not reason deductively, using abstract generalisations and combinations according to logical rules in a linear flow, but rather work through the problem based on the propagation of reasoning in different directions, such that we solve the problem one node at a time, remaining open to improvements upon the discovery of new facts concerning any node in the future. Now let us see how each of these techniques approaches this method of problem solving in its own way.
Neural Network:
The neural network is a black box, where it is believed (it can never be verified from outside the system) that connections among simpleton nodes are formed and emphasised by repeated external reinforcements. It approaches the problem in a connectionist paradigm. The problem is likely solved, but there is little by way of explainability. The neural net is now widely used because of its ability to produce quick results, provided the problem with explainability is overlooked.
Bayesian Network:
The Bayesian network is a directed acyclic graph, which is more like a flowchart, except that a flowchart can have cyclic loops. The Bayesian network, unlike the flowchart, can have multiple start points. It basically traces the propagation of events across multiple ambiguous points, where the event diverges probabilistically between pathways. Obviously, at any given point in the network, the probability of that node being visited depends on the joint probability of the preceding nodes. The Bayesian network differs from the neural network in that its reasoning is explicit, even though probabilistic, and hence it can have multiple stable states, with each step being revisited and modified within legal values, just like an algorithm. It is a robust way to reason probabilistically, but it involves encoding probabilities and conjecturing the points where randomized actions can happen, and hence needs more heuristic effort to build.
Decision Trees:
The decision tree is again a network, more like a flowchart, and closer to the Bayesian network than to the neural net. Each node has more intelligence than a neural-net node, and the branching can be decided by mathematical or probabilistic evaluations. The decisions are straightforward evaluations based on frequency distributions of likely events, so the decision is probabilistic. In Bayesian networks, by contrast, the decision is based on the distribution of 'evidence' that points to an event having occurred, rather than on direct observation of the event itself.
An Example
For instance, if we were to predict the movement of a man-eating tiger across some Himalayan villages that happens to be in the edge of some tiger reserve, we could model it on either approach as follows:
In a decision tree, we would rely on expert estimates of whether a tiger, given the choice between open fields and rivers, would choose the latter.
In a Bayesian network, we track the tiger by pug marks, but reason in a manner that acknowledges that these pug marks might be those of some other similar-sized tiger routinely patrolling its territory. If we were to use a neural net, we would have to train the model repeatedly on various behavioural peculiarities of tigers in general, such as a preference for swimming, a preference for covered areas over open areas, and an avoidance of human habitations, in order to allow the network to reason generally over the course the tiger might take.
Difference between Bayes network, neural network, decision tree and Petri nets
Excellent answer by @David Marx. I have been wondering what the difference is between a classification/regression tree and a Bayesian network. One builds on entropy to classify an outcome into classes based on different predictors, and the other builds a graphical network using conditional independence and probabilistic parameter estimates.
I feel that the methodology for building a Bayesian network differs from that of a regression/decision tree. The algorithms for structural learning, the objectives for using the models, and the inferential abilities of the models are all different.
The score-based and constraint-based approaches can be understood with some parallels drawn to the information-gain criterion in the decision-tree family.
Regarding graphical models, a Petri net formalises system behaviour; in that it sharply differs from the rest of the mentioned models, all of which relate to how a judgement is formed.
It is worth noting that most of the cited names designate quite extensive AI concepts, which often coalesce: for example, you may use a neural network to build a decision tree, while the neural network itself, as an earlier post discussed, may depend on Bayesian inference.
It's a good question, and I've been asking myself the same. There is more than one kind of neural network, and it seems the previous answer addressed the competitive type, whereas the Bayesian network seems to have similarities to the feed-forward, back-propagation (FFBP) type rather than the competitive type. In fact, I would say the Bayesian network is a generalisation of the FFBP; the FFBP is a type of Bayesian network and works in a similar fashion.
How to find confidence intervals for ratings?
Like Karl Broman said in his answer, a Bayesian approach would likely be a lot better than using confidence intervals.
The Problem With Confidence Intervals
Why might using confidence intervals not work too well? One reason is that if you don't have many ratings for an item, then your confidence interval is going to be very wide, so the lower bound of the confidence interval will be small. Thus, items without many ratings will end up at the bottom of your list.
Intuitively, however, you probably want items without many ratings to be near the average item, so you want to wiggle your estimated rating of the item toward the mean rating over all items (i.e., you want to push your estimated rating toward a prior). This is exactly what a Bayesian approach does.
Bayesian Approach I: Normal Distribution over Ratings
One way of moving the estimated rating toward a prior is, as in Karl's answer, to use an estimate of the form $w*R + (1-w)*C$:
$R$ is the mean over the ratings for the item.
$C$ is the mean over all items (or whatever prior you want to shrink your rating to).
Note that the formula is just a weighted combination of $R$ and $C$.
$w = \frac{v}{v+m}$ is the weight assigned to $R$, where $v$ is the number of reviews for the item and $m$ is some kind of constant "threshold" parameter.
Note that when $v$ is very large, i.e., when we have a lot of ratings for the current item, then $w$ is very close to 1, so our estimated rating is very close to $R$ and we pay little attention to the prior $C$. When $v$ is small, however, $w$ is very close to 0, so the estimated rating places a lot of weight on the prior $C$.
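A minimal sketch of this estimate (the global mean $C$, the threshold $m$, and the ratings below are all hypothetical):

```python
# Shrinkage estimate w*R + (1-w)*C with w = v/(v+m), as described above.
def shrunk_rating(R, v, C, m):
    w = v / (v + m)            # weight on the item's own mean rating
    return w * R + (1 - w) * C

C, m = 3.5, 10                 # assumed global mean and threshold parameter
print(shrunk_rating(4.8, v=3, C=C, m=m))    # few ratings: pulled toward C (3.8)
print(shrunk_rating(4.8, v=500, C=C, m=m))  # many ratings: close to R (~4.77)
```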
This estimate can, in fact, be given a Bayesian interpretation as the posterior estimate of the item's mean rating when individual ratings come from a normal distribution centered around that mean.
However, assuming that ratings come from a normal distribution has two problems:
A normal distribution is continuous, but ratings are discrete.
Ratings for an item don't necessarily follow a unimodal Gaussian shape. For example, maybe your item is very polarizing, so people tend to either give it a very high rating or give it a very low rating.
Bayesian Approach II: Multinomial Distribution over Ratings
So instead of assuming a normal distribution for ratings, let's assume a multinomial distribution. That is, given some specific item, there's a probability $p_1$ that a random user will give it 1 star, a probability $p_2$ that a random user will give it 2 stars, and so on.
Of course, we have no idea what these probabilities are. As we get more and more ratings for this item, we can guess that $p_1$ is close to $\frac{n_1}{n}$, where $n_1$ is the number of users who gave it 1 star and $n$ is the total number of users who rated the item, but when we first start out, we have nothing. So we place a Dirichlet prior $Dir(\alpha_1, \ldots, \alpha_k)$ on these probabilities.
What is this Dirichlet prior? We can think of each $\alpha_i$ parameter as being a "virtual count" of the number of times some virtual person gave the item $i$ stars. For example, if $\alpha_1 = 2$, $\alpha_2 = 1$, and all the other $\alpha_i$ are equal to 0, then we can think of this as saying that two virtual people gave the item 1 star and one virtual person gave the item 2 stars. So before we even get any actual users, we can use this virtual distribution to provide an estimate of the item's rating.
[One way of choosing the $\alpha_i$ parameters would be to set $\alpha_i$ equal to the overall proportion of votes of $i$ stars. (Note that the $\alpha_i$ parameters aren't necessarily integers.)]
Then, once actual ratings come in, simply add their counts to the virtual counts of your Dirichlet prior. Whenever you want to estimate the rating of your item, simply take the mean over all of the item's ratings (both its virtual ratings and its actual ratings).
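A minimal sketch of this virtual-count update for a 5-star system (the prior proportions and the item's vote counts below are made up):

```python
# Dirichlet "virtual counts" rating estimate, as described above.
alpha = [0.1, 0.2, 0.3, 0.25, 0.15]  # assumed global proportions of 1..5-star votes
counts = [0, 1, 3, 10, 6]            # actual ratings this item has received

posterior = [a + n for a, n in zip(alpha, counts)]  # virtual + actual counts
total = sum(posterior)
stars = range(1, 6)
# Estimated rating: mean over all (virtual and actual) ratings.
estimated_rating = sum(s * c for s, c in zip(stars, posterior)) / total
print(round(estimated_rating, 3))  # 4.007
```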
This situation cries out for a Bayesian approach. There are simple approaches for Bayesian rankings of ratings here (pay particular attention to the comments, which are interesting) and here, and then a further commentary on these here. As one of the comments in the first of these links points out:
The Best of BeerAdvocate (BA) ... uses a Bayesian estimate:
weighted rank (WR) = (v / (v+m)) × R + (m / (v+m)) × C
where:
R = review average for the beer
v = number of reviews for the beer
m = minimum reviews required to be listed (currently 10)
C = the mean across the list (currently 2.5)
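A minimal sketch of that formula with the constants quoted above ($m = 10$, $C = 2.5$); the beer averages and review counts are hypothetical:

```python
# BeerAdvocate-style weighted rank, as quoted above.
def weighted_rank(R, v, m=10, C=2.5):
    return (v / (v + m)) * R + (m / (v + m)) * C

# Two hypothetical beers with the same 4.5 review average:
print(weighted_rank(R=4.5, v=12))   # few reviews: pulled toward C (~3.59)
print(weighted_rank(R=4.5, v=800))  # many reviews: close to R (~4.48)
```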
The question cites Evan Miller's article "How Not to Sort by Average Rating." Miller has also published an article, "Ranking Items With Star Ratings" that addresses this exact question.
Assume you have $K$ possible ratings, indexed by $k$, each worth $s_k$ points. For "star" rating systems, $s_k = k$. (That is, 1 point, 2 points, ….) Assume a given item has received $N$ total ratings, with $n_k$ ratings of $k$. Then the items can be effectively sorted with the criterion:
$$S(n_1, \ldots, n_k) = \sum_{k=1}^K{s_k \frac{n_k + 1}{N + K}} - z_{\alpha/2} \sqrt{\left(\left(\sum_{k=1}^K s_k^2 \frac{n_k + 1}{N + K}\right) - \left(\sum_{k=1}^K{s_k \frac{n_k + 1}{N + K}}\right)^2\right)/(N + K + 1)}$$
where $z_{\alpha/2}$ is the $1 - \alpha/2$ quantile of a normal distribution. The above expression is the lower bound of a normal approximation to a Bayesian credible interval for the average rating. Setting $\alpha = 0.10$ ($z = 1.65$), a sort criterion of $X$ means that 95% of the time the item will have an average rating greater than $X$, at least according to the belief structure.
Miller's approach doesn't require a minimum number of reviews to be listed, which is nice.
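A minimal sketch of Miller's criterion as given above (the two rating vectors below are hypothetical):

```python
import math

def star_sort_criterion(counts, z=1.65):
    """Lower bound of the normal approximation to the Bayesian credible
    interval for the average rating; counts[k-1] = number of k-star votes."""
    K = len(counts)
    N = sum(counts)
    s = range(1, K + 1)
    mean = sum(sk * (nk + 1) for sk, nk in zip(s, counts)) / (N + K)
    second_moment = sum(sk ** 2 * (nk + 1) for sk, nk in zip(s, counts)) / (N + K)
    return mean - z * math.sqrt((second_moment - mean ** 2) / (N + K + 1))

# Two hypothetical items, both averaging 4 stars:
print(star_sort_criterion([0, 0, 0, 5, 0]))    # 5 ratings
print(star_sort_criterion([0, 0, 0, 500, 0]))  # 500 ratings sorts higher
```

Note that the item with many ratings sorts above the sparsely rated one despite the identical raw average, which is the behaviour the article argues for.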
Under which assumptions a regression can be interpreted causally?
I have made efforts in this direction, and I feel able to give an answer. I have written several answers and questions about this topic; some of them may help you. Among others:
Regression and causality in econometrics
conditional and interventional expectation
linear causal model
Structural equation and causal model in economics
regression and causation
What is the relationship between minimizing prediction error versus parameter estimation error?
Difference Between Simultaneous Equation Model and Structural Equation Model
endogenous regressor and correlation
Random Sampling: Weak and Strong Exogenity
Conditional probability and causality
OLS Assumption-No correlation should be there between error term and independent variable and error term and dependent variable
Does homoscedasticity imply that the regressor variables and the errors are uncorrelated?
So, here:
Regression and Causation: A Critical Examination of Six Econometrics Textbooks - Chen and Pearl (2013)
the reply to your question,
Under which assumptions a regression can be interpreted causally?
is given. However, at least in Pearl's opinion, the question is not well posed. The fact of the matter is that some points must be fixed before we can "reply directly". Moreover, the language used by Pearl and his colleagues is not yet familiar in econometrics.
If you are looking for the econometrics book that gives the best reply, I have already done this work for you. I suggest Mostly Harmless Econometrics: An Empiricist's Companion - Angrist and Pischke (2009). However, Pearl and his colleagues do not consider this presentation exhaustive either.
So let me try to answer in the most concise, but also complete, way possible.
Consider a data generation process $\text{D}_X(x_1, ... , x_n|\theta)$, where $\text{D}_X(\cdot)$ is a joint density function, with $n$ variables and parameter set $\theta$.
It is well known that a regression of the form $x_n = f(x_1, ... , x_{n-1}|\theta)$ is estimating a conditional mean of the joint distribution, namely $\text{E}(x_n|x_1,...,x_{n-1})$. In the specific case of a linear regression, we have something like
$$ x_n = \theta_0 + \theta_1 x_1 + ... + \theta_{n-1}x_{n-1} + \epsilon $$
The question is: under which assumptions of the DGP $\text{D}_X(\cdot)$ can we infer that the regression (linear or not) represents a causal relationship? ... UPDATE: I am not assuming any causal structure within my DGP.
The core of the problem is precisely here. All the assumptions you invoke involve purely statistical information; in that case there is no way to reach causal conclusions, at least not in a coherent and unambiguous manner. In your reasoning the DGP is presented as a tool that carries the same information as can be encoded in the joint probability distribution, and no more (they are used as synonyms). The key point is that, as underscored many times by Pearl, causal assumptions cannot be encoded in a joint probability distribution or in any statistical concept completely attributable to it. The root of the problem is that the joint probability distribution, and in particular its conditioning rules, works well for observational problems but cannot properly handle interventional ones. Intervention is the core of causality, so causal assumptions have to stay outside distributional aspects. Most econometrics books fall into confusion, ambiguity, or error about causality because the tools presented there do not permit a clear distinction between causal and statistical concepts.
We need something else to pose causal assumptions. The Structural Causal Model (SCM) is the alternative proposed in the causal-inference literature by Pearl. The DGP must then be precisely the causal mechanism we are interested in, and our SCM encodes everything we know/assume about the DGP. Read here for more detail about the DGP and SCM in causal inference: What's the DGP in causal inference?
Now, you, like most econometrics books, rightly invoke exogeneity, which is a causal concept:
I am however uncertain about this condition [exogeneity]. It seems too weak to encompass all potential arguments against regression implying causality. Hence my question above.
I understand your perplexity well. Actually, many problems revolve around the "exogeneity condition". It is crucial, and it can be enough in a quite general sense, but it must be used properly. Follow me.
The exogeneity condition must be written on a structural-causal equation (its error term), and on nothing else. Surely not on something like the population regression (a genuine concept, but wrong here), and not on any kind of "true model/DGP" that lacks a clear causal meaning. For example, no absurd concept like the "true regression" used in some presentations. Vague or ambiguous concepts like "linear model" are also used a lot, but they are not adequate here.
No statistical condition, however sophisticated, is enough if the above requirement is violated: weak/strict/strong exogeneity, predeterminedness, past/present/future, orthogonality/uncorrelatedness/independence/mean independence/conditional independence, stochastic or non-stochastic regressors, etc. None of them, nor any related concept, is enough if they refer to some error/equation/model that has no causal meaning from the outset. You need a structural-causal equation.
Now, you, like some econometrics books, invoke experiments, randomization, and related concepts. This is one right way. However, it can be used improperly, as in the Stock and Watson manual (I can give details if you want). Angrist and Pischke also refer to experiments, but they introduce a structural-causal concept at the core of their reasoning as well (the linear causal model, chapter 3, p. 44). Moreover, in my checks, they are the only ones to introduce the concept of bad controls. This story sounds like the omitted-variables problem, but here not only a correlation condition but also a causal nexus (p. 51) is invoked.
There exists in the literature a debate between "structuralists" and "experimentalists". In Pearl's opinion this debate is rhetorical: briefly, for him the structural approach is more general and powerful, and the experimental one boils down to the structural. Indeed, structural equations can be viewed as a language for encoding a set of hypothetical experiments.
That said, the direct answer. If the equation:
$$ x_n = \theta_0 + \theta_1 x_1 + ... + \theta_{n-1}x_{n-1} + \epsilon $$
is a linear causal model as defined here: linear causal model,
and the exogeneity condition
$$ \text{E}[\epsilon |x_1, ... x_{n-1}] = 0$$
holds.
Then a linear regression like:
$$ x_n = \beta_0 + \beta_1 x_1 + ... + \beta_{n-1}x_{n-1} + v $$
has causal meaning. Or better, all the $\beta$s identify the $\theta$s, which have a clear causal meaning (see note 3).
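A minimal simulation sketch of this claim (all numbers assumed): data are generated from a structural equation whose error is exogenous by construction, and the OLS coefficients then recover the structural parameters.

```python
# Structural-causal DGP: x -> y with an exogenous error term.
# Since E[eps | x] = 0 holds by construction, the OLS betas
# identify the structural thetas. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
theta0, theta1 = 1.0, 2.0

x = rng.normal(size=n)
eps = rng.normal(size=n)            # exogenous: generated independently of x
y = theta0 + theta1 * x + eps       # the structural-causal equation

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # close to [1.0, 2.0]
```

If instead eps were built to depend on x (an endogenous regressor), the same OLS fit would no longer recover theta1, which is exactly the point of the exogeneity requirement.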
In Angrist and Pischke's opinion, models like the one above are considered old. They prefer to distinguish between the causal variable(s) (usually only one) and control variables (read: Undergraduate Econometrics Instruction: Through Our Classes, Darkly - Angrist and Pischke 2017). If you select the right set of controls, you achieve a causal meaning for the causal parameter. In order to select the right controls, for Angrist and Pischke you have to avoid bad controls. The same idea is used in the structural approach, but there it is well formalized in the back-door criterion [reply in: Chen and Pearl (2013)]. For some details on this criterion read here: Causal effect by back-door and front-door adjustments
In conclusion: all of the above says that a linear regression estimated with OLS, if properly used, can be enough for the identification of causal effects. Econometrics and other fields also offer other estimators, such as IV (instrumental variables) estimators, that have strong links with regression. These too can help with the identification of causal effects; indeed, they were designed for this. However, the story above still holds: if the problems above are not solved, the same problems, or related ones, are shared by IV and/or other techniques.
Note 1: I noted from the comments that you ask something like: "Do I have to define the directionality of causation?" Yes, you must. This is a key causal assumption and a key property of structural-causal equations. On the experimental side, you have to be well aware of which variable is the treatment and which is the outcome.
Note 2:
So essentially, the point is whether a coefficient represents a deep
parameter or not, something which can never ever be deduced from (that
is, it is not assured alone by) exogeneity assumptions but only from
theory. Is that a fair interpretation? The answer to the question
would then be "trivial" (which is ok): it can when theory tells you
so. Whether such parameter can be estimated consistently or not, that
is an entirely different matter. Consistency does not imply causality.
In that sense, exogeneity alone is never enough.
I fear that your question and answer come from misunderstandings, namely from a conflation of causal and purely statistical concepts. I am not surprised by that because, unfortunately, this conflation is made in many econometrics books, and it represents a tremendous mistake in the econometrics literature.
As I said above and in the comments, most of the mistakes come from an ambiguous and/or erroneous definition of the DGP (= true model). The ambiguous and/or erroneous definition of exogeneity is a consequence, and ambiguous and/or erroneous conclusions about the question follow from that. As I said in the comments, the weak points of doubled's and Dimitriy V. Masterov's answers come from these problems.
I started to face these problems years ago, beginning with the question: "Does exogeneity imply causality? Or not? If yes, what form of exogeneity is needed?" I consulted at least a dozen books (the most widespread among them included) and many other presentations/articles on these points. There were many similarities among them (obviously), but finding two presentations that share precisely the same definitions/assumptions/conclusions was almost impossible.
From them, it sometimes seemed that exogeneity was enough for causality, sometimes not, sometimes it depended on the form of exogeneity, and sometimes nothing was said. In summary, even though something like exogeneity was used everywhere, the positions ranged from "regression never implies causality" to "regression implies causality".
I feared that something had short-circuited there, but only when I encountered the article cited above, Chen and Pearl (2013), and Pearl's literature more generally, did I realize that my fears were well founded. I am an econometrics lover and felt disappointment when I realized this fact. Read here for more about that: How would econometricians answer the objections and recommendations raised by Chen and Pearl (2013)?
Now, the exogeneity condition is something like $E[\epsilon|X]=0$, but its meaning depends crucially on $\epsilon$. What is it?
The worst position is that it represents something like a "population regression error/residual" (DGP = population regression). If linearity is imposed as well, this condition is useless. If not, this condition imposes a linearity restriction on the regression, no more. No causal conclusions are permitted. Read here: Regression and the CEF
Another position, still the most widespread, is that $\epsilon$ is something like a "true error", but the ambiguity of the DGP/true model is shared there too. Here lies the fog: in many cases almost nothing is said, but the usual common ground is that it is a "statistical model" or simply a "model". From that, exogeneity implies unbiasedness/consistency. No more. No causal conclusion, as you said, can be deduced. Causal conclusions then come from "theory" (economic theory), as you and some books suggest. In this situation causal conclusions can arrive only at the end of the story, and they are founded on something like a foggy "expert judgement". No more. This seems to me an unsustainable position for econometric theory.
This situation is inevitable if, as you (implicitly) said, exogeneity stays on the statistical side and economic theory (or other fields) on another.
We must change perspective. Exogeneity is, also historically, a causal concept and, as I said above, must be a causal assumption and not just a statistical one. Economic theory is expressed also in terms of exogeneity; they go together. In other words, the assumptions that you are looking for, which would permit a causal conclusion for a regression, cannot live in the regression itself. These assumptions must stay outside, in a structural causal model. You need two objects, not just one. The structural causal model stands for the theoretical-causal assumptions; exogeneity is among them, and it is needed for identification. The regression stands for estimation (under other, purely statistical, assumptions).
Sometimes the econometric literature does not clearly distinguish between the regression and the true model; sometimes the distinction is made but the role of the true model (or DGP) is unclear. This is where the conflation between causal and statistical assumptions comes from; first of all, an ambiguous role for exogeneity.
The exogeneity condition must be written on the structural causal error, no other. Formally, in Pearl's language (we need the formality), the exogeneity condition can be written as:
$E[\epsilon |do(X)]=0$, which implies
$E[Y|do(X)]=E[Y|X]$ (the identifiability condition).
In this sense, exogeneity implies causality.
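A toy simulation (with made-up numbers) of the difference between $E[Y|X]$ and $E[Y|do(X)]$: with a confounder $z$, the observational slope does not identify the causal effect, while intervening on $x$, as in a randomized experiment, does.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical structural model: z confounds x and y; the true causal
# effect of x on y is 1.0 (all numbers are made up for illustration).
z = rng.normal(size=n)
x_obs = z + rng.normal(size=n)                     # observational regime
y_obs = 1.0 * x_obs + 2.0 * z + rng.normal(size=n)

# E[Y|X]: the observational slope is confounded (close to 2, not 1)
slope_obs = np.cov(y_obs, x_obs)[0, 1] / np.var(x_obs, ddof=1)

# E[Y|do(X)]: x is set by intervention, independently of z
x_do = rng.normal(size=n)
y_do = 1.0 * x_do + 2.0 * z + rng.normal(size=n)
slope_do = np.cov(y_do, x_do)[0, 1] / np.var(x_do, ddof=1)

print(slope_obs, slope_do)   # roughly 2.0 and roughly 1.0
```

In the observational regime the structural error (which absorbs $2z$) is correlated with $x$, so $E[\epsilon|do(X)]=0$ fails and the regression slope is not the causal effect.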
Read also here: Random Sampling: Weak and Strong Exogenity
Moreover, some of the points above are treated in this article: Trygve Haavelmo and the Emergence of Causal Calculus – Pearl (2015).
For some takeaways about causality in linear models, read here: Linear Models: A Useful “Microscope” for Causal Analysis - Pearl (2013)
For an accessible presentation of Pearl's literature, read this book: Judea Pearl, Madelyn Glymour, Nicholas P. Jewell - Causal Inference in Statistics: A Primer
http://bayes.cs.ucla.edu/PRIMER/
Note 3: More precisely, it should be said that the $\theta$s surely represent the so-called direct causal effects, but without additional assumptions it is not possible to say whether they also represent the total causal effects. Obviously, if there is confusion about causality in general, it is not possible to address this second-order distinction.
|
7,745
|
Under which assumptions a regression can be interpreted causally?
|
Here's a partial answer for when the underlying model is actually linear. Suppose that the true underlying model is
$$Y = \alpha + \beta X + v.$$
I'm making no assumptions about $v$, though we have that $\beta$ is THE effect of $X$ on $Y$. A linear regression estimate of $\beta$, which we will denote as $\tilde{\beta}$, is simply a statistical relationship between $Y$ and $X$, and we have
$$\tilde{\beta} = \frac{cov(Y,X)}{var(X)}.$$
So one already 'cheap' answer (which you've mentioned already) is that a linear regression identifies a causal effect when the covariance corresponds to a causal effect and not just a statistical relationship. But let's try to do a bit better.
Focusing on the covariance, we have
\begin{align*}
cov(Y,X) & = cov(\alpha + \beta X + v, X)\\
& = \beta cov(X,X) + cov(v,X) \\
& = \beta var(X) + cov(v,X),
\end{align*}
and so dividing by the variance of $X$, we get that
$$ \tilde{\beta} = \beta + \frac{cov(v,X)}{var(X)}.$$
We need $cov(v,X) = 0$ for $\tilde{\beta} = \beta$. We know that
$$cov(v,X) = E[vX] - E[v]E[X],$$
and we need that to be zero, which is true if and only if $E[vX] = E[v]E[X]$, i.e. if and only if $v$ and $X$ are uncorrelated. A sufficient condition for this is mean independence, similar to what you wrote: i.e. that $E[X|v] = E[X]$, so that $E[vX] = E[E[X|v]v] = E[X]E[v]$ (alternatively, you could let $v' = v - E[v]$ and require $E[v'|X]= 0$, so that $E[v'X] - E[v']E[X] = 0$, which is what is typically done in regression analysis). All the 'intuitive' language you quote from other posts consists of various ways to think concretely about such assumptions holding in application. Depending on the field, the terms, concepts, and approaches will all differ, but they are all trying to get these kinds of assumptions to hold.
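A quick numerical check (with made-up parameter values) of the decomposition $\tilde{\beta} = \beta + \mathrm{cov}(v,X)/\mathrm{var}(X)$ derived above:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical true model Y = alpha + beta*X + v with cov(v, X) != 0
# (alpha = 1, beta = 2; all numbers are made up for illustration)
x = rng.normal(size=n)
v = 0.5 * x + rng.normal(size=n)      # v deliberately correlated with X
y = 1.0 + 2.0 * x + v

beta_tilde = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
decomposed = 2.0 + np.cov(v, x)[0, 1] / np.var(x, ddof=1)
print(beta_tilde, decomposed)         # identical, both near 2.5
```

The two quantities agree exactly in-sample because the sample covariance is bilinear, so the regression slope here recovers $\beta + \mathrm{cov}(v,X)/\mathrm{var}(X)$, not $\beta$.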
Your comment also made me realize that it's important to really stress my assumption of "the true underlying model." I am defining $Y$ as I did. In many situations, we may not know what $Y$ is, and depending on the field, this is precisely why things get 'less rigorous' in some sense. Because you're no longer taking the model specification itself for granted. In some fields such as causal inference in statistics, you could think of these issues using DAGs or the idea of d-separation. In others, such as economics, you could start with a model of how individuals or firms behave and back out a true model through that approach, and so on.
As a final side note, observe that in this case the conditional mean independence assumption is stronger than what you need (you 'just' need the covariance to be zero). This stems from the fact that I specified a linear relationship, but it should be intuitive that imposing less structure on the model and departing from a linear regression will require stronger assumptions, closer to the notion of the error term being mean independent (or fully independent) of $X$, for you to get a causal effect (which also becomes trickier to define... one approach could be to think of the partial derivative of $Y$ with respect to $X$).
|
7,746
|
Under which assumptions a regression can be interpreted causally?
|
The question is: under which assumptions of the DGP $\text{D}_X(\cdot)$ can we infer the regression (linear or not) represents a causal relationship?
It is well known that experimental data does allow for such interpretation. For what I can read elsewhere, it seems the condition required on the DGP is exogeneity:
$$ \text{E}(x_1, ... x_{n-1}|\epsilon) = 0$$
Regression by itself cannot be interpreted causally. Indeed, 'correlation ≠ causation'. You can see this with the correlated data in the image below. The image is symmetric (the pairs $x, y$ follow a bivariate normal distribution), and the regression does not tell you whether $Y$ is caused by $X$ or vice versa.
The regression model can be interpreted as representing a causal relationship when the causality is explicitly part of the related data generating process. This is for instance the case when the experimenter performs an experiment where a variable is controlled/changed by the experimenter (and the rest is kept the same, or assumed to be the same), for instance, a 'treatment study', or in an observational study when we assume there is an 'instrumental variable'.
So it is explicit assumptions about causality in the DGP that make a regression relate to a causal relationship, not situations where the data follow a certain relationship like $\text{E}(x_1, ... x_{n-1}|\epsilon) = 0$.
About the condition $\text{E}(x_1, ... x_{n-1}|\epsilon) = 0$
I believe this should be $\text{E}(\epsilon | x_1, ... x_{n-1}) = 0$. The condition $\text{E}(x_1, ... x_{n-1}|\epsilon) = 0$ is already easily violated when all $x_i>0$, and if you use standardized data it is violated when there is heteroscedasticity. Or maybe you switched the meaning of the notation $X|Y$, intending conditional on $X$ instead of conditional on $Y$?
The condition on its own does not ensure that your regression model is to be interpreted causally. In the above example (the image) you can use a regression $x_1 = x_2 +\epsilon$ or $x_2 = x_1 +\epsilon$, and in both cases the condition is true (can be assumed to be true), but that does not make it a causal relationship; at least one (possibly both) of the two regressions cannot be interpreted causally.
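This symmetry can be seen in a small simulation (hypothetical numbers): with bivariate normal data, the zero-correlation condition between residual and regressor holds in both regression directions, so it cannot by itself pick out the causal direction.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Symmetric, made-up data: bivariate normal, correlation 0.6, unit variances
cov = [[1.0, 0.6], [0.6, 1.0]]
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

b_12 = np.cov(x2, x1)[0, 1] / np.var(x1, ddof=1)   # slope of x2 on x1
b_21 = np.cov(x1, x2)[0, 1] / np.var(x2, ddof=1)   # slope of x1 on x2

# The residuals are uncorrelated with the regressor in BOTH directions,
# so the zero-correlation condition cannot identify the causal direction.
r_12 = x2 - b_12 * x1
r_21 = x1 - b_21 * x2
print(np.cov(r_12, x1)[0, 1], np.cov(r_21, x2)[0, 1])   # both essentially 0
```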
It is the assumption that the linear model is causal that is the key factor assuring you that the regression model can be interpreted causally. The condition is necessary when you wish to ensure that the estimate of a parameter in the linear model relates entirely to the causal mechanism and not partially to the noise and confounding variables as well. So yes, this condition is related to an interpretation of the regression as a causal model, but this interpretation starts with an explicit assumption of a causal mechanism in the data generating process.
The condition is more related to ensuring that the causal effect (whose effect size is unknown) is properly estimated by an ordinary least squares regression (i.e. that there is no bias); it is not a sufficient condition that turns a regression into a causal model.
Maybe the $\epsilon$ refers to some true error in a theoretical/mechanistic/ab-initio model (e.g. some specific random process that creates the noise term, like dice rolls, particle counts in radiation, vibration of molecules, etc.)? Then the question might be a bit semantic. If you are defining an $\epsilon$ that is the true error in a linear model, then you are implicitly defining the statistical model as equal to the model that is the data generating process. Then it is not really the exogeneity condition that makes the linear regression interpretable causally, but rather the implicit definition/interpretation of $\epsilon$.
|
7,747
|
Under which assumptions a regression can be interpreted causally?
|
Short answer:
There is no explicit way of proving causality. All claims of causality must be logically derived, i.e. through common sense (theory). Imagine having an operator (like correlation) that would return causality or non-causality between variables: you would be able to perfectly identify the sources and relations of anything in the universe (e.g. whom or what an interest-rate rise would affect; which chemical would cure cancer, etc.). Clearly, this is idealistic. All conclusions about causality are made through (smart) inferences from observations.
Long answer:
The question of which variables cause another is a philosophical one, in the sense that it must be logically determined. For me, the clearest way to see this is through the two classical examples of a controlled vs. an uncontrolled experiment. I will go through these while emphasizing how much is statistics and how much is common sense (logic).
1. Controlled experiment: fertilizer
Assume you have an agricultural field divided into parcels (squares). There are parcels on which crops $(y)$ grow with and without sunlight $(X_1)$, with and without good nutrients $(X_2)$. We wish to see if a certain fertilizer ($X_3$) has an impact or not on the crop yield $y$. Let the DGP be: $y_i = \beta_0+\beta_1 X_{1i}+\beta_2 X_{2i}+\beta_3 X_{3i} +\varepsilon_i$. Here $\varepsilon_i$ represents the inherent randomness of the process, i.e. the randomness that we would have in predicting crop yield, even if this true DGP were known.
Exogeneity: [skip if clear]
The strong exogeneity assumption $E[\varepsilon_i|\textbf{X}]=0$ that you mention is needed in order for the coefficients estimated by OLS, $\hat\beta$, to be unbiased (not causal). If $E[\varepsilon_i|\textbf{X}]=c$, where $c$ is any constant, all $\hat{\beta_j}$ except for the intercept $\hat{\beta_0}$ are still unbiased. Since we are interested in $\beta_3$, this is sufficient. (Side note: other, weaker assumptions such as weak exogeneity and orthogonality between $X$ and $\varepsilon$ are sufficient for unbiasedness.) Saying that $E[X|Z]=c$ for any two random variables $X$ and $Z$ means that $X$ does not systematically depend in the mean on $Z$, i.e. if I take the mean of $X$ (with sample size $\to\infty$) for any given value of $Z$, I will get (approximately) the same value each time, so knowing $Z$ does not help at all in predicting the mean of $X$ (e.g. $E[X|Z=10]=E[X|Z=10000]=E[X|Z=-5]=E[X]=c$)
Why is this interesting? Remember, we want to know if the fertilizer $X_3$ has an impact or not ($\beta_3=0?$) on the crop yield $y$. By spraying fertilizer on random parcels, we implicitly "force" exogeneity of $X_3$ compared to all other regressors. How? Well, if we randomly spray fertilizer on a parcel, no matter if it has sunlight or not, and if it has good nutrients or not, then the mean value of fertilizer among sunny parcels will be the same as the mean value among non-sunny parcels. Same with nutrient-rich parcels: for a large number of parcels these group means are approximately equal. It makes sense after all that, if $X_3$ is independent of $X_1$, its mean should not change (significantly) as $X_1$ changes.
So, in other words $X_3$ is exogenous wrt $X_1,X_2$, i.e. $E[X_3|X_1,X_2]=c$. This means that effectively, if we want to estimate $\beta_3$ unbiasedly, we don't need $X_1,X_2$. Hence these two variables (sun, nutrients) can be treated as randomness and incorporated into the noise term, giving the regression: $y_i = \beta_0 + \beta_3 X_{3i} + \epsilon_i$, where $\epsilon_i = \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i$. Hence, the noise term can also be interpreted as a collection of all other variables that influence the response $y$, but not in a systematic fashion in the mean. (Note that $\hat\beta_0$ is biased; further note that exogeneity is weaker than independence, since the variables could be related in a higher moment instead of the mean, such as the variance, but exogeneity would still hold, see heteroskedasticity).
Causality:
Now where does causality come into play? So far we have only shown that randomly distributing fertilizer on better or worse parcels lets us look at crop yield and fertilizer alone, without taking into account the other variables (sun, nutrients), i.e. "forcing" exogeneity of fertilizer and thus all other variables into the noise term. Causality itself was and will not be proven. However, if $\hat\beta_3$ turns out to be significant, we can logically conclude that, since the randomization of fertilizer effectively "de-relates" it from all other variables (in the mean), it must have an impact on crop yield, since all other variables have no systematic impact in this setting.
In other words: 1) we used exogeneity to statistically prove that this is the condition we need for unbiased estimators (for OLS); 2) we used randomization to obtain this exogeneity and get rid of other uninteresting variables; 3) we logically concluded that, since there is a positive relation, it must be a causal one.
Notice that 3) is just a common sense conclusion, no statistics involved as in 1) or 2). It could theoretically be wrong, since e.g. it could have been that the fertilizer was actually a 'placebo' ($\beta_3=0$) but was distributed only on the sunny and nutrient-rich parcels by pure chance. Then the regression would wrongly show a significant coefficient because the fertilizer would get all the credit from the good parcels, when in fact it does nothing. However, with a large number of parcels this is so unlikely that it is very reasonable to conclude causality.
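A quick numerical sketch of the argument in part 1 (all coefficient values and the binary parcel variables are made up for illustration): because fertilizer is randomized, a regression of yield on fertilizer alone recovers $\beta_3$ without bias, even though sun and nutrients are omitted and absorbed into the noise term.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

sun = rng.integers(0, 2, n)         # X1: sunlight (0/1)
nutrients = rng.integers(0, 2, n)   # X2: good nutrients (0/1)
fertilizer = rng.integers(0, 2, n)  # X3: randomized -> exogenous by design

# Illustrative DGP: y = b0 + b1*X1 + b2*X2 + b3*X3 + noise
beta0, beta1, beta2, beta3 = 1.0, 2.0, 3.0, 0.5
y = beta0 + beta1*sun + beta2*nutrients + beta3*fertilizer + rng.normal(size=n)

# Regress y on fertilizer alone: sun and nutrients end up in the noise term
X = np.column_stack([np.ones(n), fertilizer])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(b[1])  # close to the true beta3 = 0.5
```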
2. Uncontrolled experiment: wage and education
[I will eventually (?) return with an edit to continue here later; topics to be addressed OVB,Granger-causality and instantaneous causality in VAR processes]
This question is precisely the reason why I started learning statistics/data science - shrinking the real world into a model. Truth/ common sense/ logic are the essence. Great question.
|
7,748
|
Under which assumptions a regression can be interpreted causally?
|
Let the true DGP (to be defined below) be
$$y=\mathbf{X}\beta + \mathbf{z}\alpha + \mathbf{v},$$
where $\mathbf{X}$ and $\mathbf{z}$ are regressors, and $\mathbf{z}$ is an $n \times 1$ vector for simplicity (you can think of it as an index of many variables if that feels restrictive). $\mathbf{v}$ is uncorrelated with $\mathbf{X}$ and $\mathbf{z}$.
If $\mathbf{z}$ is left out of the OLS model, then
$$\hat \beta_{OLS} = \beta + (N^{-1}\mathbf{X}'\mathbf{X})^{-1}(N^{-1}\mathbf{X}'\mathbf{z})\alpha+(N^{-1}\mathbf{X}'\mathbf{X})^{-1}(N^{-1}\mathbf{X}'\mathbf{v}).$$
Under the no-correlation assumption, the third term has a $\mathbf{plim}$ of zero, but $$\mathbf{plim}\hat \beta_{OLS}=\beta + \mathbf{plim} \left[ (N^{-1}\mathbf{X}'\mathbf{X})^{-1}(N^{-1}\mathbf{X}'\mathbf{z}) \right] \alpha.$$
If $\alpha$ is zero or $\mathbf{plim} \left[ (N^{-1}\mathbf{X}'\mathbf{X})^{-1}(N^{-1}\mathbf{X}'\mathbf{z}) \right] = 0$, then $\beta$ can be interpreted causally. In general, the inconsistency can be positive or negative.
So you need to get the functional form right, and include all variables that matter and are correlated with the regressors of interest.
There is another nice example here.
I think this might be a good example to give some intuition about when parameters can have a causal interpretation. It lays bare what it means to have a true DGP or to have the functional form right.
Let's say we have a SEM/DGP like this:
$$y_1 = \gamma_1 + \beta_1 y_2 + u_1,\quad 0<\beta_1 <1, \quad y_2=y_1+z_1$$
Here we have two endogenous variables (the $y$s), a single exogenous variable $z_1$, a random unobserved disturbance $u_1$, a stochastic relationship linking the two $y$s, and a definitional identity linking the three variables. We also have an inequality constraint to avoid dividing by zero below. The variation in $z_1$ is exogenous, so it is like a causal intervention that "wiggles" stuff around. This wiggling has a direct effect on $y_2$, but there is also an indirect one through the first equation.
Suppose a smart student, who has been paying attention to the lessons on simultaneity, writes down a reduced form model for $y_1$ and $y_2$ in terms of $z_1$:
$$\begin{align}
y_1 =& \frac{\gamma_1}{1-\beta_1} + \frac{\beta_1}{1-\beta_1} z_1 + \frac{u_1}{1-\beta_1} \\
=& E[y_1 \vert z_1] + v_1 \\
y_2 =& \frac{\gamma_1}{1-\beta_1} + \frac{1}{1-\beta_1} z_1 + \frac{u_1}{1-\beta_1} \\
=& E[y_2 \vert z_1] + v_1,
\end{align}$$
where $v_1 = \frac{u_1}{1- \beta_1}$. The two coefficients on $z_1$ have a causal interpretation. Any external change in $z_1$ will cause the $y$s to change by those amounts. But in the SEM/DGP, the values of the $y$s also respond to $u_1$. To separate the two channels, we require $z_1$ and $u_1$ to be independent, so as not to confound the two sources. That is the condition under which the causal effects of $z_1$ are identified. But this is probably not what we care about here.
In the SEM/DGP,
$$\frac{\partial y_1}{\partial y_2} = \beta_1 =\frac{\partial y_1}{\partial z_1} \div \frac{\partial y_2}{\partial z_1} =\frac{ \frac{\beta_1}{1-\beta_1}}{ \frac{1}{1-\beta_1}}.$$
We know that we can recover $\beta_1$ from the two reduced form coefficients (assuming independence of $z_1$ and $u_1$).
But what does it mean for $\beta_1$ to be the causal effect of $y_2$ on $y_1$ when they are jointly determined? All the changes come from $z_1$ and $u_1$ (as the reduced form equation makes clear), and $y_2$ is only an intermediate cause of $y_1.$ So the first structural equation gives us "snapshot" impact, but the reduced form equations give us an equilibrium impact after allowing the endogenous variables to "settle."
Given a system of linear equations, there are formal conditions for when parameters like $\beta_1$ are recoverable. The model can be a DAG or a system of equations. But this is all to say that whether something is "causal" cannot be recovered from a single linear equation and some assumptions about exogeneity. There is always some model lurking in the background, even if it is not acknowledged as such. That is what it means to get the DGP "right", and that is a crucial ingredient.
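A small simulation of the SEM above (parameter values chosen for illustration) confirms that the ratio of the two reduced-form slopes recovers $\beta_1$, while a naive OLS of $y_1$ on $y_2$ does not, because $y_2$ is endogenous:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta1, gamma1 = 0.5, 1.0  # true structural parameters, 0 < beta1 < 1

z1 = rng.normal(size=n)   # exogenous and independent of u1 (identification)
u1 = rng.normal(size=n)

# Reduced forms of: y1 = gamma1 + beta1*y2 + u1,  y2 = y1 + z1
y1 = (gamma1 + beta1 * z1 + u1) / (1 - beta1)
y2 = (gamma1 + z1 + u1) / (1 - beta1)

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Indirect least squares: beta1 = (dy1/dz1) / (dy2/dz1)
beta1_hat = ols_slope(z1, y1) / ols_slope(z1, y2)

# Naive OLS of y1 on y2 is inconsistent (here biased upward, toward 0.75)
beta1_naive = ols_slope(y2, y1)
print(beta1_hat, beta1_naive)
```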
|
7,749
|
Under which assumptions a regression can be interpreted causally?
|
Regression is just a statistical technique for strengthening causal inferences between two variables of interest by controlling for alternative causal explanations. Even a perfectly linear relationship ($r^2=1$) is meaningless without first establishing the theoretical basis for causality. The classic example is the correlation between ice cream consumption and pool drownings: neither causes the other, but both are caused by summer weather.
The point of experiments is to determine causality, which typically requires establishing that: 1) one thing happened before the other, 2) the putative cause had some explanatory mechanism for affecting the outcome, and 3) there are no competing explanations or alternative causes. It also helps if the relationship is reliable: the lights go on every time you hit the switch. Experiments are designed to establish these relationships by controlling conditions to fix the chronological sequence and rule out possible alternative causes.
Pearl (Pearl, J. (2009). Causality. Cambridge University Press) is a good read, but beyond that lies a (fascinating) philosophical rat-hole regarding causation and explanation.
|
7,750
|
How do decision tree learning algorithms deal with missing values (under the hood)
|
There are several methods used by various decision trees. Simply ignoring the missing values (as ID3 and other old algorithms do) or treating the missing values as just another category (in the case of a nominal feature) are not genuine ways of handling missing values. However, those approaches were used in the early stages of decision tree development.
The genuine handling approaches do not use data points with missing values in the evaluation of a split. However, when the child nodes are created and trained, those instances must still be distributed among them somehow.
I know about the following approaches to distributing the missing-value instances to child nodes:
all go to the child node that already has the largest number of instances (CART, though this is not its primary rule)
distribute to all children, but with diminished weights, proportional to the number of instances in each child node (C4.5 and others)
distribute randomly to a single child node, possibly according to a categorical distribution (I have seen this in various implementations of C4.5 and CART, for faster running time)
build, sort and use surrogates to distribute instances to a child node, where surrogates are input features that best mimic how the test feature sends data instances to the left or right child node (CART; if that fails, the majority rule is used)
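The second approach (fractional weights, in the style of C4.5) can be sketched as follows; the function and data layout are illustrative, not taken from any particular implementation:

```python
def split_with_missing(instances, feature, threshold):
    """Sketch of a C4.5-style fractional split: rows whose split feature is
    missing are sent to *both* children, with weights proportional to the
    total weight of the children built from the non-missing rows."""
    left, right, missing = [], [], []
    for row, w in instances:
        value = row.get(feature)
        if value is None:
            missing.append((row, w))
        elif value <= threshold:
            left.append((row, w))
        else:
            right.append((row, w))

    w_left = sum(w for _, w in left)
    w_right = sum(w for _, w in right)
    total = w_left + w_right
    for row, w in missing:
        if total > 0:
            left.append((row, w * w_left / total))
            right.append((row, w * w_right / total))
    return left, right

# Example: the row with a missing 'x' is split 2/3 left, 1/3 right,
# because two of the three observed rows fall on the left of the threshold
rows = [({"x": 1}, 1.0), ({"x": 2}, 1.0), ({"x": 5}, 1.0), ({"x": None}, 1.0)]
left, right = split_with_missing(rows, "x", 3)
```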
|
7,751
|
What are some useful guidelines for GBM parameters?
|
The caret package can help you optimize the parameter choice for your problem. The caretTrain vignette shows how to tune the gbm parameters using 10-fold repeated cross-validation - other optimization approaches are available, and it can all run in parallel using the foreach package. Use vignette("caretTrain", package="caret") to read the document.
The package supports tuning shrinkage, n.trees, and interaction.depth parameters for the gbm model, though you can add your own.
For heuristics, this is my initial approach:
shrinkage: As small as you have time for (the gbm manual has more on this, but in general you can never go wrong with a smaller value). Your data set is small so I'd probably start with 1e-3.
n.trees: I usually grow an initial model adding more and more trees until gbm.perf says I have enough (actually, typically to 1.2 times that value) and then use that as a guide for further analysis.
interaction.depth: you already have an idea about this. Try smaller values as well. Maximum value is floor(sqrt(NCOL(data))).
n.minobsinnode: I find it really important to tune this variable. You don't want it so small that the algorithm finds too many spurious features.
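The shrinkage / n.trees interplay behind these heuristics can be shown with a minimal gradient-boosting loop (squared loss, depth-1 stumps). This is a language-neutral illustration in Python, not gbm's implementation: with the same number of trees, a smaller shrinkage leaves more of the training error on the table, which is why it needs more trees (and more of your time).

```python
def best_stump(x, r):
    """Best single split on x minimizing squared error against residuals r."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - ml) ** 2 for ri in left)
               + sum((ri - mr) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    return best[1:]

def boost(x, y, shrinkage, n_trees):
    """Fit stumps to residuals, stepping by `shrinkage`; return training MSE."""
    pred = [sum(y) / len(y)] * len(y)
    for _ in range(n_trees):
        r = [yi - pi for yi, pi in zip(y, pred)]
        t, ml, mr = best_stump(x, r)
        pred = [pi + shrinkage * (ml if xi <= t else mr)
                for xi, pi in zip(x, pred)]
    return sum((yi - pi) ** 2 for yi, pi in zip(y, pred)) / len(y)

x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.1, 0.9, 3.0, 3.2, 2.9]
# with a fixed budget of 20 trees, smaller shrinkage fits the training data less
mse_big = boost(x, y, shrinkage=0.5, n_trees=20)
mse_small = boost(x, y, shrinkage=0.05, n_trees=20)
```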
|
7,752
|
What's the difference between the variance and the mean squared error?
|
The mean squared error as you have written it for OLS is hiding something:
$$\frac{\sum_{i}^{n}(y_i - \hat{y}_i) ^2}{n-2} = \frac{\sum_{i}^{n}\left[y_i - \left(\hat{\beta}_{0} + \hat{\beta}_{x}x_{i}\right)\right] ^2}{n-2}$$
Notice that the numerator sums over a function of both $y$ and $x$, so you lose a degree of freedom for each variable (or for each estimated parameter explaining one variable as a function of the other, if you prefer), hence $n-2$. In the formula for the sample variance, the numerator is a function of a single variable, so you lose just one degree of freedom in the denominator.
However, you are on track in noticing that these are conceptually similar quantities. The sample variance of $y$ measures the spread of the data around the sample mean of $y$ (in squared units), while the MSE measures the vertical spread of the data around the sample regression line (in squared vertical units).
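The two formulas can be checked side by side on toy numbers (a self-contained sketch; the data are made up). With a strong linear relationship, the MSE around the fitted line is far smaller than the sample variance around $\bar{y}$:

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# OLS slope and intercept
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar

# sample variance: spread around ybar, denominator n-1
var_y = sum((yi - ybar) ** 2 for yi in y) / (n - 1)
# regression MSE: spread around the fitted line, denominator n-2
mse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
# here x explains most of the spread in y, so mse is far below var_y
```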
|
7,753
|
What's the difference between the variance and the mean squared error?
|
In the variance formula, the sample mean approximates the population mean. The sample mean is calculated for a given sample with $n$ data points. Knowing the sample mean leaves us with only $n-1$ independent data points as the $n$th data point is constrained by the sample mean, so ($n-1$) degrees of freedom (DOF) in the denominator in the variance formula.
To get the estimated value of y ($= \beta_{0} + \beta_{1}\times x$) in the MSE formula, we need to estimate both $\beta_{0}$ (i.e. the intercept) as well as $\beta_{1}$ (i.e. the slope) so we lose 2 DOF, and so that is the reason for ($n-2$) in the denominator in the MSE formula.
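Where the two lost degrees of freedom go can be seen directly: the OLS residuals satisfy two linear constraints, $\sum e_i = 0$ and $\sum x_i e_i = 0$, so only $n-2$ residuals are free to vary. A quick numeric check (toy data, plain Python):

```python
x = [1.0, 2.0, 3.0, 4.0]
y = [1.2, 1.9, 3.2, 3.8]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar
e = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

# both constraints hold to rounding error: two DOF are used up
s1 = sum(e)
s2 = sum(xi * ei for xi, ei in zip(x, e))
```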
|
7,754
|
Fisher's Exact Test in contingency tables larger than 2x2
|
The only problem with applying Fisher's exact test to tables larger than 2x2 is that the calculations become much more difficult to do. The 2x2 version is the only one which is even feasible by hand, and so I doubt that Fisher ever imagined the test in larger tables because the computations would have been beyond anything he would have envisaged.
Nevertheless, the test can be applied to any mxn table and some software including Stata and SPSS provide the facility. Even so, the calculation is often approximated using a Monte Carlo approach.
Yes, if the expected cell counts are small, it is better to use an exact test as the chi-squared test is no longer a good approximation in such cases.
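The Monte Carlo approximation mentioned above can be sketched for an arbitrary r x c table (an illustration, not Stata's or SPSS's algorithm): permuting one variable's labels keeps both margins fixed, and the p-value is estimated as the share of permuted tables at least as extreme as the observed one, using the chi-square statistic as the extremity criterion.

```python
import random

def chisq(table):
    """Pearson chi-square statistic of an r x c count table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(len(rows)) for j in range(len(cols)))

def mc_test(table, n_sim=2000, seed=1):
    """Monte Carlo p-value via label permutation (both margins stay fixed)."""
    rng = random.Random(seed)
    a, b = [], []
    for i, row in enumerate(table):
        for j, count in enumerate(row):
            a.extend([i] * count)
            b.extend([j] * count)
    obs = chisq(table)
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(b)
        t = [[0] * len(table[0]) for _ in table]
        for i, j in zip(a, b):
            t[i][j] += 1
        if chisq(t) >= obs - 1e-12:
            hits += 1
    # add-one correction keeps the estimate away from an impossible p = 0
    return (hits + 1) / (n_sim + 1)

p_small = mc_test([[10, 1, 1], [1, 10, 1]])  # strongly associated 2x3 table
```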
|
7,755
|
Fisher's Exact Test in contingency tables larger than 2x2
|
This page in MathWorld explains how the calculations work. It points out that the test can be defined in a variety of ways:
"To compute the P-value of the test, the tables must be ordered by some criterion that measures dependence, and those tables that represent equal or greater deviation from independence than the observed table are the ones whose probabilities are added together. There are a variety of criteria that can be used to measure dependence."
I have not been able to find other articles or texts that explain how this is done with tables larger than 2x2.
This calculator
computes the exact Fisher's test for tables with 2 columns and up to 5 rows. The criterion it uses is the hypergeometric probability of each table. The overall P value is the sum of the hypergeometric probability of all tables with the same marginal totals whose probabilities are less than or equal to the probability computed from the actual data.
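The rule the calculator uses can be written out exactly for a 2x2 table (a self-contained sketch using only the hypergeometric formula): enumerate every table with the same margins, compute each table's hypergeometric probability, and sum those that do not exceed the observed table's probability.

```python
from math import comb

def fisher_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def p(x):
        # hypergeometric probability of the table with x in the top-left cell
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)  # feasible top-left cell values
    # sum probabilities of all same-margin tables no more likely than observed
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

# Fisher's classic "lady tasting tea" table
p_value = fisher_2x2(3, 1, 1, 3)
```

For this table the sum is 34/70 (about 0.486), matching R's fisher.test.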
|
7,756
|
Fisher's Exact Test in contingency tables larger than 2x2
|
If you're looking for other ways to compute Fisher's exact test with larger contingency tables, here is an online calculator for Fisher's exact test for 2x3 contingency tables. Also, here's one for 3x3 contingency tables, and one for 2x4 contingency tables.
Yes, if the expected cell counts are small, it is better to use Fisher's exact test instead of the chi-squared test, if possible.
|
7,757
|
Fisher's Exact Test in contingency tables larger than 2x2
|
In order to obtain Fisher's Exact Test in SPSS, use the Statistics = Exact option in Crosstabs. Methods for computing the Exact Test for larger tables have been around at least since the 1960s. The speed of modern microprocessors makes the computation time inconsequential these days. Indeed, it is so easy to run the Exact Test that it is important not to use it too widely.
|
7,758
|
Fisher's Exact Test in contingency tables larger than 2x2
|
One important thing to keep in mind here is that Fisher's exact test is typically implemented for contingency tables with fixed margins, i.e. the efficient algorithms utilized in Stata and R involve either generating all tables with fixed margins or sampling all tables with fixed margins.
However, the assumption of fixed margins is not appropriate in every case. In fact, I think Agresti argues that it is rarely appropriate, though this opinion is debated. In any case, before you utilize Fisher's exact test as it is commonly implemented, you need to think about whether it is appropriate for your application to treat both row and column sums as fixed.
|
7,759
|
How are Bayesian Priors Decided in Real Life?
|
There are two main approaches to this problem. The first is using relevant past data to somehow "automatically" create a prior (or to somehow include this relevant data into a single model with our new data). This option is often considered attractive because it "has a certain objectivity to it". The second is to ask experts (after showing them any relevant data they may need to have in mind). Finally, and perhaps less relevant, there is the option of using weakly informative priors (or priors that attempt to be uninformative).
In the first class of approaches, the (robust) meta-analytic predictive (MAP) prior of Schmidli et al. was already mentioned and is used quite often - especially in the robust version with an extra weakly/uninformative-mixture component added -, but there are various variants, alternatives like adaptive power priors, ideas to fit a single model over the old and the new data in a fashion robust to prior-data-conflict, and other similar ideas.
In the second class of approaches, there's many ways of getting prior opinions out of experts in ways that minimize the biases that people (including experts) are subject to (="expert elicitation"). One such framework is SHELF, on which you can find a whole course on their webpage and for which there's also a R package. I'm mentioning that one specifically, because I use it in practice, but there are others with different flavors/philosophies.
Here's a few examples of priors being set in practice, mostly drawn for clinical trials/drug development (simply because I'm the most familiar with it there - for more examples see e.g. this book): for a proof of concept study in COPD, for a proof of concept in rheumatoid arthritis (and another one also for RA), for an exponential hazard from historical data, for treatment effects in clinical trials and for predicting event rates and dispersion parameter for count outcomes. In the pharmaceutical industry, using prior information and expert knowledge is especially common for analyzing studies early in clinical development (e.g. analysis of proof of concept studies and deciding whether to proceed) or for decision making later on, while it is rarer for the confirmatory studies that are meant to support regulatory approval (in part, an overoptimistic prior is more a problem for the company when it is for internal decision making, while regulatory authorities put priors chosen for confirmatory studies under much more scrutiny).
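The robust MAP idea can be illustrated with a toy binomial endpoint (a sketch with made-up numbers, not the Schmidli et al. machinery): an informative Beta prior built from historical data is mixed with a vague Beta(1,1) component, and a prior-data conflict automatically shifts posterior weight onto the vague part.

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(y, n, a, b):
    """Log marginal likelihood of y successes in n under a Beta(a, b) prior
    (beta-binomial; the binomial coefficient cancels across components)."""
    return log_beta(a + y, b + n - y) - log_beta(a, b)

def robust_posterior_mean(y, n, a_inf, b_inf, w_inf=0.8):
    """Posterior mean under a mixture of Beta(a_inf, b_inf) and vague Beta(1,1)."""
    comps = [(w_inf, a_inf, b_inf), (1 - w_inf, 1.0, 1.0)]
    logs = [log_marginal(y, n, a, b) for _, a, b in comps]
    m = max(logs)
    post_w = [w * exp(l - m) for (w, _, _), l in zip(comps, logs)]
    z = sum(post_w)
    # mixture of conjugate posterior means, reweighted by the data
    return sum(w / z * (a + y) / (a + b + n)
               for w, (_, a, b) in zip(post_w, comps))

# historical data suggested ~20% response, encoded as Beta(20, 80)
mean_conflict = robust_posterior_mean(12, 20, 20, 80)  # new trial: 12/20, conflicts
mean_agree = robust_posterior_mean(4, 20, 20, 80)      # new trial: 4/20, agrees
```

In the conflicting case the vague component takes over and the estimate tracks the new data; in the agreeing case the informative prior dominates and the estimate stays near 20%.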
|
7,760
|
How are Bayesian Priors Decided in Real Life?
|
OP here, just wanted to add some supplementary material and demonstrate the following: a comparison between frequentist regression and Bayesian regression using R.
#cool trick to directly bring this data into R
my_data <- data.frame(read.table(header=TRUE,
row.names = 1,
text="
weight height age
1 2998.958 15.26611 53
2 3002.208 18.08711 52
3 3008.171 16.70896 49
4 3002.374 17.37032 55
5 3000.658 18.04860 50
6 3002.688 17.24797 45
7 3004.923 16.45360 47
8 2987.264 16.71712 47
9 3011.332 17.76626 50
10 2983.783 18.10337 42
11 3007.167 18.18355 50
12 3007.049 18.11375 53
13 3002.656 15.49990 42
14 2986.710 16.73089 47
15 2998.286 17.12075 52
"))
Frequentist Regression: This is how a frequentist regression model is fit (i.e. a regression model where the parameters are estimated using Ordinary Least Squares (OLS) - what we all learn in school).
First, fit the regression model:
#fit regression model
model_1 <- lm(age ~ weight + height, data = my_data)
Next, view the results:
#view results
summary(model_1)
Call:
lm(formula = age ~ weight + height, data = my_data)
Residuals:
Min 1Q Median 3Q Max
-6.2369 -1.8688 0.3864 2.1065 5.6170
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -525.2843 369.9144 -1.420 0.181
weight 0.1875 0.1238 1.515 0.156
height 0.6871 1.0859 0.633 0.539
Residual standard error: 3.796 on 12 degrees of freedom
Multiple R-squared: 0.1954, Adjusted R-squared: 0.06135
F-statistic: 1.457 on 2 and 12 DF, p-value: 0.2712
Optional : Visualize Results
library(scatterplot3d)
s3d <- scatterplot3d(my_data$weight, my_data$height,my_data$age, pch = 19, type = c("p"), color = "darkgrey",
main = "Regression Plane", grid = TRUE, box = FALSE,
mar = c(2.5, 2.5, 2, 1.5), angle = 55)
# regression plane
s3d$plane3d(model_1, draw_polygon = TRUE, draw_lines = TRUE,
polygon_args = list(col = rgb(.1, .2, .7, .5)))
# overlay positive residuals
wh <- resid(model_1) > 0
s3d$points3d(my_data$weight, my_data$height, my_data$age, pch = 19)
2) Bayesian Regression: Now, we try to fit a Bayesian Regression Model to the same data:
#load library
library(rstanarm)
library(see)
library(bayestestR)
library(performance)
First, we specify priors on the Height and Weight variables (I picked a normal distribution for both of them - in my original question, we would have decided on these priors by using the research done on giraffes by other biologists):
#specify priors
my_prior <- normal(location = c(3000, 17), scale = c(1, 2))
Next, we run the Bayesian Regression Model
#run bayesian regression model
model_2 <- stan_glm(age~., data=my_data, prior = my_prior, seed=111)
Now, we can view the results:
summary(model_2)
Model Info:
function: stan_glm
family: gaussian [identity]
formula: age ~ .
algorithm: sampling
sample: 4000 (posterior sample size)
priors: see help('prior_summary')
observations: 15
predictors: 3
Estimates:
mean sd 10% 50% 90%
(Intercept) -9000290.7 3116.3 -9004290.9 -9000230.6 -8996293.9
weight 2999.7 1.0 2998.4 2999.7 3001.1
height 17.0 2.0 14.4 17.0 19.6
sigma 3207.5 65.0 3124.2 3207.2 3291.0
Fit Diagnostics:
mean sd 10% 50% 90%
mean_PPD 55.5 824.4 -1002.3 66.1 1107.1
Look at the model performance:
#model performance
performance(model_2)
# Indices of model performance
ELPD | ELPD_SE | LOOIC | LOOIC_SE | WAIC | R2 | R2 (adj.) | RMSE | Sigma
----------------------------------------------------------------------------------------------
-574.459 | 154.366 | 1148.918 | 308.733 | 1160.324 | 0.983 | -1.000 | 23876.735 | 3207.163
> se <- sqrt(diag(vcov(model_2)))
> se
(Intercept) weight height
3116.342642 1.038384 2.040471
Optional: Visualize Results
#MCMC Trace
x <- as.array(model_2, pars = c("(Intercept)", "height", "weight"))
bayesplot::mcmc_trace(x, facet_args = list(nrow = 2))
#Posterior Distributions
plot_title <- ggplot2::ggtitle("Posterior Distributions")
plot(model_2, "hist", "weight", "height") + plot_title
#confidence ellipse
bayesplot::color_scheme_set("green")
plot(model_2, "scatter", pars = c("height", "weight"),
size = 3, alpha = 0.5) +
ggplot2::stat_ellipse(level = 0.9)
References:
https://rpubs.com/Qsheep/BayesianLinearRegression
https://www.theoj.org/joss-papers/joss.01541/10.21105.joss.01541.pdf
https://cran.r-project.org/web/packages/rstanarm/vignettes/priors.html#default-priors-and-scale-adjustments
https://mc-stan.org/rstanarm/reference/plot.stanreg.html
Note: I am still learning about Bayesian Regression - please feel free to correct any mistakes that I might have made (e.g. It seems like the Bayesian Regression Model is performing far worse than the Linear Regression Model due to my choice of priors? When I run the Bayesian Regression Model with the default priors ("weakly informative priors"), e.g. model_2 <- stan_glm(age~., data=my_data, seed=111) - the results of the Bayesian Linear Regression are comparable with the Linear Regression Model. I must be doing something wrong?).
Thank you!
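One likely culprit worth checking against the rstanarm documentation: the `prior` argument of stan_glm places priors on the *regression coefficients* (in the order they enter the formula), not on the means of the predictors, so normal(location = c(3000, 17), ...) says "the slope on weight is near 3000". A small conjugate sketch (illustrative Python, known noise sd, not rstanarm itself) shows how a tight, misplaced prior center drags the slope estimate:

```python
def posterior_slope(x, y, prior_mean, prior_sd, noise_sd=1.0):
    """Posterior mean of b in y = b*x + noise, normal prior on b."""
    prec = 1.0 / prior_sd ** 2 + sum(xi * xi for xi in x) / noise_sd ** 2
    num = (prior_mean / prior_sd ** 2
           + sum(xi * yi for xi, yi in zip(x, y)) / noise_sd ** 2)
    return num / prec

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [-4.1, -1.9, 0.1, 2.0, 4.0]  # true slope is about 2

b_sane = posterior_slope(x, y, prior_mean=0.0, prior_sd=10.0)
b_misplaced = posterior_slope(x, y, prior_mean=3000.0, prior_sd=1.0)
# the tight prior centered at 3000 pulls the slope far away from the data
```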
|
How are Bayesian Priors Decided in Real Life?
|
OP here, just wanted add some supplementary material and demonstrate the following: a comparison between Frequentist Regression and Bayesian Regression using R
#cool trick to directly bring this data
|
How are Bayesian Priors Decided in Real Life?
OP here, just wanted add some supplementary material and demonstrate the following: a comparison between Frequentist Regression and Bayesian Regression using R
#cool trick to directly bring this data into R
my_data <- data.frame(read.table(header=TRUE,
row.names = 1,
text="
weight height age
1 2998.958 15.26611 53
2 3002.208 18.08711 52
3 3008.171 16.70896 49
4 3002.374 17.37032 55
5 3000.658 18.04860 50
6 3002.688 17.24797 45
7 3004.923 16.45360 47
8 2987.264 16.71712 47
9 3011.332 17.76626 50
10 2983.783 18.10337 42
11 3007.167 18.18355 50
12 3007.049 18.11375 53
13 3002.656 15.49990 42
14 2986.710 16.73089 47
15 2998.286 17.12075 52
"))
Frequentist Regression : This is how a Frequentist Regression Model (i.e. a Regression Model where the parameters are estimated using Ordinary Least Squares (OLS) - what we all learn in school).
First, fit the regression model:
#fit regression model
model_1 <- lm(age ~ weight + height, data = my_data)
Next, view the results:
#view results
summary(model_1)
Call:
lm(formula = age ~ weight + height, data = my_data)
Residuals:
Min 1Q Median 3Q Max
-6.2369 -1.8688 0.3864 2.1065 5.6170
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -525.2843 369.9144 -1.420 0.181
weight 0.1875 0.1238 1.515 0.156
height 0.6871 1.0859 0.633 0.539
Residual standard error: 3.796 on 12 degrees of freedom
Multiple R-squared: 0.1954, Adjusted R-squared: 0.06135
F-statistic: 1.457 on 2 and 12 DF, p-value: 0.2712
Optional : Visualize Results
library(scatterplot3d)
s3d <- scatterplot3d(my_data$weight, my_data$height,my_data$age, pch = 19, type = c("p"), color = "darkgrey",
main = "Regression Plane", grid = TRUE, box = FALSE,
mar = c(2.5, 2.5, 2, 1.5), angle = 55)
# regression plane
s3d$plane3d(model_1, draw_polygon = TRUE, draw_lines = TRUE,
polygon_args = list(col = rgb(.1, .2, .7, .5)))
# overlay positive residuals
wh <- resid(model_1) > 0
s3d$points3d(my_data$height, my_data$weight, my_data$age, pch = 19)
2) Bayesian Regression: Now, we try to fit a Bayesian Regression Model to the same data:
#load library
library(rstanarm)
library(see)
library(bayestestR)
library(performance)
First, we specify priors on the Height and Weight variables (I picked a normal distribution for both of them - in my original question, we would have decided on these priors by using the research done on giraffes by other biologists):
#specify priors
my_prior <- normal(location = c(3000, 17), scale = c(1, 2))
Next, we run the Bayesian Regression Model
#run bayesian regression model
model_2 <- stan_glm(age~., data=my_data, prior = my_prior, seed=111)
Now, we can view the results:
summary(model_2)
Model Info:
function: stan_glm
family: gaussian [identity]
formula: age ~ .
algorithm: sampling
sample: 4000 (posterior sample size)
priors: see help('prior_summary')
observations: 15
predictors: 3
Estimates:
mean sd 10% 50% 90%
(Intercept) -9000290.7 3116.3 -9004290.9 -9000230.6 -8996293.9
weight 2999.7 1.0 2998.4 2999.7 3001.1
height 17.0 2.0 14.4 17.0 19.6
sigma 3207.5 65.0 3124.2 3207.2 3291.0
Fit Diagnostics:
mean sd 10% 50% 90%
mean_PPD 55.5 824.4 -1002.3 66.1 1107.1
Look at the model performance:
#model performance
performance(model_2)
# Indices of model performance
ELPD | ELPD_SE | LOOIC | LOOIC_SE | WAIC | R2 | R2 (adj.) | RMSE | Sigma
----------------------------------------------------------------------------------------------
-574.459 | 154.366 | 1148.918 | 308.733 | 1160.324 | 0.983 | -1.000 | 23876.735 | 3207.163
> se <- sqrt(diag(vcov(model_2)))
> se
(Intercept) weight height
3116.342642 1.038384 2.040471
Optional: Visualize Results
#MCMC Trace
x <- as.array(model_2, pars = c("(Intercept)", "height", "weight"))
bayesplot::mcmc_trace(x, facet_args = list(nrow = 2))
#Posterior Distributions
plot_title <- ggplot2::ggtitle("Posterior Distributions")
plot(model_2, "hist", "weight", "height") + plot_title
#confidence ellipse
bayesplot::color_scheme_set("green")
plot(model_2, "scatter", pars = c("height", "weight"),
size = 3, alpha = 0.5) +
ggplot2::stat_ellipse(level = 0.9)
References:
https://rpubs.com/Qsheep/BayesianLinearRegression
https://www.theoj.org/joss-papers/joss.01541/10.21105.joss.01541.pdf
https://cran.r-project.org/web/packages/rstanarm/vignettes/priors.html#default-priors-and-scale-adjustments
https://mc-stan.org/rstanarm/reference/plot.stanreg.html
Note: I am still learning about Bayesian Regression - please feel to correct any mistakes that I might have made (e.g. It seems like the Bayesian Regression Model is performing far worse than the Linear Regression Model due to my choice of priors? When I run the Bayesian Regression Model with the default priors ("weakly informative priors"), e.g. model_2 <- stan_glm(age~., data=my_data, seed=111) - the results of the Bayesian Linear Regression are comparable with the Linear Regression Model. I must be doing something wrong?).
Thank you!
|
7,761
|
LASSO with interaction terms - is it okay if main effects are shrunk to zero?
|
One difficulty in answering this question is that it's hard to reconcile LASSO with the idea of a "true" model in most real-world applications, which typically have non-negligible correlations among predictor variables. In that case, as with any variable selection technique, the particular predictors returned with non-zero coefficients by LASSO will depend on the vagaries of sampling from the underlying population. You can check this by performing LASSO on multiple bootstrap samples from the same data set and comparing the sets of predictor variables that are returned.
Furthermore, as @AndrewM noted in a comment, the bias of estimates provided by LASSO means that you will not be predicting outcomes "as closely as possible." Rather, you are predicting outcomes that are based on a particular choice of the unavoidable bias-variance tradeoff.
So given those difficulties, I would hope that you would want to know for yourself, not just to satisfy a critic, the magnitudes of main effects of the variables that contribute to the interaction. There is a package available in R, glinternet, that seems to do precisely what you need (although I have no experience with it):
Group-Lasso INTERaction-NET. Fits linear pairwise-interaction models that satisfy strong hierarchy: if an interaction coefficient is estimated to be nonzero, then its two associated main effects also have nonzero estimated coefficients. Accommodates categorical variables (factors) with arbitrary numbers of levels, continuous variables, and combinations thereof.
Alternatively, if you do not have too many predictors, you might consider ridge regression instead, which will return coefficients for all variables that may be much less dependent on the vagaries of your particular data sample.
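The bootstrap check suggested above can be sketched with scikit-learn. The simulated data, the correlation structure, and the penalty value below are all assumptions made for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 10
# Two highly correlated predictors (columns 0 and 1) plus noise variables
z = rng.normal(size=n)
X = rng.normal(size=(n, p))
X[:, 0] = z + 0.1 * rng.normal(size=n)
X[:, 1] = z + 0.1 * rng.normal(size=n)
y = z + rng.normal(size=n)

selected_sets = set()
for b in range(50):
    idx = rng.integers(0, n, size=n)          # bootstrap resample
    fit = Lasso(alpha=0.1).fit(X[idx], y[idx])
    selected_sets.add(tuple(np.flatnonzero(fit.coef_)))

# With correlated predictors, different resamples tend to return
# different sets of non-zero coefficients
print(len(selected_sets))
```

When the two correlated columns trade places across resamples, the number of distinct selected sets grows, which is exactly the instability described above.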
|
7,762
|
LASSO with interaction terms - is it okay if main effects are shrunk to zero?
|
I am late to the party, but here are a few of my thoughts about your problem.
lasso selects what is informative. Let's consider lasso as a method to get the highest predictive performance with the smallest number of features. It is totally fine that in some cases lasso selects interactions and not main effects. It just means that the main effects are not informative, but the interactions are.
You are just reporting what you found out. You used some method and it produced some results. You report it in a transparent manner that allows reproducibility. In my opinion, your job is done. The results are objective: you found what you found, and it's not your job to justify why you didn't find something else.
All units are arbitrary. Interactions are just units.
Let's say you study colors. Colors can be included in your model as a wavelength, or a log wavelength, or as 3 RGB variables, or as an interaction of a hue and tint, and so on. There is no inherently correct or incorrect representation of colors. You will choose the one that makes the most sense for your problem. Interactions are also just units that you can use arbitrarily. The area of a window is just an interaction of its height and width; should you include the height and width of a window in your model? Momentum is just an interaction of mass and velocity. Distance is just an interaction of speed and time. Man-hours are just an interaction of time and the number of people working. Mathematically, treatment dose * age is the same as height * width. The saying that "you have to always include main effects" is overrated.
lasso does not approximate the real model, it's not meant for inference, and the selected variables are unstable. If you have correlated informative predictors, lasso tends to choose one and push the others to 0, so your model will omit a significant proportion of the informative variables. Also, as was pointed out in the comments, if you find the best lambda by cross-validation, lasso will choose more variables than the real model has. Another issue is that the selections from lasso are unstable: if you run lasso again on a different sample from the population, you will end up with a different set of selected variables. Hence, don't put much weight on which variables are selected. Also, the betas are biased and therefore cannot be used for classical parametric hypothesis testing. However, there are ways around this (next point).
inference with lasso. Lasso can be used to make inference about predictors. The simplest way is to bootstrap it and count how many times each variable is selected, then divide by the number of resamples, and you have your p-values. P in that case is the probability of a variable being selected by lasso. You can still end up with significant interaction effects and insignificant main effects, but that's not a problem; it can happen with normal hypothesis testing as well. A great treatment of this topic is in Hastie et al.'s free book Statistical Learning with Sparsity, chapter 6: http://web.stanford.edu/~hastie/StatLearnSparsity/ The bootstrap can be performed over the whole range of lambda values, which results in a stability path for all variables. This can be extended with a stability selection approach to find a set of significant variables corrected for family-wise error: http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2010.00740.x/abstract There are also some other methods for inference with lasso that might be useful, namely the adaptive lasso or the desparsified lasso. A review with an R implementation is here, DOI: 10.1214/15-STS527, or, IMO, a more accessible explanation is in the Bühlmann & van de Geer book Statistics for High-Dimensional Data: http://www.springer.com/la/book/9783642201912
Other lasso-related things to be aware of. As far as I know, ridge or elastic net tends to outperform lasso. If there is domain knowledge about the variables, the group lasso or sparse group lasso can be used to force lasso to either keep or discard a whole group of predictors instead of treating them individually (e.g. gene pathways, or a dummy-coded factor variable). For spatial or ordered data, the fused lasso can be used. The randomized lasso, introduced in the stability selection paper mentioned above, tends to produce sparser models with the same performance as a standard lasso.
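The bootstrap selection frequencies from the inference point above can be sketched as follows. The simulated data, the fixed penalty, and the number of resamples are assumptions; in practice you would tune lambda and use many more resamples:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 120, 8
X = rng.normal(size=(n, p))
# Only the first two predictors are truly informative
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(size=n)

B = 100
counts = np.zeros(p)
for b in range(B):
    idx = rng.integers(0, n, size=n)               # bootstrap resample
    coef = Lasso(alpha=0.2).fit(X[idx], y[idx]).coef_
    counts += (coef != 0)                          # count selections

freq = counts / B  # selection frequency per variable
print(np.round(freq, 2))
```

The informative variables end up with selection frequencies near 1, while noise variables are selected much less often.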
|
7,763
|
LASSO with interaction terms - is it okay if main effects are shrunk to zero?
|
For the lasso, if you're using it in a predictive setting, then the only thing that matters is how good the cross-validated results are. If you are trying to conduct inference on your effects, then read the paper about the "double lasso".
|
7,764
|
LASSO with interaction terms - is it okay if main effects are shrunk to zero?
|
I have an application where I specifically want a small number of main effects to be unpenalized. Let Y = X.main * beta.main + X.inter * beta.inter + eps.
a) fit.Y = OLS(X.main, Y). Let tilde.Y = Y - predict(fit.Y, X.main)
b) fit[,j] = OLS(X.main, X.inter[,j]) for j = 1...k. Let tilde.X.inter[,j] = X.inter[,j] - predict(fit[,j], X.main)
c) fit = Lasso(tilde.X.inter, tilde.Y).
The coefficients on the main effects equal coef(fit.Y) - coef(fit) * coef(fit[, 1:dim(X.inter)[2]]).
The coefficients on the interaction effects equal coef(fit).
In steps a and b, there is no need to do sample splitting. That works for me!
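Steps a–c above (a Frisch–Waugh-style partialling-out of the unpenalized main effects) can be sketched in Python. The data, dimensions, and penalty value are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n = 200
X_main = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
X_inter = rng.normal(size=(n, 3))
y = X_main @ np.array([1.0, 2.0, -1.0]) + 3.0 * X_inter[:, 0] + rng.normal(size=n)

# a) residualize y on the unpenalized main effects
beta_main, *_ = np.linalg.lstsq(X_main, y, rcond=None)
y_tilde = y - X_main @ beta_main

# b) residualize each interaction column on the main effects
G, *_ = np.linalg.lstsq(X_main, X_inter, rcond=None)
X_tilde = X_inter - X_main @ G

# c) lasso on the residualized problem (only interactions are penalized)
fit = Lasso(alpha=0.05, fit_intercept=False).fit(X_tilde, y_tilde)
beta_inter = fit.coef_
beta_main_adj = beta_main - G @ beta_inter  # back out the main-effect coefficients
print(np.round(beta_inter, 2), np.round(beta_main_adj, 2))
```

The main-effect coefficients are recovered by subtracting the projection of the selected interactions, matching the formula in the answer.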
|
7,765
|
Why does glmnet use "naive" elastic net from the Zou & Hastie original paper?
|
I emailed this question to Zou and to Hastie and got the following reply from Hastie (I hope he wouldn't mind me quoting it here):
I think in Zou et al we were worried about the additional bias, but of course rescaling increases the variance.
So it just shifts one along the bias-variance tradeoff curve.
We will soon be including a version of relaxed lasso which is a better form of rescaling.
I interpret these words as an endorsement of some form of "rescaling" of the vanilla elastic net solution, but Hastie no longer seems to stand by the particular approach put forward in Zou & Hastie 2005.
In the following I will briefly review and compare several rescaling options.
I will be using glmnet parametrization of the loss $$\mathcal L = \frac{1}{2n}\big\lVert y - \beta_0-X\beta\big\rVert^2 + \lambda\big(\alpha\lVert \beta\rVert_1 + (1-\alpha) \lVert \beta\rVert^2_2/2\big),$$ with the solution denoted as $\hat\beta$.
The approach of Zou & Hastie is to use $$\hat\beta_\text{rescaled} = \big(1+\lambda(1-\alpha)\big)\hat\beta.$$ Note that this yields some non-trivial rescaling for pure ridge when $\alpha=0$ which arguably does not make a lot of sense. On the other hand, this yields no rescaling for pure lasso when $\alpha=1$, despite various claims in the literature that lasso estimator could benefit from some rescaling (see below).
For pure lasso, Tibshirani suggested to use lasso-OLS hybrid, i.e. to use OLS estimator using the subset of predictors selected by lasso. This makes the estimator consistent (but undoes the shrinkage, which can increase the expected error). One can use the same approach for elastic net $$\hat\beta_\text{elastic-OLS-hybrid}= \text{OLS}(X_i\mid\hat\beta_i\ne 0)$$ but the potential problem is that elastic net can select more than $n$ predictors and OLS will break down (in contrast, pure lasso never selects more than $n$ predictors).
Relaxed lasso mentioned in the Hastie's email quoted above is a suggestion to run another lasso on the subset of predictors selected by the first lasso. The idea is to use two different penalties and to select both via cross-validation. One could apply the same idea to elastic net, but this would seem to require four different regularization parameters and tuning them is a nightmare.
I suggest a simpler relaxed elastic net scheme: after obtaining $\hat\beta$, perform ridge regression with $\alpha=0$ and the same $\lambda$ on the selected subset of predictors: $$\hat\beta_\text{relaxed-elastic-net}= \text{Ridge}(X_i\mid\hat\beta_i\ne 0).$$ This (a) does not require any additional regularization parameters, (b) works for any number of selected predictors, and (c) does not do anything if one starts with pure ridge. Sounds good to me.
I am currently working with a small $n\ll p$ dataset with $n=44$ and $p=3000$, where $y$ is well predicted by the few leading PCs of $X$. I will compare the performance of the above estimators using 100x repeated 11-fold cross-validation. As a performance metric, I am using test error, normalized to yield something like an R-squared: $$R^2_\text{test} = 1-\frac{\lVert y_\text{test} - \hat\beta_0 - X_\text{test}\hat\beta\rVert^2}{\lVert y_\text{test} - \hat\beta_0\rVert^2}.$$ In the figure below, dashed lines correspond to the vanilla elastic net estimator $\hat\beta$ and three subplots correspond to the three rescaling approaches:
So, at least in these data, all three approaches outperform the vanilla elastic net estimator, and "relaxed elastic net" performs the best.
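Using scikit-learn's ElasticNet as a stand-in for glmnet (its penalty parametrization differs slightly from the loss written above, and the mapping of the ridge penalty in step 2 to glmnet's $\lambda(1-\alpha)$ is an assumption), the proposed "relaxed elastic net" can be sketched on toy data:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Ridge

rng = np.random.default_rng(3)
n, p = 80, 20
X = rng.normal(size=(n, p))
# Only the first three predictors carry signal
y = X[:, :3] @ np.array([3.0, -2.0, 1.5]) + rng.normal(size=n)

# Step 1: elastic net to select a subset of predictors
enet = ElasticNet(alpha=0.3, l1_ratio=0.5).fit(X, y)
support = np.flatnonzero(enet.coef_)

# Step 2: ridge refit on the selected predictors only
# (the ridge penalty value here is an illustrative assumption)
relaxed = Ridge(alpha=0.3).fit(X[:, support], y)

beta_relaxed = np.zeros(p)
beta_relaxed[support] = relaxed.coef_
print(support, np.round(beta_relaxed[support], 2))
```

This keeps the elastic net's variable selection while removing most of the lasso-type shrinkage on the surviving coefficients.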
|
7,766
|
Feature importance with dummy variables
|
When working on "feature importance", it is generally helpful to remember that a regularisation approach is often a good alternative. It will automatically "select the most important features" for the problem at hand.
Now, if we do not want to follow the notion of regularisation (usually within the context of regression), random forest classifiers and the notion of permutation tests naturally lend a solution to the feature importance of a group of variables. This has actually been asked before here: "Relative importance of a set of predictors in a random forests classification in R" a few years back. A more rigorous approach is Gregorutti et al.'s "Grouped variable importance with random forests and application to multivariate functional data analysis". Chakraborty & Pal's "Selecting Useful Groups of Features in a Connectionist Framework" looks into this task within the context of a multi-layer perceptron. Going back to the Gregorutti et al. paper, their methodology is directly applicable to any kind of classification/regression algorithm. In short, we use a randomly permuted version in each out-of-bag sample that is used during training.
Having stated the above, while permutation tests are ultimately a heuristic, what has been solved accurately in the past is the penalisation of dummy variables within the context of regularised regression. The answer to that question is Group-LASSO, Group-LARS and Group-Garotte. Seminal papers in that line of work are Yuan and Lin's "Model selection and estimation in regression with grouped variables" (2006) and Meier et al.'s "The group lasso for logistic regression" (2008). This methodology allows us to work in situations where "each factor may have several levels and can be expressed through a group of dummy variables" (Y&L 2006). The effect is such that "the group lasso encourages sparsity at the factor level" (Y&L 2006). Without going into excessive detail, the basic idea is that the standard $l_1$ penalty is replaced by the norm of positive definite matrices $K_{j}$, $j = \{1, \dots, J\}$, where $J$ is the number of groups we examine. CV has a few good threads regarding the group lasso here, here and here if you want to pursue this further.
[Because we mention Python specifically: I have not used the Python's pyglmnet package but it appears to include grouped lasso regularisation.]
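To make the group penalty concrete, here is a small from-scratch proximal-gradient sketch (not pyglmnet; the data, step size, and penalty value are made-up assumptions). Each factor's dummy block is soft-thresholded jointly, so the whole factor is kept or dropped together:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150
# A 3-level factor encoded as 3 dummy columns, plus one continuous predictor
levels = rng.integers(0, 3, size=n)
D = np.eye(3)[levels]                  # dummy block: one group of coefficients
x = rng.normal(size=n)
X = np.column_stack([D, x])
groups = [np.array([0, 1, 2]), np.array([3])]
y = 2.0 * x + rng.normal(size=n)       # the factor is truly uninformative

beta = np.zeros(4)
lam = 25.0                             # group penalty: lam * sum_g ||beta_g||_2
step = 1.0 / np.linalg.norm(X, 2) ** 2
for _ in range(2000):
    z = beta - step * X.T @ (X @ beta - y)   # gradient step on 0.5*||y - X beta||^2
    for g in groups:                         # block soft-thresholding per group
        norm = np.linalg.norm(z[g])
        if norm > 0:
            z[g] *= max(0.0, 1.0 - step * lam / norm)
    beta = z

print(np.round(beta, 2))  # the dummy block is shrunk jointly (typically to 0 here)
```

Because the penalty acts on the Euclidean norm of the whole block, the three dummy coefficients enter or leave the model as a unit, which is exactly the "sparsity at the factor level" described above.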
All in all, it does not make sense to simply "add up" variable importance from individual dummy variables, because it would not capture the association between them and could lead to potentially meaningless results. That said, both group-penalised methods and permutation variable importance methods give a coherent and (especially in the case of permutation importance procedures) generally applicable framework to do so.
Finally, to state the obvious: do not bin continuous data. It is bad practice; there is an excellent thread on this matter here (and here). The fact that we observe spurious results after the discretization of a continuous variable, like age, is not surprising. Frank Harrell has also written extensively on problems caused by categorizing continuous variables.
|
7,767
|
Feature importance with dummy variables
|
One approach that you can take in scikit-learn is to use the permutation_importance function on a pipeline that includes the one-hot encoding. If you do this, then the permutation_importance method will be permuting categorical columns before they get one-hot encoded. This approach can be seen in this example on the scikit-learn webpage. The results of permuting before encoding are shown in the second and third figures, where you can see that a single importance is reported for each categorical variable.
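A minimal version of that pipeline approach is sketched below (the toy data, column names, and model choice are made up; the point is that the raw categorical column is permuted before encoding):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(5)
n = 300
color = rng.choice(["red", "green", "blue"], size=n)
x = rng.normal(size=n)
y = (color == "red") * 2.0 + x + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"color": color, "x": x})

# One-hot encoding lives inside the pipeline, so permutation happens upstream
pre = ColumnTransformer(
    [("cat", OneHotEncoder(), ["color"])], remainder="passthrough")
model = Pipeline([("pre", pre), ("lm", LinearRegression())]).fit(df, y)

# The raw 'color' column is permuted as a whole, before encoding,
# so a single importance is reported for the categorical variable
imp = permutation_importance(model, df, y, n_repeats=20, random_state=0)
print(dict(zip(df.columns, np.round(imp.importances_mean, 2))))
```

This yields one importance per original column (here: `color` and `x`), not one per dummy.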
|
7,768
|
Feature importance with dummy variables
|
The question is:
does it make sense to recombine those dummy variable importances into an importance value for a categorical variable by simply summing them?
The short answer:
The simple answer is no. According to the textbook (page 368), the importance of the categorical variable $X_{\ell}$, written $\mathcal{I}_{\ell}$, satisfies
$$\mathcal{I}_{\ell}^{2} = \sum\limits_{t=1}^{J-1} \hat{i}_{t}^{\,2}\, I(v(t)=\ell)$$
thus
$$\mathcal{I}_{\ell} = \sqrt{\sum\limits_{t=1}^{J-1} \hat{i}_{t}^{\,2}\, I(v(t)=\ell)}$$
where $\hat{i}_{t}^{\,2}$ is the improvement in squared error at the $t$-th internal node and $v(t)$ is the splitting variable used there. In conclusion, you must square the dummy importances, sum them, and only then take the square root.
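To illustrate the point with hypothetical numbers:

```python
import math

# Hypothetical squared importances i_t^2 for the splits that used
# dummies of a single categorical variable X_l.
squared_importances = [4.0, 9.0, 16.0]

# Correct: sum the squares, then take the square root.
importance = math.sqrt(sum(squared_importances))

# Incorrect: summing the (unsquared) per-dummy importances instead.
naive = sum(math.sqrt(s) for s in squared_importances)

print(importance)  # sqrt(29), about 5.39
print(naive)       # 9.0 -- overstates the importance
```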
The longer, more practical answer:
You cannot simply sum together individual variable importance values for dummy variables because you risk
the masking of important variables by others with which they are highly correlated. (page 368)
Issues such as possible multicollinearity can distort the variable importance values and rankings.
It's actually a very interesting problem to understand just how variable importance is affected by issues like multicollinearity. The paper Determining Predictor Importance In Multiple Regression Under Varied Correlational And
Distributional Conditions discusses various methods for computing variable importance and compares the performance for data violating typical statistical assumptions. The authors found that
Although multicollinearity did affect the
performance of relative importance methods, multivariate nonnormality did not. (WHITTAKER p366)
|
7,769
|
Relation between variational Bayes and EM
|
Your approach is correct. EM is equivalent to VB under the constraint that the approximate posterior for $\Theta$ is constrained to be a point mass. (This is mentioned without proof on page 337 of Bayesian Data Analysis.) Let $\Theta^*$ be
the unknown location of this point mass:
$$
Q_\Theta(\Theta) = \delta(\Theta - \Theta^*)
$$
VB will minimize the following KL-divergence:
$$
KL(Q||P)=\int \int Q_X(X) Q_\Theta(\Theta) \ln \frac{Q_X(X) Q_\Theta(\Theta)}{P(X,Y,\Theta)} dX d\Theta
\\
= \int Q_X(X) \ln \frac{Q_X(X) Q_\Theta(\Theta^*)}{P(X,Y,\Theta^*)} dX
$$
The minimum over $Q_X(X)$ gives the E-step of EM, and the minimum over $\Theta^*$ gives the M-step of EM.
Of course, if you were to actually evaluate the KL divergence, it would be infinite. But that isn't a problem if you consider the delta function to be a limit.
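To make the correspondence concrete, here is a standard EM sketch for a toy 1-D two-component Gaussian mixture (my own example, not from the book); the comments mark which minimisation each step performs:

```python
import numpy as np

def em_gmm_1d(y, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture with unit variances.
    E-step: Q_X(X) <- p(X | y, Theta*), the optimal variational factor;
    M-step: Theta* <- argmax E_{Q_X}[ln p(X, y, Theta)], the point mass."""
    mu = np.array([y.min(), y.max()])   # crude initialisation
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = Q_X(x_i = k)
        log_w = np.log(pi) - 0.5 * (y[:, None] - mu[None, :]) ** 2
        r = np.exp(log_w - log_w.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: move the point-mass location Theta* = (mu, pi)
        nk = r.sum(axis=0)
        mu = (r * y[:, None]).sum(axis=0) / nk
        pi = nk / len(y)
    return mu, pi
```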
|
7,770
|
Why isn't RANSAC most widely used in statistics?
|
I think that the key here is the discarding of a large portion of the data in RANSAC.
In most statistical applications, some distributions may have heavy tails, and therefore small sample numbers may skew statistical estimation. Robust estimators solve this by weighting the data differently. RANSAC, on the other hand, makes no attempt to accommodate the outliers; it is built for cases where the data points genuinely don't belong, not cases where they are merely distributed non-normally.
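For readers unfamiliar with the algorithm, a toy RANSAC sketch for line fitting (my own illustration) makes the discarding explicit -- the final fit simply ignores everything outside the consensus set:

```python
import numpy as np

def ransac_line(x, y, n_iter=200, thresh=0.5, seed=0):
    """Minimal RANSAC for fitting y = a*x + b.
    Repeatedly fit to a random 2-point sample, count inliers within
    `thresh`, keep the model with the largest consensus set, and
    refit to that set -- the outliers are simply discarded."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue  # degenerate sample, skip
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit on the consensus set only.
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return a, b, best_inliers
```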
|
7,771
|
Why isn't RANSAC most widely used in statistics?
|
For us, it is just one example of a robust regression -- I believe it is used by statisticians too, but maybe not so widely because it has some better-known alternatives.
|
7,772
|
Why isn't RANSAC most widely used in statistics?
|
This sounds a lot like bagging which is a frequently used technique.
|
7,773
|
Why isn't RANSAC most widely used in statistics?
|
You throw away data with RANSAC, potentially without justifying it, based only on improving the fit of the model. Throwing away data to improve fit is usually frowned upon, as you may lose important data. Removal of outliers without justification is always problematic.
It is of course possible to justify it, e.g. if you know the data should follow a given pattern, but there are also deviations in the data from that pattern due to measurement error.
|
7,774
|
Boosting neural networks
|
In boosting, weak or unstable classifiers are used as base learners.
This is the case because the aim is to generate decision boundaries that are considerably different. Then, a good base learner is one that is highly biased; in other words, the output remains basically the same even when the training parameters for the base learners are changed slightly.
In neural networks, dropout is a regularization technique that can be compared to training ensembles. The difference is that the ensembling is done in the latent space (neurons exist or not) thus decreasing the generalization error.
"Each training example can thus be viewed as providing gradients for a different, randomly sampled architecture, so that the final neural network efficiently represents a huge ensemble of neural networks, with good generalization capability" - quoting from here.
There are two such techniques: in dropout neurons are dropped (meaning the neurons exist or not with a certain probability) while in dropconnect the weights are dropped.
Now, to answer your question, I believe that neural networks (or perceptrons) are not used as base learners in a boosting setup since they are slower to train (it just takes too much time) and the learners are not as weak, although they could be set up to be more unstable. So, it's not worth the effort.
There might have been research on this topic, however it's a pity that ideas that don't work well are not usually successfully published. We need more research covering pathways that don't lead anywhere, aka "don't bother trying this".
EDIT:
I gave this a bit more thought, and if you are interested in ensembles of large networks, then you might be referring to methods of combining the outputs of multiple such networks. Most people average or use majority voting depending on the task -- this might not be optimal. I believe it should be possible to change the weight for each network's output according to the error on a particular record.
The less correlated the outputs, the better your ensembling rule.
|
7,775
|
Boosting neural networks
|
I see this does not have an accepted answer, so I'll give a very heuristic answer. Yes, it is done... e.g. it is available in JMP Pro (probably the best stat package you've never heard of). http://www.jmp.com/support/help/Overview_of_Neural_Networks.shtml
There's a description in the middle of the page of what it is used for. I haven't put any cycles into investigating the theory, but it seems they are implying it achieves essentially the same results as using more nodes in a single larger model. The advantage [they claim] is in speed of model fitting.
For just a very rough gauge, I compared it on a dataset I have with 2 sigmoid and 2 Gaussian nodes and boosting the model 6x against 12 sigmoid and 12 Gaussian nodes in a single model and the results were virtually identical on my test set of data.
I didn't notice any speed difference either...but the dataset is only 1600 points and I'm only using 12 variables, so on a larger dataset with more variables it may hold true that there is a noticeable computation difference.
|
7,776
|
Interpreting negative cosine similarity
|
Given two vectors $a$ and $b$, the angle $\theta$ between them is obtained from the scalar product and the norms of the vectors:
$$ \cos(\theta) = \frac{a \cdot b}{||a|| \cdot ||b||} $$
Since the $\cos(\theta)$ value is in the range $[-1,1]$:
$-1$ value will indicate strongly opposite vectors
$0$ independent (orthogonal) vectors
$1$ similar (positive co-linear) vectors. Intermediate values are
used to assess the degree of similarity.
Example: consider two users $U_1$ and $U_2$, and let $sim(U_1, U_2)$ be the similarity between these two users according to their taste in movies:
$sim(U_1, U_2) = 1$ if the two users have exactly the same taste (or if
$U_1 = U_2$)
$sim(U_1, U_2) = 0$ if we do not find any correlation between the two users, e.g. if they have not seen any common movies
$sim(U_1, U_2) = -1$ if users have opposed tastes, e.g. if they rated the same movies in an opposite way
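The three cases can be checked numerically (the vectors below are illustrative, not real ratings):

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

u1 = np.array([1.0, 2.0, -1.0, 3.0])
u2 = 2.0 * u1                          # same taste, different intensity
u3 = -u1                               # exactly opposite taste
u4 = np.array([2.0, -1.0, 0.0, 0.0])   # orthogonal to u1

print(cos_sim(u1, u2))  # close to 1 (co-linear)
print(cos_sim(u1, u3))  # close to -1 (opposed)
print(cos_sim(u1, u4))  # close to 0 (orthogonal)
```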
|
7,777
|
Interpreting negative cosine similarity
|
It's right that cosine similarity between frequency vectors cannot be negative as word counts cannot be negative, but with word embeddings (such as GloVe) you can have negative values.
A simplified view of word-embedding construction is as follows: you assign each word to a random vector in $\mathbb{R}^d$. Next, run an optimizer that tries to nudge two similar vectors $v_1$ and $v_2$ close to each other, or drive two dissimilar vectors $v_3$ and $v_4$ further apart (as per some distance, say cosine). You run this optimization for enough iterations and, at the end, you have word embeddings with the sole criterion that similar words have closer vectors and dissimilar vectors are farther apart. The end result might leave you with some dimension values being negative and some pairs having negative cosine similarity -- simply because the optimization process did not care about this criterion. It may have nudged some vectors well into the negative values. The dimensions of the vectors don't correspond to word counts; they are just arbitrary latent concepts that admit values in $(-\infty, +\infty)$.
|
7,778
|
Interpreting negative cosine similarity
|
Do not use the absolute values, as the negative sign is not arbitrary. To acquire a cosine value between 0 and 1, rescale the similarity as follows:
(R code)
cos.sim <- function(a, b)
{
  dot_product = sum(a * b)
  anorm = sqrt(sum(a^2))
  bnorm = sqrt(sum(b^2))
  minx = -1
  maxx = 1
  return(((dot_product / (anorm * bnorm)) - minx) / (maxx - minx))
}
(Python Code)
import numpy as np

def cos_sim(a, b):
    """Takes 2 vectors a, b and returns the cosine similarity according
    to the definition of the dot product"""
    dot_product = np.dot(a, b)
    norm_a = np.linalg.norm(a)
    norm_b = np.linalg.norm(b)
    return dot_product / (norm_a * norm_b)

# Rescale from [-1, 1] to [0, 1]:
minx = -1
maxx = 1
(cos_sim(row1, row2) - minx) / (maxx - minx)
|
7,779
|
Interpreting negative cosine similarity
|
Cosine similarity is just like Pearson correlation, but without subtracting the means. So you can compare the relative strength of 2 cosine similarities by looking at the absolute values, just like how you would compare the absolute values of 2 Pearson correlations.
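One way to see the connection: Pearson correlation is exactly the cosine similarity of the mean-centred vectors, which is easy to verify numerically:

```python
import numpy as np

def cos_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = rng.normal(size=50)

pearson = np.corrcoef(x, y)[0, 1]
centered_cosine = cos_sim(x - x.mean(), y - y.mean())
print(pearson, centered_cosine)  # identical up to floating point
```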
|
7,780
|
What is the most accurate way of determining an object's color?
|
Two things, for starters.
One, definitely do not work in RGB. Your default should be the Lab (aka CIE L*a*b*) colorspace. Discard L. From your image it looks like the a coordinate gives you the most information, but you probably should do a principal component analysis on a and b and work along the first (most important) component, just to keep things simple. If this does not work, you can try switching to a 2D model.
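A NumPy sketch of the suggested PCA step, assuming you have already extracted per-pixel $(a, b)$ values for a region (the sample values below are made up):

```python
import numpy as np

def first_pc_scores(ab):
    """Project 2-D (a, b) chroma samples onto their first principal
    component. `ab` is an (n, 2) array of per-pixel (a, b) values."""
    centered = ab - ab.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                     # direction of maximum variance
    return centered @ pc1

# Hypothetical chroma samples: variance mostly along the a axis.
rng = np.random.default_rng(0)
ab = np.column_stack([rng.normal(150, 10, 500), rng.normal(128, 2, 500)])
scores = first_pc_scores(ab)
```

The per-coin mean of `scores` then gives a single 1-D feature to threshold on, as described above.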
Just to get a feeling for it: in the a channel the three yellowish coins have standard deviations below 6, and means of 137 ("gold"), 154, and 162 -- they should be distinguishable.
Second, the lighting issue. Here you'll have to carefully define your problem. If you want to distinguish close colors under any lighting and in any context -- you can't, not like this, anyway. If you are only worried about local variations in brightness, Lab will mostly take care of this. If you want to be able to work both under daylight and incandescent light, can you ensure uniform white background, like in your example image? Generally, what are your lighting conditions?
Also, your image was taken with a fairly cheap camera, by the looks of it. It probably has some sort of automatic white balance feature, which messes up the colors pretty badly -- turn it off if you can. It also looks like the image either was coded in YCbCr at some point (happens a lot if it's a video camera) or in a similar variant of JPG; the color information is severely undersampled. In your case it might actually be good -- it means the camera has done some denoising for you in the color channels. On the other hand, it probably means that at some point the color information was also quantized more strongly than brightness -- that's not so good. The main thing here is -- camera matters, and what you do should depend on the camera you are going to use.
If anything here does not make sense -- leave a comment.
|
7,781
|
What is the most accurate way of determining an object's color?
|
In the spirit of brainstorming, I'll share some ideas you could try:
Try Hue more? It looks like Hue gave you a pretty good discriminator between silver and copper/gold, though not between copper and gold, at least in the single example you showed here. Have you examined using the Hue in greater detail, to see whether it might be a viable feature to distinguish silver from copper/gold?
I might start by gathering a bunch of example images, which you have manually labelled, and computing the Hue of each coin in each image. Then you might try histogramming them, to see if Hue looks like a plausible way to discriminate. I might also try looking at the average Hue of each coin, for a handful of examples like the one you presented here. You might also try Saturation as well, as that looked like it might be helpful as well.
If this fails, you might want to edit your question to show what you've tried and give some examples to concisely illustrate why this is hard or where it fails.
Other color spaces? Similarly, you might try transforming to rg chromaticity and then experimenting to see whether the result is helpful at distinguishing silver from copper/gold. It is possible that this might help adjust for illumination variation, so it could be worth trying.
Check relative differences between coins, rather than looking at each coin in isolation? I gather that, from the ratios of coin sizes (radii), you have an initial hypothesis for the type of each coin. If you have $n$ coins, this is an $n$-vector. I suggest you test this entire composite hypothesis in a single go, rather than $n$ times testing your hypothesis for each coin on its own.
Why might this help? Well, it may let you take advantage of the relative hues of the coins to each other, which should be closer to invariant with respect to illumination (assuming relatively uniform illumination) than each coin's individual hue. For example, for each pair of coins, you can compute the difference of their hues and check whether this corresponds to what you'd expect given your hypothesis about their two identities. Or, you could generate an $n$-vector $p$ with the predicted hues for the $n$ coins; compute an $n$-vector $o$ with the observed hues for the $n$ coins; cluster each one; and check that there is a one-to-one correspondence between hues. Or, given the vectors $p,o$, you could test whether there exists a simple transformation $T$ such that $o \approx T(p)$, i.e., $o_i \approx T(p_i)$ holds for each $i$. You may have to experiment with different possibilities for the class of $T$'s that you allow. One example class is the set of functions $T(x)=x+c \pmod{360}$, where the constant $c$ ranges over all possibilities.
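The last idea -- fitting a shift $T(x)=x+c \pmod{360}$ between the predicted and observed hue vectors -- can be sketched as follows (function name illustrative; the circular mean of the pairwise differences is the least-squares optimal shift on the circle):

```python
import numpy as np

def best_hue_shift(predicted, observed):
    """Fit T(x) = x + c (mod 360) between two hue vectors (degrees).

    Returns the shift c and the residual RMS error after shifting.
    """
    d = np.radians(np.asarray(observed) - np.asarray(predicted))
    # The circular mean of the pairwise differences is the optimal shift.
    c = np.degrees(np.arctan2(np.sin(d).mean(), np.cos(d).mean())) % 360
    resid = (np.asarray(observed) - np.asarray(predicted) - c + 180) % 360 - 180
    return c, np.sqrt(np.mean(resid ** 2))
```

A small residual RMS would support the composite hypothesis; a large one suggests at least one coin was mislabelled.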
Compare to reference images? Rather than using the color of the coin, you might consider trying to match what is printed on the coin. For instance, let's say that you have detected a coin $C$ in the image, and you hypothesize it is a one pound coin. You could take a reference image $R$ of a one pound coin and test whether $R$ seems to match $C$.
You will need to account for differences in pose. Let me start by assuming that you have a head-on image of the coin, as in your example picture. Then the main thing you need to account for is rotation: you don't know a priori how much $C$ is rotated. A simple approach might be to sweep over a range of possible rotation angles $\theta$, rotate $R$ by $\theta$, and check whether $R_\theta$ seems to match $C$. To test for a match, you could use a simple pixel-based diff metric: i.e., for each coordinate $(x,y)$, compute $D(x,y) = R_\theta(x,y) - C(x,y)$ (the difference between the pixel value in $R_\theta$ and the pixel value in $C$); then use a $L_2$ norm (sum of squares) or somesuch to combine all of the difference values into a single metric of how close a match you have (i.e., $\sum_{(x,y)} D(x,y)^2$). You will need to use a small enough step increment that the pixel diff is likely to work. For instance, in your example image, the one-pound coin has a radius of about 127 pixels; if you sweep over values of $\theta$, increasing by $0.25$ degrees at each step, then you will only need to try about 1440 different rotation values, and the error at the circumference of the coin at the closest approximation to the true $\theta$ should be at most about one-quarter of a pixel, which is small enough that the pixel diff might work out OK.
You may want to experiment with multiple variations on this idea. For instance, you could work with a grayscale version of the image; the full RGB, and use a $L_2$ norm over all three R,G,B differences; the full HSB, and use a $L_2$ norm over all three H,S,B differences; or work with just the Hue, Saturation, or Brightness plane. Also, another possibility would be to first run an edge detector on both $R$ and $C$, then match up the resulting image of edges.
For robustness, you might have multiple different reference images for each coin (in fact, each side of each coin), and try all of the reference images to find the best match.
If images of the coins aren't taken from directly head-on, then as a first step you may want to compute the ellipse that represents the perimeter of the coin $C$ in the image and infer the angle at which the coin is being viewed. This will let you compute what $R$ would look like at that angle, before performing the matching.
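For the head-on case, the rotation sweep might look like the following sketch (nearest-neighbour rotation in plain numpy for self-containment; a real implementation would more likely use OpenCV or scipy for the rotation):

```python
import numpy as np

def rotate_nn(img, theta_deg):
    """Rotate a square grayscale image about its centre (nearest-neighbour)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.radians(theta_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse map: for each output pixel, find where to sample the source.
    sx = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
    sy = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
    sx, sy = np.rint(sx).astype(int), np.rint(sy).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

def best_rotation(ref, coin, step=0.25):
    """Sweep theta and return the angle minimising the L2 pixel diff."""
    best = (np.inf, 0.0)
    for theta in np.arange(0.0, 360.0, step):
        d = rotate_nn(ref, theta).astype(float) - coin.astype(float)
        score = np.sum(d * d)
        if score < best[0]:
            best = (score, theta)
    return best[1]
```

The minimum score itself doubles as the match metric: compare it across the reference images of all coin types and pick the best.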
Check how color varies as a function of distance from the center? Here is a possible intermediate step in between "the coin's mean color" (a single number, i.e., 0-dimensional) and "the entire image of the coin" (a 2-dimensional image). For each coin, you could compute a 1-dimensional vector or function $f$, where $f(r)$ represents the mean color of the pixels at distance approximately $r$ from the center of the coin. You could then try to match the vector $f_C$ for a coin $C$ in your image against the vector $f_R$ for a reference image $R$ of that coin.
This might let you correct for illumination differences. For instance, you might be able to work in grayscale, or in just a single bitplane (e.g., Hue, or Saturation, or Brightness). Or, you might be able to first normalize the function $f$ by subtracting the mean: $g(r) = f(r)-\mu$, where $\mu$ is the mean color of the coin -- then try to match $g_C$ to $g_R$.
The nice thing about this approach is that you don't need to infer how much the coin was rotated: the function $f$ is rotation-invariant.
If you want to experiment with this idea, I would compute the function $f_C$ for a variety of different example images and graph them. Then you should be able to visually inspect them to see if the function seems to have a relatively consistent shape, regardless of illumination. You might need to try this for multiple different possibilities (grayscale, each of the HSB bitplanes, etc.).
If the coin $C$ might not have been photographed from directly head-on, but possibly from an angle, you'll first need to trace the ellipse of $C$'s perimeter to deduce the angle from which it was photographed and then correct for that in the calculation of $f$.
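A sketch of the rotation-invariant radial profile described above (assuming you already have the coin centre and radius from your circle-detection step; the function name is illustrative):

```python
import numpy as np

def radial_profile(img, cx, cy, radius, nbins=32):
    """Mean pixel value at each (binned) distance r from the coin centre.

    `img` is a single-channel array (e.g. the Hue or grayscale plane);
    (cx, cy) and `radius` come from the earlier circle-detection step.
    """
    ys, xs = np.indices(img.shape)
    r = np.hypot(xs - cx, ys - cy)
    inside = r <= radius
    bins = np.minimum((r[inside] / radius * nbins).astype(int), nbins - 1)
    sums = np.bincount(bins, weights=img[inside].astype(float), minlength=nbins)
    counts = np.bincount(bins, minlength=nbins)
    return sums / np.maximum(counts, 1)
```

Because the bins are normalised by the detected radius, profiles from coins photographed at different scales remain directly comparable.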
Look at vision algorithms for color constancy. The computer vision community has studied color constancy, the problem of correcting for an unknown illumination source; see, e.g., this overview. You might explore some of the algorithms derived for this problem; they attempt to infer the illumination source and then correct for it, to derive the image you would have obtained had the picture been taken with the reference illumination source.
Look into Color Constant Color Indexing. The basic idea of CCCI, as I understand it, is to first cancel out the unknown illumination source by replacing each pixel's R value with the ratio between its R-value and one of its neighbor's R-values; and similarly for the G and B planes. The idea is that (hopefully) these ratios should now be mostly independent of the illumination source. Then, once you have these ratios, you compute a histogram of the ratios present in the image, and use this as a signature of the image. Now, if you want to compare the image of the coin $C$ to a reference image $R$, you can compare their signatures to see if they seem to match. In your case, you may also need to adjust for angle if the picture of the coin $C$ was not taken head-on -- but this seems like it might help reduce the dependence upon illumination source.
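A minimal sketch of that ratio-histogram signature, applied one channel at a time (the +1 offset to dodge division by zero is my own tweak, not part of the original CCCI formulation):

```python
import numpy as np

def ratio_histogram(channel, bins=32):
    """Histogram of log-ratios between each pixel and its right neighbour.

    Under a simple diagonal illumination model, scaling a channel by a
    constant leaves these ratios (almost) unchanged, so the histogram is
    a roughly illumination-invariant signature. Run once per R, G, B.
    """
    c = channel.astype(float) + 1.0          # avoid division by zero
    ratios = np.log(c[:, 1:] / c[:, :-1])
    hist, _ = np.histogram(ratios, bins=bins, range=(-2, 2))
    return hist / hist.sum()
```

Two coins could then be compared by, say, the L1 distance between their per-channel signatures.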
I don't know if any of these has a chance of working, but they are some ideas you could try.
|
7,782
|
What is the most accurate way of determining an object's color?
|
Interesting problem and good work.
Try using median colour values rather than mean. This will be more robust against outlier values due to brightness and saturation. Try using just one of the RGB components instead of all three. Choose the component that best distinguishes the colours. You could try plotting histograms of the pixel values (e.g. one of the RGB components) to give you an idea of the properties of the pixel distribution. This might suggest a solution that is not immediately obvious. Try plotting the RGB components in 3D space to see if they follow any pattern; for example, they may lie close to a line, indicating that a linear combination of the RGB components may be a better classifier than an individual one.
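To experiment with these suggestions, something like the following sketch could compute per-channel medians and score which single RGB component best separates two labelled pixel sets (the Fisher-style score is one simple choice among many; helper names are illustrative):

```python
import numpy as np

def median_rgb(pixels):
    """Per-channel median colour; more robust to highlights than the mean."""
    return np.median(np.asarray(pixels, float), axis=0)

def channel_fisher_scores(samples_a, samples_b):
    """Per-channel Fisher score between two (n, 3) pixel sets:
    squared mean gap over pooled variance. The channel with the highest
    score is the best single-component discriminator."""
    a, b = np.asarray(samples_a, float), np.asarray(samples_b, float)
    gap = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    return gap / (a.var(axis=0) + b.var(axis=0) + 1e-12)
```

If no single channel scores well, that is a hint that the linear-combination idea (fitting a line in RGB space) is worth pursuing.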
|
7,783
|
Raw residuals versus standardised residuals versus studentised residuals - what to use when?
|
This isn't so much an answer as a clarification on terminology. Your question asks about raw, standardized, and studentized residuals. However, this is not the terminology used by most statisticians, though I note your class notes state that it is.
Raw: same as you have it.
Standardized: this is actually the raw residuals divided by the true standard deviation of the residuals. As the true standard deviation is rarely known, a standardized residual is almost never used.
Internally Studentized: because the true standard deviation of the residuals is not typically known, the estimated standard deviation is used instead. This is an internally studentized residual, and it is what you called standardized.
Externally Studentized: the same as the internally studentized residual, except that the estimate of the standard deviation of the residuals is calculated from a regression leaving out the observation in question.
Pearson: the raw residual divided by the standard deviation of the response variable (the y variable) rather than of the residuals. You don't have this one listed.
"leave one out": Doesn't have a formal name, but it is the same as the class notes.
standardized "leave one out": also doesn't have a formal name, but this is not what the class notes call studentized.
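For concreteness, the raw, internally studentized, and externally studentized residuals can be computed directly (a sketch; the leave-one-out variance uses the standard closed form, so no refitting of $n$ regressions is needed):

```python
import numpy as np

def studentized_residuals(X, y):
    """Raw, internally and externally studentized residuals for OLS.

    X is the n x p design matrix (include a column of ones for the
    intercept); the formulas use the hat-matrix leverages h_i.
    """
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    e = y - H @ y                              # raw residuals
    s2 = e @ e / (n - p)                       # usual variance estimate
    internal = e / np.sqrt(s2 * (1 - h))
    # Leave-one-out variance estimate, in closed form:
    s2_i = (e @ e - e ** 2 / (1 - h)) / (n - p - 1)
    external = e / np.sqrt(s2_i * (1 - h))
    return e, internal, external
```

The external values satisfy the textbook identity $t_i = r_i\sqrt{(n-p-1)/(n-p-r_i^2)}$, which is a handy sanity check.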
Sources:
the same wiki link you have about studentized residuals ("a studentized residual is the quotient resulting from the division of a residual by an estimate of its standard deviation")
documentation for residual calculation in SAS
|
7,784
|
Raw residuals versus standardised residuals versus studentised residuals - what to use when?
|
Re: plots,
There is such a thing as overfitting, but overplotting cannot really do much harm, especially at diagnostics stage. A standardized normal probability plot cannot hurt next to your QQ-plot. I find it better to assess the middle of the distribution.
Re: residuals,
I run both standardized and studentized residuals at draft stage and usually end up coding the standardized ones. I don't know what other people actually run, because diagnostics are rarely coded in the replication material that I find online.
Re: diagnostics,
For a linear model, I usually add variance inflation factors (with the vif command in Stata) and a few homoscedasticity tests (e.g. with the hettest command in Stata), as well as model decomposition with nested regression to check if the $R^2$ makes any sense.
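The VIF part translates easily outside Stata; a plain-numpy sketch of the same computation (function name illustrative):

```python
import numpy as np

def vif(X):
    """Variance inflation factors (a numpy analogue of Stata's `vif`):
    VIF_j = 1 / (1 - R^2_j), from regressing predictor j on the others."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        rss = np.sum((X[:, j] - A @ coef) ** 2)
        tss = np.sum((X[:, j] - X[:, j].mean()) ** 2)
        out.append(tss / rss)   # equals 1 / (1 - R^2_j)
    return out
```

Values near 1 indicate no collinearity; the usual rules of thumb flag predictors with VIF above 5 or 10.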
|
7,785
|
What problem does oversampling, undersampling, and SMOTE solve?
|
The problem that these methods are trying to solve is to increase the impact of the minority class on the cost function. This is because algorithms try to fit the whole dataset well and therefore adapt to the majority. Another approach is to use class weights, and this approach in most cases gives better results, since there is no information loss from undersampling, and no performance loss or introduction of noise from oversampling.
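As a sketch of the class-weight approach, here is a minimal "balanced"-weighted logistic regression in plain numpy (gradient descent; libraries such as scikit-learn expose the same idea through a class-weight option, and the helper name here is illustrative):

```python
import numpy as np

def fit_weighted_logistic(X, y, n_iter=500, lr=0.1):
    """Logistic regression whose per-example weights are inversely
    proportional to class frequency ('balanced' weighting), so the
    minority class contributes equally to the cost function."""
    n, p = X.shape
    w_class = n / (2.0 * np.bincount(y.astype(int)))   # balanced weights
    sw = w_class[y.astype(int)]                         # per-sample weight
    beta = np.zeros(p)
    for _ in range(n_iter):
        pred = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (sw * (pred - y)) / sw.sum()
        beta -= lr * grad
    return beta
```

No rows are duplicated or discarded; the imbalance is corrected purely inside the cost function.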
|
7,786
|
What problem does oversampling, undersampling, and SMOTE solve?
|
Some sampling techniques exist to adjust for bias (if the population rate is known and different), but I agree with the notion that the unbalanced class is not the problem itself. One major reason comes down to processing performance. If our targeted class, for example, is an extremely rare case at 1:100000, our modeling dataset would be massive and computation would be difficult. Sampling, no matter what the strategy, always throws away some data in order to reduce the total dataset size. I suppose the difference among all the different sampling strategies is just cleverness around which data we throw away without sacrificing a loss in predictive possibilities.
|
7,787
|
What problem does oversampling, undersampling, and SMOTE solve?
|
I will give you a more extreme example. Consider the case where you have a dataset with 99 data points labeled as positive and only one labeled as negative. During training, your model will realize that if it classifies everything as positive, it will end up getting away with it. One way of fixing this is to oversample the underrepresented class and another is to undersample the overrepresented class. For example, in a dataset of 70 positive and 30 negative labels, I might sample the negative labels with replacement and the positive ones without replacement, which will result in my model encountering more negative labels during training. This way, if my model tries to classify everything as positive, it will incur greater loss than it would have otherwise.
One more approach that does not pertain to sampling is to adjust the cost function to give higher weights to the data points with the minority label. For example, if you are using NLL loss in a dataset where 1's are overrepresented compared to 0's among labels, you could adjust your loss function to be:
$L(\tilde{x_i}, y_i) = -\alpha(y_i)\ln(\tilde{x_i}) - \beta(1 - y_i) \ln(1 - \tilde{x_i})$
where $\beta > \alpha$. The magnitude of the difference $\beta - \alpha$ depends on the extent of overrepresentation/underrepresentation.
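That weighted loss translates directly into code, e.g. (function name illustrative; predictions are clipped away from 0 and 1 to keep the logs finite):

```python
import numpy as np

def weighted_nll(x_pred, y, alpha=1.0, beta=3.0):
    """Class-weighted negative log-likelihood: with beta > alpha, errors
    on the underrepresented 0 class are penalised more heavily."""
    x_pred = np.clip(x_pred, 1e-12, 1 - 1e-12)
    return np.mean(-alpha * y * np.log(x_pred)
                   - beta * (1 - y) * np.log(1 - x_pred))
```

With these defaults, confidently misclassifying a 0-label costs three times as much as the equivalent mistake on a 1-label.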
|
7,788
|
What problem does oversampling, undersampling, and SMOTE solve?
|
I'm going to disagree with the premise that unbalanced data isn't a problem in machine learning. Perhaps less so in regression, but it certainly is in classification.
Imbalanced Data is relevant in Machine Learning applications because of decreased performance of algorithms (the research I am thinking of is specifically on classifiers) in the setting of class imbalance.
Take a simple binary classification problem with a 25:1 ratio of training examples of 'class A' vs. 'class B'. Research has shown that accuracy pertaining to the classification of class B takes a hit simply because of the decreased ratio of training data. Makes sense, as the fewer training examples you have, the poorer your classifier will train on that data. As one of the commenters stated, you can't make something out of nothing.
From the papers I've seen, in multiclass classification problems, it seems you need to get to a 10:1 ratio to start having a significant impact on accuracy of the minority class. Perhaps folks who read different literature than I've seen have different opinions.
So, the proposed solutions are: Oversampling the minority class, Undersampling the majority class, or using SMOTE on the minority class. Yes, you can't really create data out of nowhere (SMOTE sort-of does, but not exactly) unless you're getting into synthetic data creation for the minority class (no simple method). Other techniques like MixUp and the like potentially fall into this concept, but I think that they are more regularizers than class imbalance solutions. In the papers I have read, Oversampling > SMOTE > Undersampling.
Regardless of your technique, you are altering the relationship between majority and minority classes, which may affect incidence. In other words, if you are creating a classifier to detect super-rare brain disease X, which has an incidence of 1 in 100,000, and you train your classifier at 1:1, you might be more sensitive and less specific, with a larger number of false positives. If it is important that you detect those cases and adjudicate them later, you're ok. If not, you wasted a lot of other people's time and money. This problem eventually will need to be dealt with.
So to answer the question:
tl/dr: Class-balancing operations like Over/Undersampling and SMOTE (and synthetic data) exist to improve machine learning algorithm (classifier) performance by resolving the inherent performance hit in an algorithm caused by the imbalance itself.
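The simplest of the proposed solutions, random oversampling of the minority class, can be sketched in a few lines of NumPy. This is plain duplication only; SMOTE would instead interpolate between minority-class neighbours:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_oversample(X, y):
    """Naive random oversampling for binary labels: duplicate
    minority-class rows (sampled with replacement) until both
    classes are the same size."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    n_needed = counts.max() - counts.min()
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=n_needed, replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

X = rng.normal(size=(100, 2))
y = np.array([1] * 96 + [0] * 4)   # 24:1 imbalance
X_bal, y_bal = random_oversample(X, y)
# y_bal now contains 96 examples of each class
```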
|
7,789
|
What problem does oversampling, undersampling, and SMOTE solve?
|
There are many techniques for oversampling to overcome the sparsity of the minority class in imbalanced data, and for undersampling the majority class. Yet most of them have consequences for the behavior of your model (roughly speaking, its variance). I personally use a self-devised technique where oversampling and undersampling are done simultaneously. Spicing this combined sampling with Adaptive Synthetic Sampling (ADASYN), I call it C-ADASYN. You may think of a hair transplant: you oversample in the sparse area and undersample in the dense area to keep the behaviour fair, and add synthetic samples if needed to augment the population.
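The C-ADASYN technique described here is the answerer's own and not published, but the combined-sampling idea (without the adaptive synthetic step) can be sketched as resampling every class toward a common size: sparse classes are oversampled with replacement, dense classes undersampled without. The default target (the mean class size) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def combined_resample(X, y, target=None):
    """Resample every class to the same target size: classes below
    the target are oversampled with replacement, classes above it
    are undersampled without replacement. The ADASYN synthetic-sample
    step described in the answer is omitted here."""
    classes, counts = np.unique(y, return_counts=True)
    target = target or int(counts.mean())
    keep = []
    for c in classes:
        idx = np.flatnonzero(y == c)
        keep.append(rng.choice(idx, size=target, replace=len(idx) < target))
    keep = np.concatenate(keep)
    return X[keep], y[keep]

X = rng.normal(size=(100, 3))
y = np.array([0] * 80 + [1] * 20)
X_r, y_r = combined_resample(X, y)   # 50 examples of each class
```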
|
7,790
|
Intuitive explanation of "Statistical Inference"
|
Sometimes it's best to explain a concept through a concrete example:
Imagine you grab an apple, take a bite from it and it tastes sweet. Will you conclude based on that bite that the entire apple is sweet? If yes, you will have inferred that the entire apple is sweet based on a single bite from it.
Inference is the process of using the part to learn about the whole.
How the part is selected is important in this process: the part needs to be representative of the whole. In other words, the part should be like a mini-me version of the whole. If it is not, our learning will be flawed and possibly incorrect.
Why do we need inference? Because we need to make conclusions and then decisions involving the whole based on partial information about it supplied by the part.
|
7,791
|
Intuitive explanation of "Statistical Inference"
|
I'm assuming that you're asking in here about statistical inference.
Using the definition from All of Statistics by Larry A. Wasserman:
Statistical inference, or “learning” as it is called in computer
science, is the process of using data to infer the distribution that
generated the data. A typical statistical inference question is:
$$ \textsf{Given a sample } X_1, \dots, X_n \sim F, \textsf{ how do we
infer } F ? $$
In some cases, we may want to infer only some feature of $F$ such as
its mean.
In statistics we interpret data as realizations of random variables, so what we learn in statistics are the characteristics of the random variables, i.e. things like distribution, expected value, variance, covariance, parameters of the distributions, etc. So statistical inference means learning those things from the data.
|
7,792
|
Intuitive explanation of "Statistical Inference"
|
Citing E. T. Jaynes, "Probability Theory: The Logic of Science" (a highly recommended read):
By 'inference' we mean simply: deductive reasoning whenever enough information is at hand to permit it; inductive or plausible reasoning when - as is almost invariably the case in real problems - the necessary information is not available. But if a problem can be solved by deductive reasoning, probability theory is not needed for it; thus our topic is the optimal processing of incomplete information.
In my own words, inference simply means to start from some given information and draw rational conclusions from it, where what's rational is usually defined by the rules of predicate logic or probability theory.
The information one uses for drawing conclusions may stem from beliefs one holds about the world (in technical jargon: models and prior distributions), from data that have been observed, or both. Of course, an inference can only be valid if the information it is based upon is valid!
If information is certain (you know things to be true or false), then the inference is performed by predicate logic: Aristotle is a man, men are not birds, therefore we infer that Aristotle is not a bird.
If information is uncertain (you believe things but are not certain), then the inference is performed by probability theory: if 50% of all people like pizza, and 50% of the people who like pizza also like pasta, while 75% of the people who don't like pizza also don't like pasta, you can infer that - absent any further information - there is a 37.5% chance ($0.5 \cdot 0.5 + 0.5 \cdot 0.25$) for you to like pasta. When you hear some kind of noise, based on your experiences you might be unsure whether the television or your little daughter is the source. You are drawing inferences - it's probably either the TV or your daughter - but you're unsure because the information provided is uncertain. When people talk about statistical inference, they usually refer to technical applications where one wants to use a lot of data to infer information about something that is not itself observable, just as in the last example.*
A typical technical example could go as follows: we have a temperature sensor in a room that returns a voltage $V(k)$. The sensor datasheet provides a graph that relates the measured voltage to temperature by a linear model:
$$ V(k) = a \cdot T(k) + b.$$
We may then use this model and the voltage measurements to draw inferences about the temperature in the room. Everything is deductive so far, because we assumed all information to be certain! Given $V(k)$, we can simply calculate $T(k)$.
We then observe that the estimated temperature fluctuates quite rapidly, much quicker than we would expect a room temperature to fluctuate. So we hypothesize that there is some kind of zero-mean, uncorrelated disturbance that also influences the sensor:
$$ V(k) = a \cdot T(k) + b + \epsilon(k).$$
We are now uncertain about the meaning of each voltage measurement (making each measurement an i.i.d. RV)! This tells us that we should average over a few voltage measurements to get a better estimate of the current room temperature.** If any of the information we used (the voltage-temperature model of the sensor, the disturbance model, the actual voltage measurements) is wrong, then our temperature estimate will also be wrong.
*Our brain is an extremely sophisticated inference device which draws all kinds of conclusions about ourselves, other people, our environment, and our future, all the time [1][2][3].
**Assuming that the sampling rate is much higher than the rate of change of the temperature and that the noise is really uncorrelated.
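The two-step reasoning in the sensor example (invert the assumed linear model, then average to suppress the i.i.d. disturbance) can be simulated directly. The gain, offset, and noise level below are made-up numbers, not taken from any real datasheet:

```python
import random
import statistics

random.seed(1)
a, b = 0.02, 0.5          # assumed sensor gain (V/°C) and offset (V)
true_T = 21.0             # true room temperature, °C

# Simulate noisy voltage readings V(k) = a*T(k) + b + eps(k),
# with eps(k) zero-mean, uncorrelated Gaussian noise.
readings = [a * true_T + b + random.gauss(0, 0.01) for _ in range(500)]

# Invert the (assumed) linear model per reading, then average:
T_est = statistics.mean((v - b) / a for v in readings)
# T_est lands close to true_T; any single reading fluctuates far more.
```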
|
7,793
|
Intuitive explanation of "Statistical Inference"
|
Statistical inference is the art of good guessing --- it entails guessing things that are unknown from related things that are known (observed), and giving associated measures of the level of confidence, variability, etc., in your guess.
|
7,794
|
Intuitive explanation of "Statistical Inference"
|
Let me try. The broad dictionary definition of inference is as follows:
something that you can find out indirectly from what you already know
And, from a more technical perspective, from The Oxford Dictionary of Statistical Terms by Upton, G., Cook I.,
statistical inference is the process of using data analysis to deduce properties of an underlying distribution of probability
Here, what we already know is the data (experiments we did) and sometimes a prior information. And, we want to know the properties of an entity of interest.
For example, say we have a biased coin and we want to have an idea on the probability of heads. We toss the coin a few times, record the results (which will be our data), and by looking at them, we'll have an understanding (which formally might be the distribution, moments etc.) of what the probability of heads looks like.
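The coin example can be made concrete with a short simulation; the true bias and the normal-approximation interval below are illustrative choices, not part of the answer:

```python
import random

random.seed(0)
p_true = 0.7                      # the coin's bias, unknown to the experimenter
tosses = [random.random() < p_true for _ in range(1000)]

# Point estimate of P(heads): the sample proportion
p_hat = sum(tosses) / len(tosses)

# A rough 95% interval via the normal approximation
se = (p_hat * (1 - p_hat) / len(tosses)) ** 0.5
interval = (p_hat - 1.96 * se, p_hat + 1.96 * se)
```

With enough tosses, `p_hat` clusters tightly around the unknown bias, which is exactly the "understanding of what the probability of heads looks like" the answer describes.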
|
7,795
|
Intuitive explanation of "Statistical Inference"
|
I'll try to rephrase Tim's answer since I think it's too technical for a layman.
Inference is the process of extracting (inferring) a general pattern from a particular set of cases. E.g., we have these particular data about soil, fertilizers and yield. What can we say about the general effect of soils and fertilizers on yield?
Probability, on the other hand, is somewhat the reverse exercise. We know the general pattern and we want to say something about particular cases. E.g., we know a die is fair. What can we say about the next 50 throws?
|
7,796
|
Intuitive explanation of "Statistical Inference"
|
From the contents of two popular textbooks,
Casella and Berger (1990) -- Statistical Inference
Efron (2006) -- Computer Age Statistical Inference
I think statistical inference simply means mathematical and reasoning activities that try to make sense of data. More specifically, one may discern two approaches -- Bayesian and Frequentist, of which there are plenty of discussions on this site. I would point out that currently, most of the answers given to this question tend to have a Bayesian flavour. For example, trying to infer the underlying distribution of the data is a distinctly Bayesian activity. Frequentist inference is often more concerned with the procedure or algorithm that we apply to data, rather than the data itself. For example, one of the goals is to find the most powerful test of two hypotheses given the data. Judging by the contents of these books, it seems these activities also fall under the umbrella of statistical inference.
Lastly, I also need to point out that in the age of machine learning, the term inference has taken on a new meaning which is rather different from the above. In the training of neural networks, inference is simply the opposite of training. Whereas in training, a model is "built", in inference, the model is applied for prediction (typically in new data). See, for example, this article.
|
7,797
|
Intuitive explanation of "Statistical Inference"
|
Take this following case for example:
You want to know men's average height in the U.S. How could you proceed with this problem?
In an ideal situation, if you had unlimited time and energy, you could certainly collect the statistics from different resources and compile them together to figure out the "undisputed truth" behind the scene, which, from a statistician's point of view, is often referred to as the population mean, or the expectation of a random variable $X$, denoted as $E(X)$, where $X$ represents men's height in this case.
Yet we are mortals, with flesh vulnerable to time, disease, and accidents, so we only have limited time to do our job and find out the truth. The best thing we can do is to take a sample of our interest $x_1,x_2,\ldots$, then infer the truth from this imperfect mimic of the undisputed truth $E(X)$. The term imperfect has several interpretations:
1. The samples collected are prone to measurement errors, which may lead to a biased estimate of $E(X)$.
2. The samples surveyed might not be representative of the entire population, which may make our estimate drastically diverge from $E(X)$.
A very good analogy is to think of yourself sitting in front of a table, trying to figure out the contents of a jigsaw puzzle. Suppose the number of pieces is infinite; of course you can't assemble every individual piece to fulfill your task, so what is the best you can do? If you pick up a bunch of pieces from the central parts, you are very likely to get a rough estimate of the contents in a few attempts. What if you unfortunately pick the pieces from the corner sides? They are still the same shape and the same weight as the central pieces, but they are unrepresentative of the object in the picture. Beyond that, the central pieces collected are subject to your choice, which can sometimes lead to a biased estimate of the "true" contents underlying the picture.
In summary, statistical inference is the field of study that allows us to infer the undisputed truth from its representative part in a scientific, rigorous way.
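The height/jigsaw analogy can be simulated: a random sample recovers the population mean well, while a "corner pieces" (systematically biased) sample does not. The simulated population is of course just a stand-in for the unknown truth, and its parameters are made up:

```python
import random
import statistics

random.seed(0)
# Stand-in "population" of heights in cm; in reality E(X) is unknown
population = [random.gauss(176, 7) for _ in range(100_000)]

# A representative random sample lets us infer E(X) without
# measuring everyone ("central pieces" of the puzzle)
sample = random.sample(population, 500)
estimate = statistics.mean(sample)       # close to the population mean

# A biased "corner pieces" sample (only the tallest people)
# diverges badly from E(X)
biased = sorted(population)[-500:]
biased_estimate = statistics.mean(biased)
```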
|
7,798
|
What's the point of time series analysis?
|
One main use is forecasting. I have been feeding my family for over a decade now by forecasting how many units of a specific product a supermarket will sell tomorrow, so it can order enough stock, but not too much. There is money in this.
Other forecasting use cases are given in publications like the International Journal of Forecasting or Foresight. (Full disclosure: I'm an Associate Editor of Foresight.)
Yes, sometimes the prediction intervals are huge. (I assume you mean PIs, not confidence intervals. There is a difference.) This simply means that the process is hard to forecast. Then you need to mitigate. In forecasting supermarket sales, this means you need a lot of safety stock. In forecasting sea level rises, this means you need to build higher levees. I would say that a large prediction interval does provide useful information.
And for all forecasting use cases, time series analysis is useful, though forecasting is a larger topic. You can often improve forecasts by taking the dependencies in your time series into account, so you need to understand them through analysis, which is more specific than just knowing dependencies are there.
Plus, people are interested in time series even if they do not forecast. Econometricians like to detect change points in macroeconomic time series. Or assess the impact of an intervention, such as a change in tax laws, on GDP or something else. You may want to skim through your favorite econometrics journal for more inspiration.
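The safety-stock logic above can be sketched with a toy normal-demand model (made-up numbers, plain Python; a real forecasting system would be far more elaborate): the harder-to-forecast product gets a wider prediction interval, hence more safety stock.

```python
import math
import random

random.seed(1)

def order_up_to(history, z=1.64):
    """Mean demand forecast plus safety stock: the upper end of a
    rough one-sided ~95% prediction interval, assuming demand is
    roughly normal and stable over the history window."""
    n = len(history)
    mean = sum(history) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in history) / (n - 1))
    return mean, mean + z * sd

# A year of daily demand for an easy and a hard product,
# same average level, very different volatility.
stable = [random.gauss(100, 5) for _ in range(365)]
volatile = [random.gauss(100, 30) for _ in range(365)]

f1, level1 = order_up_to(stable)
f2, level2 = order_up_to(volatile)

# Wider prediction interval -> more safety stock, not "no information".
print(round(level1 - f1, 1), round(level2 - f2, 1))
```

Both products get the same point forecast, but the volatile one needs roughly six times the safety stock; that difference is exactly the information a wide interval carries.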
|
7,799
|
What's the point of time series analysis?
|
Goals in TS Analysis from the lesson-slides of M. Dettling:
1) Exploratory Analysis:
Visualization of the properties of the series:
- time series plot
- decomposition into trend/seasonal pattern/random error
- correlogram for understanding the dependency structure
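A correlogram is just a plot of the sample autocorrelation function; a minimal hand-rolled version looks like this (a sketch in plain Python; in practice you would use `acf()` in R or statsmodels):

```python
import random

random.seed(2)

def acf(x, max_lag):
    """Sample autocorrelation function: the values plotted in a
    correlogram, revealing the dependency structure of a series."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    return [
        sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / (n * c0)
        for k in range(max_lag + 1)
    ]

# A trending series is strongly serially correlated, so its
# correlogram decays slowly; white noise shows no structure.
trended = [0.1 * t + random.gauss(0, 1) for t in range(200)]
noise = [random.gauss(0, 1) for _ in range(200)]

print([round(r, 2) for r in acf(trended, 3)])
print([round(r, 2) for r in acf(noise, 3)])
```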
2) Modeling:
Fitting a stochastic model to the data that represents and reflects the most important properties of the series:
- done exploratorily or with previous knowledge
- model choice and parameter estimation are crucial
- inference: how well does the model fit the data?
3) Forecasting:
Prediction of future observations with a measure of uncertainty:
- mostly model based, uses dependency and past data
- is an extrapolation, thus often to be taken with a grain of salt
- similar to driving a car by looking in the rear-view mirror
4) Process Control:
The output of a (physical) process defines a time series:
- a stochastic model is fitted to observed data
- this allows understanding both signal and noise
- it is feasible to monitor normal/abnormal fluctuations
5) Time Series Regression:
Modeling a response time series using one or more input series.
Fitting this model under an i.i.d. error assumption:
- leads to unbiased estimates, but...
- often grossly wrong standard errors
- thus, confidence intervals and tests are misleading
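The "grossly wrong standard errors" point can be demonstrated with a small simulation (toy parameters, plain Python): with positively autocorrelated AR(1) errors, the OLS slope stays unbiased, but the standard error OLS reports badly understates the slope's true sampling variability.

```python
import random

random.seed(4)

def ols_slope_and_se(x, y):
    """OLS slope and its textbook (iid-error) standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((u - mx) ** 2 for u in x)
    slope = sum((u - mx) * (v - my) for u, v in zip(x, y)) / sxx
    resid = [v - my - slope * (u - mx) for u, v in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return slope, (s2 / sxx) ** 0.5

n, phi, reps = 200, 0.9, 500
x = [0.05 * t for t in range(n)]  # a trending regressor

slopes, ses = [], []
for _ in range(reps):
    e, y = 0.0, []
    for t in range(n):
        e = phi * e + random.gauss(0, 1)  # AR(1) errors, not iid
        y.append(2.0 * x[t] + e)          # true slope is 2
    b, se = ols_slope_and_se(x, y)
    slopes.append(b)
    ses.append(se)

mean_slope = sum(slopes) / reps  # still close to 2: unbiased
true_sd = (sum((b - mean_slope) ** 2 for b in slopes) / reps) ** 0.5
avg_reported_se = sum(ses) / reps  # far smaller than true_sd

print(round(mean_slope, 2), round(true_sd, 3), round(avg_reported_se, 3))
```

The true spread of the slope estimates is several times the standard error OLS reports, so iid-based confidence intervals and tests would indeed be misleading here.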
About the stock market problem:
These TS are very volatile, which makes them difficult to model.
For example, a change in a law that concerns the company could lead to a change in the TS process... how would any statistical tool predict that?
About serial correlation:
In contrast to multivariate statistics, the data in a time series are usually not iid, but serially correlated.
This information can also be useful for detecting that something which is supposed to be iid is not, for example because of a dirty laboratory instrument.
|
7,800
|
What's the point of time series analysis?
|
The easiest way to answer your question is to understand that data sets are roughly categorized as cross-sectional, time series, and panel. Cross-sectional regression is the go-to tool for cross-sectional data sets; this is what most people know and refer to with the term "regression". Time series regression is sometimes applied to time series, but time series analysis has a wide range of tools beyond regression.
An example of cross-sectional data is $(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)$, where $x_i,y_i$ are the weights and heights of randomly picked students in a school. When the sample is random, we can often run a linear regression $y\sim x$ and get reliable results, e.g. to predict the height $\hat y$ of a student in this school knowing only the student's weight $x$.
If the sample wasn't random, then the regression may not work at all. For instance, you picked only girls in first grade to estimate the model, but you have to predict the height of a male 12th grader. So, the regression has its own issues even in the cross-sectional setup.
Now, look at time series data: it could be $x_t,y_t$ such as $(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)$, where $t$ is the month of the year, and $x,y$ are still weight and height, but of one particular student in this school.
Generally, regression doesn't have to work here at all. One reason is that the indices $t$ are ordered, so your sample is not random, and as I mentioned earlier, regression prefers a random sample to work properly. This is a serious issue. Time series data tend to be persistent, e.g. your height this month is highly correlated with your height next month. Time series analysis was developed to deal with these issues; it includes regression techniques too, but they have to be used in certain ways.
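The persistence point can be illustrated with a simulated AR(1) series (a toy example in plain Python, not real height data): the lag-1 correlation is high when the observations are kept in time order, and vanishes once the ordering is shuffled away.

```python
import random

random.seed(3)

# An AR(1) process with phi = 0.9: each observation depends strongly
# on the previous one, like height measured month after month.
phi = 0.9
x = [0.0]
for _ in range(2000):
    x.append(phi * x[-1] + random.gauss(0, 1))

def lag1_corr(series):
    """Correlation between the series and itself shifted by one step."""
    a, b = series[:-1], series[1:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

print(round(lag1_corr(x), 2))         # close to 0.9: far from independent

shuffled = x[:]
random.shuffle(shuffled)
print(round(lag1_corr(shuffled), 2))  # near 0: order destroyed, iid-like
```

The same numbers give a completely different dependence structure depending on their order, which is exactly why the ordering of observations matters for regression.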
The third common dataset type is the panel, particularly the one with longitudinal data. Here, you may get several snapshots of the weight and height variables for a number of students. This dataset may look like waves of cross-sections or like a set of time series.
Naturally, this can be more complicated than the previous two types. Here we use panel regression and other special techniques developed for panels.
Summarizing, the reason why time series regression is considered a distinct tool compared to cross-sectional regression is that time series present unique challenges to the independence assumptions of the regression technique. In particular, because, unlike in cross-sectional analysis, the order of observations matters, all kinds of correlation and dependence structures usually arise, which may sometimes invalidate the application of regression techniques. You have to deal with dependence, and that's exactly what time series analysis is good at.
Predictability of Asset Prices
Also, you're repeating a common misconception about stock markets and asset prices in general: that they cannot be predicted. This statement is too general to be true. It's true that you can't outright predict the next tick of AAPL reliably. However, that's a very narrow problem. If you cast your net wider, you'll discover a lot of opportunities to make money using all kinds of forecasting (and time series analysis in particular). Statistical arbitrage is one such field.
Now, the reason why asset prices are hard to predict in the near term is that a large component of price changes is new information. Truly new information that cannot realistically be derived from the past is by definition impossible to predict. However, this is an idealized model, and a lot of people would argue that anomalies exist that allow for persistence of state. This means that part of a price change can be explained by the past. In such cases time series analysis is quite appropriate, because it deals precisely with persistence. It separates new from old: the new is impossible to predict, but the old is dragged from the past into the future. If you can explain even a little bit, in finance that means you may be able to make money, as long as the income generated by a strategy built on such forecasting covers its cost.
Finally, take a look at the 2013 Nobel Prize in economics: "it is quite possible to foresee the broad course of these prices over longer periods, such as the next three to five years." Take a look at Shiller's Nobel lecture, where he discusses the forecastability of asset prices.
|