7,601
How is Poisson distribution different to normal distribution?
A Poisson distribution is discrete while a normal distribution is continuous, and a Poisson random variable is always >= 0. Thus, a Kolmogorov-Smirnov test will often be able to tell the difference.
When the mean of a Poisson distribution is large, it becomes similar to a normal distribution. However, rpois(1000, 10) doesn't even look that similar to a normal distribution (it stops short at 0 and the right tail is too long).
Why are you comparing it to ks.test(..., 'pnorm', 10, 3) rather than ks.test(..., 'pnorm', 10, sqrt(10))? The difference between 3 and $\sqrt{10}$ is small but will itself make a difference when comparing distributions. Even if the distribution truly were normal you would end up with an anti-conservative p-value distribution:
set.seed(1)
hist(replicate(10000, ks.test(rnorm(1000, 10, sqrt(10)), 'pnorm', 10, 3)$p.value))
7,602
How is Poisson distribution different to normal distribution?
Here's a much easier way to understand it:
You can look at the Binomial distribution as the "mother" of most distributions. The normal distribution is just an approximation of the Binomial distribution when n becomes large enough. In fact, Abraham de Moivre essentially discovered the normal distribution while trying to approximate the Binomial, because computing Binomial probabilities quickly gets out of hand as n grows, especially when you don't have computers (reference).
The Poisson distribution is also just another approximation of the Binomial distribution, but it holds up much better than the normal when n is large and p is small, or more precisely when the mean is approximately equal to the variance (remember that for a Binomial distribution, mean = np and var = np(1-p)) (reference). Why is this particular situation so important? Apparently it surfaces a lot in the real world, and that's why we have this "special" approximation. The example below illustrates a scenario where the Poisson approximation works really well.
Example
We have a datacenter of 100,000 computers. Probability of any given computer failing today is 0.001. So on average np=100 computers fail in data center. What is the probability that only 50 computers will fail today?
Binomial: 1.208E-8
Poisson: 1.223E-8
Normal: 1.469E-7
In fact, the approximation quality for the normal distribution goes down the drain as we go into the tail of the distribution, but the Poisson continues to hold very nicely. In the above example, what is the probability that only 5 computers fail today?
Binomial: 2.96E-36
Poisson: 3.1E-36
Normal: 9.6E-22
Hopefully this gives you a better intuitive understanding of these three distributions.
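These three numbers are easy to reproduce. Here is a minimal sketch in Python (standard library only; the "Normal" figure is the density at k = 50 with mean np and variance np(1-p), which is presumably how the value above was obtained):

```python
from math import comb, exp, factorial, sqrt, pi

n, p, k = 100_000, 0.001, 50
lam = n * p  # np = 100 failures expected per day

# exact Binomial pmf
p_binom = comb(n, k) * p**k * (1 - p)**(n - k)

# Poisson approximation with the same mean
p_pois = exp(-lam) * lam**k / factorial(k)

# normal density at k with matching mean and variance
sigma = sqrt(n * p * (1 - p))
p_norm = exp(-(k - lam) ** 2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

print(f"Binomial: {p_binom:.3e}")
print(f"Poisson:  {p_pois:.3e}")
print(f"Normal:   {p_norm:.3e}")
```

The exact Binomial value agrees with the Poisson approximation to within about 1%, while the normal approximation is off by an order of magnitude here.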
7,603
How is Poisson distribution different to normal distribution?
I think it is worth mentioning that a Poisson($\lambda$) pmf is the limiting pmf of a Binomial($n$,$p_n$) with $p_n = \lambda / n$.
One rather lengthy development can be found on this blog.
But, we can prove this economically here as well. If $X_n \sim \mathrm{Binomial}(n,\lambda/n)$ then for fixed $k$
$$
\begin{align}
\mathbb P(X_n = k) &= \frac{n!}{k!(n-k)!} \left(\frac{\lambda}{n}\right)^k \left(1-\frac{\lambda}{n}\right)^{n-k} \\ &= \underbrace{\frac{n! n^{-k}}{(n-k)!}}_{\to 1} \frac{\lambda^k}{k!}\underbrace{(1-\lambda/n)^n}_{\to e^{-\lambda}} \cdot \underbrace{(1-\lambda/n)^{-k}}_{\to 1} \>.
\end{align}
$$
The first and last terms are easily seen to converge to 1 as $n \to \infty$ (recalling that $k$ is fixed). So,
$$
\mathbb P(X_n = k) \to \frac{e^{-\lambda} \lambda^k}{k!} \,,
$$
as $n \to \infty$ since $(1-\lambda/n)^n \to e^{-\lambda}$.
In addition one has the normal approximation to the Binomial, i.e., Binomial($n$,$p$) $\approxeq^d \mathcal N(np, np(1-p))$. The approximation improves as $n \rightarrow \infty$ and $p$ stays away from 0 and 1. Obviously for the Poisson regime this is not the case (since there $p_n = \lambda / n \rightarrow 0$) but the larger $\lambda$ is the larger $n$ can be and still have a reasonable normal approximation.
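This limit can also be checked numerically. A quick sketch (the values $\lambda = 4$ and $k = 3$ are arbitrary choices for the illustration):

```python
from math import comb, exp, factorial

lam, k = 4.0, 3  # arbitrary rate and count for the check

def binom_pmf(n, k, p):
    """Binomial(n, p) probability of exactly k successes."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Poisson(lambda) pmf at k, the claimed limit
poisson = exp(-lam) * lam**k / factorial(k)

# Binomial(n, lambda/n) pmf at k approaches the Poisson pmf as n grows
for n in (10, 100, 1000, 10_000):
    print(n, binom_pmf(n, k, lam / n), poisson)
```

The gap shrinks roughly like $1/n$, consistent with the two bracketed terms in the derivation that tend to 1.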
7,604
How is Poisson distribution different to normal distribution?
It's a great question, because the Poisson distribution is not only different from the normal distribution but also strikingly similar to it. Here's how it is similar:
- the sum of two normals is normal, and so is the sum of two Poissons
- Brownian motion (Gaussian) and the Poisson process are both Lévy processes
- both the Poisson and Gaussian distributions can serve as approximations of the Binomial distribution for large $N$
- for large $\lambda$ the Poisson distribution looks very much like a Gaussian, as you already noticed
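The first point is easy to verify numerically: convolving a Poisson pmf with another Poisson pmf reproduces the Poisson pmf of the summed rate. A quick sketch (the rates 2 and 3 are arbitrary):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Poisson(lam) probability of exactly k events."""
    return exp(-lam) * lam**k / factorial(k)

a, b = 2.0, 3.0  # arbitrary rates for X ~ Poisson(a), Y ~ Poisson(b)

# P(X + Y = k) by direct convolution of the two pmfs,
# compared with the Poisson(a + b) pmf
for k in range(8):
    conv = sum(poisson_pmf(j, a) * poisson_pmf(k - j, b) for j in range(k + 1))
    print(k, conv, poisson_pmf(k, a + b))
```

The two printed columns agree to floating-point precision, as the closure property predicts.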
7,605
How do I calculate a weighted standard deviation? In Excel?
The formula for weighted standard deviation is:
$$ \sqrt{ \frac{ \sum_{i=1}^N w_i (x_i - \bar{x}^*)^2 }{ \frac{(M-1)}{M} \sum_{i=1}^N w_i } },$$
where
$N$ is the number of observations,
$M$ is the number of nonzero weights,
$w_i$ are the weights,
$x_i$ are the observations, and
$\bar{x}^*$ is the weighted mean.
Remember that the formula for weighted mean is:
$$\bar{x}^* = \frac{\sum_{i=1}^N w_i x_i}{\sum_{i=1}^N w_i}.$$
Use the appropriate weights to get the desired result. In your case I would suggest using $\frac{\mbox{Number of cases in segment}}{\mbox{Total number of cases}}$.
To do this in Excel, calculate the weighted mean first, then compute $(x_i - \bar{x}^*)^2$ in a separate column. The rest is straightforward.
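The same computation is easy to do outside Excel as a cross-check. A sketch in Python of the formula above (note that with all weights equal it reduces to the ordinary sample standard deviation):

```python
from math import sqrt

def weighted_sd(x, w):
    """Weighted standard deviation with the (M-1)/M correction above."""
    wsum = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / wsum  # weighted mean
    M = sum(1 for wi in w if wi != 0)                   # number of nonzero weights
    num = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return sqrt(num / ((M - 1) / M * wsum))

# with equal weights this matches the usual sample standard deviation
print(weighted_sd([1, 2, 3, 4], [1, 1, 1, 1]))
```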
7,606
How do I calculate a weighted standard deviation? In Excel?
The formulae are available in various places, including Wikipedia.
The key is to notice that it depends on what the weights mean. In particular, you will get different answers if the weights are frequencies (i.e. you are just trying to avoid adding up your whole sum), if the weights are in fact the variance of each measurement, or if they're just some external values you impose on your data.
In your case, it superficially looks like the weights are frequencies but they're not. You generate your data from frequencies, but it's not a simple matter of having 45 records of 3 and 15 records of 4 in your data set. Instead, you need to use the last method. (Actually, all of this is rubbish--you really need to use a more sophisticated model of the process that is generating these numbers! You apparently do not have something that spits out Normally-distributed numbers, so characterizing the system with the standard deviation is not the right thing to do.)
In any case, the formula for variance (from which you calculate standard deviation in the normal way) with "reliability" weights is
$${ \sum {w_i (x_i - x^*)^2} \over {\sum w_i - {\sum w_i^2 \over \sum w_i }} }$$
where $x^* = \sum w_i x_i / \sum w_i$ is the weighted mean.
You don't have an estimate for the weights, which I'm assuming you want to take to be proportional to reliability. Taking percentages the way you are is going to make analysis tricky even if they're generated by a Bernoulli process, because if you get a score of 20 and 0, you have infinite percentage. Weighting by the inverse of the SEM is a common and sometimes optimal thing to do. You should perhaps use a Bayesian estimate or Wilson score interval.
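For concreteness, here is a sketch of this reliability-weighted estimator in Python (a handy sanity check: with equal weights it reduces to the usual unbiased sample variance):

```python
def weighted_var_reliability(x, w):
    """Weighted variance with "reliability" weights, per the formula above."""
    wsum = sum(w)
    xstar = sum(wi * xi for wi, xi in zip(w, x)) / wsum  # weighted mean
    num = sum(wi * (xi - xstar) ** 2 for wi, xi in zip(w, x))
    # denominator: sum(w) - sum(w^2)/sum(w)
    return num / (wsum - sum(wi**2 for wi in w) / wsum)

# equal weights recover the ordinary unbiased sample variance (5/3 here)
print(weighted_var_reliability([1, 2, 3, 4], [0.25, 0.25, 0.25, 0.25]))
```

Note that the estimator is invariant to rescaling all the weights by a common factor, so only the relative reliabilities matter.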
7,607
How do I calculate a weighted standard deviation? In Excel?
=SQRT(SUM(G7:G16*(H7:H16-(SUMPRODUCT(G7:G16,H7:H16)/SUM(G7:G16)))^2)/
((COUNTIFS(G7:G16,"<>0")-1)/COUNTIFS(G7:G16,"<>0")*SUM(G7:G16)))
Column G holds the weights and column H the values. (Note: this is an array formula; in older versions of Excel it must be entered with Ctrl+Shift+Enter.)
7,608
How do I calculate a weighted standard deviation? In Excel?
If we treat weights like probabilities, then we build them as follows:
$$p_i=\frac{v_i}{\sum_iv_i},$$
where $v_i$ is the data volume.
The weighted mean is then $$\hat\mu=\sum_ip_ix_i,$$
and the variance is $$\hat\sigma^2=\sum_ip_i(x_i-\hat\mu)^2.$$
7,609
How do I calculate a weighted standard deviation? In Excel?
Late in the day, I know, but in reference to Whuber's insistence on an authoritative justification for the $(M-1)/M$ term in the unbiased estimate: perhaps Prof. James Kirchner's justification, currently available for download at http://seismo.berkeley.edu/~kirchner/Toolkits/Toolkit_12.pdf, which references
Bevington, P. R., Data Reduction and Error Analysis for the Physical Sciences, 336 pp., McGraw-Hill, 1969,
will do?
Prof. Kirchner distinguishes between
"Case I", in which some points are more important than others (hence the weighting) but the uncertainties associated with each point are assumed to be the same, and
"Case II", in which the points are equally important but the uncertainties associated with each point are not the same.
For FabioSpaghetti's comment from yesterday, the above linked paper also shows how to calculate the standard error.
7,610
How do I calculate a weighted standard deviation? In Excel?
Option Explicit

' Weighted standard deviation of the values in vals, with weights in wates.
' Note: uses the frequency-style correction factor N/(N-1).
Function wsdv(vals As Range, wates As Range) As Double
    Dim i As Long, xV As Long, xW As Long, y As Long
    Dim wi As Double, xi As Double, WgtAvg As Double
    Dim N As Long
    Dim sumProd As Double, SUMwi As Double

    sumProd = 0
    SUMwi = 0
    N = vals.Count                ' number of values
    xV = vals.Column              ' column number of the first value element
    xW = wates.Column             ' column number of the first weight element
    y = vals.Row - 1              ' row offset of the values and weights

    ' weighted mean
    WgtAvg = WorksheetFunction.SumProduct(vals, wates) / WorksheetFunction.Sum(wates)

    ' accumulate the sum of weights and the weighted sum of squared deviations
    For i = 1 To N
        wi = ActiveSheet.Cells(i + y, xW).Value   ' weight element
        SUMwi = SUMwi + wi
        xi = ActiveSheet.Cells(i + y, xV).Value   ' value element
        sumProd = sumProd + wi * (xi - WgtAvg) ^ 2
    Next i

    wsdv = Sqr(sumProd / SUMwi * N / (N - 1))     ' weighted standard deviation
End Function
7,611
Why is it bad to teach students that p-values are the probability that findings are due to chance?
I have a different interpretation of the meaning of the wrong statement than @Karl does. I think that it is a statement about the data, rather than about the null. I understand it as asking for the probability of getting your estimate due to chance. I don't know what that means---it's not a well-specified claim.
But I do understand what is likely meant by the probability of getting my estimate by chance given that the true estimate is equal to a particular value. For example, I can understand what it means to get a very large difference in average heights between men and women given that their average heights are actually the same. That's well specified. And that is what the p-value gives. What's missing in the wrong statement is the condition that the null is true.
Now, we might object that this statement isn't perfect (the chance of getting an exact value for an estimator is 0, for example). But it's far better than the way that most would interpret a p-value.
The key point that I say over and over again when I teach hypothesis testing is "Step one is to assume that the null hypothesis is true. Everything is calculated given this assumption." If people remember that, that's pretty good.
7,612
Why is it bad to teach students that p-values are the probability that findings are due to chance?
I've seen this interpretation a lot (perhaps more often than the correct one). I interpret "their findings are due to [random] chance" as "$\text{H}_0$ is true", and so really what they are saying is $\Pr(\text{H}_0)$ [which actually should be $\Pr(\text{H}_0 | \text{data})$; say, "given what we have seen (the data), what is the probability that only chance is operating?"] This can be a meaningful statement (if you are willing to assign priors and do Bayes), but it is not the p-value.
$\Pr(\text{H}_0 | \text{data})$ can be quite different than the p-value, and so to interpret a p-value in that way can be seriously misleading.
The simplest illustration: say the prior, $\Pr(H_0)$ is quite small, but one has rather little data, and so the p-value is largish (say, 0.3), but the posterior, $\Pr(\text{H}_0|\text{data})$, would still be quite small. [But maybe this example isn't so interesting.]
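To make this concrete, here is a toy sketch in Python (all numbers are hypothetical): a two-sided z-test of $\text{H}_0: \mu = 0$ against a point alternative $\mu = 1$, with $n = 4$ observations, known $\sigma = 1$, observed mean $0.5$, and prior $\Pr(\text{H}_0) = 0.1$:

```python
from math import erf, exp, pi, sqrt

n, sigma, xbar = 4, 1.0, 0.5     # hypothetical data summary
mu0, mu1 = 0.0, 1.0              # point null and point alternative
prior_h0 = 0.1                   # small prior probability of H0

se = sigma / sqrt(n)             # standard error of the mean

def norm_pdf(x, mu, sd):
    return exp(-(x - mu) ** 2 / (2 * sd**2)) / (sd * sqrt(2 * pi))

# two-sided p-value under H0
z = (xbar - mu0) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# posterior probability of H0 by Bayes' rule
lik0 = norm_pdf(xbar, mu0, se)
lik1 = norm_pdf(xbar, mu1, se)
posterior_h0 = prior_h0 * lik0 / (prior_h0 * lik0 + (1 - prior_h0) * lik1)

print(f"p-value = {p_value:.3f}, Pr(H0 | data) = {posterior_h0:.3f}")
```

Here the p-value is about 0.32, yet $\Pr(\text{H}_0|\text{data})$ stays at 0.10: the observed mean is equally likely under both hypotheses, so the small prior carries through unchanged, and the two quantities plainly measure different things.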
7,613
Why is it bad to teach students that p-values are the probability that findings are due to chance?
I'll add a late answer from the (ex) student perspective: IMHO the harm cannot be separated from its being wrong.
This type of wrong "didactic approximation/shortcut" can create a lot of confusion for students who realize that they cannot logically understand the statement; assuming that what is taught to them is right, they do not realize that they fail to understand it because it is not right.
This does not affect students who just memorize the rules presented to them. But it requires students who learn by understanding to be good enough to
arrive at the correct solution by themselves, and
be sure that they are right,
and then conclude that they are being taught bullshit (for some allegedly didactic reason).
I'm not saying that there aren't valid didactic shortcuts. But IMHO when such a shortcut is taken, this should be mentioned (e.g. as "for the ease of the argument, we assume/approximate that ...").
In this particular case, however, I think it is too misleading to be of any use.
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
|
I'll add a late answer from the (ex) student perspective: IMHO the harm cannot be separated from its being wrong.
This type of wrong "didactic approximations/shortcut" can create a lot of confusion f
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
I'll add a late answer from the (ex) student perspective: IMHO the harm cannot be separated from its being wrong.
This type of wrong "didactic approximation/shortcut" can create a lot of confusion for students who realize that they cannot logically understand the statement but, assuming that what is taught to them is right, do not realize that the reason they cannot understand it is that it is not right.
This does not affect students who just memorize rules presented to them. But it requires students who learn by understanding to be good enough to
arrive at the correct solution by themselves and
be good enough so they can be sure they are right
and conclude that they are taught bullshit (for some allegedly didactic reason).
I'm not saying that there aren't valid didactic shortcuts. But IMHO when such a shortcut is taken, this should be mentioned (e.g. as "for the ease of the argument, we assume/approximate that ...").
In this particular case, however, I think it is too misleading to be of any use.
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
I'll add a late answer from the (ex) student perspective: IMHO the harm cannot be separated from its being wrong.
This type of wrong "didactic approximations/shortcut" can create a lot of confusion f
|
7,614
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
|
Referring directly to the question: Where is the harm?
In my opinion, the answer to this question lies in the converse of the statement, "A p-value is the probability that the findings are due to random chance." If one believes this, then one also probably believes the following: "[1-(p-value)] is the probability that the findings are NOT due to random chance."
The harm then lies in the second statement, because, given the way most people's brains work, this statement grossly overestimates how confident we should be in the specific values of an estimated parameter.
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
|
Referring directly to the question: Where is the harm?
In my opinion, the answer to this question lies in the converse of the statement, "A p-value is the probability that the findings are due to rand
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
Referring directly to the question: Where is the harm?
In my opinion, the answer to this question lies in the converse of the statement, "A p-value is the probability that the findings are due to random chance." If one believes this, then one also probably believes the following: "[1-(p-value)] is the probability that the findings are NOT due to random chance."
The harm then lies in the second statement, because, given the way most people's brains work, this statement grossly overestimates how confident we should be in the specific values of an estimated parameter.
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
Referring directly to the question: Where is the harm?
In my opinion, the answer to this question lies in the converse of the statement, "A p-value is the probability that the findings are due to rand
|
7,615
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
|
Here is a simple example that I use:
Suppose our null hypothesis is that we are flipping a 2-headed coin (so prob(heads) = 1). Now we flip the coin one time and get heads; the p-value for this is 1, so does that mean that we have a 100% chance of having a 2-headed coin?
The tricky thing is that if we had flipped a tails then the p-value would have been 0 and the probability of having a 2-headed coin would have been 0, so they match in that case, but not in the one above. The p-value of 1 above just means that what we have observed is perfectly consistent with the hypothesis of a 2-headed coin, but it does not prove that the coin is 2-headed.
Further, if we are doing frequentist statistics then the null hypothesis is either True or False (we just don't know which) and making (frequentist) probability statements about the null hypothesis is meaningless. If you want to talk about the probability of the hypothesis, then do proper Bayesian statistics, use the Bayesian definition of probability, start with a prior and calculate the posterior probability that the hypothesis is true. Just don't confuse a p-value with a Bayesian posterior.
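The contrast between the p-value and a Bayesian posterior can be made concrete with a short Python sketch (not from the answer; the 50% prior on "2-headed" and the fair-coin alternative are illustrative assumptions):

```python
# H0: the coin is 2-headed, so P(heads | H0) = 1.

def p_value(flip):
    # Heads is perfectly consistent with H0 -> p = 1.
    # Tails is impossible under H0 -> p = 0.
    return 1.0 if flip == "H" else 0.0

def posterior_two_headed(flip, prior=0.5):
    # Bayes' rule: P(2-headed | data), with a fair coin as the
    # (assumed) alternative hypothesis.
    like_h0 = 1.0 if flip == "H" else 0.0  # P(data | 2-headed)
    like_h1 = 0.5                          # P(data | fair coin)
    num = like_h0 * prior
    return num / (num + like_h1 * (1 - prior))

print(p_value("H"))               # 1.0
print(posterior_two_headed("H"))  # 2/3, not 1: a p-value of 1 is not
                                  # a 100% posterior probability
```

After one head the p-value is 1 while the posterior is only 2/3, which is the point of the example: the two quantities answer different questions.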
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
|
Here is a simple example that I use:
Suppose our null hypothesis is that we are flipping a 2-headed coin (so prob(heads) = 1). Now we flip the coin one time and get heads, the p-value for this is
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
Here is a simple example that I use:
Suppose our null hypothesis is that we are flipping a 2-headed coin (so prob(heads) = 1). Now we flip the coin one time and get heads; the p-value for this is 1, so does that mean that we have a 100% chance of having a 2-headed coin?
The tricky thing is that if we had flipped a tails then the p-value would have been 0 and the probability of having a 2-headed coin would have been 0, so they match in that case, but not in the one above. The p-value of 1 above just means that what we have observed is perfectly consistent with the hypothesis of a 2-headed coin, but it does not prove that the coin is 2-headed.
Further, if we are doing frequentist statistics then the null hypothesis is either True or False (we just don't know which) and making (frequentist) probability statements about the null hypothesis is meaningless. If you want to talk about the probability of the hypothesis, then do proper Bayesian statistics, use the Bayesian definition of probability, start with a prior and calculate the posterior probability that the hypothesis is true. Just don't confuse a p-value with a Bayesian posterior.
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
Here is a simple example that I use:
Suppose our null hypothesis is that we are flipping a 2-headed coin (so prob(heads) = 1). Now we flip the coin one time and get heads, the p-value for this is
|
7,616
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
|
OK another, slightly different take on this:
A first basic problem is the phrase "due to [random] chance". The idea of unspecified 'chance' comes naturally to students, but it is hazardous for thinking clearly about uncertainty and catastrophic for doing sensible statistics. With something like a sequence of coin flips it is easy to assume that 'chance' is described by the Binomial setup with a probability of 0.5. There is a certain naturalness to it, for sure, but from a statistical point of view it's no more natural than assuming 0.6 or something else. And for other, less 'obvious' examples, e.g. those involving real parameters, it's utterly unhelpful to think about what 'chance' would look like.
With respect to the question, the key idea is understanding what sort of 'chance' is described by H0, i.e. what actual likelihood/DGP H0 names. Once that concept is in place, students finally stop talking about things happening 'by chance', and start asking what H0 actually is. (They also figure out that things can be consistent with a rather wide variety of Hs so they get a head start on confidence intervals, via inverted tests).
The second problem is that if you're on the way to Fisher's definition of p-values, you should (imho) always explain it first in terms of the data's consistency with H0 because the point of p is to see that, not to interpret the tail area as some sort of 'chance' activity, (or frankly to interpret it at all). This is purely a matter of rhetorical emphasis, obviously, but it seems to help.
In short, the harm is that this way of describing things will not generalise to any non-trivial model they might subsequently try to think about. At worst it may just add to the sense of mystery that the study of statistics already generates in the sorts of people such bowdlerised descriptions are aimed at.
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
|
OK another, slightly different take on this:
A first basic problem is the phrase "due to [random] chance". The idea of unspecified 'chance' comes naturally to students but it is hazardous for thinkin
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
OK another, slightly different take on this:
A first basic problem is the phrase "due to [random] chance". The idea of unspecified 'chance' comes naturally to students, but it is hazardous for thinking clearly about uncertainty and catastrophic for doing sensible statistics. With something like a sequence of coin flips it is easy to assume that 'chance' is described by the Binomial setup with a probability of 0.5. There is a certain naturalness to it, for sure, but from a statistical point of view it's no more natural than assuming 0.6 or something else. And for other, less 'obvious' examples, e.g. those involving real parameters, it's utterly unhelpful to think about what 'chance' would look like.
With respect to the question, the key idea is understanding what sort of 'chance' is described by H0, i.e. what actual likelihood/DGP H0 names. Once that concept is in place, students finally stop talking about things happening 'by chance', and start asking what H0 actually is. (They also figure out that things can be consistent with a rather wide variety of Hs so they get a head start on confidence intervals, via inverted tests).
The second problem is that if you're on the way to Fisher's definition of p-values, you should (imho) always explain it first in terms of the data's consistency with H0 because the point of p is to see that, not to interpret the tail area as some sort of 'chance' activity, (or frankly to interpret it at all). This is purely a matter of rhetorical emphasis, obviously, but it seems to help.
In short, the harm is that this way of describing things will not generalise to any non-trivial model they might subsequently try to think about. At worst it may just add to the sense of mystery that the study of statistics already generates in the sorts of people such bowdlerised descriptions are aimed at.
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
OK another, slightly different take on this:
A first basic problem is the phrase "due to [random] chance". The idea of unspecified 'chance' comes naturally to students but it is hazardous for thinkin
|
7,617
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
|
If I take apart "the p-value is the probability that an effect is due to chance," it seems to imply that the effect is caused by chance. But every effect is partially caused by chance. In a statistics lesson where one is explaining the need to try to see through random variability, this is a pretty magical and overreaching statement. It imbues p-values with powers they do not have.
If you define chance in a specific case to be the null hypothesis, then you're stating that the p-value yields the probability that the observed effect is caused by the null hypothesis. That seems awfully close to the correct statement, but claiming that the conditioning event of a probability is the cause of that probability is again overreaching. The correct statement, that the p-value is the probability of the effect given that the null hypothesis is true, does not ascribe a cause to the null effect. The causes are various, including the true effect, the variability around the effect, and random chance. The p-value doesn't measure the probability of any of those.
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
|
If I take apart, "p-value is the probability that an effect is due to chance," it seems to be implying that the effect is caused by chance. But every effect is partially caused by chance. In a statist
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
If I take apart "the p-value is the probability that an effect is due to chance," it seems to imply that the effect is caused by chance. But every effect is partially caused by chance. In a statistics lesson where one is explaining the need to try to see through random variability, this is a pretty magical and overreaching statement. It imbues p-values with powers they do not have.
If you define chance in a specific case to be the null hypothesis, then you're stating that the p-value yields the probability that the observed effect is caused by the null hypothesis. That seems awfully close to the correct statement, but claiming that the conditioning event of a probability is the cause of that probability is again overreaching. The correct statement, that the p-value is the probability of the effect given that the null hypothesis is true, does not ascribe a cause to the null effect. The causes are various, including the true effect, the variability around the effect, and random chance. The p-value doesn't measure the probability of any of those.
|
Why is it bad to teach students that p-values are the probability that findings are due to chance?
If I take apart, "p-value is the probability that an effect is due to chance," it seems to be implying that the effect is caused by chance. But every effect is partially caused by chance. In a statist
|
7,618
|
A measure of "variance" from the covariance matrix?
|
(The answer below merely introduces and states the theorem proven in [0]. The beauty of that paper is that most of the arguments are made in terms of basic linear algebra. To answer this question it will be enough to state the main results, but by all means, go check the original source.)
In any situation where the multivariate pattern of the data can be described by a $k$-variate elliptical distribution, statistical inference will, by definition, reduce to the problem of fitting (and characterizing) a $k$-variate location vector (say $\boldsymbol\theta$) and a $k\times k$ symmetric positive semi-definite (SPSD) matrix (say $\boldsymbol\varSigma$) to the data. For reasons explained below (which are assumed as premises) it will often be more meaningful to decompose $\boldsymbol\varSigma$ into a shape component (an SPSD matrix of the same size as $\boldsymbol\varSigma$) accounting for the shape of the density contours of your multivariate distribution, and a scalar $\sigma_S$ expressing the scale of these contours.
In univariate data ($k=1$), the covariance matrix $\boldsymbol\varSigma$ of your data is a scalar and, as will follow from the discussion below, its shape component is 1, so that $\boldsymbol\varSigma$ always equals its scale component, $\boldsymbol\varSigma=\sigma_S$, and no ambiguity is possible.
In multivariate data, there are many possible choices for scaling functions $\sigma_S$. One in particular ($\sigma_S=|\pmb\varSigma|^{1/k}$) stands out in having a key desirable property, making it the preferred choice of scaling function in the context of elliptical families.
Many problems in MV-statistics involve estimation of a scatter matrix, defined as a function(al) SPSD matrix in $\mathbb{R}^{k\times k}$ ($\boldsymbol\varSigma$) satisfying:
$$(0)\quad\boldsymbol\varSigma(\boldsymbol A\boldsymbol X+\boldsymbol b)=\boldsymbol A\boldsymbol\varSigma(\boldsymbol X)\boldsymbol A^\top$$
(for non singular matrices $\boldsymbol A$ and vectors $\boldsymbol b$). For example the classical estimate of covariance satisfies (0) but it is by no means the only one.
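As a quick numerical sanity check (a Python sketch, not part of the original answer; the data, `A`, and `b` are arbitrary illustrative choices), the classical sample covariance can be verified to satisfy (0):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # 500 observations of a 3-vector
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, -1.0],
              [1.0, 0.0, 3.0]])        # non-singular transformation
b = np.array([1.0, -2.0, 0.5])

Y = X @ A.T + b                        # y_i = A x_i + b
lhs = np.cov(Y, rowvar=False)          # Sigma(AX + b)
rhs = A @ np.cov(X, rowvar=False) @ A.T
print(np.allclose(lhs, rhs))           # True
```

For the sample covariance the property holds exactly (not just asymptotically), since the shift `b` cancels in the centering and `A` factors out of the outer products.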
In the presence of elliptically distributed data, where all the density contours are ellipses defined by the same shape matrix up to multiplication by a scalar, it is natural to consider normalized versions of $\boldsymbol\varSigma$ of the form:
$$\boldsymbol V_S = \boldsymbol\varSigma / S(\boldsymbol\varSigma)$$
where $S$ is a 1-homogeneous function satisfying:
$$(1)\quad S(\lambda \boldsymbol\varSigma)=\lambda S(\boldsymbol\varSigma) $$
for all $\lambda>0$. Then, $\boldsymbol V_S$ is called the shape component of the scatter matrix (in short shape matrix) and $\sigma_S=S^{1/2}(\boldsymbol\varSigma)$ is called the scale component of the scatter matrix. Examples of multivariate estimation problems where the loss function only depends on $\boldsymbol\varSigma$ through its shape component $\boldsymbol V_S$ include tests of sphericity, PCA and CCA among others.
Of course, there are many possible scaling functions, so this still leaves open the question of which (if any) of several choices of normalization function $S$ are in some sense optimal. For example:
$S=\text{tr}(\boldsymbol\varSigma)/k$ (for example the one proposed by @amoeba in his comment below the OP's question as well as @HelloGoodbye's answer below. See also [1], [2], [3])
$S=|\boldsymbol\varSigma|^{1/k}$ ([4], [5], [6], [7], [8])
$\boldsymbol\varSigma_{11}$ (the first entry of the covariance matrix)
$\lambda_1(\boldsymbol\varSigma)$ (the first eigenvalue of $\boldsymbol\varSigma$); this is called the spectral norm and is discussed in @Aksakal's answer below.
Among these, $S=|\boldsymbol\varSigma|^{1/k}$ is the only scaling function for which the Fisher information matrix for the corresponding estimates of scale and shape, in locally asymptotically normal families, is block diagonal (that is, the scale and shape components of the estimation problem are asymptotically orthogonal) [0]. This means, among other things, that the scale functional $S=|\boldsymbol\varSigma|^{1/k}$ is the only choice of $S$ for which the non-specification of $\sigma_S$ does not cause any loss of efficiency when performing inference on $\boldsymbol V_S$.
I do not know of any comparably strong optimality characterization for any of the many possible choices of $S$ that satisfy (1).
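As an illustration (a Python sketch, not from the paper; the matrix `Sigma` is a made-up example), the determinant-based decomposition into scale and shape can be computed directly:

```python
import numpy as np

Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])
k = Sigma.shape[0]

S = np.linalg.det(Sigma) ** (1.0 / k)  # scale functional |Sigma|^(1/k)
V = Sigma / S                          # shape matrix

print(np.linalg.det(V))                # 1.0: the shape matrix has unit
                                       # determinant by construction
```

Normalizing by $|\boldsymbol\varSigma|^{1/k}$ always yields a shape matrix with determinant 1, so the scale carries all of the "volume" information about the density contours.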
[0] Paindaveine, D., A canonical definition of shape, Statistics & Probability Letters, Volume 78, Issue 14, 1 October 2008, Pages 2240-2247. Ungated link
[1] Dumbgen, L. (1998). On Tyler’s M-functional of scatter in high dimension, Ann. Inst. Statist. Math. 50, 471–491.
[2] Ollila, E., T.P. Hettmansperger, and H. Oja (2004). Affine equivariant multivariate sign methods. Preprint, University of Jyvaskyla.
[3] Tyler, D.E. (1983). Robustness and efficiency properties of scatter matrices, Biometrika 70, 411–420.
[4] Dumbgen, L., and D.E. Tyler (2005). On the breakdown properties of some multivariate M-functionals, Scand. J. Statist. 32, 247–264.
[5] Hallin, M. and D. Paindaveine (2008). Optimal rank-based tests for homogeneity of scatter, Ann. Statist., to appear.
[6] Salibian-Barrera, M., S. Van Aelst, and G. Willems (2006). Principal components analysis based on multivariate MM-estimators with fast and robust bootstrap, J. Amer. Statist. Assoc. 101, 1198–1211.
[7] Taskinen, S., C. Croux, A. Kankainen, E. Ollila, and H. Oja (2006). Influence functions and efficiencies of the canonical correlation and vector estimates based on scatter and shape matrices, J. Multivariate Anal. 97, 359–384.
[8] Tatsuoka, K.S., and D.E. Tyler (2000). On the uniqueness of S-Functionals and M-functionals under nonelliptical distributions, Ann. Statist. 28, 1219–1243.
|
A measure of "variance" from the covariance matrix?
|
(The answer below merely introduces and states the theorem proven in [0]. The beauty of that paper is that most of the arguments are made in terms of basic linear algebra. To answer this question
|
A measure of "variance" from the covariance matrix?
(The answer below merely introduces and states the theorem proven in [0]. The beauty of that paper is that most of the arguments are made in terms of basic linear algebra. To answer this question it will be enough to state the main results, but by all means, go check the original source.)
In any situation where the multivariate pattern of the data can be described by a $k$-variate elliptical distribution, statistical inference will, by definition, reduce to the problem of fitting (and characterizing) a $k$-variate location vector (say $\boldsymbol\theta$) and a $k\times k$ symmetric positive semi-definite (SPSD) matrix (say $\boldsymbol\varSigma$) to the data. For reasons explained below (which are assumed as premises) it will often be more meaningful to decompose $\boldsymbol\varSigma$ into a shape component (an SPSD matrix of the same size as $\boldsymbol\varSigma$) accounting for the shape of the density contours of your multivariate distribution, and a scalar $\sigma_S$ expressing the scale of these contours.
In univariate data ($k=1$), the covariance matrix $\boldsymbol\varSigma$ of your data is a scalar and, as will follow from the discussion below, its shape component is 1, so that $\boldsymbol\varSigma$ always equals its scale component, $\boldsymbol\varSigma=\sigma_S$, and no ambiguity is possible.
In multivariate data, there are many possible choices for scaling functions $\sigma_S$. One in particular ($\sigma_S=|\pmb\varSigma|^{1/k}$) stands out in having a key desirable property, making it the preferred choice of scaling function in the context of elliptical families.
Many problems in MV-statistics involve estimation of a scatter matrix, defined as a function(al) SPSD matrix in $\mathbb{R}^{k\times k}$ ($\boldsymbol\varSigma$) satisfying:
$$(0)\quad\boldsymbol\varSigma(\boldsymbol A\boldsymbol X+\boldsymbol b)=\boldsymbol A\boldsymbol\varSigma(\boldsymbol X)\boldsymbol A^\top$$
(for non singular matrices $\boldsymbol A$ and vectors $\boldsymbol b$). For example the classical estimate of covariance satisfies (0) but it is by no means the only one.
In the presence of elliptically distributed data, where all the density contours are ellipses defined by the same shape matrix up to multiplication by a scalar, it is natural to consider normalized versions of $\boldsymbol\varSigma$ of the form:
$$\boldsymbol V_S = \boldsymbol\varSigma / S(\boldsymbol\varSigma)$$
where $S$ is a 1-homogeneous function satisfying:
$$(1)\quad S(\lambda \boldsymbol\varSigma)=\lambda S(\boldsymbol\varSigma) $$
for all $\lambda>0$. Then, $\boldsymbol V_S$ is called the shape component of the scatter matrix (in short shape matrix) and $\sigma_S=S^{1/2}(\boldsymbol\varSigma)$ is called the scale component of the scatter matrix. Examples of multivariate estimation problems where the loss function only depends on $\boldsymbol\varSigma$ through its shape component $\boldsymbol V_S$ include tests of sphericity, PCA and CCA among others.
Of course, there are many possible scaling functions, so this still leaves open the question of which (if any) of several choices of normalization function $S$ are in some sense optimal. For example:
$S=\text{tr}(\boldsymbol\varSigma)/k$ (for example the one proposed by @amoeba in his comment below the OP's question as well as @HelloGoodbye's answer below. See also [1], [2], [3])
$S=|\boldsymbol\varSigma|^{1/k}$ ([4], [5], [6], [7], [8])
$\boldsymbol\varSigma_{11}$ (the first entry of the covariance matrix)
$\lambda_1(\boldsymbol\varSigma)$ (the first eigenvalue of $\boldsymbol\varSigma$); this is called the spectral norm and is discussed in @Aksakal's answer below.
Among these, $S=|\boldsymbol\varSigma|^{1/k}$ is the only scaling function for which the Fisher information matrix for the corresponding estimates of scale and shape, in locally asymptotically normal families, is block diagonal (that is, the scale and shape components of the estimation problem are asymptotically orthogonal) [0]. This means, among other things, that the scale functional $S=|\boldsymbol\varSigma|^{1/k}$ is the only choice of $S$ for which the non-specification of $\sigma_S$ does not cause any loss of efficiency when performing inference on $\boldsymbol V_S$.
I do not know of any comparably strong optimality characterization for any of the many possible choices of $S$ that satisfy (1).
[0] Paindaveine, D., A canonical definition of shape, Statistics & Probability Letters, Volume 78, Issue 14, 1 October 2008, Pages 2240-2247. Ungated link
[1] Dumbgen, L. (1998). On Tyler’s M-functional of scatter in high dimension, Ann. Inst. Statist. Math. 50, 471–491.
[2] Ollila, E., T.P. Hettmansperger, and H. Oja (2004). Affine equivariant multivariate sign methods. Preprint, University of Jyvaskyla.
[3] Tyler, D.E. (1983). Robustness and efficiency properties of scatter matrices, Biometrika 70, 411–420.
[4] Dumbgen, L., and D.E. Tyler (2005). On the breakdown properties of some multivariate M-functionals, Scand. J. Statist. 32, 247–264.
[5] Hallin, M. and D. Paindaveine (2008). Optimal rank-based tests for homogeneity of scatter, Ann. Statist., to appear.
[6] Salibian-Barrera, M., S. Van Aelst, and G. Willems (2006). Principal components analysis based on multivariate MM-estimators with fast and robust bootstrap, J. Amer. Statist. Assoc. 101, 1198–1211.
[7] Taskinen, S., C. Croux, A. Kankainen, E. Ollila, and H. Oja (2006). Influence functions and efficiencies of the canonical correlation and vector estimates based on scatter and shape matrices, J. Multivariate Anal. 97, 359–384.
[8] Tatsuoka, K.S., and D.E. Tyler (2000). On the uniqueness of S-Functionals and M-functionals under nonelliptical distributions, Ann. Statist. 28, 1219–1243.
|
A measure of "variance" from the covariance matrix?
(The answer below merely introduces and states the theorem proven in [0]. The beauty of that paper is that most of the arguments are made in terms of basic linear algebra. To answer this question
|
7,619
|
A measure of "variance" from the covariance matrix?
|
The variance of a scalar variable is defined as the squared deviation of the variable from its mean:
$$\operatorname{Var}(X) = \operatorname E\left[\left(X - \operatorname E\left[X\right]\right)^2\right]$$
One generalization to a scalar-valued variance for vector-valued random variables can be obtained by interpreting the deviation as the Euclidean distance:
$$\operatorname{Var_s}(\mathbf X) = \operatorname E\left[\left\|\mathbf X - \operatorname E\left[\mathbf X\right]\right\|_2^2\right]$$
This expression can be rewritten as
$$\begin{array}{rcl}
\operatorname{Var_s}(\mathbf X) & = & \operatorname E[(\mathbf X - \operatorname E[\mathbf X ])\cdot(\mathbf X - \operatorname E[\mathbf X ])] \\
& = & \operatorname E\left[\sum_{i=1}^n(X_i - \operatorname E[X_i])^2\right] \\
& = & \sum_{i=1}^n \operatorname E\left[(X_i - \operatorname E[X_i])^2\right] \\
& = & \sum_{i=1}^n \operatorname{Var}(X_i) \\
& = & \sum_{i=1}^n C_{ii}
\end{array}$$
where $\mathbf{C}$ is the covariance matrix. Finally, this can be simplified to
$$\operatorname{Var_s}(\mathbf X) = \operatorname{tr}(\mathbf{C})$$
which is the trace of the covariance matrix.
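The derivation above can be checked numerically with a short Python sketch (illustrative, not from the answer): the trace of the sample covariance matrix equals the sum of the per-coordinate variances.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))         # 1000 observations of a 4-vector

C = np.cov(X, rowvar=False)            # sample covariance matrix
total_var = np.trace(C)                # tr(C)
# Sum of per-coordinate variances (ddof=1 matches np.cov's default).
sum_var = sum(np.var(X[:, i], ddof=1) for i in range(4))

print(np.isclose(total_var, sum_var))  # True
```

This also shows why tr(C) ignores the off-diagonal entries entirely, which motivates the determinant-based alternatives discussed in the other answers.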
|
A measure of "variance" from the covariance matrix?
|
The variance of a scalar variable is defined as the squared deviation of the variable from its mean:
$$\operatorname{Var}(X) = \operatorname E\left[\left(X - \operatorname E\left[X\right]\right)^2\rig
|
A measure of "variance" from the covariance matrix?
The variance of a scalar variable is defined as the squared deviation of the variable from its mean:
$$\operatorname{Var}(X) = \operatorname E\left[\left(X - \operatorname E\left[X\right]\right)^2\right]$$
One generalization to a scalar-valued variance for vector-valued random variables can be obtained by interpreting the deviation as the Euclidean distance:
$$\operatorname{Var_s}(\mathbf X) = \operatorname E\left[\left\|\mathbf X - \operatorname E\left[\mathbf X\right]\right\|_2^2\right]$$
This expression can be rewritten as
$$\begin{array}{rcl}
\operatorname{Var_s}(\mathbf X) & = & \operatorname E[(\mathbf X - \operatorname E[\mathbf X ])\cdot(\mathbf X - \operatorname E[\mathbf X ])] \\
& = & \operatorname E\left[\sum_{i=1}^n(X_i - \operatorname E[X_i])^2\right] \\
& = & \sum_{i=1}^n \operatorname E\left[(X_i - \operatorname E[X_i])^2\right] \\
& = & \sum_{i=1}^n \operatorname{Var}(X_i) \\
& = & \sum_{i=1}^n C_{ii}
\end{array}$$
where $\mathbf{C}$ is the covariance matrix. Finally, this can be simplified to
$$\operatorname{Var_s}(\mathbf X) = \operatorname{tr}(\mathbf{C})$$
which is the trace of the covariance matrix.
|
A measure of "variance" from the covariance matrix?
The variance of a scalar variable is defined as the squared deviation of the variable from its mean:
$$\operatorname{Var}(X) = \operatorname E\left[\left(X - \operatorname E\left[X\right]\right)^2\rig
|
7,620
|
A measure of "variance" from the covariance matrix?
|
Although the trace of the covariance matrix, tr(C), gives you a measure of the total variance, it does not take into account the correlation between variables.
If you need a measure of overall variance which is large when your variables are independent of each other and is very small when the variables are highly correlated, you can use the determinant of the covariance matrix, |C|.
Please see this article for a better clarification.
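A small Python sketch (the values are illustrative) of how the determinant shrinks with correlation while the trace stays fixed:

```python
import numpy as np

# Two covariance matrices with identical diagonals (so identical trace),
# one for independent variables and one for highly correlated ones.
independent = np.array([[1.0, 0.0],
                        [0.0, 1.0]])
correlated = np.array([[1.0, 0.9],
                       [0.9, 1.0]])

print(np.trace(independent), np.trace(correlated))        # both 2.0
print(np.linalg.det(independent))                         # 1.0
print(np.linalg.det(correlated))                          # 0.19
```

Both matrices have total variance tr(C) = 2, but the determinant (sometimes called the generalized variance) drops from 1 to 0.19, reflecting how the correlated data are concentrated near a line.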
|
A measure of "variance" from the covariance matrix?
|
Although the trace of the covariance matrix, tr(C), gives you a measure of the total variance, it does not take into account the correlation between variables.
If you need a measure of overall varian
|
A measure of "variance" from the covariance matrix?
Although the trace of the covariance matrix, tr(C), gives you a measure of the total variance, it does not take into account the correlation between variables.
If you need a measure of overall variance which is large when your variables are independent of each other and is very small when the variables are highly correlated, you can use the determinant of the covariance matrix, |C|.
Please see this article for a better clarification.
|
A measure of "variance" from the covariance matrix?
Although the trace of the covariance matrix, tr(C), gives you a measure of the total variance, it does not take into account the correlation between variables.
If you need a measure of overall varian
|
7,621
|
A measure of "variance" from the covariance matrix?
|
If you need just one number, then I suggest taking the largest eigenvalue of the covariance matrix. This is also the explained variance of the first principal component in PCA. It tells you how much of the total variance can be explained if you reduce the dimensionality of your vector to one. See this answer on math SE.
The idea is that you collapse your vector into a single dimension by combining all variables linearly into one series, ending up with a 1-d problem.
The explained variance can be reported in terms of a percentage of the total variance. In this case you'll immediately see if there is a lot of linear correlation between series. In some applications this number can be 80% and higher, e.g. interest rate curve modeling in finance. Meaning that you can construct a linear combination of variables that explains 80% of variance of all variables.
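A minimal Python sketch (the covariance matrix is a made-up example) of computing the largest eigenvalue and its share of the total variance:

```python
import numpy as np

C = np.array([[4.0, 1.8],
              [1.8, 1.0]])             # illustrative covariance matrix

eigvals = np.linalg.eigvalsh(C)        # eigenvalues in ascending order
largest = eigvals[-1]                  # variance along the first PC
explained = largest / np.trace(C)      # share of the total variance

print(round(explained, 3))             # close to 1 here: one linear
                                       # combination captures most of
                                       # the variance
```

`eigvalsh` is used because a covariance matrix is symmetric; the ratio of the largest eigenvalue to the trace is exactly the "explained variance" percentage reported by PCA for the first component.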
|
A measure of "variance" from the covariance matrix?
|
If you need just one number, then I suggest taking the largest eigenvalue of the covariance matrix. This is also the explained variance of the first principal component in PCA. It tells you how much of
|
A measure of "variance" from the covariance matrix?
If you need just one number, then I suggest taking the largest eigenvalue of the covariance matrix. This is also the explained variance of the first principal component in PCA. It tells you how much of the total variance can be explained if you reduce the dimensionality of your vector to one. See this answer on math SE.
The idea is that you collapse your vector into a single dimension by combining all variables linearly into one series, ending up with a 1-d problem.
The explained variance can be reported in terms of a percentage of the total variance. In this case you'll immediately see if there is a lot of linear correlation between series. In some applications this number can be 80% and higher, e.g. interest rate curve modeling in finance. Meaning that you can construct a linear combination of variables that explains 80% of variance of all variables.
|
A measure of "variance" from the covariance matrix?
If you need just one number, then I suggest taking the largest eigenvalue of the covariance matrix. This is also an explained variance of the first principal component in PCA. It tells you how much of
|
7,622
|
A measure of "variance" from the covariance matrix?
|
The entropy concept from information theory seems to suit the purpose, as a measure of unpredictability of information content, which is given by
$$H(X)=-\int p(x)\log p(x) dx.$$
If we assume a multivariate Gaussian distribution for $p(x)$ with mean $\mu$ and covariance $\Sigma$ derived from the data, according to wikipedia, the differential entropy is then,
$$H(X)=\frac{1}{2}\log((2\pi e)^n\det(\Sigma))$$
where $n$ is the number of dimensions. Since the multivariate Gaussian is the distribution that maximizes the differential entropy for a given covariance, this formula gives an entropy upper bound for an unknown distribution with that covariance.
And it depends on the determinant of the covariance matrix, as @user603 suggests.
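A minimal sketch of the formula in Python with NumPy (the covariance values are illustrative; the helper name is made up):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (in nats) of a multivariate Gaussian with covariance `cov`."""
    cov = np.atleast_2d(cov)
    n = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)  # stable log-determinant
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)

# 1-d sanity check against the textbook formula 0.5 * log(2*pi*e*sigma^2)
h1 = gaussian_entropy(np.array([[4.0]]))
# With independent components the entropy is additive
h2 = gaussian_entropy(np.diag([4.0, 9.0]))
```

The 2-d value equals the sum of the two 1-d entropies, as expected for a diagonal covariance.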
|
7,623
|
What is theta in a negative binomial regression fitted with R?
|
Yes, theta is the shape parameter of the negative binomial distribution, and no, you cannot really interpret it as a measure of skewness. More precisely:
skewness will depend on the value of theta, but also on the mean
there is no value of theta that will guarantee you lack of skew
If I did not mess it up, in the mu/theta parametrization used in negative binomial regression, the skewness is
$$
{\rm Skew}(NB) = \frac{\theta+2\mu}{\sqrt{\theta\mu(\theta+\mu)}}
= \frac{1 + 2\frac{\mu}{\theta}}{\sqrt{\mu(1+\frac{\mu}{\theta})}}
$$
In this context, $\theta$ is usually interpreted as a measure of overdispersion with respect to the Poisson distribution. The variance of the negative binomial is $\mu + \mu^2/\theta$, so $\theta$ really controls the excess variability compared to Poisson (which would be $\mu$), and not the skew.
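To see concretely that $\theta$ alone does not determine the skew, here is a short Python sketch of the two equivalent skewness forms above and the variance formula (the numeric values are illustrative):

```python
import math

def nb_skew(mu, theta):
    """Skewness of the NB in the mu/theta parameterization (first form above)."""
    return (theta + 2 * mu) / math.sqrt(theta * mu * (theta + mu))

def nb_skew_alt(mu, theta):
    """Equivalent second form above."""
    return (1 + 2 * mu / theta) / math.sqrt(mu * (1 + mu / theta))

def nb_var(mu, theta):
    """Variance mu + mu^2/theta: overdispersion relative to Poisson's mu."""
    return mu + mu ** 2 / theta

# Same theta, different means: the skewness changes, so theta alone
# does not pin down the skew.
s_small = nb_skew(1.0, 5.0)
s_large = nb_skew(20.0, 5.0)
```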
|
7,624
|
What is theta in a negative binomial regression fitted with R?
|
I was referred to this site by one of my students in my Modeling Count Data course. There seems to be a lot of misinformation about the negative binomial model, and especially with respect to the dispersion statistic and dispersion parameter.
The dispersion statistic, which gives an indication of count model extra-dispersion, is the Pearson statistic divided by the residual DOF. $\mu$ is the location or shape parameter. For count models, the scale parameter is set at 1. The R glm and glm.nb $\theta$ is a dispersion parameter, or ancillary parameter. I called it the heterogeneity parameter in the first edition of my book, Negative Binomial Regression (2007, Cambridge University Press), but call it the dispersion parameter in my 2011 second edition. I give a complete rationale for the various terms in the NB model in my forthcoming book, Modeling Count Data (Cambridge) which is going to press today. It should be for sale (paperback) by July 15.
glm.nb and glm are unusual in how they define the dispersion parameter. The variance is given as $\mu+\frac{\mu^2}{\theta}$ rather than $\mu+\alpha\mu^2$, which is the direct parameterization (with $\alpha = 1/\theta$). The direct form is the way NB is modeled in SAS, Stata, Limdep, SPSS, Matlab, Genstat, Xplore, and almost all other software. When you compare glm.nb results with other software results, remember this. The author of glm (which came from S-plus) and glm.nb apparently took the indirect relationship from McCullagh & Nelder, but Nelder (who was the co-founder of GLM in 1972) wrote his kk system add-on to Genstat in 1993, in which he argued that the direct relationship is preferred. He and his wife used to visit me and my family about every other year in Arizona starting in early 1993 until the year before he died. We discussed this pretty thoroughly, since I had put a direct relationship into the glm program I wrote in late 1992 for Stata and Xplore software, and for a SAS macro in 1994.
The nbinomial function in the msme package on CRAN allows the user to employ the direct (default) or indirect (as an option, to duplicate glm.nb) parameterization, and provides the Pearson statistic and residuals in the output. The output also displays the dispersion statistic, and allows the user to parameterize $\alpha$ (or $\theta$), giving parameter estimates for the dispersion. This lets you assess which predictors add to the extra-dispersion of the model. This type of model is generally referred to as heterogeneous negative binomial. I'll put the nbinomial function into the COUNT package before the new book comes out, plus a number of new functions and scripts for graphics.
|
7,625
|
What is theta in a negative binomial regression fitted with R?
|
See the glm reference for the negative binomial:
Wikipedia's negative binomial 'r' is glm's 'theta', which implies that glm's 'theta' is the shape parameter. In simple terms, glm's 'theta' is the number of failures.
|
7,626
|
Negative binomial distribution vs binomial distribution
|
The difference is what we are interested in. Both distributions are built from independent Bernoulli trials with fixed probability of success, p.
With the Binomial distribution, the random variable X is the number of successes observed in n trials. Because there are a fixed number of trials, the possible values of X are 0, 1, ..., n.
With the Negative Binomial distribution, the random variable Y is the number of trials until the r th success is observed. In this case, we keep increasing the number of trials until we reach r successes. The possible values of Y are r, r+1, r+2, ... with no upper bound. The Negative Binomial can also be defined in terms of the number of failures until the r th success, instead of the number of trials until the r th success. Wikipedia defines the Negative Binomial distribution in this manner.
So to summarize:
Binomial:
Fixed number of trials (n)
Fixed probability of success (p)
Random variable is X = Number of successes.
Possible values are 0 ≤ X ≤ n
Negative Binomial:
Fixed number of successes (r)
Fixed probability of success (p)
Random variable is Y = Number of trials until the r th success.
Possible values are r ≤ Y
Thanks to Ben Bolker for reminding me to mention the support of the two distributions. He answered a related question here.
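A simulation sketch of the two random variables in Python ($p$, $n$, and $r$ are arbitrary illustrative values; the function names are made up):

```python
import random

random.seed(1)
p, n_trials, r = 0.3, 10, 3

def binomial_draw():
    """X: number of successes in a fixed number of Bernoulli trials."""
    return sum(random.random() < p for _ in range(n_trials))

def neg_binomial_draw():
    """Y: number of trials needed to reach the r-th success (so Y >= r)."""
    trials = successes = 0
    while successes < r:
        trials += 1
        if random.random() < p:
            successes += 1
    return trials

xs = [binomial_draw() for _ in range(20000)]
ys = [neg_binomial_draw() for _ in range(20000)]
```

The supports come out as described above (0 ≤ X ≤ n for the binomial, Y ≥ r for the negative binomial), and the sample means land near $np$ and $r/p$ respectively.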
|
7,627
|
Negative binomial distribution vs binomial distribution
|
Negative binomial distribution, despite its seemingly obvious relation to the binomial, is actually better compared against the Poisson distribution. All three are discrete, btw.
In practical applications, NB is an alternative to Poisson when you observe dispersion (variance) higher than Poisson allows. Poisson is the first choice to consider when you deal with count data, e.g. the annual number of car accident fatalities in a small town. Both the mean and the variance of a Poisson distribution are set by a single parameter, the rate of occurrence, usually denoted $\lambda$. Once you have estimated $\lambda$, your mean and variance follow; in fact, the mean must equal the variance.
If your data suggest that the variance is greater than the mean (overdispersion), this rules out Poisson, and the Negative Binomial is the next distribution to look at. It has more than one parameter, so its variance can be greater than its mean.
The relation of NB to the binomial comes from the underlying process, as described in @Jelsema's answer. The processes are related, so the distributions are too, but as I explained here, the link to the Poisson distribution is closer in practical applications.
UPDATE:
Another aspect is the parameterization. The Binomial distribution has two parameters: p and n. Its support runs from 0 to n, so it is not only discrete but also defined on a finite set of values.
In contrast, both Poisson and NB are defined on the infinite set of non-negative integers. Poisson has one parameter $\lambda$, while NB has two: p and r. Note that neither has a parameter $n$. This is one more way to see how NB and Poisson are connected.
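A small numeric illustration of the variance relations (Python; the $(r, p)$ values are arbitrary, and $r$ plays the role of $\theta$ in the $\mu + \mu^2/\theta$ variance form):

```python
# Poisson: mean == variance, both equal to lambda.
lam = 4.0
poisson_mean, poisson_var = lam, lam

# NB in the (r, p) parameterization counting failures:
#   mean = r(1-p)/p,  var = r(1-p)/p^2  > mean whenever p < 1.
r, p = 5, 0.4
nb_mean = r * (1 - p) / p
nb_var = r * (1 - p) / p ** 2
```

The NB variance exceeds its mean, and it equals `nb_mean + nb_mean**2 / r`, matching the $\mu + \mu^2/\theta$ overdispersion form with $\theta = r$.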
|
7,628
|
Negative binomial distribution vs binomial distribution
|
They are both discrete and represent counts when you are sampling.
The Binomial distribution represents the number of successes in an experiment whose number of draws is fixed in advance. For example, suppose that three items are selected at random from a manufacturing process and each item is inspected and classified defective, $D$, or nondefective, $N$. The sample space in this case is $S = ( DDD, DDN, DND, DNN, NDD, NDN, NND, NNN)$.
The Negative Binomial, in contrast, represents the number of failures until you draw a certain number of successes. Consider the same example and suppose the experiment is to sample items randomly until one defective item is observed. The sample space for this case is $S = ( D, ND, NND, NNND, ... )$.
So the Binomial counts successes in a fixed number of trials, while the Negative Binomial counts failures until a fixed number of successes. In both cases we are drawing with replacement, which means each trial independently has the same fixed probability $p$ of success.
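The probabilities of the outcomes in the second sample space follow a geometric pattern, $(1-p)^k p$, which a short Python sketch makes explicit (the defect probability is illustrative):

```python
# Outcomes D, ND, NND, ... when sampling until the first defective item,
# with defect probability p: P(k nondefectives then a defective) = (1-p)^k * p.
p = 0.2
probs = [(1 - p) ** k * p for k in range(200)]
total = sum(probs)  # geometric series; approaches 1 as k grows
```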
|
7,629
|
Why is Lasso penalty equivalent to the double exponential (Laplace) prior?
|
For simplicity let's just consider a single observation of a variable $Y$ such that
$$Y|\mu, \sigma^2 \sim N(\mu, \sigma^2),$$
$\mu \sim \mbox{Laplace}(\lambda)$
and the improper prior
$f(\sigma) \propto \mathbb{1}_{\sigma>0}$.
Then the joint density of $Y, \mu, \sigma^2$ is proportional to
$$
f(Y, \mu, \sigma^2 | \lambda) \propto \frac{1}{\sigma}\exp \left(-\frac{(y-\mu)^2}{2\sigma^2} \right) \times \frac{\lambda}{2} e^{-\lambda \vert \mu \vert}.
$$
Taking a log and discarding terms that do not involve $\mu$,
$$
\log f(Y, \mu, \sigma^2) = -\frac{1}{2\sigma^2} \Vert y-\mu\Vert_2^2 -\lambda \vert \mu \vert. \quad (1)$$
Thus the maximum of (1) will be a MAP estimate and is indeed the Lasso problem after we reparametrize $\tilde \lambda = 2\lambda \sigma^2$.
The extension to regression is clear--replace $\mu$ with $X\beta$ in the Normal likelihood, and set the prior on $\beta$ to be a sequence of independent laplace$(\lambda)$ distributions.
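A numerical sketch of the single-observation case (Python; the values are illustrative, and the closed form is the standard soft-thresholding solution of the penalized objective stated in the code):

```python
import math

def lasso_map_1d(y, sigma2, lam):
    """Closed-form MAP for one Normal observation with a Laplace prior:
    argmin over mu of (y - mu)^2 / (2*sigma2) + lam*|mu|  (soft thresholding)."""
    return math.copysign(max(abs(y) - lam * sigma2, 0.0), y)

def objective(mu, y, sigma2, lam):
    return (y - mu) ** 2 / (2 * sigma2) + lam * abs(mu)

# Brute-force check on a grid: the grid minimizer matches the closed form.
y, sigma2, lam = 2.0, 1.5, 0.8
grid = [i / 1000 for i in range(-4000, 4001)]
best = min(grid, key=lambda m: objective(m, y, sigma2, lam))
```

Note how a small observation is shrunk exactly to zero (the lasso's sparsity), e.g. `lasso_map_1d(0.5, 1.5, 0.8)` returns 0.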
|
7,630
|
Why is Lasso penalty equivalent to the double exponential (Laplace) prior?
|
This is obvious by inspection of the quantity the LASSO is optimizing.
Take the prior for $\beta_i$ to be independent Laplace with mean zero and some scale $\tau$.
So $p(\beta|\tau) \propto e^{-\frac{1}{2\tau} \sum_i|\beta_i|}$.
The model for the data is the usual regression assumption $y \stackrel{\text{iid}}{\sim}N(X\beta,\sigma^2)$.
$f(\mathbf{y}|\mathbf{X},\boldsymbol\beta,\sigma^{2}) \propto (\sigma^{2})^{-n/2} \exp\left(-\frac{1}{2{\sigma}^{2}}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)^{\rm T}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)\right)$
Now minus twice the log of the posterior is of the form
$k(\sigma^2,\tau,n,p)+$ $\frac{1}{{\sigma}^{2}} (\mathbf{y}- \mathbf{X} \boldsymbol\beta)^{\rm T}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)+ \frac{1}{\tau} \sum_i|\beta_i|$
Let $\lambda=\sigma^2/\tau$ and we get $-2\log$-posterior of
$k(\sigma^2,\lambda,n,p)+$ $\frac{1}{{\sigma}^{2}}\left[ (\mathbf{y}- \mathbf{X} \boldsymbol\beta)^{\rm T}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)+ \lambda \sum_i|\beta_i|\right]$
The MAP estimator for $\beta$ minimizes the above, which minimizes
$S=(\mathbf{y}- \mathbf{X} \boldsymbol\beta)^{\rm T}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)+ \lambda \sum_i|\beta_i|$
So the MAP estimator for $\beta$ is LASSO.
(Here I treated $\sigma^2$ as effectively fixed but you can do other things with it and still get LASSO coming out.)
Edit: That's what I get for composing an answer off line; I didn't see a good answer was already posted by Andrew. Mine really doesn't do anything his doesn't do already. I'll leave mine for now because it gives a couple more details of the development in terms of $\beta$.
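For a toy check of this equivalence, one can minimize both objectives on a grid (a Python sketch with made-up one-predictor data; with $\lambda=\sigma^2/\tau$ the $-2\log$-posterior and the lasso objective are proportional, so their minimizers coincide):

```python
# Toy 1-predictor regression with no intercept; values are illustrative.
x = [1.0, 2.0, 3.0, 4.0]
y = [1.2, 1.9, 3.2, 3.8]
sigma2, tau = 2.0, 4.0
lam = sigma2 / tau           # lambda = sigma^2 / tau as in the derivation

def rss(beta):
    return sum((yi - beta * xi) ** 2 for xi, yi in zip(x, y))

def neg2_log_posterior(beta):
    # (1/sigma^2) * RSS + (1/tau) * |beta|, dropping constants
    return rss(beta) / sigma2 + abs(beta) / tau

def lasso_objective(beta):
    return rss(beta) + lam * abs(beta)

grid = [i / 1000 for i in range(-2000, 2001)]
beta_map = min(grid, key=neg2_log_posterior)
beta_lasso = min(grid, key=lasso_objective)
```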
|
7,631
|
Understanding bias-variance tradeoff derivation
|
You are not wrong, but you made an error in one step since $E[(f(x)-f_k(x))^2] \ne Var(f_k(x))$. $E[(f(x)-f_k(x))^2]$ is $\text{MSE}(f_k(x)) = Var(f_k(x)) + \text{Bias}^2(f_k(x))$.
\begin{align*}
E[(Y-f_k(x))^2]& = E[(f(x)+\epsilon-f_k(x))^2] \\
&= E[(f(x)-f_k(x))^2]+2E[(f(x)-f_k(x))\epsilon]+E[\epsilon^2]\\
&= E\left[\left(f(x) - E(f_k(x)) + E(f_k(x))-f_k(x) \right)^2 \right] + 2E[(f(x)-f_k(x))\epsilon]+\sigma^2 \\
& = Var(f_k(x)) + \text{Bias}^2(f_k(x)) + \sigma^2.
\end{align*}
Note: the term $2E[(f(x)-f_k(x))\epsilon]$ vanishes because $\epsilon$ has mean zero and is independent of $f_k(x)$, and the cross term in the squared expansion vanishes because $E\left[(f_k(x)-E[f_k(x)])(f(x)-E[f_k(x)])\right] = (f(x)-E[f_k(x)])\,E\left[f_k(x)-E[f_k(x)]\right] = 0.$
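A Monte Carlo sketch of the decomposition (Python; the shrunken sample mean is just an illustrative biased estimator, and all values are made up):

```python
import random, statistics

random.seed(42)
mu, sigma = 2.0, 1.0      # true f(x) and noise sd
c, n = 0.5, 5             # shrinkage factor and sample size per fit
reps = 100000

preds, errs = [], []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    f_k = c * statistics.fmean(sample)   # deliberately biased estimator of f(x)
    y_new = random.gauss(mu, sigma)      # fresh observation Y = f(x) + eps
    preds.append(f_k)
    errs.append((y_new - f_k) ** 2)

mse = statistics.fmean(errs)             # E[(Y - f_k)^2]
var_fk = statistics.pvariance(preds)     # Var(f_k)
bias2 = (statistics.fmean(preds) - mu) ** 2
noise = sigma ** 2                       # irreducible sigma^2
```

Up to Monte Carlo error, `mse` matches `var_fk + bias2 + noise`, as the identity above predicts.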
|
7,632
|
Understanding bias-variance tradeoff derivation
|
A few more steps of the Bias - Variance decomposition
Indeed, the full derivation is rarely given in textbooks as it involves a lot of uninspiring algebra. Here is a more complete derivation using notation from the book "Elements of Statistical Learning" on page 223
If we assume that $Y = f(X) + \epsilon$ and $E[\epsilon] = 0$ and $Var(\epsilon) = \sigma^2_\epsilon$ then we can derive the expression for the expected prediction error of a regression fit $\hat f(X)$ at an input $X = x_0$ using squared error loss
$$Err(x_0) = E[ (Y - \hat f(x_0) )^2 | X = x_0]$$
For notational simplicity let $\hat f(x_0) = \hat f$, $f(x_0) = f$ and recall that $E[f] = f$ and $E[Y] = f$
\begin{aligned}
E[ (Y - \hat f)^2 ] &= E[(Y - f + f - \hat f )^2]
\\
& = E[(Y - f)^2] + E[(f - \hat f)^2] + 2 E[(f - \hat f)(Y - f)]
\\
& = E[(f + \epsilon - f)^2] + E[(f - \hat f)^2] + 2E[fY - f^2 - \hat f Y + \hat f f]
\\
& = E[\epsilon^2] + E[(f - \hat f)^2] + 2( f^2 - f^2 - f E[\hat f] + f E[\hat f] )
\\
& = \sigma^2_\epsilon + E[(f - \hat f)^2] + 0
\end{aligned}
For the term $E[(f - \hat f)^2]$ we can use a similar trick as above, adding and subtracting $E[\hat f]$ to get
\begin{aligned}
E[(f - \hat f)^2] & = E[(f + E[\hat f] - E[\hat f] - \hat f)^2]
\\
& = \left( f - E[\hat f] \right)^2 + E\left[ \left( \hat f - E[ \hat f] \right)^2 \right]
\\
& = Bias^2[\hat f] + Var[\hat f]
\end{aligned}
where the cross term drops out because $f - E[\hat f]$ is a constant and $E\left[ E[\hat f] - \hat f \right] = 0$.
Putting it together
$$E[ (Y - \hat f)^2 ] = \sigma^2_\epsilon + Bias^2[\hat f] + Var[\hat f] $$
Some comments on why $E[\hat f Y] = f E[\hat f]$
Taken from Alecos Papadopoulos here
Recall that $\hat f$ is the predictor we have constructed based on the $m$ data points $\{(x^{(1)},y^{(1)}),...,(x^{(m)},y^{(m)}) \}$ so we can write $\hat f = \hat f_m$ to remember that.
On the other hand $Y$ is the prediction we are making on a new data point $(x^{(m+1)},y^{(m+1)})$ by using the model constructed on the $m$ data points above. So the Mean Squared Error can be written as
$$ E\left[\left(\hat f_m(x^{(m+1)}) - y^{(m+1)}\right)^2\right]$$
Expanding the equation from the previous section
$$E[\hat f_m Y]=E[\hat f_m (f+ \epsilon)]=E[\hat f_m f+\hat f_m \epsilon]=E[\hat f_m f]+E[\hat f_m \epsilon]$$
The last part of the equation can be viewed as
$$ E[\hat f_m(x^{(m+1)}) \cdot \epsilon^{(m+1)}] = 0$$
Since we make the following assumptions about the point $x^{(m+1)}$:
It was not used when constructing $\hat f_m$
It is independent of all other observations $\{(x^{(1)},y^{(1)}),...,(x^{(m)},y^{(m)}) \}$
It is independent of $\epsilon^{(m+1)}$
Other sources with full derivations
https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff#Derivation
https://robjhyndman.com/files/2-biasvardecomp.pdf
http://www.inf.ed.ac.uk/teaching/courses/mlsc/Notes/Lecture4/BiasVariance.pdf
|
Understanding bias-variance tradeoff derivation
|
7,633
|
How to handle a "self defeating" prediction model?
|
There are two possibilities by which an out-of-stock (OOS) detection model might self-derail:
The relationship between inputs and OOS might change over time. For instance, promotions might lead to higher OOS (promotional sales are harder to predict than regular sales, in part because not only average sales increase, but also the variance of sales, and "harder-to-predict" translates often into OOS), but the system and its users might learn this and lay in additional stock for promotions. After a while, the original relationship between promotions and OOS does not hold any more.
This is often called "model shift" or similar. You can overcome it by adapting your model. The most common way is to weight inputs differently, giving lower weight to older observations.
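One common concrete form of "lower weight to older observations" is exponential decay. A minimal sketch (the helper name and the half-life parameterization are my own, not from the answer; most fitting routines can consume such weights via a sample-weight argument):

```python
import numpy as np

def recency_weights(n_obs, half_life):
    """Exponential-decay sample weights: the newest observation gets
    weight 1.0; an observation `half_life` periods older gets 0.5."""
    age = np.arange(n_obs)[::-1]      # age 0 = most recent observation
    return 0.5 ** (age / half_life)

w = recency_weights(5, half_life=2.0)   # ordered oldest -> newest
```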
Even if the relationship between a predictor and OOS does not change, the predictor's distribution might. For instance, multiple days with zero sales of a particular stock keeping unit (SKU) might signal an OOS - but if the model performs well, then OOS might be reduced across the board, and there might simply not be as many sequences of zero sales.
Changes in the distribution of a predictor should not be a problem. Your model will simply output a lower probability of OOS.
In the end, you probably don't need to worry overmuch. There will never be zero OOS. Feedback mechanisms like the ones above do occur, but they will not work until OOS are completely eradicated.
Some pending OOS may simply not be avertable. "I have one unit on the shelf and will probably face a demand for five over the coming week, but the next delivery is only due a week from today."
Some OOS will be very hard to predict, even if they are avertable, if they had been known in time. "If we had known we would drop the pallet off the forklift and destroy all the product, we would have ordered another one."
Retailers do understand that they need to aim for a high service level, but that 100% is not achievable. People do come in and buy up your entire stock on certain products. This is hard to forecast (see above) and sufficiently rare that you do not want to fill up your shelves on the off chance this might happen. Compare Pareto's law: a service level of 80% (or even 90%) is pretty easy to achieve, but 99.9% is much harder. Some OOS are consciously allowed.
Something similar to Moore's law holds: the better ML becomes, the more expectations will increase, and the harder people will make life for the model. While OOS detection (and forecasting) algorithms improve, retailers are busy making our life more difficult.
For instance through variant proliferation. It's easier to detect OOS on four flavors of yoghurt than on twenty different flavors. Why? Because people don't eat five times as much yoghurt. Instead, pretty much unchanged total demand is now distributed across five times as many SKUs, and each SKU's stock is one fifth as high as before. The Long Tail is expanding, and signals are getting weaker.
Or by allowing mobile checkout using your own device. This may well lower psychological barriers to shoplifting, so system inventories will be even worse than they already are, and of course, system inventories are probably the best predictor for OOS, so if they are off, the model will deteriorate.
I happen to have been working in forecasting retail sales for over twelve years now, so I do have a bit of an idea about developments like this.
I may be pessimistic, but I think very similar effects are at work for other ML use cases than OOS detection. Or maybe this is not pessimism: it means that problems will likely never be "solved", so there will still be work for us even decades from now.
|
7,634
|
How to handle a "self defeating" prediction model?
|
If you are using a model to support decisions about intervening in a system, then logically, the model should seek to predict the outcome conditioned on a given intervention. Then separately, you should optimize to choose the intervention with the best expected outcome. You are not trying to predict your own intervention.
In this case, the model could predict demand (the variable you don't directly control) and this, in combination with the stocking choice, would result in having an out-of-stock event or not. The model should continue to be "rewarded" for predicting demand correctly since this is its job. Out-of-stock events will depend on this variable along with your stocking choice.
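To make the "predict demand, then optimize the intervention" split concrete, here is a hypothetical newsvendor-style sketch: the demand model supplies a forecast (a Poisson mean, for illustration), and a separate step picks the stocking level with the lowest expected cost. The shortage and holding costs are made up for the example:

```python
import math

def expected_cost(stock, lam, c_short, c_hold, max_d=60):
    """Expected shortage + holding cost of a stocking level under
    Poisson(lam) demand, truncating the tail at max_d."""
    cost, p = 0.0, math.exp(-lam)          # p = P(D = 0)
    for d in range(max_d):
        cost += p * (c_short * max(d - stock, 0) + c_hold * max(stock - d, 0))
        p *= lam / (d + 1)                 # Poisson recurrence: P(D = d+1)
    return cost

forecast_mean = 10.0                       # the demand model's output
costs = {s: expected_cost(s, forecast_mean, c_short=4.0, c_hold=1.0)
         for s in range(31)}
best_stock = min(costs, key=costs.get)
```

Note the model is only rewarded for getting `forecast_mean` right; the stocking choice is a downstream optimization, so there is no feedback loop through the prediction target.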
|
7,635
|
How to handle a "self defeating" prediction model?
|
Presumably you can track when restock events happen. Then it's just a matter of arithmetic to work out when the stock would be depleted had the model not been used to restock inventory.
This assumes that any positive stock level is independent of the level of sales. A commenter says that this assumption doesn't hold in reality. I don't know either way -- I don't work on retail data sets. But as a simplification, my proposed approach permits one to make inferences using counterfactual reasoning; whether or not this simplification is too unrealistic to give meaningful insight is up to you.
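Under that simplification, the arithmetic is a cumulative sum over observed sales (a hypothetical helper, relying on the same independence assumption described above):

```python
import numpy as np

def counterfactual_depletion_day(initial_stock, daily_sales):
    """First day index on which cumulative sales would have exhausted the
    initial stock had no model-driven restock occurred; None if never."""
    cum = np.cumsum(daily_sales)
    hit = np.nonzero(cum >= initial_stock)[0]
    return int(hit[0]) if hit.size else None

day = counterfactual_depletion_day(10, [3, 3, 2, 4, 1])
```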
|
7,636
|
How to handle a "self defeating" prediction model?
|
Your scenario bears a lot of resemblance to the Lucas Critique in economics. In machine learning, this is called "dataset shift".
You can overcome it, as @Sycorax says, by explicitly modeling it.
|
7,637
|
How to handle a "self defeating" prediction model?
|
One thing to remember is that ML is an instrumental goal. Ultimately, we don't want to predict out of stock events; we want to prevent out of stock events. Predicting out of stock events is simply a means to that end. So as far as Type II errors are concerned, this isn't an issue. Either we continue to have OOSE, in which case we have data to train our model, or we don't, in which case the problem that the model was created to address has been solved. What can be a problem is Type I errors. It's easy to fall into a Bear Patrol fallacy, where you have a system X that is built to prevent Y, you don't see Y, so you conclude that X prevents Y, and any attempts to shut X down are dismissed on the basis "But it's doing such a good job preventing Y!" Organizations can be locked into expensive programs because no one wants to risk that Y will come back, and it's difficult to find out whether X is really necessary without allowing that possibility.
It then becomes a trade-off of how much you're willing to occasionally engage in (according to your model) suboptimal behavior to get a control group. That's part of any active exploration: if you have a drug that you think is effective, you have to have a control group that isn't getting the drug to confirm that it is in fact effective.
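That trade-off can be operationalized as a small randomized holdout, e.g. an epsilon-style rule (a sketch with made-up action names, not a prescription from the answer):

```python
import random

def choose_action(model_action, baseline_action, epsilon=0.05, rng=None):
    """With probability epsilon, fall back to the pre-model baseline so a
    small control group keeps measuring whether the model is still needed."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return baseline_action, "control"
    return model_action, "treatment"

rng = random.Random(0)
arms = [choose_action("restock", "do_nothing", 0.05, rng)[1]
        for _ in range(10_000)]
control_share = arms.count("control") / len(arms)
```

Logging which arm each decision came from is what later lets you compare outcomes with and without the model.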
|
7,638
|
Is there a boxplot variant for Poisson distributed data?
|
Boxplots weren't designed to assure low probability of exceeding the ends of the whiskers in all cases: they are intended, and usually used, as simple graphical characterizations of the bulk of a dataset. As such, they are fine even when the data have very skewed distributions (although they might not reveal quite as much information as they do about approximately unskewed distributions).
When boxplots become skewed, as they will with a Poisson distribution, the next step is to re-express the underlying variable (with a monotonic, increasing transformation) and redraw the boxplots. Because the variance of a Poisson distribution is proportional to its mean, a good transformation to use is the square root.
Each boxplot depicts 50 iid draws from a Poisson distribution with given intensity (from 1 through 10, with two trials for each intensity). Notice that the skewness tends to be low.
The same data on a square root scale tend to have boxplots that are slightly more symmetric and (except for the lowest intensity) have approximately equal IQRs.
In sum, don't change the boxplot algorithm: re-express the data instead.
Incidentally, the relevant chances to be computing are these: what is the chance that an independent normal variate $X$ will exceed the upper(lower) fence $U$($L$) as estimated from $n$ independent draws from the same distribution? This accounts for the fact that the fences in a boxplot are not computed from the underlying distribution but are estimated from the data. In most cases, the chances are much greater than 1%! For instance, here (based on 10,000 Monte-Carlo trials) is a histogram of the log (base 10) chances for the case $n=9$:
(Because the normal distribution is symmetric, this histogram applies to both fences.) The logarithm of 1%/2 is about -2.3. Clearly, most of the time the probability is greater than this. About 16% of the time it exceeds 10%!
It turns out (I won't clutter this reply with the details) that the distributions of these chances are comparable to the normal case (for small $n$) even for Poisson distributions of intensity as low as 1, which is pretty skewed. The main difference is that it's usually less likely to find a low outlier and a little more likely to find a high outlier.
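The variance-stabilizing effect of the square root is easy to check numerically: the raw IQR of Poisson samples grows roughly like $1.35\sqrt{\lambda}$, while after the transform it is roughly constant. A quick sketch (in Python, with arbitrarily chosen intensities):

```python
import numpy as np

rng = np.random.default_rng(1)

def iqr(x):
    """Interquartile range of a sample."""
    q1, q3 = np.percentile(x, [25, 75])
    return q3 - q1

intensities = [4, 16, 64]
samples = [rng.poisson(lam, 100_000) for lam in intensities]
raw_iqr = [iqr(s) for s in samples]            # grows with lambda
root_iqr = [iqr(np.sqrt(s)) for s in samples]  # roughly constant
```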
|
7,639
|
Is there a boxplot variant for Poisson distributed data?
|
There is a generalization of standard box-plots that I know of in which the lengths of the whiskers are adjusted to account for skewed data. The details are better explained in a very clear & concise white paper (Vandervieren, E., Hubert, M. (2004) "An adjusted boxplot for skewed distributions", see here).
There is an $\verb+R+$ implementation of this ($\verb+robustbase::adjbox()+$) as well as a matlab one (in a library called $\verb+libra+$).
I personally find it a better alternative to data transformation (though it is also based on an ad-hoc rule, see white paper).
Incidentally, I find I have something to add to whuber's example here. To the extent that we're discussing the whiskers' behaviour, we really should also consider what happens with contaminated data:
library(robustbase)
set.seed(1)                        # for reproducibility
A0 <- rnorm(100)                   # clean core (log-normal after exp)
A1 <- runif(10, -4.1, -4)          # 10 left outliers on the log scale
A2 <- runif(10, 4, 4.1)            # 10 right outliers on the log scale
B1 <- exp(c(A0, A1, A2))
boxplot(sqrt(B1), col="red", main="un-adjusted boxplot of square root of data")
adjbox( B1, col="red", main="adjusted boxplot of data")
In this contamination model, B1 is essentially log-normal save for 20 outlying observations out of 120 (about 17 percent), half left and half right outliers (the breakdown point of adjbox is the same as that of regular boxplots, i.e. it assumes that at most 25 percent of the data can be bad).
The graphs depict the classical boxplots of the transformed data (using the square root transformation)
and the adjusted boxplot of the non-transformed data.
Compared to adjusted boxplots, the former option masks the real outliers and labels good data as outliers. In general, it will contrive to hide any evidence of asymmetry in the data by classifying offending points as outliers.
In this example, the approach of using the standard boxplot on the square root of the data finds 13 outliers (all on the right), whereas the adjusted boxplot finds 10 right and 14 left outliers.
EDIT: adjusted box plots in a nutshell.
In 'classical' boxplots the whiskers are placed at:
$Q_1$-1.5*IQR and $Q_3$+1.5*IQR
where IQR is the interquartile range, $Q_1$ is the 25th percentile and $Q_3$ is the 75th percentile of the data. The rule of thumb is to regard everything outside the fence as dubious data (the fence is the interval between the two whiskers).
This rule of thumb is ad-hoc: the justification is that if the uncontaminated part of the data is approximately Gaussian, then less than 1% of the good data would be classified as bad using this rule.
A weakness of this fence-rule, as pointed out by the OP, is that the lengths of the two whiskers are identical, meaning the fence-rule only makes sense if the uncontaminated part of the data has a symmetric distribution.
A popular approach is to preserve the fence-rule and to adapt the data. The idea is to transform the data using some skew-correcting monotone transformation (square root or log or, more generally, Box-Cox transforms). This is a somewhat messy approach: it relies on circular logic (the transformation should be chosen so as to correct the skewness of the uncontaminated part of the data, which is at this stage unobservable) and tends to make the data harder to interpret visually. At any rate, this remains a strange procedure whereby one changes the data to preserve what is after all an ad-hoc rule.
An alternative is to leave the data untouched and change the whisker rule. The adjusted boxplot allows the length of each whisker to vary according to an index measuring the skewness of the uncontaminated part of the data:
$Q_1$-$\exp(\alpha M)$1.5*IQR and $Q_3$+$\exp(\beta M)$1.5*IQR
Where $M$ is an index of skewness of the uncontaminated part of the data (i.e., just as the median is a measure of location for the uncontaminated part of the data or the MAD a measure of spread for the uncontaminated part of the data) and $\alpha$ and $\beta$ are numbers chosen such that for uncontaminated skewed distributions the probability of lying outside the fence is relatively small across a large collection of skewed distributions (this is the ad-hoc part of the fence rule).
For cases when the good part of the data is symmetric, $M\approx 0$ and we're back to the classical whiskers.
The authors suggest using the med-couple as an estimator of $M$ (see reference inside the white paper) because of its high efficiency (though in principle any robust skew index could be used). With this choice of $M$, they then calculated the optimal $\alpha$ and $\beta$ empirically (using a large number of skewed distributions) as:
$Q_1$-$\exp(-4M)$1.5*IQR and $Q_3$+$\exp(3M)$1.5*IQR, if $M\geq 0$
$Q_1$-$\exp(-3M)$1.5*IQR and $Q_3$+$\exp(4M)$1.5*IQR, if $M<0$
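For readers without R, the rule can be sketched in Python (a naive $O(n^2)$ medcouple that ignores the tie-at-the-median special case; $\verb+robustbase::adjbox()+$ uses a faster and more careful implementation):

```python
import numpy as np

def medcouple(x):
    """Naive O(n^2) medcouple: a robust skewness index in [-1, 1]."""
    x = np.sort(np.asarray(x, dtype=float))
    med = np.median(x)
    lo, hi = x[x <= med], x[x >= med]
    # Kernel h(xi, xj) for xi <= med <= xj, skipping equal pairs
    h = [((xj - med) - (med - xi)) / (xj - xi)
         for xi in lo for xj in hi if xj != xi]
    return float(np.median(h))

def adjusted_fences(x):
    """Whisker positions of the adjusted boxplot."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    m = medcouple(x)
    if m >= 0:
        return q1 - 1.5 * np.exp(-4 * m) * iqr, q3 + 1.5 * np.exp(3 * m) * iqr
    return q1 - 1.5 * np.exp(-3 * m) * iqr, q3 + 1.5 * np.exp(4 * m) * iqr
```

On symmetric data $M \approx 0$ and the fences reduce to the classical 1.5*IQR rule; on right-skewed data the upper fence moves out and the lower fence moves in.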
|
Is there a boxplot variant for Poisson distributed data?
|
There is a generalization of standard box-plots that I know of in which the lengths of the whiskers are adjusted to account for skewed data. The details are better explained in a very clear & concise
|
Is there a boxplot variant for Poisson distributed data?
There is a generalization of standard box-plots that I know of in which the lengths of the whiskers are adjusted to account for skewed data. The details are better explained in a very clear & concise white paper (Vandervieren, E., Hubert, M. (2004) "An adjusted boxplot for skewed distributions", see here).
There is an $\verb+R+$ implementation of this ($\verb+robustbase::adjbox()+$) as well as a matlab one (in a library called $\verb+libra+$).
I personally find it a better alternative to data transformation (though it is also based on an ad-hoc rule, see white paper).
Incidentally, I find I have something to add to whuber's example here. To the extend that we're discussing the whiskers' behaviour, we really should also consider what happens when considering contaminated data:
library(robustbase)
A0 <- rnorm(100)
A1 <- runif(20, -4.1, -4)
A2 <- runif(20, 4, 4.1)
B1 <- exp(c(A0, A1[1:10], A2[1:10]))
boxplot(sqrt(B1), col="red", main="un-adjusted boxplot of square root of data")
adjbox( B1, col="red", main="adjusted boxplot of data")
In this contamination model, B1 has essentially a log-normal distribution save for 20 percent of the data that are half left, half right outliers (the break down point of adjbox is the same as that of regular boxplots, i.e. it assumes that at most 25 percent of the data can be bad).
The graphs depict the classical boxplots of the transformed data (using the square root transformation)
and the adjusted boxplot of the non-transformed data.
Compared to adjusted boxplots, the former option masks the real outliers and labels good data as outliers. In general, it will contrive to hide any evidence of asymmetry in the data by classifying offending points as outliers.
In this example, the approach of using the standard boxplot on the square root of the data finds 13 outliers (all on the right), whereas the adjusted boxplot finds 10 right and 14 left outliers.
EDIT: adjusted box plots in a nutshell.
In 'classical' boxplots the whiskers are placed at:
$Q_1$-1.5*IQR and $Q_3$+1.5*IQR
where IQR is the inter-quartile range, $Q_1$ is the 25th percentile and $Q_3$ is the 75th percentile of the data. The rule of thumb is to regard everything outside the fence as dubious data (the fence is the interval between the two whiskers).
This rule of thumb is ad-hoc: the justification is that if the uncontaminated part of the data is approximately Gaussian, then less than 1% of the good data would be classified as bad using this rule.
A weakness of this fence-rule, as pointed out by the OP, is that the length of the two whiskers are identical, meaning the fence-rule only makes sense if the uncontaminated part of the data has a symmetric distribution.
A popular approach is to preserve the fence-rule and to adapt the data. The idea is to transform the data using some skew-correcting monotone transformation (square root or log or, more generally, Box-Cox transforms). This is a somewhat messy approach: it relies on circular logic (the transformation should be chosen so as to correct the skewness of the uncontaminated part of the data, which is at this stage un-observable) and tends to make the data harder to interpret visually. At any rate, this remains a strange procedure whereby one changes the data to preserve what is after all an ad-hoc rule.
An alternative is to leave the data untouched and change the whisker rule. The adjusted boxplot allows the length of each whisker to vary according to an index measuring the skewness of the uncontaminated part of the data:
$Q_1$-$\exp(\alpha M)$1.5*IQR and $Q_3$+$\exp(\beta M)$1.5*IQR
Where $M$ is an index of skewness of the uncontaminated part of the data (i.e., just as the median is a measure of location for the uncontaminated part of the data or the MAD a measure of spread for the uncontaminated part of the data) and $\alpha$ and $\beta$ are numbers chosen such that for uncontaminated skewed distributions the probability of lying outside the fence is relatively small across a large collection of skewed distributions (this is the ad-hoc part of the fence rule).
For cases when the good part of the data is symmetric, $M\approx 0$ and we're back to the classical whiskers.
The authors suggest using the med-couple as an estimator of $M$ (see reference inside the white paper) because of its high efficiency (though in principle any robust skew index could be used). With this choice of $M$, they then calculated the optimal $\alpha$ and $\beta$ empirically (using a large number of skewed distributions) as:
$Q_1$-$\exp(-4M)$1.5*IQR and $Q_3$+$\exp(3M)$1.5*IQR, if $M\geq 0$
$Q_1$-$\exp(-3M)$1.5*IQR and $Q_3$+$\exp(4M)$1.5*IQR, if $M<0$
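For readers who want to see the rule itself in action outside R, here is a hedged Python sketch (not the robustbase implementation; the naive $O(n^2)$ medcouple below is only suitable for small samples) of the adjusted whisker positions:

```python
import numpy as np

def medcouple(x):
    # Naive O(n^2) medcouple: a robust skewness index in [-1, 1].
    # Pairs tied at the median are skipped in this simplified version.
    x = np.sort(np.asarray(x, dtype=float))
    m = np.median(x)
    lower, upper = x[x <= m], x[x >= m]
    h = [((xj - m) - (m - xi)) / (xj - xi)
         for xi in lower for xj in upper if xj != xi]
    return float(np.median(h))

def adjusted_whiskers(x):
    # Whisker positions of the adjusted boxplot: the exponents
    # -4M/3M (M >= 0) or -3M/4M (M < 0) stretch the fence towards
    # the long tail, as in the formulas above.
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    mc = medcouple(x)
    a, b = (-4.0, 3.0) if mc >= 0 else (-3.0, 4.0)
    return (q1 - 1.5 * np.exp(a * mc) * iqr,
            q3 + 1.5 * np.exp(b * mc) * iqr)

# symmetric data: M ~ 0, so the fence reduces to the classical one
print(adjusted_whiskers(np.arange(1.0, 101.0)))
```

For real work, prefer $\verb+robustbase::adjbox()+$ (or a production medcouple implementation such as the one in statsmodels).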
|
Is there a boxplot variant for Poisson distributed data?
There is a generalization of standard box-plots that I know of in which the lengths of the whiskers are adjusted to account for skewed data. The details are better explained in a very clear & concise
|
7,640
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
|
Actually, scikit-learn does provide such a functionality, though it might be a bit tricky to implement. Here is a complete working example of such an average regressor built on top of three models. First of all, let's import all the required packages:
from sklearn.base import TransformerMixin
from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge
Then, we need to convert our three regressor models into transformers. This will allow us to merge their predictions into a single feature vector using FeatureUnion:
class RidgeTransformer(Ridge, TransformerMixin):
def transform(self, X, *_):
return self.predict(X).reshape(len(X), -1)
class RandomForestTransformer(RandomForestRegressor, TransformerMixin):
def transform(self, X, *_):
return self.predict(X).reshape(len(X), -1)
class KNeighborsTransformer(KNeighborsRegressor, TransformerMixin):
def transform(self, X, *_):
return self.predict(X).reshape(len(X), -1)
Now, let's define a builder function for our frankenstein model:
def build_model():
ridge_transformer = Pipeline(steps=[
('scaler', StandardScaler()),
('poly_feats', PolynomialFeatures()),
('ridge', RidgeTransformer())
])
pred_union = FeatureUnion(
transformer_list=[
('ridge', ridge_transformer),
('rand_forest', RandomForestTransformer()),
('knn', KNeighborsTransformer())
],
n_jobs=2
)
model = Pipeline(steps=[
('pred_union', pred_union),
('lin_regr', LinearRegression())
])
return model
Finally, let's fit the model:
print('Build and fit a model...')
model = build_model()
X, y = make_regression(n_features=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
print('Done. Score:', score)
Output:
Build and fit a model...
Done. Score: 0.9600413867438636
Why bother complicating things in such a way? Well, this approach allows us to optimize model hyperparameters using standard scikit-learn modules such as GridSearchCV or RandomizedSearchCV. Also, now it is possible to easily save and load from disk a pre-trained model.
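As a hedged sketch of that last point (the grid values below are arbitrary, and the model is a trimmed two-member version of the one above), hyperparameters of the inner regressors are reachable through the usual step__substep__param naming:

```python
from sklearn.base import TransformerMixin
from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge

class RidgeTransformer(Ridge, TransformerMixin):
    def transform(self, X, *_):
        return self.predict(X).reshape(len(X), -1)

class RandomForestTransformer(RandomForestRegressor, TransformerMixin):
    def transform(self, X, *_):
        return self.predict(X).reshape(len(X), -1)

model = Pipeline(steps=[
    ('pred_union', FeatureUnion(transformer_list=[
        ('ridge', RidgeTransformer()),
        ('rand_forest', RandomForestTransformer(n_estimators=20)),
    ])),
    ('lin_regr', LinearRegression()),
])

# nested parameters: <union step>__<transformer name>__<estimator param>
param_grid = {
    'pred_union__ridge__alpha': [0.1, 1.0, 10.0],
    'pred_union__rand_forest__max_depth': [2, 4],
}

X, y = make_regression(n_features=10, random_state=0)
search = GridSearchCV(model, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

The refit best estimator (`search.best_estimator_`) is itself a Pipeline, so it can be pickled and reloaded like any other fitted model.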
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
|
Actually, scikit-learn does provide such a functionality, though it might be a bit tricky to implement. Here is a complete working example of such an average regressor built on top of three models. Fi
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
Actually, scikit-learn does provide such a functionality, though it might be a bit tricky to implement. Here is a complete working example of such an average regressor built on top of three models. First of all, let's import all the required packages:
from sklearn.base import TransformerMixin
from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge
Then, we need to convert our three regressor models into transformers. This will allow us to merge their predictions into a single feature vector using FeatureUnion:
class RidgeTransformer(Ridge, TransformerMixin):
def transform(self, X, *_):
return self.predict(X).reshape(len(X), -1)
class RandomForestTransformer(RandomForestRegressor, TransformerMixin):
def transform(self, X, *_):
return self.predict(X).reshape(len(X), -1)
class KNeighborsTransformer(KNeighborsRegressor, TransformerMixin):
def transform(self, X, *_):
return self.predict(X).reshape(len(X), -1)
Now, let's define a builder function for our frankenstein model:
def build_model():
ridge_transformer = Pipeline(steps=[
('scaler', StandardScaler()),
('poly_feats', PolynomialFeatures()),
('ridge', RidgeTransformer())
])
pred_union = FeatureUnion(
transformer_list=[
('ridge', ridge_transformer),
('rand_forest', RandomForestTransformer()),
('knn', KNeighborsTransformer())
],
n_jobs=2
)
model = Pipeline(steps=[
('pred_union', pred_union),
('lin_regr', LinearRegression())
])
return model
Finally, let's fit the model:
print('Build and fit a model...')
model = build_model()
X, y = make_regression(n_features=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
print('Done. Score:', score)
Output:
Build and fit a model...
Done. Score: 0.9600413867438636
Why bother complicating things in such a way? Well, this approach allows us to optimize model hyperparameters using standard scikit-learn modules such as GridSearchCV or RandomizedSearchCV. Also, now it is possible to easily save and load from disk a pre-trained model.
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
Actually, scikit-learn does provide such a functionality, though it might be a bit tricky to implement. Here is a complete working example of such an average regressor built on top of three models. Fi
|
7,641
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
|
Ok, after spending some time on googling I found out how I could do the weighting in python even with scikit-learn. Consider the below:
I train a set of my regression models (as mentioned SVR, LassoLars and GradientBoostingRegressor). Then I run all of them on training data (same data which was used for training of each of these 3 regressors). I get predictions for examples with each of my algorithms and save these 3 results into a pandas dataframe with columns 'predictedSVR', 'predictedLASSO' and 'predictedGBR'. And I add a final column into this dataframe, which I call 'predicted', containing the true target value.
Then I just train a linear regression on this new dataframe:
#df - dataframe with results of 3 regressors and true output
from sklearn import linear_model
stacker = linear_model.LinearRegression()
stacker.fit(df[['predictedSVR', 'predictedLASSO', 'predictedGBR']], df['predicted'])
So when I want to make a prediction for new example I just run each of my 3 regressors separately and then I do:
stacker.predict()
on outputs of my 3 regressors. And get a result.
The problem here is that I am finding optimal weights for the regressors only 'on average'; the weights will be the same for each example on which I will try to make a prediction.
If anyone has any ideas on how to do stacking (weighting) using the features of current example it would be nice to hear them.
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
|
Ok, after spending some time on googling I found out how I could do the weighting in python even with scikit-learn. Consider the below:
I train a set of my regression models (as mentioned SVR, LassoLa
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
Ok, after spending some time on googling I found out how I could do the weighting in python even with scikit-learn. Consider the below:
I train a set of my regression models (as mentioned SVR, LassoLars and GradientBoostingRegressor). Then I run all of them on training data (same data which was used for training of each of these 3 regressors). I get predictions for examples with each of my algorithms and save these 3 results into a pandas dataframe with columns 'predictedSVR', 'predictedLASSO' and 'predictedGBR'. And I add a final column into this dataframe, which I call 'predicted', containing the true target value.
Then I just train a linear regression on this new dataframe:
#df - dataframe with results of 3 regressors and true output
from sklearn import linear_model
stacker = linear_model.LinearRegression()
stacker.fit(df[['predictedSVR', 'predictedLASSO', 'predictedGBR']], df['predicted'])
So when I want to make a prediction for new example I just run each of my 3 regressors separately and then I do:
stacker.predict()
on outputs of my 3 regressors. And get a result.
The problem here is that I am finding optimal weights for the regressors only 'on average'; the weights will be the same for each example on which I will try to make a prediction.
If anyone has any ideas on how to do stacking (weighting) using the features of current example it would be nice to hear them.
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
Ok, after spending some time on googling I found out how I could do the weighting in python even with scikit-learn. Consider the below:
I train a set of my regression models (as mentioned SVR, LassoLa
|
7,642
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
|
If your data has obvious subsets you could run a clustering algorithm like k-means and then associate each classifier with the clusters it performs well on. When a new data point arrives, then determine what cluster it's in and run the associated classifier.
You could also use the inverse distances from the centroids to get a set of weights for each classifier and predict using a linear combination of all of the classifiers.
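A minimal Python sketch of this idea (the choice of k-means with inverse-distance weights follows the suggestion above; all other names are illustrative): one regressor is trained per cluster, and predictions are blended with weights proportional to inverse centroid distances.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# one regressor per cluster, trained only on that cluster's points
models = [LinearRegression().fit(X[km.labels_ == k], y[km.labels_ == k])
          for k in range(3)]

def predict_weighted(x):
    # inverse centroid distances -> normalized weights per regressor
    d = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    w = 1.0 / (d + 1e-12)
    w /= w.sum()
    preds = np.array([m.predict(x.reshape(1, -1))[0] for m in models])
    return float(w @ preds)

print(predict_weighted(X[0]))
```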
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
|
If your data has obvious subsets you could run a clustering algorithm like k-means and then associate each classifier with the clusters it performs well on. When a new data point arrives, then determ
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
If your data has obvious subsets you could run a clustering algorithm like k-means and then associate each classifier with the clusters it performs well on. When a new data point arrives, then determine what cluster it's in and run the associated classifier.
You could also use the inverse distances from the centroids to get a set of weights for each classifier and predict using a linear combination of all of the classifiers.
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
If your data has obvious subsets you could run a clustering algorithm like k-means and then associate each classifier with the clusters it performs well on. When a new data point arrives, then determ
|
7,643
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
|
I accomplish a type of weighting by doing the following, once all your models are fully trained up and performing well:
Run all your models on a large set of unseen testing data
Store the f1 scores on the test set for each class, for each model
When you predict with the ensemble, each model will give you the most likely class, so weight the confidence or probability by the f1 score for that model on that class. If you're dealing with distance (as in SVM, for example), just normalize the distances to get a general confidence, and then proceed with the per class f1 weighting.
You can further tune your ensemble by taking measure of percent correct over some time. Once you have a significantly large, new data set scored, you can plot threshold in steps of 0.1, for instance, against percent correct if using that threshold to score, to get an idea of what threshold will give you, say, 95% correct for class 1, and so on. You can keep updating the test set and f1 scores as new data come in and keep track of drift, rebuilding the models when thresholds or accuracy fall.
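The per-class F1 weighting described above might look roughly like the following hedged sketch (two illustrative classifiers; the hold-out split stands in for the "large set of unseen testing data"):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_classes=3, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

models = [LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
          RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)]

# per-class F1 on held-out data, one row per model
f1 = np.array([f1_score(y_te, m.predict(X_te), average=None) for m in models])

def predict_ensemble(X_new):
    # each model's class probabilities are scaled by its per-class F1
    scores = sum(m.predict_proba(X_new) * f1[i] for i, m in enumerate(models))
    return scores.argmax(axis=1)

print(predict_ensemble(X_te[:5]))
```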
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
|
I accomplish a type of weighting by doing the following, once all your models are fully trained up and performing well:
Run all your models on a large set of unseen testing data
Store the f1 scores o
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
I accomplish a type of weighting by doing the following, once all your models are fully trained up and performing well:
Run all your models on a large set of unseen testing data
Store the f1 scores on the test set for each class, for each model
When you predict with the ensemble, each model will give you the most likely class, so weight the confidence or probability by the f1 score for that model on that class. If you're dealing with distance (as in SVM, for example), just normalize the distances to get a general confidence, and then proceed with the per class f1 weighting.
You can further tune your ensemble by taking measure of percent correct over some time. Once you have a significantly large, new data set scored, you can plot threshold in steps of 0.1, for instance, against percent correct if using that threshold to score, to get an idea of what threshold will give you, say, 95% correct for class 1, and so on. You can keep updating the test set and f1 scores as new data come in and keep track of drift, rebuilding the models when thresholds or accuracy fall.
|
Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
I accomplish a type of weighting by doing the following, once all your models are fully trained up and performing well:
Run all your models on a large set of unseen testing data
Store the f1 scores o
|
7,644
|
Checking assumptions lmer/lme mixed models in R
|
Q1: Yes - just like any regression model.
Q2: Just like general linear models, your outcome variable does not need to be normally distributed as a univariate variable. However, LME models assume that the residuals of the model are normally distributed. So a transformation or adding weights to the model would be a way of taking care of this (and checking with diagnostic plots, of course).
Q3: plot(myModel.lme)
Q4: qqnorm(myModel.lme, ~ranef(., level=2)). This code will allow you to make QQ plots for each level of the random effects. LME models assume that not only the within-cluster residuals are normally distributed, but that the random effects at each level are as well. Vary the level from 0, 1, to 2 so that you can check the rat, task, and within-subject residuals.
EDIT: I should also add that while normality is assumed and that transformation likely helps reduce problems with non-normal errors/random effects, it's not clear that all problems are actually resolved or that bias isn't introduced. If your data requires a transformation, then be cautious about estimation of the random effects. Here's a paper addressing this.
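For completeness, a rough Python analogue of these checks (hedged: this uses statsmodels' MixedLM on simulated data, not the lme/lme4 models discussed above) looks at both the within-group residuals and the estimated random effects:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n_groups, n_per = 20, 15
g = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0.0, 1.0, n_groups)[g]          # true random intercepts
x = rng.normal(size=g.size)
y = 2.0 + 0.5 * x + u + rng.normal(0.0, 0.5, g.size)
df = pd.DataFrame({'y': y, 'x': x, 'g': g})

fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()

# within-group residuals: inspect these on a QQ plot
resid = np.asarray(fit.resid)
osm, osr = stats.probplot(resid, dist="norm", fit=False)

# estimated random intercepts: should also look roughly normal
re = np.array([v.iloc[0] for v in fit.random_effects.values()])
print(resid.mean(), re.std())
```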
|
Checking assumptions lmer/lme mixed models in R
|
Q1: Yes - just like any regression model.
Q2: Just like general linear models, your outcome variable does not need to be normally distributed as a univariate variable. However, LME models assume that
|
Checking assumptions lmer/lme mixed models in R
Q1: Yes - just like any regression model.
Q2: Just like general linear models, your outcome variable does not need to be normally distributed as a univariate variable. However, LME models assume that the residuals of the model are normally distributed. So a transformation or adding weights to the model would be a way of taking care of this (and checking with diagnostic plots, of course).
Q3: plot(myModel.lme)
Q4: qqnorm(myModel.lme, ~ranef(., level=2)). This code will allow you to make QQ plots for each level of the random effects. LME models assume that not only the within-cluster residuals are normally distributed, but that the random effects at each level are as well. Vary the level from 0, 1, to 2 so that you can check the rat, task, and within-subject residuals.
EDIT: I should also add that while normality is assumed and that transformation likely helps reduce problems with non-normal errors/random effects, it's not clear that all problems are actually resolved or that bias isn't introduced. If your data requires a transformation, then be cautious about estimation of the random effects. Here's a paper addressing this.
|
Checking assumptions lmer/lme mixed models in R
Q1: Yes - just like any regression model.
Q2: Just like general linear models, your outcome variable does not need to be normally distributed as a univariate variable. However, LME models assume that
|
7,645
|
Checking assumptions lmer/lme mixed models in R
|
Regarding Q2:
According to Pinheiro and Bates' book you may use the following approach:
"The lme function allow the modeling of heteroscesdasticity of the
within-error group via a weights argument. This topic will be
covered in detail in § 5.2, but, for now, it suffices to know that the
varIdent variance function structure allows different variances for
each level of a factor and can be used to fit the heteroscedastic
model [...]"
Pinheiro and Bates, p. 177
If you would like to check for equal variances between sex you may use this approach:
plot( lm.base2, resid(., type = "p") ~ fitted(.) | sex,
id = 0.05, adj = -0.3 )
If variances are different, you can update your model in the following manner:
lm.base2u <- update( lm.base2, weights = varIdent(form = ~ 1 | sex) )
summary(lm.base2u)
Furthermore, you may have a look at the robustlmm package, which also uses a weighting approach. Koller's PhD thesis about this concept is available as open access ("Robust Estimation of Linear Mixed Models"). The abstract states:
"A new scale estimate, the Design Adaptive Scale estimate, is
developed with the aim to provide a sound basis for subsequent robust
tests. It does so by equalizing the natural heteroskedasticity of the
residuals and by adjusting the robust estimating equation for the
scale itself. These design adaptive corrections are crucial in small
sample settings, where the number of observations might be merely five
times the number of parameters to be estimated or less."
I do not have enough points for comments. I see however the necessity to clarify some aspect of @John 's answer above. Pinheiro and Bates state on p. 174:
Assumption 1 - the within-group errors are independent and identically normally distributed, with mean zero and variance σ2, and
they are independent of the random effects.
This statement is indeed not clear about homogeneous variances and I am not deep enough into statistics to know all the maths behind the LME concept. However, on p. 175, §4.3.1, the section dealing with Assumption 1 they write:
In this section, we concentrate on methods for assessing the
assumption that the within-group errors are normally distributed, are
centered at zero, and have constant variance.
Also, in the following examples "constant variances" are indeed important. Thus, one may speculate whether they imply homogeneous variances when they write "identically normally distributed" on p. 174 without addressing it more directly.
|
Checking assumptions lmer/lme mixed models in R
|
Regarding Q2:
According to Pinheiro and Bates' book you may use the following approach:
"The lme function allow the modeling of heteroscesdasticity of the
within-error group via a weights argument.
|
Checking assumptions lmer/lme mixed models in R
Regarding Q2:
According to Pinheiro and Bates' book you may use the following approach:
"The lme function allow the modeling of heteroscesdasticity of the
within-error group via a weights argument. This topic will be
covered in detail in § 5.2, but, for now, it suffices to know that the
varIdent variance function structure allows different variances for
each level of a factor and can be used to fit the heteroscedastic
model [...]"
Pinheiro and Bates, p. 177
If you would like to check for equal variances between sex you may use this approach:
plot( lm.base2, resid(., type = "p") ~ fitted(.) | sex,
id = 0.05, adj = -0.3 )
If variances are different, you can update your model in the following manner:
lm.base2u <- update( lm.base2, weights = varIdent(form = ~ 1 | sex) )
summary(lm.base2u)
Furthermore, you may have a look at the robustlmm package, which also uses a weighting approach. Koller's PhD thesis about this concept is available as open access ("Robust Estimation of Linear Mixed Models"). The abstract states:
"A new scale estimate, the Design Adaptive Scale estimate, is
developed with the aim to provide a sound basis for subsequent robust
tests. It does so by equalizing the natural heteroskedasticity of the
residuals and by adjusting the robust estimating equation for the
scale itself. These design adaptive corrections are crucial in small
sample settings, where the number of observations might be merely five
times the number of parameters to be estimated or less."
I do not have enough points for comments. I see however the necessity to clarify some aspect of @John 's answer above. Pinheiro and Bates state on p. 174:
Assumption 1 - the within-group errors are independent and identically normally distributed, with mean zero and variance σ2, and
they are independent of the random effects.
This statement is indeed not clear about homogeneous variances and I am not deep enough into statistics to know all the maths behind the LME concept. However, on p. 175, §4.3.1, the section dealing with Assumption 1 they write:
In this section, we concentrate on methods for assessing the
assumption that the within-group errors are normally distributed, are
centered at zero, and have constant variance.
Also, in the following examples "constant variances" are indeed important. Thus, one may speculate whether they imply homogeneous variances when they write "identically normally distributed" on p. 174 without addressing it more directly.
|
Checking assumptions lmer/lme mixed models in R
Regarding Q2:
According to Pinheiro and Bates' book you may use the following approach:
"The lme function allow the modeling of heteroscesdasticity of the
within-error group via a weights argument.
|
7,646
|
Checking assumptions lmer/lme mixed models in R
|
You seem quite misled about the assumptions surrounding multi-level models. There is not an assumption of homogeneity of variance in the data, just that the residuals should be approximately normally distributed. And categorical predictors are used in regression all of the time (the underlying function in R that runs an ANOVA is the linear regression command).
For details on examining assumptions check out the Pinheiro and Bates book (p. 174, section 4.3.1). Also, if you plan to use lme4 (which the book isn't written around) you can replicate their plots using plot with an lmer model (?plot.merMod).
To quickly check normality it would just be qqnorm(resid(myModel)).
|
Checking assumptions lmer/lme mixed models in R
|
You seem quite misled about the assumptions surrounding multi-level models. There is not an assumption of homogeneity of variance in the data, just that the residuals should be approximately normally
|
Checking assumptions lmer/lme mixed models in R
You seem quite misled about the assumptions surrounding multi-level models. There is not an assumption of homogeneity of variance in the data, just that the residuals should be approximately normally distributed. And categorical predictors are used in regression all of the time (the underlying function in R that runs an ANOVA is the linear regression command).
For details on examining assumptions check out the Pinheiro and Bates book (p. 174, section 4.3.1). Also, if you plan to use lme4 (which the book isn't written around) you can replicate their plots using plot with an lmer model (?plot.merMod).
To quickly check normality it would just be qqnorm(resid(myModel)).
|
Checking assumptions lmer/lme mixed models in R
You seem quite misled about the assumptions surrounding multi-level models. There is not an assumption of homogeneity of variance in the data, just that the residuals should be approximately normally
|
7,647
|
Checking assumptions lmer/lme mixed models in R
|
Q1: Yes, why not?
Q2: I think the requirement is that the errors are normally distributed.
Q3: Can be tested with Levene's test, for example.
|
Checking assumptions lmer/lme mixed models in R
|
Q1: Yes, why not?
Q2: I think the requirement is that the errors are normally distributed.
Q3: Can be tested with Levene's test, for example.
|
Checking assumptions lmer/lme mixed models in R
Q1: Yes, why not?
Q2: I think the requirement is that the errors are normally distributed.
Q3: Can be tested with Levene's test, for example.
|
Checking assumptions lmer/lme mixed models in R
Q1: Yes, why not?
Q2: I think the requirement is that the errors are normally distributed.
Q3: Can be tested with Levene's test, for example.
|
7,648
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
|
Spearman rho vs Kendall tau. These two are computationally so different that you cannot directly compare their magnitudes. Spearman is usually higher by 1/4 to 1/3 and this makes one incorrectly conclude that Spearman is "better" for a particular dataset. The difference between rho and tau is in their ideology: proportion-of-variance for rho and probability for tau. Rho is a usual Pearson r applied to ranked data, and like r, it is more sensitive to points with large moments (that is, deviations from the cloud centre) than to points with small moments. Therefore rho is quite sensitive to the shape of the cloud after the ranking is done: the coefficient for an oblong rhombic cloud will be higher than the coefficient for an oblong dumbbelled cloud (because the sharp edges of the first are large moments). Tau is an extension of Gamma and is equally sensitive to all the data points, so it is less sensitive to peculiarities in the shape of the ranked cloud. Tau is more "general" than rho, for rho is warranted only when you believe the underlying (model, or functional in population) relationship between the variables is strictly monotonic, while tau allows for a nonmonotonic underlying curve and measures which monotonic "trend", positive or negative, prevails there overall. Rho is comparable with r in magnitude; tau is not.
Kendall tau as Gamma. Tau is just a standardized form of Gamma. Several related measures all have numerator $P-Q$ but differ in normalizing denominator:
Gamma: $P+Q$
Somers' D("x dependent"): $P+Q+T_x$
Somers' D("y dependent"): $P+Q+T_y$
Somers' D("symmetric"): arithmetic mean of the above two
Kendall's Tau-b corr. (most suitable for square tables): geometric mean of those two
Kendall's Tau-c corr$^1$. (most suitable for rectangular tables): $N^2(k-1)/(2k)$
Kendall's Tau-a corr$^2$. (makes no adjustment for ties): $N(N-1)/2 = P+Q+T_x+T_y+T_{xy}$
where $P$ - number of pairs of observations with "concordance", $Q$ - with "inversion"; $T_x$ - number of ties by variable X, $T_y$ - by variable Y, $T_{xy}$ – by both variables; $N$ - number of observations, $k$ - number of distinct values in that variable where this number is less.
Thus, tau is directly comparable in theory and magnitude with Gamma. Rho is directly comparable in theory and magnitude with Pearson $r$. Nick Stauner's nice answer here tells how it is possible to compare rho and tau indirectly.
See also about tau and rho.
$^1$ Tau-c of a variable with itself can be below $1$: specifically, when the distribution of $k$ distinct values is unbalanced.
$^2$ Tau-a of a variable with itself can be below $1$: specifically, when there are ties.
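The effect of these denominators is easy to verify numerically; the hedged sketch below counts $P$ and $Q$ by brute force on a small illustrative dataset and compares Gamma with scipy's tau-b:

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 2, 3, 4, 4, 5])
y = np.array([1, 1, 2, 2, 3, 2, 4])

P = Q = 0
n = len(x)
for i in range(n):
    for j in range(i + 1, n):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            P += 1          # concordant pair
        elif s < 0:
            Q += 1          # discordant pair
        # tied pairs (s == 0) enter neither P nor Q

gamma = (P - Q) / (P + Q)          # denominator ignores all ties
tau_b, _ = stats.kendalltau(x, y)  # denominator includes T_x and T_y
print(gamma, tau_b)
```

With tied data, $|$Gamma$|$ is at least as large as $|$tau-b$|$, purely because its denominator $P+Q$ drops the tie terms.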
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
|
Spearman rho vs Kendall tau. These two are computationally so different that you cannot directly compare their magnitudes. Spearman is usually higher by 1/4 to 1/3 and this makes one incorrectly
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
Spearman rho vs Kendall tau. These two are computationally so different that you cannot directly compare their magnitudes. Spearman is usually higher by 1/4 to 1/3 and this makes one incorrectly conclude that Spearman is "better" for a particular dataset. The difference between rho and tau is in their ideology: proportion-of-variance for rho and probability for tau. Rho is a usual Pearson r applied to ranked data, and like r, it is more sensitive to points with large moments (that is, deviations from the cloud centre) than to points with small moments. Therefore rho is quite sensitive to the shape of the cloud after the ranking is done: the coefficient for an oblong rhombic cloud will be higher than the coefficient for an oblong dumbbelled cloud (because the sharp edges of the first are large moments). Tau is an extension of Gamma and is equally sensitive to all the data points, so it is less sensitive to peculiarities in the shape of the ranked cloud. Tau is more "general" than rho, for rho is warranted only when you believe the underlying (model, or functional in population) relationship between the variables is strictly monotonic, while tau allows for a nonmonotonic underlying curve and measures which monotonic "trend", positive or negative, prevails there overall. Rho is comparable with r in magnitude; tau is not.
Kendall tau as Gamma. Tau is just a standardized form of Gamma. Several related measures all have numerator $P-Q$ but differ in normalizing denominator:
Gamma: $P+Q$
Somers' D("x dependent"): $P+Q+T_x$
Somers' D("y dependent"): $P+Q+T_y$
Somers' D("symmetric"): arithmetic mean of the above two
Kendall's Tau-b corr. (most suitable for square tables): geometric mean of those two
Kendall's Tau-c corr$^1$. (most suitable for rectangular tables): $N^2(k-1)/(2k)$
Kendall's Tau-a corr$^2$. (makes no adjustment for ties): $N(N-1)/2 = P+Q+T_x+T_y+T_{xy}$
where $P$ - number of pairs of observations with "concordance", $Q$ - with "inversion"; $T_x$ - number of ties by variable X, $T_y$ - by variable Y, $T_{xy}$ – by both variables; $N$ - number of observations, $k$ - number of distinct values in that variable where this number is less.
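These counts and coefficients can be tallied directly. Below is a minimal Python sketch (an illustration added here, not part of the original answer) that counts $P$, $Q$ and the ties for a small paired sample and assembles Gamma, Tau-a and Tau-b from them:

```python
from itertools import combinations

def concordance_counts(x, y):
    """Tally concordant pairs (P), discordant pairs (Q) and tie counts."""
    P = Q = Tx = Ty = Txy = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        if xi == xj and yi == yj:
            Txy += 1          # tied on both variables
        elif xi == xj:
            Tx += 1           # tied on X only
        elif yi == yj:
            Ty += 1           # tied on Y only
        elif (xi - xj) * (yi - yj) > 0:
            P += 1            # concordance
        else:
            Q += 1            # inversion
    return P, Q, Tx, Ty, Txy

x, y = [1, 2, 2, 3], [1, 2, 3, 3]
P, Q, Tx, Ty, Txy = concordance_counts(x, y)
gamma = (P - Q) / (P + Q)                                  # Gamma
tau_a = (P - Q) / (P + Q + Tx + Ty + Txy)                  # Tau-a
tau_b = (P - Q) / (((P + Q + Tx) * (P + Q + Ty)) ** 0.5)   # Tau-b
```

For this tiny sample the six pairs split into $P=4$, $T_x=1$, $T_y=1$, so Gamma is 1 while Tau-b is 0.8 and Tau-a is 2/3 — the same $P-Q$ numerator scaled by the different denominators listed above.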
Thus, tau is directly comparable in theory and magnitude with Gamma. Rho is directly comparable in theory and magnitude with Pearson $r$. Nick Stauner's nice answer here tells how it is possible to compare rho and tau indirectly.
See also about tau and rho.
$^1$ Tau-c of a variable with itself can be below $1$: specifically, when the distribution of $k$ distinct values is unbalanced.
$^2$ Tau-a of a variable with itself can be below $1$: specifically, when there are ties.
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
Spearman rho vs Kendall tau. These two are so computationally different that you cannot directly compare their magnitudes. Spearman is usually higher by 1/4 to 1/3 and this makes one incorrectly
|
7,649
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
|
Here's a quote from Andrew Gilpin (1993) advocating Maurice Kendall's $τ$ over Spearman's $ρ$ for theoretical reasons:
[Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$, the sample size, increases; and $τ$ is also more tractable mathematically, particularly when ties are present.
I can't add much about Goodman-Kruskal $γ$, other than that it seems to produce ever-so-slightly larger estimates than Kendall's $τ$ in a sample of survey data I've been working with lately... and of course, noticeably lower estimates than Spearman's $ρ$. However, I also tried calculating a couple partial $γ$ estimates (Foraita & Sobotka, 2012), and those came out closer to the partial $ρ$ than the partial $τ$... It took a fair amount of processing time though, so I'll leave the simulation tests or mathematical comparisons to someone else... (who would know how to do them...)
As ttnphns implies, you can't conclude that your $ρ$ estimates are better than your $τ$ estimates by magnitude alone, because their scales differ (even though the limits don't). Gilpin cites Kendall (1962) as describing the ratio of $ρ$ to $τ$ to be roughly 1.5 over most of the range of values. They get closer gradually as their magnitudes increase, so as both approach 1 (or -1), the difference becomes infinitesimal. Gilpin gives a nice big table of equivalent values of $ρ$, $r$, $r^2$, d, and $Z_r$ out to the third digit for $τ$ at every increment of .01 across its range, just like you'd expect to see inside the cover of an intro stats textbook. He based those values on Kendall's specific formulas, which are as follows:
$$
\begin{aligned}
r &= \sin\bigg(\tau\cdot\frac \pi 2 \bigg) \\
\rho &= \frac 6 \pi \arcsin \bigg(\frac{\sin(\tau\cdot\frac \pi 2)} 2 \bigg)
\end{aligned}
$$
(I simplified this formula for $ρ$ from the form in which Gilpin wrote, which was in terms of Pearson's $r$.)
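As a quick sketch of how one might apply these conversions in practice (illustrative code added here, composing $r=\sin(\pi\tau/2)$ with $\rho=\frac 6 \pi \arcsin(r/2)$; not Gilpin's table itself):

```python
import math

def tau_to_r(tau):
    """Kendall's tau -> Pearson r (under bivariate normality)."""
    return math.sin(tau * math.pi / 2)

def tau_to_rho(tau):
    """Kendall's tau -> Spearman rho, via rho = (6/pi) * arcsin(r/2)."""
    return (6 / math.pi) * math.asin(tau_to_r(tau) / 2)

# For small tau the ratio rho/tau is close to the 1.5 Kendall describes,
# and it shrinks toward 1 as tau approaches 1 (or -1).
```

For example, `tau_to_rho(0.2)` gives roughly 0.296, a ratio of about 1.48, while at $\tau=1$ both conversions return exactly 1.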
Maybe it would make sense to convert your $τ$ into a $ρ$ and see how the computational change affects your effect size estimate. Seems that comparison would give some indication of the extent to which the problems that Spearman's $ρ$ is more sensitive to are present in your data, if at all. More direct methods surely exist for identifying each specific problem individually; my suggestion would produce more of a quick-and-dirty omnibus effect size for those problems. If there's no difference (after correcting for the difference in scale), then one might argue there's no need to look further for problems that only apply to $ρ$. If there's a substantial difference, then it's probably time to break out the magnifying lens to determine what's responsible.
I'm not sure how people usually report effect sizes when using Kendall's $τ$ (to the unfortunately limited extent that people worry about reporting effect sizes in general), but since it seems likely that unfamiliar readers would try to interpret it on the scale of Pearson's $r$, it might be wise to report both your $τ$ statistic and its effect size on the scale of $r$ using the above conversion formula...or at least point out the difference in scale and give a shout out to Gilpin for his handy conversion table.
References
Foraita, R., & Sobotka, F. (2012). Validation of graphical models. gmvalid Package, v1.23. The Comprehensive R Archive Network. URL: http://cran.r-project.org/web/packages/gmvalid/gmvalid.pdf
Gilpin, A. R. (1993). Table for conversion of Kendall's tau to Spearman's rho within the context of measures of magnitude of effect for meta-analysis. Educational and Psychological Measurement, 53(1), 87-92.
Kendall, M. G. (1962). Rank correlation methods (3rd ed.). London: Griffin.
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
|
Here's a quote from Andrew Gilpin (1993) advocating Maurice Kendall's $τ$ over Spearman's $ρ$ for theoretical reasons:
[Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
Here's a quote from Andrew Gilpin (1993) advocating Maurice Kendall's $τ$ over Spearman's $ρ$ for theoretical reasons:
[Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$, the sample size, increases; and $τ$ is also more tractable mathematically, particularly when ties are present.
I can't add much about Goodman-Kruskal $γ$, other than that it seems to produce ever-so-slightly larger estimates than Kendall's $τ$ in a sample of survey data I've been working with lately... and of course, noticeably lower estimates than Spearman's $ρ$. However, I also tried calculating a couple partial $γ$ estimates (Foraita & Sobotka, 2012), and those came out closer to the partial $ρ$ than the partial $τ$... It took a fair amount of processing time though, so I'll leave the simulation tests or mathematical comparisons to someone else... (who would know how to do them...)
As ttnphns implies, you can't conclude that your $ρ$ estimates are better than your $τ$ estimates by magnitude alone, because their scales differ (even though the limits don't). Gilpin cites Kendall (1962) as describing the ratio of $ρ$ to $τ$ to be roughly 1.5 over most of the range of values. They get closer gradually as their magnitudes increase, so as both approach 1 (or -1), the difference becomes infinitesimal. Gilpin gives a nice big table of equivalent values of $ρ$, $r$, $r^2$, d, and $Z_r$ out to the third digit for $τ$ at every increment of .01 across its range, just like you'd expect to see inside the cover of an intro stats textbook. He based those values on Kendall's specific formulas, which are as follows:
$$
\begin{aligned}
r &= \sin\bigg(\tau\cdot\frac \pi 2 \bigg) \\
\rho &= \frac 6 \pi \arcsin \bigg(\frac{\sin(\tau\cdot\frac \pi 2)} 2 \bigg)
\end{aligned}
$$
(I simplified this formula for $ρ$ from the form in which Gilpin wrote, which was in terms of Pearson's $r$.)
Maybe it would make sense to convert your $τ$ into a $ρ$ and see how the computational change affects your effect size estimate. Seems that comparison would give some indication of the extent to which the problems that Spearman's $ρ$ is more sensitive to are present in your data, if at all. More direct methods surely exist for identifying each specific problem individually; my suggestion would produce more of a quick-and-dirty omnibus effect size for those problems. If there's no difference (after correcting for the difference in scale), then one might argue there's no need to look further for problems that only apply to $ρ$. If there's a substantial difference, then it's probably time to break out the magnifying lens to determine what's responsible.
I'm not sure how people usually report effect sizes when using Kendall's $τ$ (to the unfortunately limited extent that people worry about reporting effect sizes in general), but since it seems likely that unfamiliar readers would try to interpret it on the scale of Pearson's $r$, it might be wise to report both your $τ$ statistic and its effect size on the scale of $r$ using the above conversion formula...or at least point out the difference in scale and give a shout out to Gilpin for his handy conversion table.
References
Foraita, R., & Sobotka, F. (2012). Validation of graphical models. gmvalid Package, v1.23. The Comprehensive R Archive Network. URL: http://cran.r-project.org/web/packages/gmvalid/gmvalid.pdf
Gilpin, A. R. (1993). Table for conversion of Kendall's tau to Spearman's rho within the context of measures of magnitude of effect for meta-analysis. Educational and Psychological Measurement, 53(1), 87-92.
Kendall, M. G. (1962). Rank correlation methods (3rd ed.). London: Griffin.
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
Here's a quote from Andrew Gilpin (1993) advocating Maurice Kendall's $τ$ over Spearman's $ρ$ for theoretical reasons:
[Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$
|
7,650
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
|
These are all good indexes of monotonic association. Spearman's $\rho$ is related to the probability of majority concordance among random triplets of observations, and $\tau$ (Kendall) and $\gamma$ (Goodman-Kruskal) are related to pairwise concordance. The main decision to make in choosing $\gamma$ vs. $\tau$ is whether you want to penalize for ties in $X$ and/or $Y$. $\gamma$ does not penalize for ties in either, so that a comparison of the predictive ability of $X_{1}$ and $X_{2}$ in predicting $Y$ will not reward one of the $X$s for being more continuous. This lack of reward makes it a bit inconsistent with model-based likelihood ratio tests. An $X$ that is heavily tied (say a binary $X$) can have high $\gamma$.
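A quick numeric illustration of that last point (a sketch added here, not from the original answer): with a heavily tied binary $X$, $\gamma$ ignores all the within-group pairs, while $\tau_b$ penalizes them.

```python
from itertools import combinations

x = [0, 0, 0, 1, 1, 1]        # binary, heavily tied predictor
y = [1, 2, 3, 4, 5, 6]        # untied outcome

P = Q = Tx = 0
for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
    if xi == xj:
        Tx += 1                                     # pairs tied on X
    elif (xi - xj) * (yi - yj) > 0:
        P += 1                                      # concordant
    else:
        Q += 1                                      # discordant

gamma = (P - Q) / (P + Q)                           # ignores ties -> 1.0
tau_b = (P - Q) / ((P + Q + Tx) * (P + Q)) ** 0.5   # penalizes ties -> ~0.77
```

All nine cross-group pairs are concordant, so $\gamma=1$ even though $X$ carries far less ordering information than $Y$; $\tau_b$ reflects the six tied pairs and comes out around 0.77.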
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
|
These are all good indexes of monotonic association. Spearman's $\rho$ is related to the probability of majority concordance among random triplets of observations, and $\tau$ (Kendall) and $\gamma$ (
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
These are all good indexes of monotonic association. Spearman's $\rho$ is related to the probability of majority concordance among random triplets of observations, and $\tau$ (Kendall) and $\gamma$ (Goodman-Kruskal) are related to pairwise concordance. The main decision to make in choosing $\gamma$ vs. $\tau$ is whether you want to penalize for ties in $X$ and/or $Y$. $\gamma$ does not penalize for ties in either, so that a comparison of the predictive ability of $X_{1}$ and $X_{2}$ in predicting $Y$ will not reward one of the $X$s for being more continuous. This lack of reward makes it a bit inconsistent with model-based likelihood ratio tests. An $X$ that is heavily tied (say a binary $X$) can have high $\gamma$.
|
How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
These are all good indexes of monotonic association. Spearman's $\rho$ is related to the probability of majority concordance among random triplets of observations, and $\tau$ (Kendall) and $\gamma$ (
|
7,651
|
What if my linear regression data contains several co-mingled linear relationships?
|
I think Demetri's answer is a great one if we assume that you have the labels for the different varieties. When I read your question, that didn't seem to be the case. We can use an approach based on the EM algorithm to fit essentially the model that Demetri suggests, but without knowing the labels for the variety. Luckily, the mixtools package in R provides this functionality for us. Since your data is well separated and you seem to have quite a bit of it, the approach should be fairly successful.
library(mixtools)
# Generate some fake data that looks kind of like yours
n1 <- 150
ph1 = runif(n1, 5.1, 7.8)
y1 <- 41.55 + 5.185*ph1 + rnorm(n1, 0, .25)
n2 <- 150
ph2 <- runif(n2, 5.3, 8)
y2 <- 65.14 + 1.48148*ph2 + rnorm(n2, 0, 0.25)
# There are definitely better ways to do all of this but oh well
dat <- data.frame(ph = c(ph1, ph2),
y = c(y1, y2),
group = rep(c(1,2), times = c(n1, n2)))
# Looks about right
plot(dat$ph, dat$y)
# Fit the regression. One line for each component. This defaults
# to assuming there are two underlying groups/components in the data
out <- regmixEM(y = dat$y, x = dat$ph, addintercept = T)
We can examine the results
> summary(out)
summary of regmixEM object:
comp 1 comp 2
lambda 0.497393 0.502607
sigma 0.248649 0.231388
beta1 64.655578 41.514342
beta2 1.557906 5.190076
loglik at estimate: -182.4186
So it fit two regressions and estimated that 49.7% of the observations fell into the regression for component 1 and 50.3% fell into the regression for component 2. The way I simulated the data, it was a 50-50 split, so this is good.
The 'true' values I used for the simulation should give the lines:
y = 41.55 + 5.185*ph and y = 65.14 + 1.48148*ph
(which I estimated 'by hand' from your plot so that the data I create looks similar to yours) and the lines that the EM algorithm gave in this case were:
y = 41.514 + 5.19*ph and y = 64.655 + 1.55*ph
Pretty darn close to the actual values.
We can plot the fitted lines along with the data
plot(dat$ph, dat$y, xlab = "Soil Ph", ylab = "Flower Height (cm)")
abline(out$beta[,1], col = "blue") # plot the first fitted line
abline(out$beta[,2], col = "red") # plot the second fitted line
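For readers without mixtools, the same EM idea can be sketched by hand in a few lines of NumPy (a simplified illustration, not the mixtools implementation; here the two simulated lines are deliberately chosen not to cross in the observed x-range so that a crude residual-sign initialisation works — real data may need random restarts):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two co-mingled linear relationships (lines do not cross on [0, 4])
n = 200
x = np.concatenate([rng.uniform(0, 4, n), rng.uniform(0, 4, n)])
y = np.concatenate([2 + 5 * x[:n] + rng.normal(0, 0.3, n),
                    30 + 1 * x[n:] + rng.normal(0, 0.3, n)])

X = np.column_stack([np.ones_like(x), x])

# Initialise responsibilities from the sign of pooled-fit residuals
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
resp = np.column_stack([(resid <= 0).astype(float),
                        (resid > 0).astype(float)])

betas = np.zeros((2, 2))
sigma = np.ones(2)
lam = np.full(2, 0.5)

for _ in range(100):
    # M-step: weighted least squares for each component
    for k in range(2):
        w = resp[:, k]
        sw = np.sqrt(w)[:, None]
        betas[k] = np.linalg.lstsq(sw * X, np.sqrt(w) * y, rcond=None)[0]
        res = y - X @ betas[k]
        sigma[k] = np.sqrt((w * res ** 2).sum() / w.sum())
        lam[k] = w.mean()
    # E-step: posterior probability that each point belongs to each line
    dens = np.column_stack([
        lam[k] * np.exp(-0.5 * ((y - X @ betas[k]) / sigma[k]) ** 2) / sigma[k]
        for k in range(2)])
    resp = dens / dens.sum(axis=1, keepdims=True)

# betas[:, 1] should recover slopes near 5 and 1 (in some order)
```

The loop alternates weighted least squares (M-step) with Bayes-rule membership probabilities (E-step) — exactly what regmixEM does under the hood, minus its more careful initialisation and convergence checks.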
|
What if my linear regression data contains several co-mingled linear relationships?
|
I think Demetri's answer is a great one if we assume that you have the labels for the different varieties. When I read your question that didn't seem to be the case to me. We can use an approach bas
|
What if my linear regression data contains several co-mingled linear relationships?
I think Demetri's answer is a great one if we assume that you have the labels for the different varieties. When I read your question, that didn't seem to be the case. We can use an approach based on the EM algorithm to fit essentially the model that Demetri suggests, but without knowing the labels for the variety. Luckily, the mixtools package in R provides this functionality for us. Since your data is well separated and you seem to have quite a bit of it, the approach should be fairly successful.
library(mixtools)
# Generate some fake data that looks kind of like yours
n1 <- 150
ph1 = runif(n1, 5.1, 7.8)
y1 <- 41.55 + 5.185*ph1 + rnorm(n1, 0, .25)
n2 <- 150
ph2 <- runif(n2, 5.3, 8)
y2 <- 65.14 + 1.48148*ph2 + rnorm(n2, 0, 0.25)
# There are definitely better ways to do all of this but oh well
dat <- data.frame(ph = c(ph1, ph2),
y = c(y1, y2),
group = rep(c(1,2), times = c(n1, n2)))
# Looks about right
plot(dat$ph, dat$y)
# Fit the regression. One line for each component. This defaults
# to assuming there are two underlying groups/components in the data
out <- regmixEM(y = dat$y, x = dat$ph, addintercept = T)
We can examine the results
> summary(out)
summary of regmixEM object:
comp 1 comp 2
lambda 0.497393 0.502607
sigma 0.248649 0.231388
beta1 64.655578 41.514342
beta2 1.557906 5.190076
loglik at estimate: -182.4186
So it fit two regressions and estimated that 49.7% of the observations fell into the regression for component 1 and 50.3% fell into the regression for component 2. The way I simulated the data, it was a 50-50 split, so this is good.
The 'true' values I used for the simulation should give the lines:
y = 41.55 + 5.185*ph and y = 65.14 + 1.48148*ph
(which I estimated 'by hand' from your plot so that the data I create looks similar to yours) and the lines that the EM algorithm gave in this case were:
y = 41.514 + 5.19*ph and y = 64.655 + 1.55*ph
Pretty darn close to the actual values.
We can plot the fitted lines along with the data
plot(dat$ph, dat$y, xlab = "Soil Ph", ylab = "Flower Height (cm)")
abline(out$beta[,1], col = "blue") # plot the first fitted line
abline(out$beta[,2], col = "red") # plot the second fitted line
|
What if my linear regression data contains several co-mingled linear relationships?
I think Demetri's answer is a great one if we assume that you have the labels for the different varieties. When I read your question that didn't seem to be the case to me. We can use an approach bas
|
7,652
|
What if my linear regression data contains several co-mingled linear relationships?
|
EDIT: I originally thought OP knew which observations came from which species. OP's edit makes it clear that my original approach is not feasible. I'll leave it up for posterity, but the other answer is much better. As a consolation, I've coded up a mixture model in Stan. I'm not saying a Bayesian approach is particularly good in this case, but it is just something neat I can contribute.
Stan Code
data{
//Number of data points
int N;
real y[N];
real x[N];
}
parameters{
//mixing parameter
real<lower=0, upper =1> theta;
//Regression intercepts
real beta_0[2];
//Regression slopes.
ordered[2] beta_1;
//Regression noise
real<lower=0> sigma[2];
}
model{
//priors
theta ~ beta(5,5);
beta_0 ~ normal(0,1);
beta_1 ~ normal(0,1);
sigma ~ cauchy(0,2.5);
//mixture likelihood
for (n in 1:N){
target+=log_mix(theta,
normal_lpdf(y[n] | beta_0[1] + beta_1[1]*x[n], sigma[1]),
normal_lpdf(y[n] | beta_0[2] + beta_1[2]*x[n], sigma[2]));
}
}
generated quantities {
//posterior component-membership probabilities:
//these let us see which points are assigned
//to which mixture component
matrix[N,2] p;
matrix[N,2] ps;
for (n in 1:N){
p[n,1] = log(theta)
+ normal_lpdf(y[n] | beta_0[1] + beta_1[1]*x[n], sigma[1]);
p[n,2] = log1m(theta)
+ normal_lpdf(y[n] | beta_0[2] + beta_1[2]*x[n], sigma[2]);
//normalise on the probability scale via log-sum-exp
ps[n,] = exp(p[n,] - log_sum_exp(p[n,]));
}
}
Run The Stan Model From R
library(tidyverse)
library(rstan)
#Simulate the data
N = 100
x = rnorm(N, 0, 3)
group = factor(sample(c('a','b'),size = N, replace = T))
y = model.matrix(~x*group)%*% c(0,1,0,2)
y = as.numeric(y) + rnorm(N)
d = data_frame(x = x, y = y)
d %>%
ggplot(aes(x,y))+
geom_point()
#Fit the model
N = length(x)
x = as.numeric(x)
y = y
fit = stan('mixmodel.stan',
data = list(N= N, x = x, y = y),
chains = 8,
iter = 4000)
Results
Dashed lines are ground truth, solid lines are estimated.
Original Answer
If you know which sample comes from which variety of daffodil, you can estimate an interaction between variety and soil PH.
Your model will look like
$$ y = \beta_0 + \beta_1 \text{variety} + \beta_2\text{PH} + \beta_3\text{variety}\cdot\text{PH} $$
Here is an example in R. I've generated some data that looks like this:
Clearly two different lines, and the lines correspond to two species. Here is how to estimate the lines using linear regression.
library(tidyverse)
#Simulate the data
N = 1000
ph = runif(N,5,8)
species = rbinom(N,1,0.5)
y = model.matrix(~ph*species)%*% c(20,1,20,-3) + rnorm(N, 0, 0.5)
y = as.numeric(y)
d = data_frame(ph = ph, species = species, y = y)
#Estimate the model
model = lm(y~species*ph, data = d)
summary(model)
And the result is
> summary(model)
Call:
lm(formula = y ~ species * ph, data = d)
Residuals:
Min 1Q Median 3Q Max
-1.61884 -0.31976 -0.00226 0.33521 1.46428
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 19.85850 0.17484 113.58 <2e-16 ***
species 20.31363 0.24626 82.49 <2e-16 ***
ph 1.01599 0.02671 38.04 <2e-16 ***
species:ph -3.03174 0.03756 -80.72 <2e-16 ***
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4997 on 996 degrees of freedom
Multiple R-squared: 0.8844, Adjusted R-squared: 0.8841
F-statistic: 2541 on 3 and 996 DF, p-value: < 2.2e-16
For species labeled 0, the line is approximately
$$ y = 19 + 1\cdot \text{PH}$$
For species labeled 1, the line is approximately
$$ y = 40 - 2 \cdot \text{PH} $$
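The same interaction fit can be checked with nothing but a hand-built design matrix and ordinary least squares. A minimal NumPy sketch (illustrative, mirroring the simulation above; the column order here is intercept, species, PH, species·PH):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
ph = rng.uniform(5, 8, n)
species = rng.integers(0, 2, n).astype(float)

# Design matrix for y = b0 + b1*species + b2*PH + b3*species*PH
X = np.column_stack([np.ones(n), species, ph, species * ph])
beta_true = np.array([20.0, 20.0, 1.0, -3.0])
y = X @ beta_true + rng.normal(0, 0.5, n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Species 0: intercept b0, slope b2.  Species 1: intercept b0+b1, slope b2+b3.
line0 = (beta_hat[0], beta_hat[2])
line1 = (beta_hat[0] + beta_hat[1], beta_hat[2] + beta_hat[3])
```

Reading the fitted lines off the coefficients this way recovers roughly $y = 20 + 1\cdot\text{PH}$ for species 0 and $y = 40 - 2\cdot\text{PH}$ for species 1, matching the lm output above.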
|
What if my linear regression data contains several co-mingled linear relationships?
|
EDIT: I originally thought OP knew which observations came from which species. OP's edit makes it clear that my original approach is not feasible. I'll leave it up for posterity, but the other answe
|
What if my linear regression data contains several co-mingled linear relationships?
EDIT: I originally thought OP knew which observations came from which species. OP's edit makes it clear that my original approach is not feasible. I'll leave it up for posterity, but the other answer is much better. As a consolation, I've coded up a mixture model in Stan. I'm not saying a Bayesian approach is particularly good in this case, but it is just something neat I can contribute.
Stan Code
data{
//Number of data points
int N;
real y[N];
real x[N];
}
parameters{
//mixing parameter
real<lower=0, upper =1> theta;
//Regression intercepts
real beta_0[2];
//Regression slopes.
ordered[2] beta_1;
//Regression noise
real<lower=0> sigma[2];
}
model{
//priors
theta ~ beta(5,5);
beta_0 ~ normal(0,1);
beta_1 ~ normal(0,1);
sigma ~ cauchy(0,2.5);
//mixture likelihood
for (n in 1:N){
target+=log_mix(theta,
normal_lpdf(y[n] | beta_0[1] + beta_1[1]*x[n], sigma[1]),
normal_lpdf(y[n] | beta_0[2] + beta_1[2]*x[n], sigma[2]));
}
}
generated quantities {
//posterior component-membership probabilities:
//these let us see which points are assigned
//to which mixture component
matrix[N,2] p;
matrix[N,2] ps;
for (n in 1:N){
p[n,1] = log(theta)
+ normal_lpdf(y[n] | beta_0[1] + beta_1[1]*x[n], sigma[1]);
p[n,2] = log1m(theta)
+ normal_lpdf(y[n] | beta_0[2] + beta_1[2]*x[n], sigma[2]);
//normalise on the probability scale via log-sum-exp
ps[n,] = exp(p[n,] - log_sum_exp(p[n,]));
}
}
Run The Stan Model From R
library(tidyverse)
library(rstan)
#Simulate the data
N = 100
x = rnorm(N, 0, 3)
group = factor(sample(c('a','b'),size = N, replace = T))
y = model.matrix(~x*group)%*% c(0,1,0,2)
y = as.numeric(y) + rnorm(N)
d = data_frame(x = x, y = y)
d %>%
ggplot(aes(x,y))+
geom_point()
#Fit the model
N = length(x)
x = as.numeric(x)
y = y
fit = stan('mixmodel.stan',
data = list(N= N, x = x, y = y),
chains = 8,
iter = 4000)
Results
Dashed lines are ground truth, solid lines are estimated.
Original Answer
If you know which sample comes from which variety of daffodil, you can estimate an interaction between variety and soil PH.
Your model will look like
$$ y = \beta_0 + \beta_1 \text{variety} + \beta_2\text{PH} + \beta_3\text{variety}\cdot\text{PH} $$
Here is an example in R. I've generated some data that looks like this:
Clearly two different lines, and the lines correspond to two species. Here is how to estimate the lines using linear regression.
library(tidyverse)
#Simulate the data
N = 1000
ph = runif(N,5,8)
species = rbinom(N,1,0.5)
y = model.matrix(~ph*species)%*% c(20,1,20,-3) + rnorm(N, 0, 0.5)
y = as.numeric(y)
d = data_frame(ph = ph, species = species, y = y)
#Estimate the model
model = lm(y~species*ph, data = d)
summary(model)
And the result is
> summary(model)
Call:
lm(formula = y ~ species * ph, data = d)
Residuals:
Min 1Q Median 3Q Max
-1.61884 -0.31976 -0.00226 0.33521 1.46428
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 19.85850 0.17484 113.58 <2e-16 ***
species 20.31363 0.24626 82.49 <2e-16 ***
ph 1.01599 0.02671 38.04 <2e-16 ***
species:ph -3.03174 0.03756 -80.72 <2e-16 ***
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4997 on 996 degrees of freedom
Multiple R-squared: 0.8844, Adjusted R-squared: 0.8841
F-statistic: 2541 on 3 and 996 DF, p-value: < 2.2e-16
For species labeled 0, the line is approximately
$$ y = 19 + 1\cdot \text{PH}$$
For species labeled 1, the line is approximately
$$ y = 40 - 2 \cdot \text{PH} $$
|
What if my linear regression data contains several co-mingled linear relationships?
EDIT: I originally thought OP knew which observations came from which species. OP's edit makes it clear that my original approach is not feasible. I'll leave it up for posterity, but the other answe
|
7,653
|
What if my linear regression data contains several co-mingled linear relationships?
|
The statistical approach is very similar to two of the answers above, but it deals a bit more with how to pick the number of latent classes when you lack prior knowledge. You can use information criteria or parsimony as a guide in choosing the number of latent classes.
Here is a Stata example using a sequence of finite mixture models (FMMs) with 2-4 latent classes/components. The first table gives the coefficients for latent class membership. These are a bit difficult to interpret, but they can be converted to probabilities later with estat lcprob. For each class, you also get an intercept and a ph slope parameter, followed by the latent class marginal probabilities and two in-sample ICs. The coefficient estimates are interpreted just like coefficients from a linear regression model. Here the smallest in-sample BIC tells you to pick the two-component model as the best one. AIC, strangely, selects the 3-component model. You can also use out-of-sample ICs or cross-validation to pick.
Another way to gauge that you are pushing the data too far is if the last class share is very small, since an additional component may simply reflect the presence of outliers in the data. In that case, parsimony favors simplifying the model and removing components. However, if you think that small classes are possible in your setting, this may not be the canary in the coal mine. Here parsimony favors the 2-component model since the third class only contains $.0143313 \cdot 300 \approx 4$ observations.
The FMM approach will not always work this well in practice if the classes are less stark. You may run into computational difficulties with too many latent classes, especially if you don't have enough data, or if the likelihood function has multiple local maxima.
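The information criteria Stata reports below are easy to reproduce by hand from the log likelihood, the number of free parameters, and the sample size — a quick Python check (the numbers plugged in come from the 2-component output below):

```python
import math

def aic(loglik, df):
    """Akaike's information criterion: 2k - 2*ll."""
    return 2 * df - 2 * loglik

def bic(loglik, df, n):
    """Bayesian information criterion: k*ln(n) - 2*ll."""
    return df * math.log(n) - 2 * loglik

# Two-component FMM from the output below: ll = -194.5215, df = 7, N = 300
a2 = aic(-194.5215, 7)        # ~403.043, matching Stata's estat ic
b2 = bic(-194.5215, 7, 300)   # ~428.9695
```

Running the same functions on the 3-component fit (ll = -187.4824, df = 11) reproduces AIC 396.96 and BIC 437.71, making the AIC/BIC disagreement in the text easy to verify.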
. clear
. /* Fake Data */
. set seed 10011979
. set obs 300
number of observations (_N) was 0, now 300
. gen ph = runiform(5.1, 7.8) in 1/150
(150 missing values generated)
. replace ph = runiform(5.3, 8) in 151/300
(150 real changes made)
. gen y = 41.55 + 5.185*ph + rnormal(0, .25) in 1/150
(150 missing values generated)
. replace y = 65.14 + 1.48148*ph + rnormal(0, 0.25) in 151/300
(150 real changes made)
.
. /* 2 Component FMM */
. fmm 2, nolog: regress y ph
Finite mixture model Number of obs = 300
Log likelihood = -194.5215
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.Class | (base outcome)
-------------+----------------------------------------------------------------
2.Class |
_cons | .0034359 .1220066 0.03 0.978 -.2356927 .2425645
------------------------------------------------------------------------------
Class : 1
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 5.173137 .0251922 205.35 0.000 5.123761 5.222513
_cons | 41.654 .1622011 256.80 0.000 41.3361 41.97191
-------------+----------------------------------------------------------------
var(e.y)| .0619599 .0076322 .0486698 .078879
------------------------------------------------------------------------------
Class : 2
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.486062 .026488 56.10 0.000 1.434147 1.537978
_cons | 65.10664 .1789922 363.74 0.000 64.75582 65.45746
-------------+----------------------------------------------------------------
var(e.y)| .0630583 .0075271 .0499042 .0796797
------------------------------------------------------------------------------
. estat lcprob
Latent class marginal probabilities Number of obs = 300
--------------------------------------------------------------
| Delta-method
| Margin Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
Class |
1 | .499141 .0305016 .4396545 .5586519
2 | .500859 .0305016 .4413481 .5603455
--------------------------------------------------------------
. estat ic
Akaike's information criterion and Bayesian information criterion
-----------------------------------------------------------------------------
Model | Obs ll(null) ll(model) df AIC BIC
-------------+---------------------------------------------------------------
. | 300 . -194.5215 7 403.043 428.9695
-----------------------------------------------------------------------------
Note: N=Obs used in calculating BIC; see [R] BIC note.
.
. /* 3 Component FMM */
. fmm 3, nolog: regress y ph
Finite mixture model Number of obs = 300
Log likelihood = -187.4824
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.Class | (base outcome)
-------------+----------------------------------------------------------------
2.Class |
_cons | -.0312504 .123099 -0.25 0.800 -.2725199 .2100192
-------------+----------------------------------------------------------------
3.Class |
_cons | -3.553227 .5246159 -6.77 0.000 -4.581456 -2.524999
------------------------------------------------------------------------------
Class : 1
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 5.173077 .0252246 205.08 0.000 5.123637 5.222516
_cons | 41.65412 .16241 256.48 0.000 41.3358 41.97243
-------------+----------------------------------------------------------------
var(e.y)| .0621157 .0076595 .0487797 .0790975
------------------------------------------------------------------------------
Class : 2
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.476049 .0257958 57.22 0.000 1.42549 1.526608
_cons | 65.18698 .1745018 373.56 0.000 64.84496 65.52899
-------------+----------------------------------------------------------------
var(e.y)| .0578413 .0070774 .0455078 .0735173
------------------------------------------------------------------------------
Class : 3
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.776746 .0020074 885.09 0.000 1.772811 1.78068
_cons | 62.76633 .0134072 4681.54 0.000 62.74005 62.79261
-------------+----------------------------------------------------------------
var(e.y)| 9.36e-06 6.85e-06 2.23e-06 .0000392
------------------------------------------------------------------------------
. estat lcprob
Latent class marginal probabilities Number of obs = 300
--------------------------------------------------------------
| Delta-method
| Margin Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
Class |
1 | .5005343 .0304855 .4410591 .5599944
2 | .4851343 .0306119 .4256343 .5450587
3 | .0143313 .0073775 .0051968 .038894
--------------------------------------------------------------
. estat ic
Akaike's information criterion and Bayesian information criterion
-----------------------------------------------------------------------------
Model | Obs ll(null) ll(model) df AIC BIC
-------------+---------------------------------------------------------------
. | 300 . -187.4824 11 396.9648 437.7064
-----------------------------------------------------------------------------
Note: N=Obs used in calculating BIC; see [R] BIC note.
.
. /* 4 Component FMM */
. fmm 4, nolog: regress y ph
Finite mixture model Number of obs = 300
Log likelihood = -188.06042
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.Class | (base outcome)
-------------+----------------------------------------------------------------
2.Class |
_cons | -.6450345 .5853396 -1.10 0.270 -1.792279 .50221
-------------+----------------------------------------------------------------
3.Class |
_cons | -.8026907 .6794755 -1.18 0.237 -2.134438 .5290568
-------------+----------------------------------------------------------------
4.Class |
_cons | -3.484714 .5548643 -6.28 0.000 -4.572229 -2.3972
------------------------------------------------------------------------------
Class : 1
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 5.173031 .0251474 205.71 0.000 5.123743 5.222319
_cons | 41.65574 .161938 257.23 0.000 41.33835 41.97313
-------------+----------------------------------------------------------------
var(e.y)| .0617238 .0076596 .0483975 .0787195
------------------------------------------------------------------------------
Class : 2
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.503764 .0371216 40.51 0.000 1.431007 1.576521
_cons | 65.13498 .2666049 244.31 0.000 64.61244 65.65751
-------------+----------------------------------------------------------------
var(e.y)| .0387473 .0188853 .0149062 .1007195
------------------------------------------------------------------------------
Class : 3
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.441334 .0443892 32.47 0.000 1.354333 1.528335
_cons | 65.26791 .2765801 235.98 0.000 64.72582 65.81
-------------+----------------------------------------------------------------
var(e.y)| .0307352 .010982 .0152578 .0619127
------------------------------------------------------------------------------
Class : 4
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.665207 .0079194 210.27 0.000 1.649685 1.680728
_cons | 63.42577 .0510052 1243.52 0.000 63.3258 63.52573
-------------+----------------------------------------------------------------
var(e.y)| .000096 .0000769 .00002 .0004611
------------------------------------------------------------------------------
. estat lcprob
Latent class marginal probabilities Number of obs = 300
--------------------------------------------------------------
| Delta-method
| Margin Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
Class |
1 | .4991443 .0304808 .4396979 .558615
2 | .2618733 .1506066 .0715338 .6203076
3 | .2236773 .150279 .0501835 .6110804
4 | .015305 .008329 .005234 .0438994
--------------------------------------------------------------
. estat ic
Akaike's information criterion and Bayesian information criterion
-----------------------------------------------------------------------------
Model | Obs ll(null) ll(model) df AIC BIC
-------------+---------------------------------------------------------------
. | 300 . -188.0604 15 406.1208 461.6776
-----------------------------------------------------------------------------
Note: N=Obs used in calculating BIC; see [R] BIC note.
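The same model can also be estimated outside Stata. Below is a minimal Python/NumPy sketch of the EM algorithm behind a 2-component finite mixture of linear regressions, run on data simulated with the same data-generating process as above; the function name `fit_mixreg` and all tuning constants (number of restarts, iteration count, variance floor) are illustrative choices, not a reference implementation.

```python
import numpy as np

def fit_mixreg(x, y, K=2, restarts=40, iters=150, seed=0):
    """EM for a K-component mixture of linear regressions (a sketch of
    the model that `fmm: regress` estimates)."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    n = len(y)
    best = None
    for _ in range(restarts):
        # Initialise each component with a line through 2 random points.
        coefs = np.stack([np.linalg.lstsq(X[i], y[i], rcond=None)[0]
                          for i in [rng.choice(n, 2, replace=False)
                                    for _ in range(K)]])
        sigmas = np.full(K, y.std())
        pis = np.full(K, 1.0 / K)
        try:
            for _ in range(iters):
                # E-step: responsibilities from component Gaussian densities.
                resid = y[:, None] - X @ coefs.T              # (n, K)
                logd = (np.log(pis) - np.log(sigmas)
                        - 0.5 * (resid / sigmas) ** 2)
                logd -= logd.max(axis=1, keepdims=True)
                resp = np.exp(logd)
                resp /= resp.sum(axis=1, keepdims=True)
                # M-step: weighted least squares for each component.
                for k in range(K):
                    w = resp[:, k]
                    A = X.T @ (X * w[:, None]) + 1e-8 * np.eye(2)
                    coefs[k] = np.linalg.solve(A, X.T @ (w * y))
                    s2 = (w * (y - X @ coefs[k]) ** 2).sum() / max(w.sum(), 1e-12)
                    sigmas[k] = max(np.sqrt(s2), 1e-3)        # floor avoids collapse
                pis = np.clip(resp.mean(axis=0), 1e-6, None)
                pis /= pis.sum()
        except np.linalg.LinAlgError:
            continue
        # Keep the restart with the highest log-likelihood.
        resid = y[:, None] - X @ coefs.T
        dens = pis / (np.sqrt(2 * np.pi) * sigmas) * np.exp(-0.5 * (resid / sigmas) ** 2)
        ll = np.log(dens.sum(axis=1) + 1e-300).sum()
        if best is None or ll > best[0]:
            best = (ll, pis.copy(), coefs.copy(), sigmas.copy())
    return best

# Same data-generating process as the Stata example.
rng = np.random.default_rng(1)
x = np.concatenate([rng.uniform(5.1, 7.8, 150), rng.uniform(5.3, 8.0, 150)])
y = np.concatenate([41.55 + 5.185 * x[:150] + rng.normal(0, .25, 150),
                    65.14 + 1.48148 * x[150:] + rng.normal(0, .25, 150)])
ll, pis, coefs, sigmas = fit_mixreg(x, y)
print(sorted(coefs[:, 1]))   # slopes, roughly 1.48 and 5.19
```

With classes as well separated as these, the best restart typically recovers slopes close to 5.185 and 1.48148 and class shares close to one half, matching the 2-component Stata fit.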
|
What if my linear regression data contains several co-mingled linear relationships?
|
The statistical approach is very similar to two of the answers above, but it deals a bit more with how to pick the number of latent classes if you lack prior knowledge. You can use information criteria
|
What if my linear regression data contains several co-mingled linear relationships?
The statistical approach is very similar to two of the answers above, but it deals a bit more with how to pick the number of latent classes if you lack prior knowledge. You can use information criteria or parsimony as a guide in choosing the number of latent classes.
Here is a Stata example using a sequence of finite mixture models (FMMs) with 2-4 latent classes/components. The first table gives the coefficients for latent class membership. These are a bit difficult to interpret directly, but they can be converted to probabilities later with estat lcprob. For each class, you also get an intercept and a ph slope parameter, followed by the latent class marginal probabilities and two in-sample ICs. The coefficient estimates are interpreted just like the coefficients from a linear regression model. Here the smallest in-sample BIC tells you to pick the 2 component model as the best one, while AIC, strangely, selects the 3 component model. You can also use out-of-sample ICs, or cross validation, to pick.
Another way to gauge that you are pushing the data too far is if the last class share is very small, since additional components may simply reflect the presence of outliers in the data. In that case, parsimony favors simplifying the model and removing components. However, if you think that small classes are possible in your setting, this may not be the canary in the coal mine. Here parsimony favors the 2 component model since the third class only contains $.0143313 \cdot 300 \approx 4$ observations.
The FMM approach will not always work this well in practice if the classes are less stark. You may run into computational difficulties with too many latent classes, especially if you don't have enough data, or the likelihood function has multiple local maxima.
. clear
. /* Fake Data */
. set seed 10011979
. set obs 300
number of observations (_N) was 0, now 300
. gen ph = runiform(5.1, 7.8) in 1/150
(150 missing values generated)
. replace ph = runiform(5.3, 8) in 151/300
(150 real changes made)
. gen y = 41.55 + 5.185*ph + rnormal(0, .25) in 1/150
(150 missing values generated)
. replace y = 65.14 + 1.48148*ph + rnormal(0, 0.25) in 151/300
(150 real changes made)
.
. /* 2 Component FMM */
. fmm 2, nolog: regress y ph
Finite mixture model Number of obs = 300
Log likelihood = -194.5215
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.Class | (base outcome)
-------------+----------------------------------------------------------------
2.Class |
_cons | .0034359 .1220066 0.03 0.978 -.2356927 .2425645
------------------------------------------------------------------------------
Class : 1
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 5.173137 .0251922 205.35 0.000 5.123761 5.222513
_cons | 41.654 .1622011 256.80 0.000 41.3361 41.97191
-------------+----------------------------------------------------------------
var(e.y)| .0619599 .0076322 .0486698 .078879
------------------------------------------------------------------------------
Class : 2
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.486062 .026488 56.10 0.000 1.434147 1.537978
_cons | 65.10664 .1789922 363.74 0.000 64.75582 65.45746
-------------+----------------------------------------------------------------
var(e.y)| .0630583 .0075271 .0499042 .0796797
------------------------------------------------------------------------------
. estat lcprob
Latent class marginal probabilities Number of obs = 300
--------------------------------------------------------------
| Delta-method
| Margin Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
Class |
1 | .499141 .0305016 .4396545 .5586519
2 | .500859 .0305016 .4413481 .5603455
--------------------------------------------------------------
. estat ic
Akaike's information criterion and Bayesian information criterion
-----------------------------------------------------------------------------
Model | Obs ll(null) ll(model) df AIC BIC
-------------+---------------------------------------------------------------
. | 300 . -194.5215 7 403.043 428.9695
-----------------------------------------------------------------------------
Note: N=Obs used in calculating BIC; see [R] BIC note.
.
. /* 3 Component FMM */
. fmm 3, nolog: regress y ph
Finite mixture model Number of obs = 300
Log likelihood = -187.4824
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.Class | (base outcome)
-------------+----------------------------------------------------------------
2.Class |
_cons | -.0312504 .123099 -0.25 0.800 -.2725199 .2100192
-------------+----------------------------------------------------------------
3.Class |
_cons | -3.553227 .5246159 -6.77 0.000 -4.581456 -2.524999
------------------------------------------------------------------------------
Class : 1
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 5.173077 .0252246 205.08 0.000 5.123637 5.222516
_cons | 41.65412 .16241 256.48 0.000 41.3358 41.97243
-------------+----------------------------------------------------------------
var(e.y)| .0621157 .0076595 .0487797 .0790975
------------------------------------------------------------------------------
Class : 2
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.476049 .0257958 57.22 0.000 1.42549 1.526608
_cons | 65.18698 .1745018 373.56 0.000 64.84496 65.52899
-------------+----------------------------------------------------------------
var(e.y)| .0578413 .0070774 .0455078 .0735173
------------------------------------------------------------------------------
Class : 3
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.776746 .0020074 885.09 0.000 1.772811 1.78068
_cons | 62.76633 .0134072 4681.54 0.000 62.74005 62.79261
-------------+----------------------------------------------------------------
var(e.y)| 9.36e-06 6.85e-06 2.23e-06 .0000392
------------------------------------------------------------------------------
. estat lcprob
Latent class marginal probabilities Number of obs = 300
--------------------------------------------------------------
| Delta-method
| Margin Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
Class |
1 | .5005343 .0304855 .4410591 .5599944
2 | .4851343 .0306119 .4256343 .5450587
3 | .0143313 .0073775 .0051968 .038894
--------------------------------------------------------------
. estat ic
Akaike's information criterion and Bayesian information criterion
-----------------------------------------------------------------------------
Model | Obs ll(null) ll(model) df AIC BIC
-------------+---------------------------------------------------------------
. | 300 . -187.4824 11 396.9648 437.7064
-----------------------------------------------------------------------------
Note: N=Obs used in calculating BIC; see [R] BIC note.
.
. /* 4 Component FMM */
. fmm 4, nolog: regress y ph
Finite mixture model Number of obs = 300
Log likelihood = -188.06042
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.Class | (base outcome)
-------------+----------------------------------------------------------------
2.Class |
_cons | -.6450345 .5853396 -1.10 0.270 -1.792279 .50221
-------------+----------------------------------------------------------------
3.Class |
_cons | -.8026907 .6794755 -1.18 0.237 -2.134438 .5290568
-------------+----------------------------------------------------------------
4.Class |
_cons | -3.484714 .5548643 -6.28 0.000 -4.572229 -2.3972
------------------------------------------------------------------------------
Class : 1
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 5.173031 .0251474 205.71 0.000 5.123743 5.222319
_cons | 41.65574 .161938 257.23 0.000 41.33835 41.97313
-------------+----------------------------------------------------------------
var(e.y)| .0617238 .0076596 .0483975 .0787195
------------------------------------------------------------------------------
Class : 2
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.503764 .0371216 40.51 0.000 1.431007 1.576521
_cons | 65.13498 .2666049 244.31 0.000 64.61244 65.65751
-------------+----------------------------------------------------------------
var(e.y)| .0387473 .0188853 .0149062 .1007195
------------------------------------------------------------------------------
Class : 3
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.441334 .0443892 32.47 0.000 1.354333 1.528335
_cons | 65.26791 .2765801 235.98 0.000 64.72582 65.81
-------------+----------------------------------------------------------------
var(e.y)| .0307352 .010982 .0152578 .0619127
------------------------------------------------------------------------------
Class : 4
Response : y
Model : regress
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
ph | 1.665207 .0079194 210.27 0.000 1.649685 1.680728
_cons | 63.42577 .0510052 1243.52 0.000 63.3258 63.52573
-------------+----------------------------------------------------------------
var(e.y)| .000096 .0000769 .00002 .0004611
------------------------------------------------------------------------------
. estat lcprob
Latent class marginal probabilities Number of obs = 300
--------------------------------------------------------------
| Delta-method
| Margin Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
Class |
1 | .4991443 .0304808 .4396979 .558615
2 | .2618733 .1506066 .0715338 .6203076
3 | .2236773 .150279 .0501835 .6110804
4 | .015305 .008329 .005234 .0438994
--------------------------------------------------------------
. estat ic
Akaike's information criterion and Bayesian information criterion
-----------------------------------------------------------------------------
Model | Obs ll(null) ll(model) df AIC BIC
-------------+---------------------------------------------------------------
. | 300 . -188.0604 15 406.1208 461.6776
-----------------------------------------------------------------------------
Note: N=Obs used in calculating BIC; see [R] BIC note.
|
What if my linear regression data contains several co-mingled linear relationships?
The statistical approach is very similar to two of the answers above, but it deals a bit more with how to pick the number of latent classes if you lack prior knowledge. You can use information criteria
|
7,654
|
What if my linear regression data contains several co-mingled linear relationships?
|
I'll focus on the question of statistical significance since Dason already covered the modeling part.
I am unfamiliar with any formal tests for this (which I am sure exist), so I'll just throw some ideas out there (and I'll probably add R code and technical details later).
First, it is convenient to infer the classes. Presuming you have two lines fit to the data, you can approximately reconstruct the two classes by assigning each point to the class of the line closest to it. For points near the intersection, you will run into issues, but for now just ignore those (there may be a way to get around this, but for now just hope that this won't change much).
The way to do this is to choose $x_{l}$ and $x_{r}$ (soil pH values) with $x_{l} \leq x_{r}$ such that the parts left of $x_{l}$ are sufficiently separated and the parts right of $x_{r}$ are sufficiently separated (the closest point where the distributions don't overlap).
Then there are two natural ways I see to go about doing this.
The less fun way is to just run your original dataset combined with the inferred class labels through a linear regression as in Demetri's answer.
A more interesting way to do so would be through a modified version of ANOVA.
The point is to create an artificial dataset that represents the two lines (with similar spread between them) and then apply ANOVA. Technically, you need to do this once for the left side, and once for the right (i.e. you'll have two artificial datasets).
We start with the left, and apply a simple averaging approach to get two groups. Basically, each point in say the first class is of the form
$$ y^{(i)}_{1} = \beta_{1,1} x_{1}^{(i)} + \beta_{1,0} + e_{1}^{(i)}$$
so we are going to replace the linear expression
$\beta_{1,1} x_{1}^{(i)} + \beta_{1,0}$
by a constant, namely the average value of the linear term or
$$ \beta_{1,1} x^{\mathrm{avg}}_{l} + \beta_{1, 0}$$
where $x^{\mathrm{avg}}_{l}$ is literally the average $x$ value for the left side (importantly, this is over both classes, since that makes things more consistent). That is, we replace $y_{1}^{(i)}$ with
$$ \tilde{y}_{1}^{(i)} = \beta_{1,1} x^{\mathrm{avg}}_{l} + \beta_{1, 0} + e_{1}^{(i)},$$
and we proceed similarly for the second class. That is, your new dataset consists of the collection of $\tilde{y}_{1}^{(i)}$ and similarly $\tilde{y}_{2}^{(i)}$.
Note that both approaches naturally generalize to $N$ classes.
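As a concrete sketch of the class-inference step plus the averaging construction above, here is a hedged NumPy/SciPy example on simulated data; the two fitted lines, the cutoff $x_{l} = 6$, and every variable name are assumptions made up for the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two lines assumed already fitted to the co-mingled data: (slope, intercept).
lines = np.array([[5.185, 41.55], [1.48148, 65.14]])

# Simulated data from those two relationships.
x = rng.uniform(5.1, 8.0, 300)
true_cls = rng.integers(0, 2, 300)
y = lines[true_cls, 0] * x + lines[true_cls, 1] + rng.normal(0, .25, 300)

# 1) Infer classes: assign each point to the nearest line (vertical distance).
pred = lines[:, 0][:, None] * x + lines[:, 1][:, None]     # (2, n) fitted values
cls = np.abs(y - pred).argmin(axis=0)

# 2) Left side only (x_l = 6 here, where the two lines are well separated):
#    freeze each point's linear term at the average x, keep its residual.
left = x < 6.0
x_avg = x[left].mean()                                     # average x over both classes
e = y - pred[cls, np.arange(x.size)]                       # residual around own line
y_tilde = lines[cls, 0] * x_avg + lines[cls, 1] + e

# 3) One-way ANOVA on the two artificial groups (with 2 groups this is a t-test).
f, p = stats.f_oneway(y_tilde[left & (cls == 0)], y_tilde[left & (cls == 1)])
print(p)   # essentially zero here: the groups clearly differ
```

The same three steps would be repeated for the right side with its own cutoff $x_{r}$.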
|
What if my linear regression data contains several co-mingled linear relationships?
|
I'll focus on the question of statistical significance since Dason already covered the modeling part.
I am unfamiliar with any formal tests for this (which I am sure exist), so I'll just throw some id
|
What if my linear regression data contains several co-mingled linear relationships?
I'll focus on the question of statistical significance since Dason already covered the modeling part.
I am unfamiliar with any formal tests for this (which I am sure exist), so I'll just throw some ideas out there (and I'll probably add R code and technical details later).
First, it is convenient to infer the classes. Presuming you have two lines fit to the data, you can approximately reconstruct the two classes by assigning each point to the class of the line closest to it. For points near the intersection, you will run into issues, but for now just ignore those (there may be a way to get around this, but for now just hope that this won't change much).
The way to do this is to choose $x_{l}$ and $x_{r}$ (soil pH values) with $x_{l} \leq x_{r}$ such that the parts left of $x_{l}$ are sufficiently separated and the parts right of $x_{r}$ are sufficiently separated (the closest point where the distributions don't overlap).
Then there are two natural ways I see to go about doing this.
The less fun way is to just run your original dataset combined with the inferred class labels through a linear regression as in Demetri's answer.
A more interesting way to do so would be through a modified version of ANOVA.
The point is to create an artificial dataset that represents the two lines (with similar spread between them) and then apply ANOVA. Technically, you need to do this once for the left side, and once for the right (i.e. you'll have two artificial datasets).
We start with the left, and apply a simple averaging approach to get two groups. Basically, each point in say the first class is of the form
$$ y^{(i)}_{1} = \beta_{1,1} x_{1}^{(i)} + \beta_{1,0} + e_{1}^{(i)}$$
so we are going to replace the linear expression
$\beta_{1,1} x_{1}^{(i)} + \beta_{1,0}$
by a constant, namely the average value of the linear term or
$$ \beta_{1,1} x^{\mathrm{avg}}_{l} + \beta_{1, 0}$$
where $x^{\mathrm{avg}}_{l}$ is literally the average $x$ value for the left side (importantly, this is over both classes, since that makes things more consistent). That is, we replace $y_{1}^{(i)}$ with
$$ \tilde{y}_{1}^{(i)} = \beta_{1,1} x^{\mathrm{avg}}_{l} + \beta_{1, 0} + e_{1}^{(i)},$$
and we proceed similarly for the second class. That is, your new dataset consists of the collection of $\tilde{y}_{1}^{(i)}$ and similarly $\tilde{y}_{2}^{(i)}$.
Note that both approaches naturally generalize to $N$ classes.
|
What if my linear regression data contains several co-mingled linear relationships?
I'll focus on the question of statistical significance since Dason already covered the modeling part.
I am unfamiliar with any formal tests for this (which I am sure exist), so I'll just throw some id
|
7,655
|
What if my linear regression data contains several co-mingled linear relationships?
|
Is it possible that including both in the same chart is an error? Given that the varieties behave completely differently, is there any value in overlapping the data? It seems to me that you are looking for impacts to a species of daffodil, not the impacts of similar environments on different daffodils. If you have lost the data that helps distinguish species "A" from species "B", you can simply group behavior "A" and behavior "B" and include the discovery of two species in your narrative. Or, if you really want one chart, simply use two data sets on the same axis. I don't have anywhere near the expertise that I see in the other responses given, so I have to find less "skilled" methods. I would run a data analysis in a worksheet environment where the equations are easier to develop. Then, once the groupings become obvious, create the two separate data tables followed by converting them into charts/graphs. I work with a great deal of data and I often find that my assumptions of differing correlations turn out wrong; that is what data is supposed to help us discover. Once I learn that my assumptions are wrong, I display the data based upon the behaviors discovered and discuss those behaviors and the resulting statistical analyses as part of the narrative.
|
What if my linear regression data contains several co-mingled linear relationships?
|
Is it possible that including both in the same chart is an error? Given that the varieties behave completely differently, is there any value in overlapping the data? It seems to me that you are looking
|
What if my linear regression data contains several co-mingled linear relationships?
Is it possible that including both in the same chart is an error? Given that the varieties behave completely differently, is there any value in overlapping the data? It seems to me that you are looking for impacts to a species of daffodil, not the impacts of similar environments on different daffodils. If you have lost the data that helps distinguish species "A" from species "B", you can simply group behavior "A" and behavior "B" and include the discovery of two species in your narrative. Or, if you really want one chart, simply use two data sets on the same axis. I don't have anywhere near the expertise that I see in the other responses given, so I have to find less "skilled" methods. I would run a data analysis in a worksheet environment where the equations are easier to develop. Then, once the groupings become obvious, create the two separate data tables followed by converting them into charts/graphs. I work with a great deal of data and I often find that my assumptions of differing correlations turn out wrong; that is what data is supposed to help us discover. Once I learn that my assumptions are wrong, I display the data based upon the behaviors discovered and discuss those behaviors and the resulting statistical analyses as part of the narrative.
|
What if my linear regression data contains several co-mingled linear relationships?
Is it possible that including both in the same chart is an error? Given that the varieties behave completely differently, is there any value in overlapping the data? It seems to me that you are looking
|
7,656
|
What are variational autoencoders and to what learning tasks are they used?
|
Even though variational autoencoders (VAEs) are easy to implement and train, explaining them is not simple at all, because they blend concepts from Deep Learning and Variational Bayes, and the Deep Learning and Probabilistic Modeling communities use different terms for the same concepts. Thus when explaining VAEs you risk either concentrating on the statistical model part, leaving the reader without a clue about how to actually implement it, or, vice versa, concentrating on the network architecture and loss function, in which case the Kullback-Leibler term seems to be pulled out of thin air. I'll try to strike a middle ground here, starting from the model but giving enough details to actually implement it in practice, or understand someone else's implementation.
VAEs are generative models
Unlike classical (sparse, denoising, etc.) autoencoders, VAEs are generative models, like GANs. By a generative model I mean a model which learns the probability distribution $p(\mathbf{x})$ over the input space $\mathcal{X}$. This means that after we have trained such a model, we can then sample from (our approximation of) $p(\mathbf{x})$. If our training set is made of handwritten digits (MNIST), then after training the generative model is able to create images which look like handwritten digits, even though they're not "copies" of the images in the training set.
Learning the distribution of the images in the training set implies that images which look like handwritten digits should have a high probability of being generated, while images which look like the Jolly Roger or random noise should have a low probability. In other words, it means learning about the dependencies among pixels: if our image is a $28\times 28=784$ pixel grayscale image from MNIST, the model should learn that if a pixel is very bright, then there's a significant probability that some neighboring pixels are bright too; that if we have a long, slanted line of bright pixels, we may have another smaller, horizontal line of pixels above this one (a 7); etc.
VAEs are latent variable models
The VAE is a latent variables model: this means that $\mathbf{x}$, the random vector of the 784 pixel intensities (the observed variables), is modeled as a (possibly very complicated) function of a random vector $\mathbf{z}\in\mathcal{Z}$ of lower dimensionality, whose components are unobserved (latent) variables. When does such a model make sense? For example, in the MNIST case we think that the handwritten digits belong to a manifold of dimension much smaller than the dimension of $\mathcal{X}$, because the vast majority of random arrangements of 784 pixel intensities don't look at all like handwritten digits. Intuitively we would expect the dimension to be at least 10 (the number of digits), but it's most likely larger because each digit can be written in different ways. Some differences are unimportant for the quality of the final image (for example, global rotations and translations), but others are important. So in this case the latent model makes sense. More on this later. Note that, amazingly, even if our intuition tells us that the dimension should be about 10, we can definitely use just 2 latent variables to encode the MNIST dataset with a VAE (though results won't be pretty). The reason is that even a single real variable can encode infinitely many classes, because it can assume all possible integer values and more. Of course, if the classes have significant overlap among them (such as 9 and 8 or 7 and 1 in MNIST), even the most complicated function of just two latent variables will do a poor job of generating clearly discernible samples for each class. More on this later.
VAEs assume a multivariate parametric distribution $q(\mathbf{z}\vert\mathbf{x},\boldsymbol{\lambda})$ (where $\boldsymbol{\lambda}$ are the parameters of $q$), and they learn its parameters. The use of a parametric pdf for $\mathbf{z}$, which prevents the number of parameters of a VAE from growing without bound as the training set grows, is called amortization in VAE lingo (yeah, I know...).
The decoder network
We start from the decoder network because the VAE is a generative model, and the only part of the VAE which is actually used to generate new images is the decoder. The encoder network is only used at inference (training) time.
The goal of the decoder network is to generate new random vectors $\mathbf{x}$ belonging to the input space $\mathcal{X}$, i.e., new images, starting from realizations of the latent vector $\mathbf{z}$. This clearly means that it must learn the conditional distribution $p(\mathbf{x}\vert\mathbf{z})$. For VAEs this distribution is often assumed to be a multivariate Gaussian$^1$:
$$p_{\boldsymbol{\phi}}(\mathbf{x}\vert\mathbf{z}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}(\mathbf{z}; \boldsymbol{\phi}), \boldsymbol{\sigma}(\mathbf{z}; \boldsymbol{\phi})^2I) $$
$\boldsymbol{\phi}$ is the vector of weights (and biases) of the decoder network. The vectors $\boldsymbol{\mu}(\mathbf{z};\boldsymbol{\phi})$ and $\boldsymbol{\sigma}(\mathbf{z}; \boldsymbol{\phi})$ are complex, unknown nonlinear functions, modeled by the decoder network: neural networks are powerful nonlinear function approximators.
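To make the shapes concrete, here is a minimal NumPy sketch of such a decoder with a single tanh hidden layer, used to draw one new sample; the layer sizes, the weight initialization, and all names are illustrative assumptions, not part of any reference VAE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_h, d_x = 2, 16, 784          # latent, hidden and pixel dimensions (toy sizes)

# Hypothetical decoder weights phi (one tanh hidden layer, for illustration).
W1, b1 = rng.normal(0, 0.1, (d_h, d_z)), np.zeros(d_h)
Wm, bm = rng.normal(0, 0.1, (d_x, d_h)), np.zeros(d_x)
Ws, bs = rng.normal(0, 0.1, (d_x, d_h)), np.zeros(d_x)

def decoder(z):
    """Return mu(z; phi) and sigma(z; phi), the parameters of p(x|z)."""
    h = np.tanh(W1 @ z + b1)        # nonlinear hidden layer
    mu = Wm @ h + bm                # mean of the Gaussian
    sigma = np.exp(Ws @ h + bs)     # exp keeps the standard deviations positive
    return mu, sigma

# Generating a new "image": z from the N(0, I) prior, then x from p(x|z).
z = rng.standard_normal(d_z)
mu, sigma = decoder(z)
x = mu + sigma * rng.standard_normal(d_x)   # one draw from N(mu, diag(sigma^2))
```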
As noted by @amoeba in the comments, there is a striking similarity between the decoder and a classic latent variables model: Factor Analysis. In Factor Analysis, you assume the model:
$$ \mathbf{x}\vert\mathbf{z}\sim\mathcal{N}(\mathbf{W}\mathbf{z}+\boldsymbol{\mu}, \boldsymbol{\sigma}^2I),\ \mathbf{z}\sim\mathcal{N}(0,I)$$
Both models (FA & the decoder) assume that the conditional distribution of the observable variables $\mathbf{x}$ on the latent variables $\mathbf{z}$ is Gaussian, and that the $\mathbf{z}$ themselves are standard Gaussians. The difference is that the decoder doesn't assume that the mean of $p(\mathbf{x}|\mathbf{z})$ is linear in $\mathbf{z}$, nor does it assume that the standard deviation is a constant vector. Instead, it models both as complex nonlinear functions of $\mathbf{z}$. In this respect, it can be seen as nonlinear Factor Analysis. See here for an insightful discussion of this connection between FA and VAE. Since FA with an isotropic covariance matrix is just PPCA, this also ties in to the well-known result that a linear autoencoder reduces to PCA.
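The FA generative process is easy to check numerically. The sketch below (with illustrative parameter values) samples from the linear model and verifies that the marginal covariance of $\mathbf{x}$ is $\mathbf{W}\mathbf{W}^T+\sigma^2 I$, as FA predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[1.0, 0.5], [0.0, 1.0], [0.5, 0.5]])      # 3 observed, 2 latent dims
mu = np.array([1.0, -1.0, 0.0])
sigma = 0.3

n = 200_000
z = rng.standard_normal((n, 2))                          # z ~ N(0, I)
x = z @ W.T + mu + sigma * rng.standard_normal((n, 3))   # x|z ~ N(Wz + mu, sigma^2 I)

emp_cov = np.cov(x, rowvar=False)
model_cov = W @ W.T + sigma**2 * np.eye(3)               # marginal covariance of FA
```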
Let's go back to the decoder: how do we learn $\boldsymbol{\phi}$? Intuitively we want latent variables $\mathbf{z}$ which maximize the likelihood of generating the $\mathbf{x}_i$ in the training set $D_n$. In other words we want to compute the posterior probability distribution of the $\mathbf{z}$, given the data:
$$p(\mathbf{z}\vert\mathbf{x})=\frac{p_{\boldsymbol{\phi}}(\mathbf{x}\vert\mathbf{z})p(\mathbf{z})}{p(\mathbf{x})}$$
We assume a $\mathcal{N}(0,I)$ prior on $\mathbf{z}$, and we're left with the usual issue in Bayesian inference that computing $p(\mathbf{x})$ (the evidence) is hard (a multidimensional integral). What's more, since here $\boldsymbol{\mu}(\mathbf{z};\boldsymbol{\phi})$ is unknown, we can't compute it anyway. Enter Variational Inference, the tool which gives Variational Autoencoders their name.
Variational Inference for the VAE model
Variational Inference is a tool to perform approximate Bayesian Inference for very complex models. It's not an overly complex tool, but my answer is already too long and I won't go into a detailed explanation of VI. You can have a look at this answer and the references therein if you're curious:
https://stats.stackexchange.com/a/270569/58675
It suffices to say that VI looks for an approximation to $p(\mathbf{z}\vert \mathbf{x})$ in a parametric family of distributions $q(\mathbf{z}\vert \mathbf{x},\boldsymbol{\lambda})$, where, as noted above, $\boldsymbol{\lambda}$ are the parameters of the family. We look for the parameters which minimize the Kullback-Leibler divergence between our target distribution $p(\mathbf{z}\vert \mathbf{x})$ and $q(\mathbf{z}\vert \mathbf{x},\boldsymbol{\lambda})$:
$$\min_{\boldsymbol{\lambda}}\mathcal{D}[p(\mathbf{z}\vert \mathbf{x})\vert\vert q(\mathbf{z}\vert \mathbf{x},\boldsymbol{\lambda})]$$
Again, we cannot minimize this directly because the definition of the Kullback-Leibler divergence includes the evidence. Introducing the ELBO (Evidence Lower BOund) and after some algebraic manipulations, we finally arrive at:
$$ELBO(\boldsymbol{\lambda})= E_{q(\boldsymbol{z}\vert \mathbf{x},\boldsymbol{\lambda})}[\log p(\mathbf{x}\vert\boldsymbol{z})]-\mathcal{D}[q(\boldsymbol{z}\vert \mathbf{x},\boldsymbol{\lambda})\vert\vert p(\boldsymbol{z})]$$
Since the ELBO is a lower bound on evidence (see the above link), maximizing the ELBO is not exactly equivalent to maximizing the likelihood of data given $\boldsymbol{\lambda}$ (after all, VI is a tool for approximate Bayesian inference), but it goes in the right direction.
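The identity $\log p(\mathbf{x}) = ELBO + \mathcal{D}[q\vert\vert p(\mathbf{z}\vert\mathbf{x})]$ behind this statement can be verified exactly in a toy conjugate model where everything is Gaussian and available in closed form. The 1-D model $z\sim\mathcal{N}(0,1)$, $x\vert z\sim\mathcal{N}(z,1)$ below is chosen purely for illustration:

```python
import numpy as np

def elbo_kl_identity(x, m, s):
    """For prior z~N(0,1), likelihood x|z~N(z,1) and q(z)=N(m, s^2),
    return (ELBO, KL(q || posterior), log evidence), all in closed form."""
    log2pi = np.log(2 * np.pi)
    E_loglik = -0.5 * log2pi - 0.5 * ((x - m) ** 2 + s ** 2)   # E_q[log p(x|z)]
    E_logprior = -0.5 * log2pi - 0.5 * (m ** 2 + s ** 2)       # E_q[log p(z)]
    E_logq = -0.5 * np.log(2 * np.pi * s ** 2) - 0.5           # E_q[log q(z)]
    elbo = E_loglik + E_logprior - E_logq

    mp, sp = x / 2, np.sqrt(0.5)                               # exact posterior N(x/2, 1/2)
    kl = np.log(sp / s) + (s ** 2 + (m - mp) ** 2) / (2 * sp ** 2) - 0.5
    log_evidence = -0.5 * np.log(2 * np.pi * 2) - x ** 2 / 4   # marginally, x ~ N(0, 2)
    return elbo, kl, log_evidence

elbo, kl, logev = elbo_kl_identity(x=1.3, m=0.4, s=0.9)
# log p(x) = ELBO + KL, so ELBO <= log p(x), with equality iff q is the posterior
```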
In order to make inference, we need to specify the parametric family $q(\boldsymbol{z}\vert \mathbf{x},\boldsymbol{\lambda})$. In most VAEs we choose a multivariate, uncorrelated Gaussian distribution
$$q(\mathbf{z}\vert \mathbf{x},\boldsymbol{\lambda}) = \mathcal{N}(\mathbf{z}\vert\boldsymbol{\mu}(\mathbf{x}), \boldsymbol{\sigma}^2(\mathbf{x})I) $$
This is the same choice we made for $p(\mathbf{x}\vert\mathbf{z})$, though we may have chosen a different parametric family. As before, we can estimate these complex nonlinear functions by introducing a neural network model. Since this model accepts input images and returns parameters of the distribution of the latent variables, we call it the encoder network.
The encoder network
Also called the inference network, this is only used at training time.
As noted above, the encoder must approximate $\boldsymbol{\mu}(\mathbf{x})$ and $\boldsymbol{\sigma}(\mathbf{x})$; thus if we have, say, 24 latent variables, the output of the encoder is a vector of dimension $d=48$. The encoder has weights (and biases) $\boldsymbol{\theta}$. To learn $\boldsymbol{\theta}$, we can finally write the ELBO in terms of the parameters $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ of the encoder and decoder network, as well as the training set points:
$$ELBO(\boldsymbol{\theta},\boldsymbol{\phi})= \sum_i E_{q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x}_i,\boldsymbol{\lambda})}[\log p_{\boldsymbol{\phi}}(\mathbf{x}_i\vert\boldsymbol{z})]-\mathcal{D}[q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x}_i,\boldsymbol{\lambda})\vert\vert p(\boldsymbol{z})]$$
We can finally conclude. The opposite of the ELBO, as a function of $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$, is used as the loss function of the VAE. We use SGD to minimize this loss, i.e., maximize the ELBO. Since the ELBO is a lower bound on the evidence, this goes in the direction of maximizing the evidence, and thus generating new images which are optimally similar to those in the training set. The first term in the ELBO is the expected log-likelihood of the training set points, thus it encourages the decoder to produce images which are similar to the training ones. The second term can be interpreted as a regularizer: it encourages the encoder to generate a distribution for the latent variables which is similar to $p(\boldsymbol{z})=\mathcal{N}(0,I)$. But by introducing the probability model first, we understood where the whole expression comes from: the minimization of the Kullback-Leibler divergence between the approximate posterior $q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x},\boldsymbol{\lambda})$ and the model posterior $p(\boldsymbol{z}\vert \mathbf{x})$.2
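For the common choice of a diagonal-Gaussian $q$ and a standard-normal prior, the regularizer term has a well-known closed form, $\mathcal{D}[\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\sigma}^2 I)\vert\vert\mathcal{N}(0,I)]=\frac{1}{2}\sum_j(\sigma_j^2+\mu_j^2-1-\log\sigma_j^2)$. A small sketch:

```python
import numpy as np

def kl_to_standard_normal(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions.
    This is the regularizer term of the ELBO, available in closed form."""
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - np.log(sigma**2), axis=-1)

mu = np.array([[0.0, 0.0], [1.0, -2.0]])     # two examples, 2 latent dims each
sigma = np.array([[1.0, 1.0], [0.5, 2.0]])
kl = kl_to_standard_normal(mu, sigma)        # first row: q equals the prior, KL = 0
```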
Once we have learned $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ by maximizing $ELBO(\boldsymbol{\theta},\boldsymbol{\phi})$, we can throw away the encoder. From now on, to generate new images just sample $\boldsymbol{z}\sim \mathcal{N}(0,I)$ and propagate it through the decoder. The decoder outputs will be images similar to those in the training set.
References and further reading
the original paper: Auto-Encoding Variational Bayes
a nice tutorial, with a few minor imprecisions: Tutorial on Variational Autoencoders
how to reduce the blurriness of the images generated by your VAE, while at the same time getting latent variables which have a visual (perceptual) meaning, so that you can "add" features (smile, sunglasses, etc.) to your generated images: Deep Feature Consistent Variational Autoencoder
improving the quality of VAE-generated images even more, by using Gaussian versions of autoregressive autoencoders: Improved Variational Inference with Inverse Autoregressive Flow
new directions of research and a deeper understanding of pros & cons of the VAE model: Towards a Deeper Understanding of Variational Autoencoding Models & Inference Suboptimality in Variational Autoencoders
1 This assumption is not strictly necessary, though it simplifies our description of VAEs. However, depending on applications, you may assume a different distribution for $p_{\phi}(\mathbf{x}\vert\mathbf{z})$. For example, if $\mathbf{x}$ is a vector of binary variables, a Gaussian $p$ makes no sense, and a multivariate Bernoulli can be assumed.
2 The ELBO expression, with its mathematical elegance, conceals two major sources of pain for VAE practitioners. One is the average term $E_{q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x}_i,\boldsymbol{\lambda})}[\log p_{\boldsymbol{\phi}}(\mathbf{x}_i\vert\boldsymbol{z})]$. This effectively requires computing an expectation, which requires taking multiple samples from $q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x}_i,\boldsymbol{\lambda})$. Given the sizes of the neural networks involved, and the low convergence rate of the SGD algorithm, having to draw multiple random samples at each iteration (actually, for each minibatch, which is even worse) is very time-consuming. VAE users solve this problem very pragmatically by computing that expectation with a single (!) random sample. The other issue is that to train two neural networks (encoder & decoder) with the backpropagation algorithm, we need to be able to differentiate all the steps of forward propagation from the encoder to the decoder. But the forward pass contains a stochastic step: the latent code $\mathbf{z}$ is obtained by drawing from a multivariate Gaussian whose parameters the encoder outputs, and gradients cannot be backpropagated through a random sampling operation directly. The solution to this is the reparametrization trick.
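The single-sample approximation mentioned in the footnote works because the one-sample estimator is unbiased: any individual draw is noisy, but its expectation is the exact term. A toy 1-D check (illustrative values, with likelihood $x|z\sim N(z,1)$ and $q=N(m,s^2)$, for which the expectation is available in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
x, m, s = 1.3, 0.4, 0.9        # data point and parameters of q(z|x) = N(m, s^2)

def log_lik(z):
    """log p(x|z) for the toy likelihood x|z ~ N(z, 1)."""
    return -0.5 * np.log(2 * np.pi) - 0.5 * (x - z) ** 2

z = m + s * rng.standard_normal(200_000)   # many draws from q
single_sample_estimates = log_lik(z)       # each one is what a VAE uses per step

exact = -0.5 * np.log(2 * np.pi) - 0.5 * ((x - m) ** 2 + s ** 2)
mc_mean = single_sample_estimates.mean()   # averages out to the exact expectation
noise = single_sample_estimates.std()      # but any single draw is quite noisy
```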
What are variational autoencoders and to what learning tasks are they used?
Variational Auto Encoders lie at the intersection of autoencoder neural networks and Variational Inference (VI). They were introduced as an application of general-purpose VI using the reparameterization trick, in a 2014 paper by Kingma and Welling. The main goal is to generate more data, by creating a more regularized latent space for autoencoders.
Auto Encoders
Auto Encoders (AEs) are a NN architecture used mainly to compress data / for dimensionality reduction. An AE is made of two NNs: an Encoder $f_\phi(x)$, which encodes the original data $x$ into some latent space $z$, and a Decoder $g_{\theta}(z)$, which decodes the latent representation back into the data space. $f$ is parameterized by the NN weights $\phi$, and $g$ is parameterized by $\theta$. The overall structure looks like this:
In order to optimize the NN, you can take, e.g., the L2 norm of the difference between the original data $x$ and the reconstructed data $\hat x = g(f(x))$: $Loss = ||x-\hat x||^2_2$. As usual in NN, you optimize the weights by some form of Gradient Descent (e.g., SGD, ADAM etc.). Once the weights have been optimized, the outputs of the encoder and decoder are fixed - for a given $x$ you will always get the same $z=f(x)$, and for a given $z$ you will always get the same $x=g(z)$.
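As a concrete (toy) instance of this training loop, the sketch below fits a linear autoencoder with a 1-D latent space to data lying on a line in $\mathbb{R}^2$, using the L2 loss and plain gradient descent with hand-derived gradients. All sizes, seeds and learning rates are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.array([2.0, 1.0]) / np.sqrt(5.0)   # data lies on a 1-D line in R^2
t = rng.uniform(-1, 1, size=(200, 1))
X = t * d                                 # n x 2 training data

W_e = rng.normal(0, 0.5, (1, 2))          # encoder f: R^2 -> R^1
W_d = rng.normal(0, 0.5, (2, 1))          # decoder g: R^1 -> R^2
lr, n = 0.1, X.shape[0]

def loss(W_e, W_d):
    X_hat = X @ W_e.T @ W_d.T             # reconstruction g(f(x))
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))

initial = loss(W_e, W_d)
for _ in range(5000):                     # plain gradient descent
    R = X - X @ W_e.T @ W_d.T             # residuals x - g(f(x))
    dM = -(2.0 / n) * R.T @ X             # gradient w.r.t. M = W_d W_e
    W_d -= lr * dM @ W_e.T                # chain rule through the two layers
    W_e -= lr * W_d.T @ dM

final = loss(W_e, W_d)                    # ~0: the line is perfectly encodable in 1-D
```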
Now, you would think that this could also be used to generate new data: sample some random points in the $z$ space, pass them through the decoder, and get new data. It turns out this doesn't work so well - the reason is that the process of optimizing AEs only cares about your actual data, not about how the representation of the data is stored in the latent $z$ space. So the $z$ space becomes very "messy".
Variational Inference
Variational Inference (VI) is a method to find an approximation to intractable posteriors. Given Bayes formula:
$$ p(z|x) = \frac{p(x|z)p(z)}{\int p(x|z)p(z)dz}
$$
The numerator consists of the likelihood $p(x|z)$ and the prior $p(z)$, both usually known. The denominator (which is actually equal to $p(x)$) is called the "evidence", and in high dimensions it's usually hard or impossible to compute. Although it's only a normalizing constant (given that the data is known), without it we don't really know the distribution. VI says: let's posit a distribution whose form we know and whose parameters we control, $q_\phi(z) = q_\phi(z|x)$ (e.g., Gaussian), and "turn the knobs" of that distribution (= optimize the parameters $\phi$) until we reach something that looks like the true posterior. Since we only know the true posterior up to a normalizing constant, a special measure is used: the KL divergence, which quantifies the "statistical distance" between two distributions:
$$KL(q(z)||p(z|x)) = \int q(z)\log\frac{q(z)}{p(z|x)}dz
$$
By using that metric, we can ignore the evidence, and only optimize the terms we know and care about (whose negative is called the ELBO [=evidence lower bound] - hence minimizing the KL is equivalent to maximizing the ELBO):
$$ = \int q(z)\log q(z)dz - \int q(z)\log p(z|x)dz = \mathbb E_{q}[\log q(z)]-\int q(z)\log \frac{p(x|z)p(z)}{p(x)}dz \\
= \mathbb E_{q}[\log q(z)] - \mathbb E_{q}[\log p(x|z)p(z)]+\log p(x) \\
ELBO = \mathbb E_{q}[\log p(x|z)p(z)] - \mathbb E_{q}[\log q(z)]
$$
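Because everything is Gaussian in a 1-D conjugate toy model ($z\sim N(0,1)$, $x|z\sim N(z,1)$, chosen for illustration), the ELBO has a closed form and we can literally "turn the knobs" of $q=N(m,s^2)$ by grid search; the maximizer should coincide with the exact posterior $N(x/2,\ 1/2)$:

```python
import numpy as np

x = 1.3                                       # observed data; model: z~N(0,1), x|z~N(z,1)
log2pi = np.log(2 * np.pi)

def elbo(m, s):
    """Closed-form ELBO for q(z) = N(m, s^2) under the conjugate Gaussian model."""
    E_joint = (-0.5 * log2pi - 0.5 * ((x - m) ** 2 + s ** 2)   # E_q[log p(x|z)]
               - 0.5 * log2pi - 0.5 * (m ** 2 + s ** 2))       # + E_q[log p(z)]
    E_logq = -0.5 * np.log(2 * np.pi * s ** 2) - 0.5           # E_q[log q(z)]
    return E_joint - E_logq

ms = np.linspace(-2, 2, 401)                  # "turning the knobs" by brute force
ss = np.linspace(0.1, 2, 191)
grid = np.array([[elbo(m, s) for s in ss] for m in ms])
i, j = np.unravel_index(grid.argmax(), grid.shape)
m_best, s_best = ms[i], ss[j]                 # should match the exact posterior N(x/2, 1/2)
```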
There are a few ways to continue from here - before the paper, the main approach was Coordinate Ascent VI with the "mean field" assumption. There was also the approach of doing gradient ascent on the ELBO via the log-derivative trick, which suffered from high variance (although variance reduction techniques were later introduced, e.g., BBVI). The main focus of the paper is the new method they suggested, the reparameterization trick (later also used in ADVI): sample from some base distribution that doesn't depend on the parameters $\phi$ of the variational distribution, transform those samples into samples from $q_\phi$, then evaluate the gradient at those samples and take their average (a Monte Carlo estimate):
$$ \epsilon_l \sim p'(\epsilon) \\
z_l = \mathcal T(\epsilon_l; \phi) \\
\nabla _\phi ELBO \approx \frac{1}{L}\sum_{l=1}^L \nabla _\phi \left[\log p(x|\mathcal T(\epsilon_l; \phi))p(\mathcal T(\epsilon_l; \phi)) - \log q(\mathcal T(\epsilon_l; \phi))\right]
$$
Variational Auto Encoders
We now place distributions over $z$ and $x$ - e.g., suppose $p(x|z)$ is a Gaussian (though it can also model discrete data and be Bernoulli or Categorical). If we look at the decoder structure from before and say that now it outputs the parameters of that Gaussian $\mu_\theta, \Sigma_\theta$ [suppose for illustration's sake that the decoder is fixed and its weights $\theta$ are known], the posterior $p(z|x)$ is intractable, because of the decoder NN function (a complex non-linear function).
So we can do VI to recover it. And suppose we decide to also use a Gaussian $q(z|x)=N(\mu,\Sigma)$. But suppose we don't simply use global parameters $\phi = (vec(\mu), vec(\Sigma))$, but instead place an encoder network that will output the mean and covariance: $\mu_\phi, \Sigma_\phi$. The overall structure now looks like this:
Suppose we also take a standard Gaussian prior over the $z$'s: $p(z)=N(0,I)$. In this case, we can "massage" the ELBO a little bit more:
$$ ELBO = \mathbb E_{q}[\log p(x|z)p(z)] - \mathbb E_{q}[\log q(z)] = \mathbb E_{q}[\log p(x|z) + \log p(z) - \log q(z)] \\
= -KL(q_\phi(z|x)||p(z)) + \mathbb E_{q_\phi}[\log p_\theta(x|z)]
$$
The reason to do so is that in this case this KL has a closed form. Assuming we have a closed form for the KL, we can also use Monte Carlo estimates (using the reparameterization trick) for the 2nd term:
$$ = -KL(q_\phi(z|x)||p(z)) + \frac{1}{L}\sum_{l=1}^L \log p_\theta(x|z=\Sigma^{0.5}_\phi\cdot\epsilon_l +\mu_\phi)
$$
Note that the 1st part of the objective is optimized only w.r.t. $\phi$, while the 2nd term is optimized both w.r.t. $\phi$ (encoder) and w.r.t. $\theta$ (decoder). There are 2 ways to look at this loss:
Looking at the loss terms, noticing that the 2nd term (called the reconstruction term) is very similar to a loss of AEs (and actually equivalent [w.r.t. $\arg \max$] to it if we assume $\Sigma_\theta=I$, and $\hat x = \mu_\theta$). The 1st term acts as a sort of a regularizer, which tells the (approximated) posterior to not stretch too far from a standard Gaussian.
Looking at the derivatives. For illustration's sake suppose we are actually separating the update rule into two: Suppose $\phi$ is fixed, $\nabla_\theta ELBO$ is finding the decoder by doing a sort of maximum likelihood on our data (given $z$'s drawn from the posterior). Suppose $\theta$ is fixed, $\nabla_\phi ELBO$ is finding the encoder by approximating the posterior using VI.
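Putting the two terms together, a single forward pass of the (negative-ELBO) objective can be sketched with toy linear encoder/decoder maps. All shapes and weight scales below are arbitrary illustrations, with $\Sigma_\theta=I$ so the reconstruction term is a Gaussian log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_loss(x, W_mu, W_logvar, W_dec):
    """One forward pass of the objective: closed-form KL plus a one-sample
    reparameterized reconstruction term (toy linear encoder/decoder)."""
    mu = x @ W_mu                                   # encoder outputs q(z|x) parameters
    logvar = x @ W_logvar
    sigma = np.exp(0.5 * logvar)

    eps = rng.standard_normal(mu.shape)             # reparameterization trick
    z = mu + sigma * eps

    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - logvar, axis=1)   # KL(q || N(0, I))

    x_hat = z @ W_dec                               # decoder mean, Sigma_theta = I
    recon = -0.5 * np.sum((x - x_hat) ** 2 + np.log(2 * np.pi), axis=1)

    return np.mean(kl - recon)                      # negative ELBO, averaged over batch

x = rng.standard_normal((8, 4))                     # batch of 8 points in R^4
W_mu = rng.normal(0, 0.3, (4, 2))                   # 2 latent dimensions
W_logvar = rng.normal(0, 0.3, (4, 2))
W_dec = rng.normal(0, 0.3, (2, 4))
loss = vae_loss(x, W_mu, W_logvar, W_dec)           # a finite scalar to minimize by SGD
```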
So, this is not strictly a VI method, as in VI you usually assume the likelihood is fixed. Here you have two moving parts: you optimize the likelihood (decoder) and optimize the posterior (encoder) simultaneously.
Using VAE eventually leads to a latent space which is more ordered and tries to be more like a standard Gaussian.
I've made a video about this topic on my YouTube channel (which elaborates a bit on the paper and on VI), if you want to learn more.
|
7,658
|
Checking if two Poisson samples have the same mean
|
To test the equality of two Poisson means, the conditional method was proposed by Przyborowski and Wilenski (1940). The conditional distribution of X1 given X1+X2 follows a binomial distribution whose success probability is a function of the ratio of the two lambdas. Therefore, hypothesis testing and interval estimation procedures can be readily developed from the exact methods for making inferences about the binomial success probability. Usually, two methods are considered for this purpose:
C-test
E-test
You can find the details about these two tests in the paper "A more powerful test for comparing two Poisson means".
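A from-scratch Python sketch of the conditional (C-) idea, for illustration only (the function name and the equal default exposures t1 = t2 are my own assumptions, not from the paper): under H0 the count X1, given the total, is binomial, so an exact binomial test applies.

```python
from math import comb

def poisson_c_test(x1, x2, t1=1.0, t2=1.0):
    """Conditional test of H0: lambda1 == lambda2 (illustrative sketch).

    Under H0, X1 | X1 + X2 = n ~ Binomial(n, t1 / (t1 + t2)), where t1, t2
    are the exposure times of the two samples.
    """
    n = x1 + x2
    p = t1 / (t1 + t2)
    pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
    # exact two-sided p-value: total probability of all outcomes no more
    # likely than the observed one
    return sum(q for q in pmf if q <= pmf[x1] * (1 + 1e-12))

print(round(poisson_c_test(43, 50), 3))   # counts close under H0 -> large p-value
```

The E-test is more powerful but also more involved; the paper above gives the details.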
|
7,659
|
Checking if two Poisson samples have the same mean
|
You're looking for a quick and easy check.
Under the null hypothesis that the rates (lambda values) are equal, say to $\lambda$, you could view the two measurements as observing a single process for time $t = t_1+t_2$ and counting the events during the interval $[0, t_1]$ ($n_1$ in number) and the events during the interval $[t_1, t_1+t_2]$ ($n_2$ in number). You would estimate the rate as
$$\hat{\lambda} = \frac{n_1+n_2}{t_1+t_2}$$
and from that you can estimate the distribution of the $n_i$: they are Poisson of intensity near $t_i\hat{\lambda}$. If one or both $n_i$ are situated on tails of this distribution, most likely the claim is valid; if not, the claim may be relying on chance variation.
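Here is a rough numeric version of that check (a Python sketch with made-up counts and exposures, purely for illustration):

```python
from math import exp

# Rough numeric version of the check above, with made-up counts and exposure
# times (n1, t1, n2, t2 are hypothetical): pool the two samples to estimate
# lambda, then see where each count falls in its implied Poisson distribution.
def poisson_cdf(k, mu):
    term, total = exp(-mu), exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

n1, t1 = 12, 1.0
n2, t2 = 30, 1.5
lam_hat = (n1 + n2) / (t1 + t2)   # pooled rate estimate under the null
for n, t in ((n1, t1), (n2, t2)):
    mu = t * lam_hat
    print(f"count {n}: expected {mu:.1f}, P(X <= {n}) = {poisson_cdf(n, mu):.3f}")
```

If either tail probability is very small, the counts sit in the tails of their null distributions and the equal-rate claim becomes doubtful.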
|
7,660
|
Checking if two Poisson samples have the same mean
|
How about:
poisson.test(c(n1, n2), c(t1, t2), alternative = c("two.sided"))
This is a test which compares the Poisson rates of the two samples with each other, and gives both a p-value and a 95% confidence interval.
|
7,661
|
Checking if two Poisson samples have the same mean
|
I would be more interested in a confidence interval than a p-value; here is a bootstrap approximation.
Calculating the lengths of the intervals first, and a check:
Lrec = as.numeric(as.Date("2010-07-01") - as.Date("2007-12-02")) # Length of recession
Lnrec = as.numeric(as.Date("2007-12-01") - as.Date("2001-12-01")) # L of non rec period
(43/Lrec)/(50/Lnrec)
[1] 2.000276
This check gives a slightly different result (100.03% increase) than the one of the publication (101% increase). Go on with the bootstrap (do it twice):
N = 100000
k=(rpois(N, 43)/Lrec)/(rpois(N, 50)/Lnrec)
c(quantile(k, c(0.025, .25, .5, .75, .975)), mean=mean(k), sd=sd(k))
2.5% 25% 50% 75% 97.5% mean sd
1.3130094 1.7338545 1.9994599 2.2871373 3.0187243 2.0415132 0.4355660
2.5% 25% 50% 75% 97.5% mean sd
1.3130094 1.7351970 2.0013578 2.3259023 3.0173868 2.0440240 0.4349706
The 95% confidence interval of the increase is 31% to 202%.
|
7,662
|
Think like a bayesian, check like a frequentist: What does that mean?
|
The main difference between the Bayesian and frequentist schools of statistics arises due to a difference in interpretation of probability. A Bayesian probability is a statement about personal belief that an event will (or has) occurred. A frequentist probability is a statement about the proportion of similar events that occur in the limit as the number of those events increases.
For me, to "think like a Bayesian" means to update your personal belief as new information arises, and to "check [or worry] like a frequentist" means to be concerned with the performance of statistical procedures aggregated across the times those procedures are used, e.g. what is the coverage of credible intervals, what are the Type I/II error rates, etc.
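One way to make that frequentist check concrete is to simulate the long-run coverage of a Bayesian procedure. A toy Python sketch (the model is hypothetical: a Gaussian mean with known variance 1 and a weak conjugate N(0, 10^2) prior):

```python
import random

random.seed(0)

# Toy simulation of "check like a frequentist": estimate the long-run
# frequentist coverage of a Bayesian 95% credible interval. Hypothetical
# model: x_i ~ N(mu, 1) with a weak conjugate prior mu ~ N(0, 10^2).
def credible_interval_coverage(true_mu, n=20, reps=2000):
    prior_var, hits = 100.0, 0
    for _ in range(reps):
        xbar = true_mu + random.gauss(0.0, 1.0) / n ** 0.5   # simulated sample mean
        post_var = 1.0 / (1.0 / prior_var + n)               # conjugate Gaussian update
        post_mean = post_var * n * xbar
        half = 1.96 * post_var ** 0.5
        hits += post_mean - half <= true_mu <= post_mean + half
    return hits / reps

print(credible_interval_coverage(1.0))   # typically near 0.95 for this weak prior
```

With a strongly informative or badly misspecified prior, the same check would reveal under- or over-coverage, which is exactly the kind of frequentist worry described above.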
|
7,663
|
Think like a bayesian, check like a frequentist: What does that mean?
|
Bayesian statistics summarize beliefs whereas frequentist statistics summarize evidence. The Bayesians view probability as a degree of belief. This inclusive and generative type of reasoning is useful for formulating hypotheses. For instance, Bayesians may be able to arbitrarily assign some probability to the notion that the moon is made of green cheese, regardless of whether astronauts have actually been able to travel there to verify this. This hypothesis is perhaps supported by the idea that, from afar, the moon looks like green cheese. Frequentists cannot singularly conceive of a hypothesis that is more than a strawman, nor can they say evidence favors one hypothesis over another. Even maximum likelihood only generates a statistic which is "most consistent with what was observed". Formally, Bayesian statistics allows us to think outside the box and propose defensible ideas from data. But this is strictly hypothesis generating in nature.
Frequentist statistics are best applied to confirm hypotheses. When an experiment is conducted well, frequentist statistics provide an "independent observer" or "empirical" context to the findings by eschewing priors. This is consistent with the Karl Popper philosophy of science. The point of evidence is not to promulgate a certain idea. Plenty of evidence is consistent with incorrect hypotheses. Evidence can merely falsify beliefs.
The influence of priors is generally regarded as a bias in statistical reasoning. As you know, we can make up any great number of reasons for why things happen. Psychologically, many people believe that our observer bias is the result of priors in our brain that keep us from truly weighting what we see. "Hope clouds observation" as the Reverend Mother said in Dune. Popper made this idea rigorous.
This had great historical importance in some of the greatest scientific experiments of our time. For instance, John Snow meticulously collected evidence for the Cholera epidemic and concluded astutely that Cholera is not caused by moral deprivation, and pointed out that the evidence was highly consistent with sewage contamination: note he did not conclude this, Snow's findings predated the discovery of bacteria, and there was no mechanistic or etiologic understanding. A similar discourse is found in Origin of Species. We didn't actually know whether the moon was made of green cheese until astronauts actually landed on the surface and collected samples. At that point, Bayesian posteriors have assigned very, very low probability to any other possibility, and Frequentists at best can say that the samples are highly inconsistent with anything except moon dust.
In summary, Bayesian statistics are amenable to hypothesis generating and frequentist statistics are amenable to hypothesis confirmation. Ensuring that data are collected independently in these endeavors is one of the greatest challenges modern statisticians face.
|
7,664
|
Think like a bayesian, check like a frequentist: What does that mean?
|
Per Cliff AB's comment to the OP, it sounds like they are heading towards an Empirical Bayesian philosophy. There are three main Bayesian schools of thought, and Empirical Bayes estimates priors from data, often with frequentist methods. That doesn't conform exactly to the quote (which implies Bayes up front, frequentist-like concerns afterwards), but we shouldn't overlook Cliff AB's excellent comment.
Also, there was, and may still be, a school of Bayesian thought that you don't have to check anything after a Bayesian procedure. More modern thought would use posterior predictive checks, and perhaps that kind of double-check-your-answers approach is what the quote is referring to.
Also, frequentist philosophy is concerned with procedures rather than inferences from data. So perhaps that is also a clue to the quote's meaning.
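A posterior predictive check, as mentioned above, can be sketched in a few lines (a hypothetical Poisson example I am adding for illustration, not from the answer): draw rates from the posterior, simulate replicate data sets, and see how often a statistic of the replicates is at least as extreme as the observed one.

```python
import math
import random

random.seed(1)

# Hypothetical posterior predictive check: with a Gamma(a, b) prior on a
# Poisson rate, the posterior is Gamma(a + sum(x), b + n). Draw rates from
# it, simulate replicate data sets, and compare a test statistic (here the
# sample maximum) of the replicates against the observed one.
data = [3, 0, 2, 4, 1, 2, 5, 2]
a, b = 1.0, 1.0
a_post, b_post = a + sum(data), b + len(data)

def draw_poisson(lam):
    # simple inversion sampler; fine for small lam
    u, k = random.random(), 0
    p = math.exp(-lam)
    cum = p
    while u > cum and k < 1000:
        k += 1
        p *= lam / k
        cum += p
    return k

reps, exceed = 1000, 0
for _ in range(reps):
    lam = random.gammavariate(a_post, 1.0 / b_post)   # one posterior draw
    replicate = [draw_poisson(lam) for _ in data]
    exceed += max(replicate) >= max(data)
print(exceed / reps)   # posterior predictive p-value for the max statistic
```

A p-value near 0 or 1 would flag that the fitted model fails to reproduce that feature of the data, which is the "double-check-your-answers" spirit of the quote.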
|
7,665
|
Think like a bayesian, check like a frequentist: What does that mean?
|
In the context of this data science class, my interpretation of "check like a frequentist" is that you evaluate the performance of your prediction function or decision function on held-out validation data. The advice to "think like a Bayesian" expresses the opinion that a prediction function derived from a Bayesian approach will generally give good results.
|
7,666
|
Think like a bayesian, check like a frequentist: What does that mean?
|
It sounds like "think like a Bayesian, check like a frequentist" refers to one's approach in statistical design and analysis. As I understand it, Bayesian thinking involves some belief about prior situations (experimentally or statistically); let's say, for example, that the mean reading score for 4th-graders is 80 words per minute, and that some intervention might increase this to 90 words per minute. These are beliefs based on prior studies and hypotheses. Frequentist thinking extrapolates the findings (of the intervention) to obtain confidence intervals or other statistics that are based on the theoretical and practical frequency or probability of these results happening again (i.e., how "frequently"). For example, the post-intervention reading score might be 91 words per minute with a 95% confidence interval of 85 to 97 words per minute and an associated p-value (probability value) for the difference from the pre-intervention score. If the study were repeated many times, about 95% of such intervals would contain the true post-intervention mean reading score. Therefore "think like a Bayesian"---i.e., theorize, hypothesize, look at previous evidence---and "check like a frequentist"---i.e., ask how frequently these experimental results would occur, and how likely they are to be due to chance rather than the intervention.
|
7,667
|
Inference vs. estimation?
|
Statistical inference is made of the whole collection of conclusions one can draw from a given dataset and an associated hypothetical model, including the fit of the said model. To quote from Wikipedia,
Inference is the act or process of deriving logical conclusions from premises known or assumed to be true.
and,
Statistical inference uses mathematics to draw conclusions in the presence of uncertainty.
Estimation is but one aspect of inference where one substitutes unknown parameters (associated with the hypothetical model that generated the data) with optimal solutions based on the data (and possibly prior information about those parameters). It should always be associated with an evaluation of the uncertainty of the reported estimates, evaluation that is an integral part of inference.
Maximum likelihood is one instance of estimation, but it does not cover the whole of inference. By contrast, Bayesian analysis offers a complete inference machine.
|
7,668
|
Inference vs. estimation?
|
While estimation per se is aimed at coming up with values of the unknown parameters (e.g., coefficients in logistic regression, or in the separating hyperplane in support vector machines), statistical inference attempts to attach a measure of uncertainty and/or a probability statement to the values of parameters (standard errors and confidence intervals). If the model that the statistician assumes is approximately correct, then provided that the new incoming data continue to conform to that model, the uncertainty statements may have some truth in them, and provide a measure of how often you will be making mistakes in using the model to make your decisions.
The sources of the probability statements are twofold. Sometimes, one can assume an underlying probability distribution of whatever you are measuring, and with some mathematical witchcraft (multivariate integration of a Gaussian distribution, etc.), obtain the probability distribution of the result (the sample mean of the Gaussian data is itself Gaussian). Conjugate priors in Bayesian statistics fall into that witchcraft category. Other times, one has to rely on the asymptotic (large sample) results which state that in large enough sample, things are bound to behave in a certain way (the Central Limit Theorem: the sample mean of the data that are i.i.d. with mean $\mu$ and variance $\sigma^2$ is approximately Gaussian with mean $\mu$ and variance $\sigma^2/n$ regardless of the shape of the distribution of the original data).
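That CLT statement is easy to check by simulation; here is a minimal sketch in R (the exponential parent distribution, the sample size and the replication count are arbitrary choices for illustration, not anything taken from the discussion above):

```r
# Sampling distribution of the mean of skewed (exponential) data:
# the parent has mean 1 and variance 1, so the CLT says the sample
# mean of n = 100 observations is approximately N(1, 1/100).
set.seed(1)
n <- 100
means <- replicate(10000, mean(rexp(n)))
mean(means)   # close to mu = 1
var(means)    # close to sigma^2 / n = 0.01
hist(means)   # roughly Gaussian, despite the skewed parent distribution
```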
The closest that machine learning gets to that is cross-validation, where the sample is split into training and validation parts, with the latter effectively saying, "if the new data looks like the old data, but is entirely unrelated to the data that was used in setting up my model, then a realistic measure of the error rate is such and such". It is derived fully empirically by running the same model on the data, rather than trying to infer the properties of the model by making statistical assumptions and invoking mathematical results like the above CLT. Arguably, this is more honest, but as it uses less information, it requires larger sample sizes. Also, it implicitly assumes that the process does not change, and that there is no structure in the data (like cluster or time-series correlations) that could creep in and break the very important assumption of independence between the training and the validation data.
While the phrase "inferring the posterior" may make sense (I am not a Bayesian, so I can't really tell what the accepted terminology is), I don't think there is much involved in making any assumptions in that inferential step. All of the Bayesian assumptions are (1) in the prior and (2) in the assumed model, and once they are set up, the posterior follows automatically (at least in theory via Bayes theorem; the practical steps may be helluvalot complicated, and Sipps Gambling... excuse me, Gibbs sampling may be a relatively easy component of getting to that posterior). If "inferring the posterior" refers to (1) + (2), then it is a flavor of statistical inference to me. If (1) and (2) are stated separately, and "inferring the posterior" is something else, then I don't quite see what that something else might be on top of Bayes theorem.
|
7,669
|
Inference vs. estimation?
|
This is an attempt to give an answer for anyone without a background in statistics. For those who are interested in more details, there are many useful references (such as this one for example) on the subject.
Short answer:
Estimation $\rightarrow$ find unknown values (estimates) for the subject of interest
Statistical Inference $\rightarrow$ use the probability distribution of the subject of interest to make probabilistic conclusions
Long answer:
The term "estimation" is often used to describe the process of finding an estimate for an unknown value, while "inference" often refers to statistical inference, a process of discovering distributions (or characteristics) of random variables and using them to draw conclusions.
Think about answering the question of:
How tall is the average person in my country?
If you decide to find an estimate, you could walk around for a couple of days and measure strangers you meet on the street (create a sample) and then calculate your estimate for example as the average of your sample. You have just done some estimation!
On the other hand, you might want to find more than an estimate, which you know is a single number and is bound to be wrong. You could aim to answer the question with a certain confidence, such as: I am 99% certain that the average height of a person in my country is between 1.60m and 1.90m.
In order to make such a claim you would need to estimate the height distribution of the people you are meeting and make your conclusions based on this knowledge - which is the basis of statistical inference.
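As a minimal sketch of such an interval calculation in R (the heights here are invented numbers, and an approximately normal model for heights is assumed):

```r
# 99% confidence interval for the mean height (invented sample of 10)
heights <- c(1.72, 1.65, 1.81, 1.58, 1.75, 1.69, 1.77, 1.63, 1.80, 1.70)
n  <- length(heights)
xb <- mean(heights)
se <- sd(heights) / sqrt(n)
ci <- xb + c(-1, 1) * qt(0.995, df = n - 1) * se  # t quantile for 99% coverage
ci
```

In practice one would want a far larger (and properly randomised) sample than ten street measurements, which is exactly why attaching an uncertainty statement to the estimate matters.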
The crucial thing to keep in mind (as pointed out in Xi'an's answer) is that finding an estimator is part of statistical inference.
|
7,670
|
Inference vs. estimation?
|
Suppose you have a representative sample of a population.
Inference is when you use that sample to estimate a model and state that the results can be extended to the entire population, with a certain accuracy. To make an inference is to draw conclusions about a population using only a representative sample.
Estimation is when you choose a model to fit your data sample and calculate with a certain precision that model's parameters. It is called estimation because you will never be able to calculate the true values of the parameters since you only have a data sample, and not the entire population.
|
7,671
|
Inference vs. estimation?
|
In the context of machine learning, inference refers to an act of discovering settings of latent (hidden) variables given your observations. This also includes determining the posterior distribution of your latent variables. Estimation seems to be associated with "point estimation", which is to determine your model parameters. Examples include maximum likelihood estimation. In expectation maximization (EM), in the E step, you do inference. In the M step, you do parameter estimation.
I think I hear people saying "infer the posterior distribution" more than "estimate the posterior distribution". The latter one is not used in the usual exact inference. It is used, for example, in expectation propagation or variational Bayes, where inferring an exact posterior is intractable and additional assumptions on the posterior have to be made. In this case, the inferred posterior is approximate. People may say "approximate the posterior" or "estimate the posterior".
All this is just my opinion. It is not a rule.
|
7,672
|
Inference vs. estimation?
|
Well, there are people from different disciplines today who make their career in the area of ML, and it's likely that they speak slightly different dialects.
However, whatever terms they might use, the concepts behind them are distinct. So it's important to get these concepts clear, and then translate those dialects in whatever way you prefer.
E.g.,
In PRML by Bishop,
inference stage in which we use training data to learn a model for $p(C_k|x)$
So it seems that here Inference=Learning=Estimation
But in other material, inference may differ from estimation, where inference means prediction while estimation means the learning procedure of the parameters.
|
7,673
|
Inference vs. estimation?
|
I want to add to others' answers by expanding on the "inference" part. In the context of machine learning, an interesting aspect of inference is estimating uncertainty. It's generally tricky with ML algorithms: how do you put a standard deviation on the classification label a neural net or decision tree spits out? In traditional statistics, distributional assumptions allow us to do math and figure out how to assess uncertainty in the parameters. In ML, there may be no parameters, no distributional assumptions, or both may be missing.
There has been some progress made on these fronts, some of it very recent (more recent than the current answers). One option is, as others have mentioned, Bayesian analysis, where your posterior gives you uncertainty estimates. Bootstrap-type methods are nice. Stefan Wager and Susan Athey, at Stanford, have some work from the past couple of years getting inference for random forests. Analogously, BART is a Bayesian tree ensemble method that yields a posterior from which inference can be drawn.
|
7,674
|
Datasets constructed for a purpose similar to that of Anscombe's quartet
|
Data sets that act as counterexamples to popular misunderstandings* do exist - I've constructed many myself under various circumstances, but most of them wouldn't be interesting to you, I'm sure.
*(which is what the Anscombe data does, since it's a response to people operating under the misunderstanding that the quality of a model can be discerned from the identical statistics you mentioned)
I'll include a few here that might be of greater interest than most of the ones I generate:
1) One example (of quite a few) are some example discrete distributions (and thereby data sets) I constructed to counter the common assertion that zero third-moment skewness implies symmetry. (Kendall and Stuart's Advanced Theory of Statistics offers a more impressive continuous family.)
Here's one of those discrete distribution examples:
\begin{array}{cccc}
\\
x&-4&1&5\\
\hline
P(X=x)&2/6&3/6&1/6
\\
\end{array}
(A data set for a counterexample in the sample case is thereby obvious: $-4, -4, 1, 1, 1, 5$)
As you can see, this distribution isn't symmetric, yet its third moment skewness is zero. Similarly, one can readily construct counterexamples to a similar assertion with respect to the second most common skewness measure, the second Pearson skewness coefficient ($3(\frac{mean-median}{\sigma})$).
Indeed I have also come up with distributions and/or data sets for which the two measures are opposite in sign - which suffices to counter the idea that skewness is a single, easily understood concept, rather than a somewhat slippery idea we don't really know how to suitably measure in many cases.
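Both claims are quick to check numerically for the six-point sample above (R):

```r
x <- c(-4, -4, 1, 1, 1, 5)
mean(x)                            # 0
sum((x - mean(x))^3)               # 0: the third-moment skewness is exactly zero
3 * (mean(x) - median(x)) / sd(x)  # second Pearson coefficient: nonzero (negative)
```

So the two common skewness measures disagree for this sample: one is zero and the other is not, even though the sample is plainly not symmetric about its mean.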
2) There's a set of data constructed in this answer Box-and-whisker plot for multimodal distribution, following the approach of Choonpradub & McNeil (2005), which shows four very different-looking data sets with the same boxplot.
In particular, the distinctly skewed distribution with the symmetric boxplot tends to surprise people.
3) There are another couple of collections of counterexample data sets I constructed in response to people's over-reliance on histograms, especially with only a few bins and only at one bin-width and bin-origin; which leads to mistakenly confident assertions about distributional shape. These data sets and example displays can be found here
Here's one of the examples from there. This is the data:
1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98,
1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.90, 2.93, 2.96, 2.99, 3.60,
3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62
And here are two histograms:
Those are the same 34 observations in both cases, just with different breakpoints, one with binwidth $1$ and the other with binwidth $0.8$. The plots were generated in R as follows:
x <- c(1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98,
1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.9, 2.93, 2.96, 2.99, 3.6,
3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62)
hist(x,breaks=seq(0.3,6.7,by=0.8),xlim=c(0,6.7),col="green3",freq=FALSE)
hist(x,breaks=0:8,col="aquamarine",freq=FALSE)
4) I recently constructed some data sets to demonstrate the intransitivity of the Wilcoxon-Mann-Whitney test - that is, to show that one might reject a one tailed alternative for each of three or four pairs of data sets, A, B, and C, (and D in the four sample case) such that one concluded that $P(B>A)>\frac{1}{2}$ (i.e. conclude that B tends to be bigger than A), and similarly for C against B, and A against C (or D against C and A against D for the 4 sample case); each tends to be larger (in the sense that it has more than even chance of being larger) than the
previous one in the cycle.
Here's one such data set, with 30 observations in each sample, labelled A to D:
1 2 3 4 5 6 7 8 9 10 11 12
A 1.58 2.10 16.64 17.34 18.74 19.90 1.53 2.78 16.48 17.53 18.57 19.05
B 3.35 4.62 5.03 20.97 21.25 22.92 3.12 4.83 5.29 20.82 21.64 22.06
C 6.63 7.92 8.15 9.97 23.34 24.70 6.40 7.54 8.24 9.37 23.33 24.26
D 10.21 11.19 12.99 13.22 14.17 15.99 10.32 11.33 12.65 13.24 14.90 15.50
13 14 15 16 17 18 19 20 21 22 23 24
A 1.64 2.01 16.79 17.10 18.14 19.70 1.25 2.73 16.19 17.76 18.82 19.08
B 3.39 4.67 5.34 20.52 21.10 22.29 3.38 4.96 5.70 20.45 21.67 22.89
C 6.18 7.74 8.63 9.62 23.07 24.80 6.54 7.37 8.37 9.09 23.22 24.16
D 10.20 11.47 12.54 13.08 14.45 15.38 10.87 11.56 12.98 13.99 14.82 15.65
25 26 27 28 29 30
A 1.42 2.56 16.73 17.01 18.86 19.98
B 3.44 4.13 6.00 20.85 21.82 22.05
C 6.57 7.58 8.81 9.08 23.43 24.45
D 10.29 11.48 12.19 13.09 14.68 15.36
Here's an example test:
> wilcox.test(adf$A,adf$B,alt="less",conf.int=TRUE)
Wilcoxon rank sum test
data: adf$A and adf$B
W = 300, p-value = 0.01317
alternative hypothesis: true location shift is less than 0
95 percent confidence interval:
-Inf -1.336372
sample estimates:
difference in location
-2.500199
As you see, the one-sided test rejects the null; values from A tend to be smaller than values from B. The same conclusion (at the same p-value) applies to B vs C, C vs D and D vs A. This cycle of rejections, of itself, is not automatically a problem, if we don't interpret it to mean something it doesn't. (It's a simple matter to obtain much smaller p-values with similar, but larger, samples.)
The larger "paradox" here comes when you compute the (one-sided in this case) intervals for a location shift -- in every case 0 is excluded (the intervals aren't identical in each case). This leads us to the conclusion that as we move across the data columns from A to B to C to D, the location moves to the right, and yet the same happens again when we move back to A.
With larger versions of these data sets (a similar distribution of values, but more of them), we can get significance (one- or two-tailed) at substantially smaller significance levels, so that one might use Bonferroni adjustments for example, and still conclude each group came from a distribution which was shifted up from the next one.
This shows us, among other things, that a rejection in the Wilcoxon-Mann-Whitney doesn't of itself automatically justify a claim of a location shift.
(While it's not the case for these data, it's also possible to construct sets where the sample means are constant, while results like the above apply.)
Added in later edit: A very informative and educational reference on this is
Brown, B.M., and Hettmansperger, T.P. (2002),
"Kruskal–Wallis, multiple comparisons and Efron dice",
Aust. & N.Z. J. Stat., 44, 427–438.
5) Another couple of related counterexamples come up here - where an ANOVA may be significant, but all pairwise comparisons aren't (interpreted two different ways there, yielding different counterexamples).
So there's several counterexample data sets that contradict misunderstandings one might encounter.
As you might guess, I construct such counterexamples reasonably often (as do many other people), usually as the need arises. For some of these common misunderstandings, you can characterize the counterexamples in such a way that new ones may be generated at will (though more often, a certain level of work is involved).
If there are particular kinds of things you might be interested in, I might be able to locate more such sets (mine or those of other people), or perhaps even construct some.
One useful trick for generating random regression data that has coefficients that you want is as follows (the part in parentheses is an outline of R code):
a) set up the coefficients you want with no noise (y = b0 + b1 * x1 + b2 * x2)
b) generate an error term with the desired characteristics (n = rnorm(length(y), sd=0.4))
c) set up a regression of noise on the same x's (nfit = lm(n~x1+x2))
d) add the residuals from that to the y variable (y = y + nfit$residuals)
Done. (the whole thing can actually be done in a couple of lines of R)
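Putting steps a)–d) together (a sketch; the sample size, coefficients and noise level here are arbitrary choices):

```r
set.seed(1)
x1 <- rnorm(100); x2 <- rnorm(100)
b0 <- 2; b1 <- -1; b2 <- 0.5
y <- b0 + b1 * x1 + b2 * x2       # a) exact relationship, no noise
n <- rnorm(length(y), sd = 0.4)   # b) error term with desired characteristics
nfit <- lm(n ~ x1 + x2)           # c) regress the noise on the same x's
y <- y + nfit$residuals           # d) add those residuals to y
coef(lm(y ~ x1 + x2))             # recovers b0, b1, b2 (essentially exactly)
```

The reason this works: the residuals from step c) are, by construction, orthogonal to the intercept and to both x's, so adding them to y leaves the least-squares coefficients unchanged.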
|
Datasets constructed for a purpose similar to that of Anscombe's quartet
|
Data sets that act as counterexamples to popular misunderstandings* do exist - I've constructed many myself under various circumstances, but most of them wouldn't be interesting to you, I'm sure.
*(w
|
Datasets constructed for a purpose similar to that of Anscombe's quartet
Data sets that act as counterexamples to popular misunderstandings* do exist - I've constructed many myself under various circumstances, but most of them wouldn't be interesting to you, I'm sure.
*(which is what the Anscombe data does, since it's a response to people operating under the misunderstanding that the quality of a model can be discerned from the identical statistics you mentioned)
I'll include a few here that might be of greater interest than most of the ones I generate:
1) One example (of quite a few) are some example discrete distributions (and thereby data sets) I constructed to counter the common assertion that zero third-moment skewness implies symmetry. (Kendall and Stuart's Advanced Theory of Statistics offers a more impressive continuous family.)
Here's one of those discrete distribution examples:
\begin{array}{cccc}
\\
x&-4&1&5\\
\hline
P(X=x)&2/6&3/6&1/6
\\
\end{array}
(A data set for a counterexample in the sample case is thereby obvious: $-4, -4, 1, 1, 1, 5$)
As you can see, this distribution isn't symmetric, yet its third moment skewness is zero. Similarly, one can readily construct counterexamples to a similar assertion with respect to the second most common skewness measure, the second Pearson skewness coefficient ($3(\frac{mean-median}{\sigma})$).
Indeed I have also come up with distributions and/or data sets for which the two measures are opposite in sign - which suffices to counter the idea that skewness is a single, easily understood concept, rather than a somewhat slippery idea we don't really know how to suitably measure in many cases.
2) There's a set of data constructed in this answer Box-and-whisker plot for multimodal distribution, following the approach of Choonpradub & McNeil (2005), which shows four very different-looking data sets with the same boxplot.
In particular, the distinctly skewed distribution with the symmetric boxplot tends to surprise people.
3) There are another couple of collections of counterexample data sets I constructed in response to people's over-reliance on histograms, especially with only a few bins and only at one bin-width and bin-origin; which leads to mistakenly confident assertions about distributional shape. These data sets and example displays can be found here
Here's one of the examples from there. This is the data:
1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98,
1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.90, 2.93, 2.96, 2.99, 3.60,
3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62
And here are two histograms:
That's the the 34 observations above in both cases, just with different breakpoints, one with binwidth $1$ and the other with binwidth $0.8$. The plots were generated in R as follows:
x <- c(1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98,
1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.9, 2.93, 2.96, 2.99, 3.6,
3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62)
hist(x,breaks=seq(0.3,6.7,by=0.8),xlim=c(0,6.7),col="green3",freq=FALSE)
hist(x,breaks=0:8,col="aquamarine",freq=FALSE)
4) I recently constructed some data sets to demonstrate the intransitivity of the Wilcoxon-Mann-Whitney test - that is, to show that one might reject a one tailed alternative for each of three or four pairs of data sets, A, B, and C, (and D in the four sample case) such that one concluded that $P(B>A)>\frac{1}{2}$ (i.e. conclude that B tends to be bigger than A), and similarly for C against B, and A against C (or D against C and A against D for the 4 sample case); each tends to be larger (in the sense that it has more than even chance of being larger) than the
previous one in the cycle.
Here's one such data set, with 30 observations in each sample, labelled A to D:
1 2 3 4 5 6 7 8 9 10 11 12
A 1.58 2.10 16.64 17.34 18.74 19.90 1.53 2.78 16.48 17.53 18.57 19.05
B 3.35 4.62 5.03 20.97 21.25 22.92 3.12 4.83 5.29 20.82 21.64 22.06
C 6.63 7.92 8.15 9.97 23.34 24.70 6.40 7.54 8.24 9.37 23.33 24.26
D 10.21 11.19 12.99 13.22 14.17 15.99 10.32 11.33 12.65 13.24 14.90 15.50
13 14 15 16 17 18 19 20 21 22 23 24
A 1.64 2.01 16.79 17.10 18.14 19.70 1.25 2.73 16.19 17.76 18.82 19.08
B 3.39 4.67 5.34 20.52 21.10 22.29 3.38 4.96 5.70 20.45 21.67 22.89
C 6.18 7.74 8.63 9.62 23.07 24.80 6.54 7.37 8.37 9.09 23.22 24.16
D 10.20 11.47 12.54 13.08 14.45 15.38 10.87 11.56 12.98 13.99 14.82 15.65
25 26 27 28 29 30
A 1.42 2.56 16.73 17.01 18.86 19.98
B 3.44 4.13 6.00 20.85 21.82 22.05
C 6.57 7.58 8.81 9.08 23.43 24.45
D 10.29 11.48 12.19 13.09 14.68 15.36
Here's an example test:
> wilcox.test(adf$A,adf$B,alt="less",conf.int=TRUE)
Wilcoxon rank sum test
data: adf$A and adf$B
W = 300, p-value = 0.01317
alternative hypothesis: true location shift is less than 0
95 percent confidence interval:
-Inf -1.336372
sample estimates:
difference in location
-2.500199
As you see, the one-sided test rejects the null; values from A tend to be smaller than values from B. The same conclusion (at the same p-value) applies to B vs C, C vs D and D vs A. This cycle of rejections, of itself, is not automatically a problem, if we don't interpret it to mean something it doesn't. (It's a simple matter to obtain much smaller p-values with similar, but larger, samples.)
The larger "paradox" here comes when you compute the (one-sided in this case) intervals for a location shift -- in every case 0 is excluded (the intervals aren't identical in each case). This leads us to the conclusion that as we move across the data columns from A to B to C to D, the location moves to the right, and yet the same happens again when we move back to A.
With larger versions of these data sets (similar distribution of values, but more of them), we can get significance (one- or two-tailed) at substantially smaller significance levels, so that one might use Bonferroni adjustments, for example, and still conclude each group came from a distribution which was shifted up from the next one.
This shows us, among other things, that a rejection in the Wilcoxon-Mann-Whitney doesn't of itself automatically justify a claim of a location shift.
(While it's not the case for these data, it's also possible to construct sets where the sample means are constant, while results like the above apply.)
Added in later edit: A very informative and educational reference on this is
Brown, B.M., and Hettmansperger, T.P. (2002).
Kruskal-Wallis, multiple comparisons and Efron dice.
Aust. & N.Z. J. Stat., 44, 427–438.
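The cyclic behaviour here is the same phenomenon as the Efron dice of that reference. As a quick illustration, in Python rather than R, here is a classic set of intransitive dice (illustrative; not the A-D data sets above): each die beats the next with probability 5/9.

```python
from itertools import product

# A classic set of intransitive dice (illustrative; not the A-D data above)
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_beats(x, y):
    """P(a roll of die x exceeds a roll of die y)."""
    return sum(a > b for a, b in product(x, y)) / (len(x) * len(y))

# A beats B, B beats C, and yet C beats A -- each with probability 5/9
print(p_beats(A, B), p_beats(B, C), p_beats(C, A))
```

One-sided Wilcoxon-Mann-Whitney tests on samples drawn from distributions like these will show the same cycle of rejections, for just the reason described above.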
5) Another couple of related counterexamples come up here - where an ANOVA may be significant, but all pairwise comparisons aren't (interpreted two different ways there, yielding different counterexamples).
So there are several counterexample data sets that contradict misunderstandings one might encounter.
As you might guess, I construct such counterexamples reasonably often (as do many other people), usually as the need arises. For some of these common misunderstandings, you can characterize the counterexamples in such a way that new ones may be generated at will (though more often, a certain level of work is involved).
If there are particular kinds of things you might be interested in, I might be able to locate more such sets (mine or those of other people), or perhaps even construct some.
One useful trick for generating random regression data that has coefficients that you want is as follows (the part in parentheses is an outline of R code):
a) set up the coefficients you want with no noise (y <- b0 + b1*x1 + b2*x2)
b) generate an error term with the desired characteristics (n <- rnorm(length(y), sd = 0.4))
c) set up a regression of the noise on the same x's (nfit <- lm(n ~ x1 + x2))
d) add the residuals from that to the y variable (y <- y + nfit$residuals)
Done. (the whole thing can actually be done in a couple of lines of R)
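As a sketch of the same four steps outside R (in Python with numpy; the coefficient values are illustrative): because the residuals from step (c) are orthogonal to the design columns, refitting recovers the chosen coefficients exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# a) exact relationship with the coefficients we want (no noise yet)
b0, b1, b2 = 2.0, 1.5, -0.7
y = b0 + b1 * x1 + b2 * x2

# b) an error term with the desired characteristics
noise = rng.normal(scale=0.4, size=n)

# c) regress the noise on the same x's and keep only its residuals
X = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(X, noise, rcond=None)
resid = noise - X @ coef

# d) add those residuals to y: noisy data, but the fit is exact
y = y + resid
fit, *_ = np.linalg.lstsq(X, y, rcond=None)
print(fit)  # [2.0, 1.5, -0.7] up to rounding error
```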
|
7,675
|
Datasets constructed for a purpose similar to that of Anscombe's quartet
|
With regard to generating (e.g., your own) datasets for similar purposes, you might be interested in:
Chatterjee, S. & Firat, A. (2007). Generating data with identical statistics but dissimilar graphics: A follow up to the Anscombe dataset. The American Statistician, 61, 3, pp. 248–254.
As far as datasets that are simply used to demonstrate tricky / counter-intuitive phenomena in statistics, there are a lot, but you need to specify what phenomena you want to demonstrate. For example, with respect to demonstrating Simpson's paradox, the Berkeley gender bias case dataset is very famous.
For a great discussion of the most famous dataset of all, see: What aspects of the "Iris" data set make it so successful as an example/teaching/test data set.
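For a self-contained flavour of that paradox, here is a small Python sketch with made-up numbers in the spirit of the Berkeley case (illustrative only, not the actual admissions data):

```python
# Made-up admission counts in the spirit of the Berkeley case
# (illustrative numbers only, not the actual data):
#                      (applicants, admits)
data = {
    "dept_easy": {"men": (800, 480), "women": (100, 70)},   # women 70%, men 60%
    "dept_hard": {"men": (200, 40),  "women": (900, 225)},  # women 25%, men 20%
}

def overall_rate(group):
    apps = sum(data[d][group][0] for d in data)
    admits = sum(data[d][group][1] for d in data)
    return admits / apps

# Women have the higher rate within *each* department, yet the
# aggregate reverses because they apply mostly to the harder one:
print(overall_rate("men"), overall_rate("women"))  # 0.52 vs 0.295
```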
|
7,676
|
Datasets constructed for a purpose similar to that of Anscombe's quartet
|
In the paper "Let's Put the Garbage-Can Regressions and Garbage-Can Probits Where They Belong" (C. Achen, 2004) the author creates a synthetic data set with a non-linearity that is meant to reflect real-life cases when data might have suffered a coding error during measurement (e.g. a distortion in assigning data to categorical values, or incorrect quantization procedures).
The synthetic data is created from a perfect linear relationship with two positive coefficients, but once you apply the non-linear coding error, standard regression techniques will produce a coefficient that is of the wrong sign and also statistically significant (and would become more so if you bootstrapped a larger synthetic data set).
Though it is just a small synthetic data set, the paper presents a great refutation of naive "dump everything I can think of on the right hand side" sorts of regression, showing that with even tiny / subtle non-linearities (which actually are quite common in things like coding errors or quantization errors), you can get wildly misleading results if you just trust the output of standard regression push-button analysis.
|
7,677
|
Data mining: How should I go about finding the functional form?
|
To find the best-fitting functional form (so-called free-form or symbolic regression) for the data, try this tool - to the best of my knowledge this is the best one available (at least I am very excited about it)...and it's free :-)
http://creativemachines.cornell.edu/eureqa
EDIT: I gave it a shot with Eureqa and I would go for:
$$AA + AA^2 + BB*CC$$ with $R^2=0.99988$
I would call it a perfect fit (Eureqa gives other, better fitting solutions, but these are also a little bit more complicated. Eureqa favours this one, so I chose this one) - and Eureqa did everything for me in about a few seconds on a normal laptop ;-)
|
7,678
|
Data mining: How should I go about finding the functional form?
|
$R^2$ alone is not a good measure of goodness of fit, but let's not get into that here except to observe that parsimony is valued in modeling.
To that end, note that standard techniques of exploratory data analysis (EDA) and regression (but not stepwise or other automated procedures) suggest using a linear model in the form
$$\sqrt{f} = a + b*c + a*b*c + \text{constant} + \text{error}$$
Using OLS, this does achieve an $R^2$ above 0.99. Heartened by such a result, one is tempted to square both sides and regress $f$ on $a$, $b*c$, $a*b*c$, and all their squares and products. This immediately produces a model
$$f = a^2 + b*c + \text{constant} + \text{error}$$
with a root MSE of under 34 and an adjusted $R^2$ of 0.9999. The estimated coefficients of 1.0112 and 0.988 suggest the data may be artificially generated with the formula
$$f = a^2 + b*c + 50$$
plus a little normally distributed error of SD approximately equal to 50.
Edit
In response to @knorv's hints, I continued the analysis. To do so I used the techniques that had been successful so far, beginning with inspecting scatterplot matrices of the residuals against the original variables. Sure enough, there was a clear indication of correlation between $a$ and the residuals (even though OLS regression of $f$ against $a$, $a^2$, and $b*c$ did not indicate $a$ was "significant"). Continuing in this vein I explored all correlations between the quadratic terms $a^2, \ldots, e^2, a*b, a*c, \ldots, d*e$ and the new residuals and found a tiny but highly significant relationship with $b^2$. "Highly significant" means that all this snooping involved looking at some 20 different variables, so my criterion for significance on this fishing expedition was approximately 0.05/20 = 0.0025: anything less stringent could easily be an artifact of the probing for fits.
This has something of the flavor of a physical model in that we expect, and therefore search for, relationships with "interesting" and "simple" coefficients. So, for instance, seeing that the estimated coefficient of $b^2$ was -0.0092 (between -0.005 and -0.013 with 95% confidence), I elected to use -1/100 for it. If this were some other dataset, such as observations of a social or political system, I would make no such changes but just use the OLS estimates as-is.
Anyway, an improved fit is given by
$$f = a + a^2 + b*c - b^2/100 + 30.5 + \text{error}$$
with mean residual $0$, standard deviation 26.8, all residuals between -50 and +43, and no evidence of non-normality (although with such a small dataset the errors could even be uniformly distributed and one couldn't really tell the difference). The reduction in residual standard deviation from around 50 to around 25 would often be expressed as "explaining 75% of the residual variance."
I make no claim that this is the formula used to generate the data. The residuals are large enough to allow some fairly large changes in a few of the coefficients. For instance, 95% CIs for the coefficients of $a$, $b^2$, and the constant are [-0.4, 2.7], [-0.013, -0.003], and [-7, 61] respectively. The point is that if any random error has actually been introduced in the data-generation procedure (and that is true of all real-world data), that would preclude definitive identification of the coefficients (and even of all the variables that might be involved). That's not a limitation of statistical methods: it's just a mathematical fact.
BTW, using robust regression I can fit the model
$$f = 1.0103 a^2 + 0.99493 b*c - 0.007 b^2 + 46.78 + \text{error}$$
with residual SD of 27.4 and all residuals between -51 and +47: essentially as good as the previous fit but with one less variable. It is more parsimonious in that sense, but less parsimonious in the sense that I haven't rounded the coefficients to "nice" values. Nevertheless, this is the form I would usually favor in a regression analysis absent any rigorous theories about what kinds of values the coefficients ought to have and which variables ought to be included.
It is likely that additional strong relationships are lurking here, but they would have to be fairly complicated. Incidentally, taking data whose original SD is 3410 and reducing their variation to residuals with an SD of 27 is a 99.99384% reduction in variance (the $R^2$ of this new fit). One would continue looking for additional effects only if the residual SD is too large for the intended purpose. In the absence of any purpose besides second-guessing the OP, it's time to stop.
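To see how cleanly OLS recovers a structure like this when the generating formula really is $f = a^2 + b*c + 50$ plus noise, here is a Python sketch on synthetic data (an assumption mirroring the reconstruction above, not the OP's actual data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
a = rng.uniform(1, 10, size=n)
b = rng.uniform(1, 10, size=n)
c = rng.uniform(1, 10, size=n)

# Assume the inferred generating formula plus noise (an assumption
# mirroring the reconstruction above, not the OP's actual data)
f = a**2 + b * c + 50 + rng.normal(scale=5, size=n)

# OLS on the two constructed regressors plus a constant
X = np.column_stack([a**2, b * c, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, f, rcond=None)
print(coef)  # close to [1, 1, 50]
```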
|
7,679
|
Data mining: How should I go about finding the functional form?
|
Your question needs refining because the function f is almost certainly not uniquely defined by the sample data. There are many different functions which could generate the same data.
That being said, Analysis of Variance (ANOVA) or a "sensitivity study" can tell you a lot about how your inputs (AA..EE) affect your output (FF).
I just did a quick ANOVA and found a reasonably good model: FF = 101*A + 47*B + 49*C - 4484.
The function does not seem to depend on DD or EE linearly. Of course, we could go further with the model and add quadratic and mixture terms. Eventually you will have a perfect model that over-fits the data and has no predictive value. :)
|
7,680
|
Data mining: How should I go about finding the functional form?
|
Broadly speaking, there's no free lunch in machine learning:
In particular, if algorithm A outperforms algorithm B on some cost functions, then loosely speaking there must exist exactly as many other functions where B outperforms A
/edit: also, a radial SVM with C = 4 and sigma = 0.206 easily yields an R2 of .99. Extracting the actual equation used to derive this dataset is left as an exercise to the class. Code is in R.
setwd("~/wherever")
library('caret')
Data <- read.csv("CV.csv", header=TRUE)
FL <- as.formula("FF ~ AA+BB+CC+DD+EE")
model <- train(FL,data=Data,method='svmRadial',tuneGrid = expand.grid(.C=4,.sigma=0.206))
R2( predict(model, Data), Data$FF)
|
7,681
|
Data mining: How should I go about finding the functional form?
|
All models are wrong but some are useful - G.E.P. Box
Y(T)= - 4709.7
+ 102.60*AA(T)- 17.0707*AA(T-1)
+ 62.4994*BB(T)
+ 41.7453*CC(T)
+ 965.70*ZZ(T)
where ZZ(T)=0 FOR T=1,10
=1 OTHERWISE
There appears to be a "lagged relationship" between Y and AA AND an explained shift in the mean for observations 11-25 .
Curious results if this is not chronological or spatial data.
|
7,682
|
Data mining: How should I go about finding the functional form?
|
R square of 97.2%
Estimation/Diagnostic Checking for Variable Y Y
X1 AAS
X2 BB
X3 BBS
X4 CC
Number of Residuals (R) =n 25
Number of Degrees of Freedom =n-m 20
Residual Mean =Sum R / n -.141873E-05
Sum of Squares =Sum R2 .775723E+07
Variance =SOS/(n) 310289.
Adjusted Variance =SOS/(n-m) 387861.
Standard Deviation RMSE =SQRT(Adj Var) 622.785
Standard Error of the Mean =Standard Dev/ (n-m) 139.259
Mean / its Standard Error =Mean/SEM -.101877E-07
Mean Absolute Deviation =Sum(ABS(R))/n 455.684
AIC Value ( Uses var ) =nln +2m 326.131
SBC Value ( Uses var ) =nln +m*lnn 332.226
BIC Value ( Uses var ) =see Wei p153 340.388
R Square = .972211
Durbin-Watson Statistic =[-A(T-1)]**2/A2 1.76580
**
MODEL COMPONENT LAG COEFF STANDARD P T
# (BOP) ERROR VALUE VALUE
1CONSTANT -.381E+04 466. .0000 -8.18
INPUT SERIES X1 AAS AA SQUARED
2Omega (input) -Factor # 1 0 .983 .410E-01 .0000 23.98
INPUT SERIES X2 BB BB AS GIVEN
3Omega (input) -Factor # 2 0 108. 14.9 .0000 7.27
INPUT SERIES X3 BBS BB SQUARED
4Omega (input) -Factor # 3 0 -.577 .147 .0008 -3.93
INPUT SERIES X4 CC CC AS GIVEN
5Omega (input) -Factor # 4 0 49.9 4.67 .0000 10.67
|
7,683
|
Why should we shuffle data while training a neural network?
|
Note: throughout this answer I refer to minimization of training loss and I do not discuss stopping criteria such as validation loss. The choice of stopping criteria does not affect the process/concepts described below.
The process of training a neural network is to find the minimum value of a loss function $ℒ_X(W)$, where $W$ represents a matrix (or several matrices) of weights between neurons and $X$ represents the training dataset. I use a subscript for $X$ to indicate that our minimization of $ℒ$ occurs only over the weights $W$ (that is, we are looking for $W$ such that $ℒ$ is minimized) while $X$ is fixed.
Now, if we assume that we have $P$ elements in $W$ (that is, there are $P$ weights in the network), $ℒ$ is a surface in a $P+1$-dimensional space. To give a visual analogue, imagine that we have only two neuron weights ($P=2$). Then $ℒ$ has an easy geometric interpretation: it is a surface in a 3-dimensional space. This arises from the fact that for any given matrices of weights $W$, the loss function can be evaluated on $X$ and that value becomes the elevation of the surface.
Why should we shuffle data while training a neural network?
Note: throughout this answer I refer to minimization of training loss and I do not discuss stopping criteria such as validation loss. The choice of stopping criteria does not affect the process/concepts described below.
The process of training a neural network is to find the minimum value of a loss function $ℒ_X(W)$, where $W$ represents a matrix (or several matrices) of weights between neurons and $X$ represents the training dataset. I use a subscript for $X$ to indicate that our minimization of $ℒ$ occurs only over the weights $W$ (that is, we are looking for $W$ such that $ℒ$ is minimized) while $X$ is fixed.
Now, if we assume that we have $P$ elements in $W$ (that is, there are $P$ weights in the network), $ℒ$ is a surface in a $P+1$-dimensional space. To give a visual analogue, imagine that we have only two neuron weights ($P=2$). Then $ℒ$ has an easy geometric interpretation: it is a surface in a 3-dimensional space. This arises from the fact that for any given matrices of weights $W$, the loss function can be evaluated on $X$ and that value becomes the elevation of the surface.
But there is the problem of non-convexity; the surface I described will have numerous local minima, and therefore gradient descent algorithms are susceptible to becoming "stuck" in those minima while a deeper/lower/better solution may lie nearby. This is likely to occur if $X$ is unchanged over all training iterations, because the surface is fixed for a given $X$; all its features are static, including its various minima.
A solution to this is mini-batch training combined with shuffling. By shuffling the rows and training on only a subset of them during a given iteration, $X$ changes with every iteration, and it is actually quite possible that no two iterations over the entire sequence of training iterations and epochs will be performed on the exact same $X$. The effect is that the solver can easily "bounce" out of a local minimum. Imagine that the solver is stuck in a local minimum at iteration $i$ with training mini-batch $X_i$. This local minimum corresponds to $ℒ$ evaluated at a particular value of weights; we'll call it $ℒ_{X_i}(W_i)$. On the next iteration the shape of our loss surface actually changes because we are using $X_{i+1}$, that is, $ℒ_{X_{i+1}}(W_i)$ may take on a very different value from $ℒ_{X_i}(W_i)$ and it is quite possible that it does not correspond to a local minimum! We can now compute a gradient update and continue with training. To be clear: the shape of $ℒ_{X_{i+1}}$ will -- in general -- be different from that of $ℒ_{X_{i}}$. Note that here I am referring to the loss function $ℒ$ evaluated on a training set $X$; it is a complete surface defined over all possible values of $W$, rather than the evaluation of that loss (which is just a scalar) for a specific value of $W$. Note also that if mini-batches are used without shuffling there is still a degree of "diversification" of loss surfaces, but there will be a finite (and relatively small) number of unique error surfaces seen by the solver (specifically, it will see the same exact set of mini-batches -- and therefore loss surfaces -- during each epoch).
One thing I deliberately avoided was a discussion of mini-batch sizes, because there are a million opinions on this and it has significant practical implications (greater parallelization can be achieved with larger batches). However, I believe the following is worth mentioning. Because $ℒ$ is evaluated by computing a value for each row of $X$ (and summing or taking the average; i.e., a commutative operator) for a given set of weight matrices $W$, the arrangement of the rows of $X$ has no effect when using full-batch gradient descent (that is, when each batch is the full $X$, and iterations and epochs are the same thing).
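The shuffled mini-batch scheme described above can be sketched in a few lines. This is a minimal NumPy illustration, not the method of any particular library; a linear model and squared loss stand in for a real network and its loss $ℒ$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1000 rows of X, target from a known linear rule plus noise.
X = rng.normal(size=(1000, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=1000)

w = np.zeros(5)                 # the weights W being learned
lr, batch_size = 0.05, 32

for epoch in range(20):
    order = rng.permutation(len(X))      # reshuffle the rows every epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        Xb, yb = X[idx], y[idx]          # the mini-batch X_i changes each iteration
        grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)  # gradient of mean squared error
        w -= lr * grad                   # one gradient update

print(np.round(w, 2))
```

This convex toy problem would also converge with a fixed batch order (computing `order` once outside the epoch loop); the reshuffling pays off on the non-convex surfaces discussed above, where each new mini-batch reshapes the loss surface the solver sees.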
|
7,684
|
Why should we shuffle data while training a neural network?
|
To try to give another explanation:
One of the most powerful things about neural networks is that they can be very complex functions, allowing one to learn very complex relationships between your input and output data. These relationships can include things you would never expect, such as the order in which data is fed in per epoch. If the order of data within each epoch is the same, then the model may use this as a way of reducing the training error, which is a sort of overfitting.
With respect to speed: Mini-batch methods rely on stochastic gradient descent (and improvements thereon), which means that they rely on the randomness to find a minimum. Shuffling mini-batches makes the gradients more variable, which can help convergence because it increases the likelihood of hitting a good direction (or at least that is how I understand it).
|
7,685
|
Why should we shuffle data while training a neural network?
|
Imagine your last few mini-batch labels indeed have more noise. Then these batches will pull the final learned weights in the wrong direction. If you shuffle every time, the chances of the last few batches being disproportionately noisy go down.
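A toy simulation of that intuition, under the hypothetical assumption that label noise is concentrated at the end of the data; shuffling spreads the noisy rows across batches:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: 1000 samples, the last 100 have noisy labels.
noisy = np.zeros(1000, dtype=bool)
noisy[-100:] = True
batch_size = 100

# Without shuffling, the final batch is 100% noisy.
last_batch_noise_fixed = noisy[-batch_size:].mean()

# After shuffling, the final batch's noise fraction tracks the overall 10%.
order = rng.permutation(1000)
last_batch_noise_shuffled = noisy[order][-batch_size:].mean()

print(last_batch_noise_fixed, last_batch_noise_shuffled)
```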
|
7,686
|
Why should we shuffle data while training a neural network?
|
From a very simplistic point of view, the data is fed in sequentially, which suggests that at the very least, it's possible for the data order to have an effect on the output. If the order doesn't matter, randomization certainly won't hurt. If the order does matter, randomization will help to smooth out those random effects so that they don't become systematic bias. In short, randomization is cheap and never hurts, and will often minimize data-ordering effects.
|
7,687
|
Why should we shuffle data while training a neural network?
|
When you train your network on a fixed data set, meaning data you never shuffle during training, you are very likely to get weights that are very high and very low, such as 40, 70, -101, 200, etc. This simply means that your network has not learnt the training data but has learnt the noise in your training data: a classic case of an overfit model. With such a network you'll get spot-on predictions for the data you used for training, but if you use any other inputs to test it, your model will fall apart. Now, when you shuffle the training data after each epoch (an iteration over the whole set), you feed different inputs to the neurons at each epoch, and that regulates the weights, meaning you're more likely to get "lower" weights that are closer to zero, which means your network can make better generalisations.
I hope that was clear.
|
7,688
|
Why should we shuffle data while training a neural network?
|
Here is a more intuitive explanation:
When using gradient descent, we want the loss to be reduced in the direction of the gradient. The gradient is calculated from the data of a single mini-batch for each round of weight updating. What we want is for this mini-batch-based gradient to roughly match the population gradient, because this is expected to produce quicker convergence. (Imagine feeding the network 100 class-1 examples in one mini-batch and 100 class-2 examples in another: the network will hover around. A better way is to feed it 50 class-1 + 50 class-2 examples in each mini-batch.)
How do we achieve this, since we cannot use the population data in a mini-batch? The art of statistics tells us: shuffle the population, and the first batch_size pieces of data can represent the population. This is why we need to shuffle the population.
I have to say, shuffling is not necessary if you have some other method to sample data from the population and ensure the samples produce a reasonable gradient.
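That "first batch_size pieces of a shuffled population" point can be checked with a quick sketch, using hypothetical two-class labels: after shuffling, every mini-batch's class mix tracks the population's 50/50 split.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical labels: 500 of class 0 followed by 500 of class 1.
labels = np.array([0] * 500 + [1] * 500)
batch_size = 100

# Without shuffling, early batches are all class 0, later ones all class 1.
unshuffled = [labels[i:i + batch_size].mean() for i in range(0, 1000, batch_size)]

# After shuffling, each batch's class-1 fraction stays close to the population's 0.5.
order = rng.permutation(1000)
shuffled = [labels[order[i:i + batch_size]].mean() for i in range(0, 1000, batch_size)]

print(unshuffled)
print([round(float(f), 2) for f in shuffled])
```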
That's my understanding. Hope it helps.
|
7,689
|
Data "exploration" vs data "snooping"/"torturing"?
|
There is a distinction which sometimes doesn't get enough attention, namely hypothesis generation vs. hypothesis testing, or exploratory analysis vs. hypothesis testing. You are allowed all the dirty tricks in the world to come up with your idea / hypothesis. But when you later test it, you must ruthlessly kill your darlings.
I'm a biologist working with high-throughput data all the time, and yes, I do this "slicing and dicing" quite often. In most cases the experiment performed was not carefully designed; or maybe those who planned it did not account for all possible results. Or the general attitude when planning was "let's see what's in there". We end up with expensive, valuable and in themselves interesting data sets that I then turn around and around to come up with a story.
But then, it is only a story (possibly a bedtime story). After you have selected a couple of interesting angles -- and here is the crucial point -- you must test them not only with independent data sets or independent samples, but preferably with an independent approach, an independent experimental system.
The importance of this last thing -- an independent experimental setup, not only an independent set of measurements or samples -- is often underestimated. When we test 30,000 variables for significant difference, it often happens that similar (but different) samples from the same cohort, analysed with the same method, will not reject the hypothesis we based on the previous set. But then we turn to another type of experiment and another cohort, and our findings turn out to be the result of a methodological bias, or limited in their applicability.
That is why we often need several papers by several independent researchers to really accept a hypothesis or a model.
So I think such data torturing is fine, as long as you keep this distinction in mind and remember what you are doing, at what stage of the scientific process you are. You can use moon phases or redefine 2+2 as long as you have an independent validation of the data.
Unfortunately, there are those who order a microarray to round up a paper after several experiments have been done and no story has emerged, with the hope that the high-throughput analysis will turn something up. Or they are confused about the whole hypothesis testing vs. generation distinction.
|
Data "exploration" vs data "snooping"/"torturing"?
|
There is a distinction which sometimes doesn't get enough attention, namely hypothesis generation vs. hypothesis testing, or exploratory analysis vs. hypothesis testing. You are allowed all the dirty
|
Data "exploration" vs data "snooping"/"torturing"?
There is a distinction which sometimes doesn't get enough attention, namely hypothesis generation vs. hypothesis testing, or exploratory analysis vs. hypothesis testing. You are allowed all the dirty tricks in the world to come up with your idea / hypothesis. But when you later test it, you must ruthlessly kill your darlings.
I'm a biologist working with high throughput data all the time, and yes, I do this "slicing and dicing" quite often. Most of the cases the experiment performed was not carefully designed; or maybe those who planned it did not account for all possible results. Or the general attitude when planning was "let's see what's in there". We end up with expensive, valuable and in themselves interesting data sets that I then turn around and around to come up with a story.
But then, it is only a story (possible bedtime). After you have selected a couple of interesting angles -- and here is the crucial point -- you must test it not only with independent data sets or independent samples, but preferably with an independent approach, an independent experimental system.
The importance of this last thing -- an independent experimental setup, not only independent set of measurements or samples -- is often underestimated. However, when we test 30,000 variables for significant difference, it often happens that while similar (but different) samples from the same cohort and analysed with the same method will not reject the hypothesis we based on the previous set. But then we turn to another type of experiment and another cohort, and our findings turn out to be the result of a methodological bias or are limited in their applicability.
That is why we often need several papers by several independent researchers to really accept a hypothesis or a model.
So I think such data torturing is fine, as long as you keep this distinction in mind and remember what you are doing, at what stage of scientific process you are. You can use moon phases or redefine 2+2 as long as you have an independent validation of the data. To put it on a picture:
Unfortunately, there are those who order a microarray to round up a paper after several experiments have been done and no story emerged, with the hope that the high throughput analysis shows something. Or they are confused about the whole hypothesis testing vs. generation thing.
|
Data "exploration" vs data "snooping"/"torturing"?
There is a distinction which sometimes doesn't get enough attention, namely hypothesis generation vs. hypothesis testing, or exploratory analysis vs. hypothesis testing. You are allowed all the dirty
|
7,690
|
Data "exploration" vs data "snooping"/"torturing"?
|
Herman Friedman, my favorite professor in grad school, used to say that
"if you're not surprised, you haven't learned anything"
Strict avoidance of anything except the most rigorous testing of a priori defined hypotheses severely limits your ability to be surprised.
I think the key thing is that we are honest about what we are doing. If we are in a highly exploratory mode, we should say so. At the opposite end, one professor I know of told her student to change her hypotheses since the original ones were not found to be significant.
|
Data "exploration" vs data "snooping"/"torturing"?
|
Herman Friedman, my favorite professor in grad school, used to say that
"if you're not surprised, you haven't learned anything"
Strict avoidance of anything except the most rigorous testing of a pr
|
Data "exploration" vs data "snooping"/"torturing"?
Herman Friedman, my favorite professor in grad school, used to say that
"if you're not surprised, you haven't learned anything"
Strict avoidance of anything except the most rigorous testing of a priori defined hypotheses severely limits your ability to be surprised.
I think the key thing is that we are honest about what we are doing. If we are in a highly exploratory mode, we should say so. At the opposite end, one professor I know of told her student to change her hypotheses since the original ones were not found to be significant.
|
Data "exploration" vs data "snooping"/"torturing"?
Herman Friedman, my favorite professor in grad school, used to say that
"if you're not surprised, you haven't learned anything"
Strict avoidance of anything except the most rigorous testing of a pr
|
7,691
|
Data "exploration" vs data "snooping"/"torturing"?
|
Let me add a few points:
first of all, hypothesis generation is an important part of science. And non-predictive (exploratory/descriptive) results can be published.
IMHO the trouble is not per se that data exploration is used on a data set and only parts of those findings are published. The problems are
not describing how much has been tried out
then drawing conclusions as if the study were a validation study for some predictive model / a hypothesis testing study
Science and method development are iterative processes in a far more general way than just hypothesis generation - testing - generating new hypotheses - testing .... IMHO it is a matter of professional judgment what kind of proper conduct is necessary at what stage (see example below).
What I do:
try to make people aware of the optimistic bias that results
When I have a chance, I also show people how much of a difference that makes (feasible mostly with a lower level of the same problem, e.g. comparing patient-independently validated data with internal performance estimates of hyper-parameter optimization routines, such as grid search for SVM parameters, "combined models" such as PCA-LDA, and so on. Not really feasible for the real data dredging, because so far no one has given me the money to make a true replicate of a sensibly sized study...)
for papers that I'm coauthor of: insist on a discussion of the limitations of the conclusions. Make sure the conclusions are not formulated in a more general way than the study allows.
Encourage co-workers to use their expert knowledge about the subject of the study and the process of data generation to decide how to treat the data instead of performing costly (in terms of the sample size you'd need to do that properly) optimization of model-"hyper"-parameters (such as what kind of pre-processing to use).
in parallel: try to make people aware of how costly this optimization business is if done properly (whether this is called exploration or not is irrelevant; if done wrongly, it will have results similar to data dredging), e.g. Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33. DOI: 10.1016/j.aca.2012.11.007
accepted manuscript on arXiv: 1211.1323
Here's a study that finds that this blind trying-around is often futile:
J. Engel, J. Gerretzen, E. Szymańska, J. J. Jansen, G. Downey, L. Blanchet, L.M.C. Buydens: Breaking with trends in pre-processing?, TrAC Trends in Analytical Chemistry, 2013, 50, 96-106. DOI: 10.1016/j.trac.2013.04.015
(they tried a large number of combinations of pre-processing steps and found that very few led to better models than no pre-processing at all)
Emphasise that I'm not torturing my data more than necessary:
example:
All preprocessing was decided exclusively using spectroscopic knowledge, and no data-driven preprocessing was performed.
A follow-up paper using the same data as example for (different) theory development reads
All pre-processing was decided by spectroscopic knowledge, no data-driven steps were included and no parameter optimization was performed. However, we checked that a PLS projection [45] of the spectra onto 25 latent variables as pre-processing for LR training did not lead to more than slight changes in the prediction (see supplementary figure S.2).
Because meanwhile I was explicitly asked (on a conference by an editor of the journal CILS) to compare the models with PLS pre-processing.
Take a practical point of view: e.g. in the astrocytoma study linked above, of course I still decided some points after looking at the data (such as what intensity threshold corresponds to measurements taken from outside the sample, which were then discarded). Other decisions I know to be uncritical (linear vs. quadratic baseline): my experience with that type of data suggests that this actually doesn't change much, which is also in perfect agreement with what Jasper Engel found on different data of a similar type, so I wouldn't expect a large bias to come from deciding the type of baseline by looking at the data (the paper gives an argument why that is sensible).
Based on the study we did, we can now say what should be tackled next and what should be changed. And because we are still at a comparatively early step of method development (looking at ex-vivo samples), it is not worthwhile to go through all the "homework" that will ultimately be needed before the method could be used in vivo. E.g. at the present stage of the astrocytoma grading, resampling validation is a more sensible choice than an external test set. I still emphasize that a truly external validation study will be needed at some point, because some performance characteristics can only be measured that way (e.g. the effects of instrument drift / proving that we can correct for these). But right now, while we're still playing with ex-vivo samples and are solving other parts of the large problem (in the linked papers: how to deal with borderline cases), the gain in useful knowledge from a proper ex-vivo validation study is too low to be worth the effort (IMHO: unless it were done in order to measure the bias due to data dredging).
I once read an argument about statistical and reporting standards, and whether they should be made mandatory for a journal (I don't remember which one), which convinced me: the idea expressed there was that there is no need for the editors to try to agree on and enforce some standard (which would cause much futile discussion) because:
whoever uses the proper techniques is usually very aware/proud of that and will (and should) therefore report in detail what was done.
If a certain point (e.g. data dredging, validation not independent at the patient level) is not clearly spelled out, the default assumption for reviewers/readers is that the study did not adhere to the proper principles on that question (possibly because they didn't know better).
|
Data "exploration" vs data "snooping"/"torturing"?
|
Let me add a few points:
first of all, hypothesis generation is an important part of science. And non-predictive (exploratory/descriptive) results can be published.
IMHO the trouble is not per se th
|
Data "exploration" vs data "snooping"/"torturing"?
Let me add a few points:
first of all, hypothesis generation is an important part of science. And non-predictive (exploratory/descriptive) results can be published.
IMHO the trouble is not per se that data exploration is used on a data set and only parts of those findings are published. The problems are
not describing how much has been tried out
then drawing conclusions as if the study were a validation study for some predictive model / a hypothesis testing study
Science and method development are iterative processes in a far more general way than just hypothesis generation - testing - generating new hypotheses - testing .... IMHO it is a matter of professional judgment what kind of proper conduct is necessary at what stage (see example below).
What I do:
try to make people aware of the optimistic bias that results
When I have a chance, I also show people how much of a difference that makes (feasible mostly with a lower level of the same problem, e.g. compare patient-independently validated data with internal performance estimates of hyper-parameter optimization routines, such as grid search for SVM paraters, "combined models" such as PCA-LDA, and so on. Not really feasible for the real data dredging, because so far, noone gave me the money to make a true replicate of a sensible sized study...)
for papers that I'm coauthor of: insist on a discussion of the limitations of the conclusions. Make sure the conclusions are not formulated in a more general way than the study allows.
Encourage co-workers to use their expert knowledge about the subject of the study and the process of data generation to decide how to treat the data instead of performing costly (in terms of the sample size you'd need to do that properly) optimization of model-"hyper"-parameters (such as what kind of pre-processing to use).
in parallel: try to make people aware of how costly this optimization business is if done properly (whether this is called exploration or not is irrelevant, if done wrongly, it will have similar results like data dredging), e.g. Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33. DOI: 10.1016/j.aca.2012.11.007
accepted manuscript on arXiv: 1211.1323
Here's a study that finds this blind trying around also is often futile, e.g.
J. Engel, J. Gerretzen, E. Szymańska, J. J. Jansen, G. Downey, L. Blanchet, L.M.C. Buydens: Breaking with trends in pre-processing?, TrAC Trends in Analytical Chemistry, 2013, 50, 96-106. DOI: 10.1016/j.trac.2013.04.015
(they tried a large number of combinations of pre-processing steps and found that very few lead to better models than no pre-processing at all)
Emphasise that I'm not torturing my data more than necessary:
example:
All preprocessing was decided exclusively using spectroscopic knowledge, and no data-driven preprocessing was performed.
A follow-up paper using the same data as example for (different) theory development reads
All pre-processing was decided by spectroscopic knowledge, no data-driven steps were included and no parameter optimization was performed. However, we checked that a PLS projection [45] of the spectra onto 25 latent variables as pre-processing for LR training did not lead to more than slight changes in the
prediction (see supplementary figure S.2).
Because meanwhile I was explicitly asked (on a conference by an editor of the journal CILS) to compare the models with PLS pre-processing.
Take a practical point of view: e.g. in the astrocytoma study linked above, of course I still decided some points after looking at the data (such as which intensity threshold corresponds to measurements taken from outside the sample, which were then discarded). Other decisions I know to be uncritical: for linear vs. quadratic baseline, my experience with that type of data suggests this actually doesn't change much, which is also in perfect agreement with what Jasper Engel found on different data of similar type. So I wouldn't expect a large bias to come from deciding the type of baseline by looking at the data (the paper gives an argument why that is sensible).
Based on the study we did, we can now say what should be tackled next and what should be changed. And because we are still at a comparatively early step of method development (looking at ex-vivo samples), it is not worthwhile to go through all the "homework" that will ultimately be needed before the method could be used in-vivo. E.g. at the present stage of the astrocytoma grading, resampling validation is a more sensible choice than an external test set. I still emphasize that a truly external validation study will be needed at some point, because some performance characteristics can only be measured that way (e.g. the effects of instrument drift/proving that we can correct for these). But right now, while we're still playing with ex-vivo samples and are solving other parts of the large problem (in the linked papers: how to deal with borderline cases), the gain in useful knowledge from a proper ex-vivo validation study is too low to be worth the effort (IMHO: unless that were done in order to measure the bias due to data dredging).
I once read an argument about statistical and reporting standards, and whether such standards should be made mandatory for a journal (I don't remember which one), which convinced me: the idea expressed there was that there is no need for the editors to try to agree on and enforce some standard (which would cause much futile discussion) because:
whoever uses the proper techniques is usually very aware (and proud) of that and will (and should) therefore report in detail what was done.
If a certain point (e.g. data dredging, validation not independent at the patient level) is not clearly spelled out, the default assumption for reviewers/readers is that the study didn't adhere to the proper principles in that question (possibly because they didn't know better)
|
Data "exploration" vs data "snooping"/"torturing"?
Let me add a few points:
first of all, hypothesis generation is an important part of science. And non-predictive (exploratory/descriptive) results can be published.
IMHO the trouble is not per se th
|
7,692
|
Data "exploration" vs data "snooping"/"torturing"?
|
Sometimes the things you see as "data torture" aren't really. It's not always clear beforehand exactly what you're going to do with data to give what you believe are the genuine results of the experiment until you see it.
For example, with reaction time data for a decision task, you often want to reject times that aren't about the decision (i.e., when they are going so fast they are obviously just guessing and not making a decision). You can plot accuracy of the decision against RT to see where the guessing is generally occurring. But until you've tested that particular paradigm you have no way of knowing where the cutoffs are (in time, not accuracy). To some observers such a procedure looks like torturing the data but as long as it doesn't have anything directly to do with the hypothesis tests (you're not adjusting it based on tests) then it's not torturing the data.
Data snooping during an experiment is ok as long as it's done the right way. It's probably unethical to stick your experiment in a black box and only do the analysis when the planned number of subjects have been run. Sometimes it's hard to tell that there are issues with the experiment until you look at data and you should look at some as soon as possible. Data peeking is strongly disparaged because it's equated to seeing if p < 0.05 and deciding to continue. But there are lots of criteria by which you can decide to continue collecting that do not do anything harmful to your error rates.
Say you want to make sure that your variance estimate is within a known likely range. Small samples can have pretty far out variance estimates so you collect extra data until you know the sample is more representative. In the following simulation I expect the variance in each condition to be 1. I'm going to do something really crazy and sample each group independently for 10 samples and then add subjects until variance is close to 1.
Y <- replicate(1000, {
y1 <- rnorm(10)
while(var(y1) < 0.9 | var(y1) > 1.1) y1 <- c(y1, rnorm(1))
y2 <- rnorm(10)
while(var(y2) < 0.9 | var(y2) > 1.1) y2 <- c(y2, rnorm(1))
c( t.test(y1, y2, var.equal = TRUE)$p.value, length(y1), length(y2) )
})
range(Y[2,]) #range of N's in group 1
[1] 10 1173
range(Y[3,]) #range of N's in group 2
[1] 10 1283
sum(Y[1,] < 0.05) / ncol(Y)
[1] 0.045
So, I've just gone bonkers with the sampling, making my variances close to expected, and I still don't affect alpha much (it's a little under 0.05). With a few more constraints (the N's must be equal in each group and can't be more than 30), alpha is pretty much right on 0.05. But what about SE? What if I instead tried to make the SE a given value? That's actually a really interesting idea because I'm in turn setting the width of the CI in advance (but not its location).
se <- function(x) sqrt(var(x) / length(x))
Y <- replicate(1000, {
y1 <- rnorm(10)
y2 <- rnorm(10)
while(se(y1) > 0.2 | se(y2) > 0.2) {
y1 <- c(y1, rnorm(1)); y2 <- c(y2, rnorm(1))
}
c( t.test(y1, y2, var.equal = TRUE)$p.value, length(y1) )
})
range(Y[2,]) #range of N's in group 1 and 2 (they're equal now)
[1] 10 46
sum(Y[1,] < 0.05) / ncol(Y)
[1] 0.053
Again, alpha changed a small amount even though I've allowed N's to roam up to 46 from the original 10 based on data snooping. More importantly, the SE's all fall in a narrow range in each of the experiments. It's easy to make a small alpha adjustment to fix that if it's a concern. The point is that some data snooping does little to no harm and can even bring benefits.
(BTW, what I'm showing isn't some magic bullet. You don't actually reduce the number of subjects in the long run doing this because power for the varying N's simulation is about the same as for a simulation of the average N's)
None of the above contradicts the recent literature on adding subjects after an experiment started. In those studies they looked at simulations where you added subjects after doing a hypothesis test in order to get the p-value lower. That's still bad and can extraordinarily inflate alpha. Furthermore, I really like January and Peter Flom's answers. I just wanted to point out that looking at data while you're collecting it, and even changing a planned N while collecting, are not necessarily bad things.
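The variance-targeted stopping rule in the first simulation can be re-implemented outside R as well. Here is a minimal Python sketch (the function name, seed, and the 200-replicate count are mine, for illustration only):

```python
import random

def sample_until_var_ok(rng, lo=0.9, hi=1.1, n0=10):
    """Draw N(0, 1) observations, starting with n0 of them, then add one
    at a time until the sample variance falls inside [lo, hi]."""
    y = [rng.gauss(0.0, 1.0) for _ in range(n0)]

    def svar(v):
        # unbiased sample variance
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)

    while not (lo <= svar(y) <= hi):
        y.append(rng.gauss(0.0, 1.0))
    return y

rng = random.Random(1)
sizes = [len(sample_until_var_ok(rng)) for _ in range(200)]
print(min(sizes), max(sizes))  # the final n varies widely across replicates
```

As in the R simulation, the stopping rule pins down the variance estimate while leaving the final sample size data-dependent; the point of the original simulation is that this particular kind of peeking barely moves alpha.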
|
Data "exploration" vs data "snooping"/"torturing"?
|
Sometimes the things you see as "data torture" aren't really. It's not always clear beforehand exactly what you're going to do with data to give what you believe are the genuine results of the experim
|
Data "exploration" vs data "snooping"/"torturing"?
Sometimes the things you see as "data torture" aren't really. It's not always clear beforehand exactly what you're going to do with data to give what you believe are the genuine results of the experiment until you see it.
For example, with reaction time data for a decision task, you often want to reject times that aren't about the decision (i.e., when they are going so fast they are obviously just guessing and not making a decision). You can plot accuracy of the decision against RT to see where the guessing is generally occurring. But until you've tested that particular paradigm you have no way of knowing where the cutoffs are (in time, not accuracy). To some observers such a procedure looks like torturing the data but as long as it doesn't have anything directly to do with the hypothesis tests (you're not adjusting it based on tests) then it's not torturing the data.
Data snooping during an experiment is ok as long as it's done the right way. It's probably unethical to stick your experiment in a black box and only do the analysis when the planned number of subjects have been run. Sometimes it's hard to tell that there are issues with the experiment until you look at data and you should look at some as soon as possible. Data peeking is strongly disparaged because it's equated to seeing if p < 0.05 and deciding to continue. But there are lots of criteria by which you can decide to continue collecting that do not do anything harmful to your error rates.
Say you want to make sure that your variance estimate is within a known likely range. Small samples can have pretty far out variance estimates so you collect extra data until you know the sample is more representative. In the following simulation I expect the variance in each condition to be 1. I'm going to do something really crazy and sample each group independently for 10 samples and then add subjects until variance is close to 1.
Y <- replicate(1000, {
y1 <- rnorm(10)
while(var(y1) < 0.9 | var(y1) > 1.1) y1 <- c(y1, rnorm(1))
y2 <- rnorm(10)
while(var(y2) < 0.9 | var(y2) > 1.1) y2 <- c(y2, rnorm(1))
c( t.test(y1, y2, var.equal = TRUE)$p.value, length(y1), length(y2) )
})
range(Y[2,]) #range of N's in group 1
[1] 10 1173
range(Y[3,]) #range of N's in group 2
[1] 10 1283
sum(Y[1,] < 0.05) / ncol(Y)
[1] 0.045
So, I've just gone bonkers with the sampling and making my variances close to expected and I still don't affect alpha much (it's a little under 0.05). A few more constraints like the N's must be equal in each group and can't be more than 30 and alpha is pretty much right on 0.05. But what about SE? What if I instead tried to make the SE a given value? That's actually a really interesting idea because I'm in turn setting the width of CI in advance (but not the location).
se <- function(x) sqrt(var(x) / length(x))
Y <- replicate(1000, {
y1 <- rnorm(10)
y2 <- rnorm(10)
while(se(y1) > 0.2 | se(y2) > 0.2) {
y1 <- c(y1, rnorm(1)); y2 <- c(y2, rnorm(1))
}
c( t.test(y1, y2, var.equal = TRUE)$p.value, length(y1) )
})
range(Y[2,]) #range of N's in group 1 and 2 (they're equal now)
[1] 10 46
sum(Y[1,] < 0.05) / ncol(Y)
[1] 0.053
Again, alpha changed a small amount even though I've allowed N's to roam up to 46 from the original 10 based on data snooping. More importantly, the SE's all fall in a narrow range in each of the experiments. It's easy to make a small alpha adjustment to fix that if it's a concern. The point is that some data snooping does little to no harm and can even bring benefits.
(BTW, what I'm showing isn't some magic bullet. You don't actually reduce the number of subjects in the long run doing this because power for the varying N's simulation is about the same as for a simulation of the average N's)
None of the above contradicts the recent literature on adding subjects after an experiment started. In those studies they looked at simulations where you added subjects after doing a hypothesis test in order to get the p-value lower. That's still bad and can extraordinarily inflate alpha. Furthermore, I really like January and Peter Flom's answers. I just wanted to point out that looking at data while you're collecting it, and even changing a planned N while collecting, are not necessarily bad things.
|
Data "exploration" vs data "snooping"/"torturing"?
Sometimes the things you see as "data torture" aren't really. It's not always clear beforehand exactly what you're going to do with data to give what you believe are the genuine results of the experim
|
7,693
|
Data "exploration" vs data "snooping"/"torturing"?
|
This is really a cultural problem of unbalanced thinking, where publication bias leads to the favouring of positive results and our competitive nature requires editors and researchers to be seen to be producing results of interest that are novel or contentious, for example in the sense of rebutting someone else's results. In medical research there is considerable progress being made to redress this problem through the compulsory registration of trials and the publication of results, with records of abandoned trials also to be made public. I understand that, since journal publication of unsuccessful research may not be practicable, there are plans to keep a publicly available database of them. Unusual results that cannot be replicated are not necessarily the result of misdemeanour: with perhaps 50,000 (a guess) researchers worldwide each doing several experiments a year, some pretty unusual results are to be expected from time to time.
Using different methods is not necessarily a solution either. For example, what chemist would mix reagents in different ways in different conditions and expect the same results as a matter of course?
|
Data "exploration" vs data "snooping"/"torturing"?
|
This is really a cultural problem of unbalanced thinking, where publication bias leads to the favouring of positive results and our competitive nature requires editors and researchers to be seen to be
|
Data "exploration" vs data "snooping"/"torturing"?
This is really a cultural problem of unbalanced thinking, where publication bias leads to the favouring of positive results and our competitive nature requires editors and researchers to be seen to be producing results of interest that are novel or contentious, for example, in the sense of rebutting someone else's results. In medical research there is considerable progress being made to redress this problem by the compulsory registration of trials and publication of results with records of abandoned trials to also be made public. I understand that since publication in journals for unsuccessful research may not be practicable, there are plans to keep a publicly available database of them. Unusual results that can not be replicated are not necessarily a result of misdemeanour, as with perhaps 50,000 (a guess) researchers worldwide doing several experiments a year, some pretty unusual results are to be expected from time to time.
Using different methods is not necessarily a solution either. For example, what chemist would mix reagents in different ways in different conditions and expect the same results as a matter of course?
|
Data "exploration" vs data "snooping"/"torturing"?
This is really a cultural problem of unbalanced thinking, where publication bias leads to the favouring of positive results and our competitive nature requires editors and researchers to be seen to be
|
7,694
|
Can anyone explain conjugate priors in simplest possible terms?
|
A prior for a parameter will almost always have some specific functional form (written in terms of the density, generally). Let's say we restrict ourselves to one particular family of distributions, in which case choosing our prior reduces to choosing the parameters of that family.
For example, consider a normal model $Y_i \stackrel{_\text{iid}}{\sim} N(\mu,\sigma^2)$. For simplicity, let's also take $\sigma^2$ as known. This part of the model - the model for the data - determines the likelihood function.
To complete our Bayesian model, here we need a prior for $\mu$.
As mentioned above, commonly we might specify some distributional family for our prior for $\mu$ and then we only have to choose the parameters of that distribution (for example, often prior information may be fairly vague - like roughly where we want the probability to concentrate - rather than of very specific functional form, and we may have enough freedom to model what we want by choosing the parameters - say to match a prior mean and variance).
If it turns out that the posterior for $\mu$ is from the same family as the prior, then that prior is said to be "conjugate".
(What makes it turn out to be conjugate is the way it combines with the likelihood)
So in this case, let's take a Gaussian prior for $\mu$ (say $\mu\sim N(\theta,\tau^2)$). If we do that, we see that the posterior for $\mu$ is also Gaussian. Consequently, the Gaussian prior was a conjugate prior for our model above.
That's all there is to it really -- if the posterior is from the same family as the prior, it's a conjugate prior.
In simple cases you can identify a conjugate prior by inspection of the likelihood. For example, consider a binomial likelihood; dropping the constants, it looks like a beta density in $p$; and because of the way powers of $p$ and $(1-p)$ each combine, it will multiply by a beta prior to also give a product of powers of $p$ and $(1-p)$ ... so we can see immediately from the likelihood that the beta will be a conjugate prior for $p$ in the binomial likelihood.
In the Gaussian case it's easiest to see that it will happen by considering the log-densities and the log-likelihood; the log-likelihood will be quadratic in $\mu$ and the sum of two quadratics is quadratic, so a quadratic log-prior + quadratic log-likelihood gives a quadratic posterior (each of the coefficients of the highest order term will of course be negative).
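The binomial inspection argument above can be made explicit in two lines (a worked instance; the notation $\alpha,\beta,k,n$ is mine): with $k$ successes in $n$ trials and a $\operatorname{Beta}(\alpha,\beta)$ prior,

```latex
% Beta prior times binomial likelihood, constants dropped:
\pi(p) \propto p^{\alpha-1}(1-p)^{\beta-1},
\qquad
L(p) \propto p^{k}(1-p)^{n-k},
% so the powers of p and (1-p) simply add, and the posterior
% is again a beta density -- the prior is conjugate:
\pi(p \mid k) \propto p^{\alpha+k-1}(1-p)^{\beta+n-k-1}
\;\sim\; \operatorname{Beta}(\alpha+k,\; \beta+n-k).
```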
|
7,695
|
Can anyone explain conjugate priors in simplest possible terms?
|
If your model belongs to an exponential family, that is, if the density of the distribution is of the form
$$f(x|\theta)=h(x)\exp\{T(\theta)\cdot S(x)-\psi(\theta)\}\qquad
x\in\mathcal{X}\quad\theta\in\Theta$$
with respect to a given dominating measure (Lebesgue, counting, &tc.), where $t\cdot s$ denotes a scalar product over $\mathbb{R}^d$ and
$$T:\Theta\longrightarrow \mathbb{R}^d\qquad S:\mathcal{X}\longrightarrow \mathbb{R}^d$$
are measurable functions, the conjugate priors on $\theta$ are defined by densities of the form
$$\pi(\theta|\xi,\lambda)=C(\xi,\lambda)\exp\{T(\theta)\cdot \xi-\lambda\psi(\theta)\}$$
[with respect to an arbitrarily-chosen dominating measure $\text{d}\nu$ on $\Theta$] with
$$C(\xi,\lambda)^{-1}=\int_\Theta \exp\{T(\theta)\cdot \xi-\lambda\psi(\theta)\} \text{d}\nu<\infty$$
and $\lambda\in\Lambda\subset\mathbb{R}_+$, $\xi\in\Xi\subset \lambda S(\mathcal{X})$
The choice of the dominating measure is decisive for the family of priors. If for instance one faces a Normal mean likelihood on $\mu$ as in Glen_b's answer, choosing the Lebesgue measure $\text{d}\mu$ as the dominating measure leads to Normal priors being conjugate. If instead one chooses $(1+\mu^2)^{-2}\text{d}\mu$ as the dominating measure, the conjugate priors are within the family of distributions with densities
$$\exp\{-\alpha(\mu-\mu_0)^2\} \qquad\alpha>0,\ \ \mu_0\in\mathbb R$$
with respect to this dominating measure, and are thus no longer Normal priors. This difficulty is essentially the same as the one of choosing a particular parameterisation of the likelihood and opting for the Lebesgue measure for this parameterisation. When faced with a likelihood function, there is no inherent (or intrinsic or reference) dominating measure on the parameter space.
Outside this exponential family setting, there is no non-trivial family of distributions with a fixed support that allows for conjugate priors. This is a consequence of the Darmois-Pitman-Koopman lemma.
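To make the notation concrete (an example of mine, not part of the original answer): for Bernoulli observations, writing the density in the exponential-family form above identifies the components directly,

```latex
f(x\mid\theta)=\theta^{x}(1-\theta)^{1-x}
  =\exp\Bigl\{\underbrace{\log\tfrac{\theta}{1-\theta}}_{T(\theta)}\cdot
              \underbrace{x}_{S(x)}
              -\underbrace{\bigl(-\log(1-\theta)\bigr)}_{\psi(\theta)}\Bigr\},
\qquad h(x)=1,
% so the conjugate family is
\pi(\theta\mid\xi,\lambda)\propto
  \exp\Bigl\{\xi\log\tfrac{\theta}{1-\theta}+\lambda\log(1-\theta)\Bigr\}
  =\theta^{\xi}(1-\theta)^{\lambda-\xi},
% i.e. a Beta(\xi+1, \lambda-\xi+1) prior
% (proper when \xi > -1 and \lambda-\xi > -1).
```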
|
7,696
|
Can anyone explain conjugate priors in simplest possible terms?
|
I like using the notion of a "kernel" of a distribution. This is where you only leave in the parts that depend on the parameter. A few simple examples.
Normal kernel
$$p(\mu|a,b) = K^{-1} \times \exp(a\mu^2 +b\mu)$$
Where $K$ is the "normalising constant" (finite for $a<0$): $K=\int \exp(a\mu^2 +b\mu)d\mu=\sqrt{\frac{\pi}{-a}}\exp(-\frac{b^2}{4a})$
The connection with standard mean/variance parameters is $E(\mu|a,b)=-\frac{b}{2a}$ and $Var(\mu|a,b)=-\frac{1}{2a}$
Beta kernel
$$p(\theta|a,b)=K^{-1}\times \theta^a (1-\theta)^b$$
Where $K=\int_0^1 \theta^a (1-\theta)^b d\theta = B(a+1,b+1)$, the beta function
When we look at the likelihood function, we can do the same thing, and express it in "kernel form". For example with iid data
$$p(D|\mu)=\prod_{i=1}^n p(x_i|\mu)=Q\times f(\mu)$$
For some constant $Q$ and some function $f(\mu)$. If we can recognise this function as a kernel, then we can create a conjugate prior for that likelihood.
If we take the normal likelihood with unit variance, the above looks like
$$\begin{aligned}
p(D|\mu)
&=\prod_{i=1}^n p(x_i|\mu)
=\prod_{i=1}^n \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{(x_i-\mu)^2}{2}\right)\\
&=(2\pi)^{-\frac{n}{2}}\times\exp\left(-\sum_{i=1}^n\frac{(x_i-\mu)^2}{2}\right)
=(2\pi)^{-\frac{n}{2}}\times\exp\left(-\sum_{i=1}^n\frac{x_i^2-2x_i\mu+\mu^2}{2}\right)\\
&=(2\pi)^{-\frac{n}{2}}\times\exp\left(-\sum_{i=1}^n\frac{x_i^2}{2}\right)\times\exp\left(\mu\sum_{i=1}^n x_i-\mu^2\frac{n}{2}\right)\\
&=Q\times \exp(a\mu^2 +b\mu)
\end{aligned}$$
where $a=-\frac{n}{2}$ and $b=\sum_{i=1}^n x_i$ and $Q=(2\pi)^{-\frac{n}{2}}\times\exp(-\sum_{i=1}^n\frac{x_i^2}{2})$
This likelihood function has the same kernel as the normal distribution for $\mu$, so a conjugate prior for this likelihood is also the normal distribution.
$$p(\mu|a_0,b_0)=K_0^{-1}\exp(a_0\mu^2 +b_0\mu)$$
The posterior is then
$$p(\mu|D,a_0,b_0)\propto K_0^{-1}\exp(a_0\mu^2 +b_0\mu)\times Q\times \exp(a\mu^2 +b\mu)=K_0^{-1}\times Q\times \exp([a+a_0]\mu^2 +[b+b_0]\mu)\propto\exp([a+a_0]\mu^2 +[b+b_0]\mu)$$
Showing that the posterior is also a normal distribution, with updated parameters from the prior using the information in the data.
In some sense a conjugate prior acts similarly to adding "pseudo data" to the data observed, and then estimating the parameters.
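Reading the posterior back through the kernel-to-moment relations given for the normal kernel makes the "pseudo data" remark concrete (a quick check in the same notation):

```latex
% posterior kernel parameters are a + a_0 and b + b_0,
% with a = -n/2 and b = \sum_i x_i from the likelihood:
E(\mu\mid D) = -\frac{b+b_0}{2(a+a_0)} = \frac{b_0+\sum_{i=1}^n x_i}{n-2a_0},
\qquad
Var(\mu\mid D) = -\frac{1}{2(a+a_0)} = \frac{1}{n-2a_0},
% i.e. the prior behaves like -2a_0 pseudo-observations with sum b_0.
```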
|
7,697
|
Can anyone explain conjugate priors in simplest possible terms?
|
For a given distribution family $D_{lik}$ of the likelihood (e.g. Bernoulli),
if the prior is of the same distribution family $D_{pri}$ as the posterior (e.g. Beta),
then $D_{pri}$ and $D_{lik}$ are conjugate distribution families and the prior is called a conjugate prior for the likelihood function.
Note: $\underbrace{p(\theta|x)}_{\text{posterior}}
\propto
\underbrace{p(x|\theta)}_{\text{likelihood}}
\cdot
\underbrace{p(\theta)}_{\text{prior}}$
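For the Bernoulli/Beta pair cited above, the conjugate update reduces to simple parameter arithmetic (a minimal Python sketch; the function name is mine):

```python
def beta_binomial_update(a, b, successes, trials):
    """Conjugate update: a Beta(a, b) prior combined with a binomial
    likelihood (k successes in n trials) yields a Beta(a + k, b + n - k)
    posterior -- the same family as the prior, hence "conjugate"."""
    return a + successes, b + (trials - successes)

# Uniform Beta(1, 1) prior, then 7 successes in 10 trials:
print(beta_binomial_update(1, 1, 7, 10))  # -> (8, 4)
```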
|
7,698
|
How to use ordinal logistic regression with random effects?
|
In principle you can make the machinery of any logistic mixed model software perform ordinal logistic regression by expanding the ordinal response variable into a series of binary contrasts between successive levels (e.g. see Dobson and Barnett, An Introduction to Generalized Linear Models, section 8.4.6). However, this is a pain, and luckily there are a few options in R:
the ordinal package, via the clmm and clmm2 functions (clmm = Cumulative Link Mixed Model)
the mixor package, via the mixor function
the MCMCglmm package, via family="ordinal" (see ?MCMCglmm)
the brms package, e.g. via family="cumulative" (see ?brmsfamily)
The latter two options are implemented within Bayesian MCMC frameworks. As far as I know, all of the functions quoted (with the exception of ordinal::clmm2) can handle multiple random effects (intercepts, slopes, etc.); most of them (maybe not MCMCglmm?) can handle choices of link function (logit, probit, etc.).
(If I have time I will come back and revise this answer with a worked example of setting up ordinal models from scratch using lme4)
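The expansion into successive binary contrasts can be illustrated with a small sketch (pure Python, names my own). This follows the continuation-ratio idea of modelling "stopped at level j vs. went higher" among those who reached level j, which is one common way of carrying out the expansion, not the only one:

```python
# Expand an ordinal response y in {1, ..., K} into binary records:
# for each successive level j (up to K-1), record whether the subject
# "stopped" at level j, conditional on level j having been reached.
def expand_ordinal(y, K):
    """Return (level, binary outcome) pairs for one ordinal response y."""
    records = []
    for j in range(1, K):                 # contrasts between successive levels
        if y < j:
            break                         # level j was never reached
        records.append((j, int(y == j)))  # 1 = stopped at level j
    return records

# A 4-level response of 3 contributes three binary observations:
# not stopping at levels 1 and 2, then stopping at level 3.
print(expand_ordinal(3, 4))  # [(1, 0), (2, 0), (3, 1)]
```

Each expanded record can then be fed to ordinary binary logistic (mixed) model software, with the level index entering as a covariate or stratum, which is the "pain" the packages above spare you.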
|
7,699
|
How to use ordinal logistic regression with random effects?
|
Yes, it is possible to include random effects in an ordinal regression model. Conceptually, this is the same as including random effects in a linear mixed model. Although the UCLA site only demonstrates the polr() function in the MASS package, there are a number of facilities for fitting ordinal models in R. There is a broader (but less detailed) overview here. The only way I know of to include random effects in R uses the ordinal package, though. I work through an example here: Is there a two-way Friedman's test?
|
7,700
|
The relationship between the gamma distribution and the normal distribution
|
As Prof. Sarwate's comment noted, the relations between squared normal and chi-square are a very widely disseminated fact - as it should be also the fact that a chi-square is just a special case of the Gamma distribution:
$$X \sim N(0,\sigma^2) \Rightarrow X^2/\sigma^2 \sim \mathcal \chi^2_1 \Rightarrow X^2 \sim \sigma^2\mathcal \chi^2_1= \text{Gamma}\left(\frac 12, 2\sigma^2\right)$$
the last equality following from the scaling property of the Gamma.
As regards the relation with the exponential, to be accurate, it is the sum of two squared zero-mean normals, each scaled by the variance of the other, that leads to the Exponential distribution:
$$X_1 \sim N(0,\sigma^2_1),\;\; X_2 \sim N(0,\sigma^2_2) \Rightarrow \frac{X_1^2}{\sigma^2_1}+\frac{X_2^2}{\sigma^2_2} \sim \mathcal \chi^2_2 \Rightarrow \frac{\sigma^2_2X_1^2+ \sigma^2_1X_2^2}{\sigma^2_1\sigma^2_2} \sim \mathcal \chi^2_2$$
$$ \Rightarrow \sigma^2_2X_1^2+ \sigma^2_1X_2^2 \sim \sigma^2_1\sigma^2_2\mathcal \chi^2_2 = \text{Gamma}\left(1, 2\sigma^2_1\sigma^2_2\right) = \text{Exp}( {1\over {2\sigma^2_1\sigma^2_2}})$$
But the suspicion that there is "something special" or "deeper" in the sum of two squared zero mean normals that "makes them a good model for waiting time" is unfounded:
First of all, what is special about the Exponential distribution that makes it a good model for "waiting time"? Memorylessness of course, but is there something "deeper" here, or just the simple functional form of the Exponential distribution function, and the properties of $e$? Unique properties are scattered all over Mathematics, and most of the time they don't reflect some "deeper intuition" or "structure" - they just exist (thankfully).
Second, the square of a variable has very little relation with its level. Just consider $f(x) = x$ in, say, $[-2,\,2]$:
...or graph the standard normal density against the chi-square density: they reflect and represent totally different stochastic behaviors, even though they are so intimately related, since the second is the density of a variable that is the square of the first. The normal may be a very important pillar of the mathematical system we have developed to model stochastic behavior - but once you square it, it becomes something totally else.
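The first identity above can also be checked numerically: in the shape–scale parameterization, the density of $\text{Gamma}\left(\frac 12, 2\sigma^2\right)$ coincides with the density of $\sigma^2\chi^2_1$, i.e. $f(x) = e^{-x/(2\sigma^2)}/\sqrt{2\pi\sigma^2 x}$. A small sketch in pure Python (function names my own):

```python
import math

def gamma_pdf(x, shape, scale):
    """Density of Gamma(shape, scale) in the shape-scale parameterization."""
    return x ** (shape - 1) * math.exp(-x / scale) / (math.gamma(shape) * scale ** shape)

def scaled_chi2_1_pdf(x, sigma2):
    """Density of sigma^2 * chi-square(1 d.f.), i.e. of X^2 for X ~ N(0, sigma2)."""
    return math.exp(-x / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2 * x)

# The two densities agree at every x > 0, for any variance sigma^2:
sigma2 = 2.5
for x in (0.1, 1.0, 3.7):
    assert abs(gamma_pdf(x, 0.5, 2 * sigma2) - scaled_chi2_1_pdf(x, sigma2)) < 1e-12
print("Gamma(1/2, 2*sigma^2) matches the density of sigma^2 * chi^2_1")
```

The agreement is exact because $\Gamma(1/2) = \sqrt{\pi}$, which is where the $\sqrt{2\pi\sigma^2 x}$ in the scaled chi-square density comes from.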
|