7,601 | How is Poisson distribution different to normal distribution?

A Poisson distribution is discrete while a normal distribution is continuous, and a Poisson random variable is always >= 0. Thus, a Kolmogorov-Smirnov test will often be able to tell the difference.
When the mean of a Poisson distribution is large, it becomes similar to a normal distribution. However, rpois(1000, 10) d...
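The Kolmogorov-Smirnov remark can be checked numerically. A minimal sketch in Python (NumPy/SciPy stand-ins for the R rpois call quoted above; the sample size, rate, and seed are illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Analogue of rpois(1000, 10): 1000 draws from Poisson(10).
sample = rng.poisson(lam=10, size=1000)

# Test against the moment-matched normal, mean 10 and sd sqrt(10).
stat, p = stats.kstest(sample, "norm", args=(10, np.sqrt(10)))

# The Poisson sample's discreteness inflates the KS statistic, so the
# test typically rejects normality even though the shapes look alike.
print(f"KS statistic = {stat:.3f}, p-value = {p:.2g}")
```

Even with a large mean, where the two densities are visually close, the test still picks up the integer-valued jumps in the empirical CDF.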
7,602 | How is Poisson distribution different to normal distribution?

Here's a much easier way to understand it:
You can look at the Binomial distribution as the "mother" of most distributions. The normal distribution is just an approximation of the Binomial distribution when n becomes large enough. In fact, Abraham de Moivre essentially discovered the normal distribution while trying to approximate B...
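De Moivre's approximation is easy to see numerically. A small sketch (n = 100, p = 0.5 and the evaluation point are arbitrary illustrative choices):

```python
from math import comb, exp, pi, sqrt

n, p = 100, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))  # mean 50, sd 5

# Exact Binomial(100, 0.5) probability at k = 50.
k = 50
exact = comb(n, k) * p**k * (1 - p) ** (n - k)

# Normal density at the same point, per de Moivre's approximation.
approx = exp(-((k - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

print(exact, approx)  # both are roughly 0.08
```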
7,603 | How is Poisson distribution different to normal distribution?

I think it is worth mentioning that a Poisson($\lambda$) pmf is the limiting pmf of a Binomial($n$,$p_n$) with $p_n = \lambda / n$.
One rather lengthy development can be found on this blog.
But, we can prove this economically here as well. If $X_n \sim \mathrm{Binomial}(n,\lambda/n)$ then for fixed $k$
$$
\begin{align}...
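The limit itself is easy to confirm numerically; a sketch with SciPy (the values λ = 5 and k = 3 are arbitrary):

```python
from scipy import stats

lam, k = 5, 3
target = stats.poisson.pmf(k, lam)  # Poisson(5) pmf at k = 3

# Binomial(n, lam/n) pmf at k approaches the Poisson pmf as n grows.
for n in (10, 100, 10_000):
    print(n, stats.binom.pmf(k, n, lam / n))

gap = abs(stats.binom.pmf(k, 10_000, lam / 10_000) - target)
```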
7,604 | How is Poisson distribution different to normal distribution?

It's a great question because Poisson distribution is not only different, but it is also so similar to Normal distribution. Here's how it is similar:
the sum of two normals is normal, so is the sum of two Poissons
Brownian motion (Gaussian) and Poisson process are both Levy processes
Both Poisson and Gaussian distribu...
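The closure-under-addition point can be verified exactly by convolving pmfs. A sketch (the rates 2 and 3 are arbitrary):

```python
from scipy import stats

# If X ~ Poisson(2) and Y ~ Poisson(3) independently, then X + Y ~ Poisson(5).
# Check one point of the distribution by discrete convolution of the pmfs.
k = 4
conv = sum(stats.poisson.pmf(j, 2) * stats.poisson.pmf(k - j, 3)
           for j in range(k + 1))
direct = stats.poisson.pmf(k, 5)
print(conv, direct)  # the two numbers agree
```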
7,605 | How do I calculate a weighted standard deviation? In Excel?

The formula for weighted standard deviation is:
$$ \sqrt{ \frac{ \sum_{i=1}^N w_i (x_i - \bar{x}^*)^2 }{ \frac{(M-1)}{M} \sum_{i=1}^N w_i } },$$
where
$N$ is the number of observations.
$M$ is the number of nonzero weights.
$w_i$ are the weights
$x_i$ are the observations.
$\bar{x}^*$ is the weighted mean.
Remember tha...
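The formula translates directly into code. A sketch in Python/NumPy rather than Excel (the sample data are made up); with all weights equal it reduces to the ordinary sample standard deviation:

```python
import numpy as np

def weighted_sd(x, w):
    """Weighted standard deviation with the (M-1)/M correction."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    M = np.count_nonzero(w)           # number of nonzero weights
    xbar = np.sum(w * x) / np.sum(w)  # weighted mean
    return np.sqrt(np.sum(w * (x - xbar) ** 2)
                   / ((M - 1) / M * np.sum(w)))

x = [2.0, 3.0, 5.0, 7.0]
print(weighted_sd(x, [1, 1, 1, 1]))  # matches np.std(x, ddof=1)
print(weighted_sd(x, [1, 2, 3, 4]))
```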
7,606 | How do I calculate a weighted standard deviation? In Excel?

The formulae are available in various places, including Wikipedia.
The key is to notice that it depends on what the weights mean. In particular, you will get different answers if the weights are frequencies (i.e. you are just trying to avoid adding up your whole sum), if the weights are in fact the variance of each measu...
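That dependence on the meaning of the weights is easy to demonstrate. A sketch assuming, for illustration, the same weights read first as repeat counts (frequency weights) and then as reliability weights with an (M-1)/M correction:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
w = np.array([2.0, 3.0, 1.0])

mu = np.sum(w * x) / np.sum(w)  # weighted mean
ss = np.sum(w * (x - mu) ** 2)  # weighted sum of squared deviations

# Frequency weights: as if the sample were [1, 1, 2, 2, 2, 3].
sd_freq = np.sqrt(ss / (np.sum(w) - 1))

# Reliability weights with the (M-1)/M correction, M = nonzero weights.
M = np.count_nonzero(w)
sd_rel = np.sqrt(ss / ((M - 1) / M * np.sum(w)))

expanded = np.repeat(x, w.astype(int))
print(sd_freq, np.std(expanded, ddof=1), sd_rel)  # first two agree; third differs
```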
7,607 | How do I calculate a weighted standard deviation? In Excel?

=SQRT(SUM(G7:G16*(H7:H16-(SUMPRODUCT(G7:G16,H7:H16)/SUM(G7:G16)))^2)/
((COUNTIFS(G7:G16,"<>0")-1)/COUNTIFS(G7:G16,"<>0")*SUM(G7:G16)))
Column G are weights, Column H are values
7,608 | How do I calculate a weighted standard deviation? In Excel?

If we treat weights like probabilities, then we build them as follows:
$$p_i=\frac{v_i}{\sum_iv_i},$$
where $v_i$ is the data volume.
Next, obviously the weighted mean is $$\hat\mu=\sum_ip_ix_i,$$
and the variance: $$\hat\sigma^2=\sum_ip_i(x_i-\hat\mu)^2$$
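These three formulas fit in a few lines. A sketch with made-up volumes $v_i$:

```python
import numpy as np

v = np.array([2.0, 3.0, 1.0])    # data volumes (illustrative)
x = np.array([1.0, 2.0, 3.0])    # observations

p = v / v.sum()                  # weights normalized to probabilities
mu = np.sum(p * x)               # weighted mean
var = np.sum(p * (x - mu) ** 2)  # weighted (population-style) variance
print(mu, var, np.sqrt(var))
```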
7,609 | How do I calculate a weighted standard deviation? In Excel?

Late in the day I know, but in reference to Whuber's insistence on an authoritative justification for the (M-1)/M term for an unbiased estimate, perhaps Prof. James Kirchner's justification, download currently available at http://seismo.berkeley.edu/~kirchner/Toolkits/Toolkit_12.pdf, which references
Bevington, P. R.,...
7,610 | How do I calculate a weighted standard deviation? In Excel?

Option Explicit
Function wsdv(vals As Range, wates As Range)
Dim i, xV, xW, y As Integer
Dim wi, xi, WgtAvg, N
Dim sumProd, SUMwi
sumProd = 0
SUMwi = 0
N = vals.Count ' number of values to determine W Standard Deviation
xV = vals.Column ' Column number of first value element
xW = wates.Column '...
7,611 | Why is it bad to teach students that p-values are the probability that findings are due to chance?

I have a different interpretation of the meaning of the wrong statement than @Karl does. I think that it is a statement about the data, rather than about the null. I understand it as asking for the probability of getting your estimate due to chance. I don't know what that means---it's not a well-specified claim.
But I...
7,612 | Why is it bad to teach students that p-values are the probability that findings are due to chance?

I've seen this interpretation a lot (perhaps more often than the correct one). I interpret "their findings are due to [random] chance" as "$\text{H}_0$ is true", and so really what they are saying is $\Pr(\text{H}_0)$ [which actually should be $\Pr(\text{H}_0 | \text{data})$; say, "given what we have seen (the data), ...
7,613 | Why is it bad to teach students that p-values are the probability that findings are due to chance?

I'll add a late answer from the (ex) student perspective: IMHO the harm cannot be separated from its being wrong.
This type of wrong "didactic approximations/shortcut" can create a lot of confusion for students who realize that they cannot logically understand the statement, but assuming that what is taught to them is...
7,614 | Why is it bad to teach students that p-values are the probability that findings are due to chance?

Referring directly to the question: Where is the harm?
In my opinion, the answer to this question lies in the converse of the statement, "A p-value is the probability that the findings are due to random chance." If one believes this, then one also probably believes the following: "[1-(p-value)] is the probability that ...
7,615 | Why is it bad to teach students that p-values are the probability that findings are due to chance?

Here is a simple example that I use:
Suppose our null hypothesis is that we are flipping a 2-headed coin (so prob(heads) = 1). Now we flip the coin one time and get heads; the p-value for this is 1. So does that mean that we have a 100% chance of having a 2-headed coin?
The tricky thing is that if we had flipped a ...
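The example can be pushed one step further to show that the p-value of 1 is not the probability that the null is true. A sketch assuming, purely for illustration, a 50/50 prior between the two-headed coin and a fair coin:

```python
# H0: two-headed coin, so P(heads | H0) = 1. We flip once and see heads.
p_value = 1.0  # P(data at least this extreme | H0)

# With a hypothetical 50/50 prior over {two-headed, fair}, Bayes' rule
# gives a quite different number for P(H0 | data):
prior_h0 = 0.5
like_h0, like_fair = 1.0, 0.5  # P(heads) under each coin
posterior_h0 = (prior_h0 * like_h0
                / (prior_h0 * like_h0 + (1 - prior_h0) * like_fair))
print(p_value, posterior_h0)  # 1.0 versus about 0.667
```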
7,616 | Why is it bad to teach students that p-values are the probability that findings are due to chance?

OK another, slightly different take on this:
A first basic problem is the phrase "due to [random] chance". The idea of unspecified 'chance' comes naturally to students but it is hazardous for thinking clearly about uncertainty and catastrophic for doing sensible statistics. With something like a sequence of coin flip...
7,617 | Why is it bad to teach students that p-values are the probability that findings are due to chance?

If I take apart, "p-value is the probability that an effect is due to chance," it seems to be implying that the effect is caused by chance. But every effect is partially caused by chance. In a statistics lesson where one is explaining the need to try to see through random variability this is a pretty magical and overre...
7,618 | A measure of "variance" from the covariance matrix?

(The answer below merely introduces and states the theorem proven in Eq. (0). The beauty in that paper is that most of the arguments are made in terms of basic linear algebra. To answer this question it will be enough to state the main results, but by all means, go check the original source).
In any situation where the...
7,619 | A measure of "variance" from the covariance matrix?

The variance of a scalar variable is defined as the squared deviation of the variable from its mean:
$$\operatorname{Var}(X) = \operatorname E\left[\left(X - \operatorname E\left[X\right]\right)^2\right]$$
One generalization to a scalar-valued variance for vector-valued random variables can be obtained by interpreting ...
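One natural vector-valued generalization, the expected squared Euclidean deviation, equals the trace of the covariance matrix, since $\operatorname E\left[\lVert X-\operatorname E[X]\rVert^2\right]=\sum_i \operatorname{Var}(X_i)$. A quick numerical sketch (the mixing matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 3.0]])
X = rng.normal(size=(500, 3)) @ A  # correlated 3-D sample

C = np.cov(X, rowvar=False)

# Trace of the covariance matrix = sum of per-coordinate variances.
print(np.trace(C), np.var(X, axis=0, ddof=1).sum())
```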
7,620 | A measure of "variance" from the covariance matrix?

Although the trace of the covariance matrix, tr(C), gives you a measure of the total variance, it does not take into account the correlation between variables.
If you need a measure of overall variance which is large when your variables are independent from each other and is very small when the variables are highly co...
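One standard correlation-aware choice is the generalized variance, det(C). A sketch contrasting it with the trace on two matrices with identical per-variable variances:

```python
import numpy as np

independent = np.array([[1.0, 0.0], [0.0, 1.0]])
correlated = np.array([[1.0, 0.9], [0.9, 1.0]])

# Same total variance either way...
print(np.trace(independent), np.trace(correlated))
# ...but the determinant (generalized variance) collapses
# as the variables become redundant: 1.0 versus about 0.19.
print(np.linalg.det(independent), np.linalg.det(correlated))
```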
7,621 | A measure of "variance" from the covariance matrix?

If you need just one number, then I suggest taking the largest eigenvalue of the covariance matrix. This is also an explained variance of the first principal component in PCA. It tells you how much of the total variance can be explained if you reduce the dimensionality of your vector to one. See this answer on math SE....
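A minimal sketch of that largest-eigenvalue summary (the covariance matrix is made up):

```python
import numpy as np

C = np.array([[1.0, 0.9], [0.9, 1.0]])
eigvals = np.linalg.eigvalsh(C)  # ascending order for symmetric matrices

# Largest eigenvalue = variance captured by the first principal
# component: about 1.9 here, i.e. 95% of the total variance.
top = eigvals[-1]
print(top, top / eigvals.sum())
```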
7,622 | A measure of "variance" from the covariance matrix?

The entropy concept from information theory seems to suit the purpose, as a measure of unpredictability of information content, which is given by
$$H(X)=-\int p(x)\log p(x) dx.$$
If we assume a multivariate Gaussian distribution for $p(x)$ with mean $\mu$ and covariance $\Sigma$ derived from the data, according to wiki...
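For the Gaussian case the integral has the closed form $H=\frac12\log\!\left((2\pi e)^k \det\Sigma\right)$, so it is a monotone function of the generalized variance $\det\Sigma$. A sketch checking the one-dimensional case against SciPy (the variance value is arbitrary):

```python
import numpy as np
from scipy import stats

k = 1          # dimension
sigma2 = 4.0   # variance, chosen for illustration

# Closed-form Gaussian entropy: 0.5 * log((2*pi*e)^k * det(Sigma)).
H = 0.5 * np.log((2 * np.pi * np.e) ** k * sigma2)

print(H, stats.norm(loc=0, scale=np.sqrt(sigma2)).entropy())
```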
7,623 | What is theta in a negative binomial regression fitted with R?

Yes, theta is the shape parameter of the negative binomial distribution, and no, you cannot really interpret it as a measure of skewness. More precisely:
skewness will depend on the value of theta, but also on the mean
there is no value of theta that will guarantee you lack of skew
If I did not mess it up, in the mu/...
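Both bullet points can be checked in the usual mean/shape parameterization, where Var = mu + mu^2/theta and SciPy's nbinom takes n = theta, p = theta/(theta + mu). A sketch holding theta fixed while the mean varies:

```python
from scipy import stats

theta = 2.0
for mu in (1.0, 5.0, 20.0):
    p = theta / (theta + mu)
    mean, var, skew = stats.nbinom.stats(theta, p, moments="mvs")
    # Variance follows mu + mu^2 / theta, and the skewness keeps
    # changing with mu even though theta is held fixed.
    print(mu, float(mean), float(var), float(skew))
```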
7,624 | What is theta in a negative binomial regression fitted with R?

I was referred to this site by one of my students in my Modeling Count Data course. There seems to be a lot of misinformation about the negative binomial model, and especially with respect to the dispersion statistic and dispersion parameter.
The dispersion statistic, which gives an indication of count model extra-dis...
7,625 | What is theta in a negative binomial regression fitted with R?

glm reference for the negative binomial:
Wikipedia's negative binomial 'r' is glm's 'theta', which implies glm's 'theta' is the shape parameter. In simple terms, glm's 'theta' is the number of failures.
7,626 | Negative binomial distribution vs binomial distribution | The difference is what we are interested in. Both distributions are built from independent Bernoulli trials with fixed probability of success, p.
With the Binomial distribution, the random variable X is the number of successes observed in n trials. Because there are a fixed number of trials, the possible values of X ar... | Negative binomial distribution vs binomial distribution | The difference is what we are interested in. Both distributions are built from independent Bernoulli trials with fixed probability of success, p.
With the Binomial distribution, the random variable X | Negative binomial distribution vs binomial distribution
The difference is what we are interested in. Both distributions are built from independent Bernoulli trials with fixed probability of success, p.
With the Binomial distribution, the random variable X is the number of successes observed in n trials. Because there a... | Negative binomial distribution vs binomial distribution
The difference is what we are interested in. Both distributions are built from independent Bernoulli trials with fixed probability of success, p.
With the Binomial distribution, the random variable X |
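The fixed-n versus random-n contrast can be simulated directly; here is a sketch in Python/NumPy (the numbers n = 20, r = 5, p = 0.3 are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3

# Binomial: successes in a FIXED number of trials (n = 20); support is 0..20.
binom_draws = rng.binomial(n=20, p=p, size=100_000)

# Negative binomial: FAILURES before the 5th success; the number of trials
# is random and the support is unbounded.
nb_draws = rng.negative_binomial(n=5, p=p, size=100_000)

binom_mean = binom_draws.mean()   # ~ n*p = 6
nb_mean = nb_draws.mean()         # ~ r*(1-p)/p, about 11.7
```

The binomial draws can never exceed 20, while the negative binomial draws have no upper bound, which is exactly the fixed-trials versus fixed-successes distinction.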
7,627 | Negative binomial distribution vs binomial distribution | Negative binomial distribution, despite seemingly obvious relation to binomial, is actually better compared against the Poisson distribution. All three are discrete, btw.
In practical applications, NB is an alternative to Poisson when you observe the dispersion (variance) higher than expected by Poisson. Poisson is a t... | Negative binomial distribution vs binomial distribution | Negative binomial distribution, despite seemingly obvious relation to binomial, is actually better compared against the Poisson distribution. All three are discrete, btw.
In practical applications, NB | Negative binomial distribution vs binomial distribution
Negative binomial distribution, despite seemingly obvious relation to binomial, is actually better compared against the Poisson distribution. All three are discrete, btw.
In practical applications, NB is an alternative to Poisson when you observe the dispersion (v... | Negative binomial distribution vs binomial distribution
Negative binomial distribution, despite seemingly obvious relation to binomial, is actually better compared against the Poisson distribution. All three are discrete, btw.
In practical applications, NB |
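That overdispersion point is easy to verify by simulation; a sketch (with my own toy values mu = 5, theta = 2) comparing the variance-to-mean ratio of Poisson and negative binomial draws:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, theta = 5.0, 2.0

pois = rng.poisson(mu, size=200_000)

# Negative binomial with mean mu and shape theta has variance mu + mu**2/theta.
nb = rng.negative_binomial(theta, theta / (theta + mu), size=200_000)

pois_ratio = pois.var() / pois.mean()   # ~ 1: equidispersion
nb_ratio = nb.var() / nb.mean()         # ~ 1 + mu/theta = 3.5: overdispersion
```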
7,628 | Negative binomial distribution vs binomial distribution | They are both discrete and represent counts when you are sampling.
The binomial distribution represents the number of successes in an experiment whose number of draws is fixed in advance. For example, suppose that three items are selected at random from a manufacturing process and each item is inspected and classified d... | Negative binomial distribution vs binomial distribution | They are both discrete and represent counts when you are sampling.
The binomial distribution represents the number of successes in an experiment whose number of draws is fixed in advance. For example, | Negative binomial distribution vs binomial distribution
They are both discrete and represent counts when you are sampling.
The binomial distribution represents the number of successes in an experiment whose number of draws is fixed in advance. For example, suppose that three items are selected at random from a manufactu... | Negative binomial distribution vs binomial distribution
They are both discrete and represent counts when you are sampling.
The binomial distribution represents the number of successes in an experiment whose number of draws is fixed in advance. For example,
7,629 | Why is Lasso penalty equivalent to the double exponential (Laplace) prior? | For simplicity let's just consider a single observation of a variable $Y$ such that
$$Y|\mu, \sigma^2 \sim N(\mu, \sigma^2),$$
$\mu \sim \mbox{Laplace}(\lambda)$
and the improper prior
$f(\sigma) \propto \mathbb{1}_{\sigma>0}$.
Then the joint density of $Y, \mu, \sigma^2$ is proportional to
$$
f(Y, \mu, \sigma^2 | \... | Why is Lasso penalty equivalent to the double exponential (Laplace) prior? | For simplicity let's just consider a single observation of a variable $Y$ such that
$$Y|\mu, \sigma^2 \sim N(\mu, \sigma^2),$$
$\mu \sim \mbox{Laplace}(\lambda)$
and the improper prior
$f(\sigma) \p | Why is Lasso penalty equivalent to the double exponential (Laplace) prior?
For simplicity let's just consider a single observation of a variable $Y$ such that
$$Y|\mu, \sigma^2 \sim N(\mu, \sigma^2),$$
$\mu \sim \mbox{Laplace}(\lambda)$
and the improper prior
$f(\sigma) \propto \mathbb{1}_{\sigma>0}$.
Then the joint... | Why is Lasso penalty equivalent to the double exponential (Laplace) prior?
For simplicity let's just consider a single observation of a variable $Y$ such that
$$Y|\mu, \sigma^2 \sim N(\mu, \sigma^2),$$
$\mu \sim \mbox{Laplace}(\lambda)$
and the improper prior
$f(\sigma) \p |
7,630 | Why is Lasso penalty equivalent to the double exponential (Laplace) prior? | This is obvious by inspection of the quantity the LASSO is optimizing.
Take the prior for $\beta_i$ to be independent Laplace with mean zero and some scale $\tau$.
So $p(\beta|\tau) \propto e^{-\frac{1}{2\tau} \sum_i|\beta_i|}$.
The model for the data is the usual regression assumption $y \stackrel{\text{iid}}{\sim}N(X... | Why is Lasso penalty equivalent to the double exponential (Laplace) prior? | This is obvious by inspection of the quantity the LASSO is optimizing.
Take the prior for $\beta_i$ to be independent Laplace with mean zero and some scale $\tau$.
So $p(\beta|\tau) \propto e^{-\frac{ | Why is Lasso penalty equivalent to the double exponential (Laplace) prior?
This is obvious by inspection of the quantity the LASSO is optimizing.
Take the prior for $\beta_i$ to be independent Laplace with mean zero and some scale $\tau$.
So $p(\beta|\tau) \propto e^{-\frac{1}{2\tau} \sum_i|\beta_i|}$.
The model for th... | Why is Lasso penalty equivalent to the double exponential (Laplace) prior?
This is obvious by inspection of the quantity the LASSO is optimizing.
Take the prior for $\beta_i$ to be independent Laplace with mean zero and some scale $\tau$.
So $p(\beta|\tau) \propto e^{-\frac{ |
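A numerical sanity check of the equivalence (a sketch only; the design, alpha = 0.1, and the toy coefficients below are my own choices, not from the answer): with the columns scaled so that $X^TX = nI$, the penalized objective — which is, up to constants, the negative log posterior under a Laplace prior — is minimized in closed form by soft-thresholding the OLS estimate, and sklearn's `Lasso` lands on the same point.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, d = 200, 5

# Design scaled so X.T @ X = n * I: the lasso / Laplace-MAP estimate then has
# the closed form sign(b_ols) * max(|b_ols| - alpha, 0).
Q, _ = np.linalg.qr(rng.normal(size=(n, d)))
X = Q * np.sqrt(n)
beta_true = np.array([2.0, 0.05, -1.5, -0.05, 0.5])  # toy coefficients
y = X @ beta_true + rng.normal(scale=0.5, size=n)

alpha = 0.1
b_ols = X.T @ y / n
map_closed_form = np.sign(b_ols) * np.maximum(np.abs(b_ols) - alpha, 0.0)

# sklearn minimizes (1/(2n))||y - Xb||^2 + alpha*||b||_1 -- the same objective.
lasso_est = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
```

Every coefficient is shrunk toward zero, which is exactly the behaviour the Laplace prior's sharp peak at zero produces in the MAP estimate.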
7,631 | Understanding bias-variance tradeoff derivation | You are not wrong, but you made an error in one step since $E[(f(x)-f_k(x))^2] \ne Var(f_k(x))$. $E[(f(x)-f_k(x))^2]$ is $\text{MSE}(f_k(x)) = Var(f_k(x)) + \text{Bias}^2(f_k(x))$.
\begin{align*}
E[(Y-f_k(x))^2]& = E[(f(x)+\epsilon-f_k(x))^2] \\
&= E[(f(x)-f_k(x))^2]+2E[(f(x)-f_k(x))\epsilon]+E[\epsilon^2]\\
&= E\left[... | Understanding bias-variance tradeoff derivation | You are not wrong, but you made an error in one step since $E[(f(x)-f_k(x))^2] \ne Var(f_k(x))$. $E[(f(x)-f_k(x))^2]$ is $\text{MSE}(f_k(x)) = Var(f_k(x)) + \text{Bias}^2(f_k(x))$.
\begin{align*}
E[(Y | Understanding bias-variance tradeoff derivation
You are not wrong, but you made an error in one step since $E[(f(x)-f_k(x))^2] \ne Var(f_k(x))$. $E[(f(x)-f_k(x))^2]$ is $\text{MSE}(f_k(x)) = Var(f_k(x)) + \text{Bias}^2(f_k(x))$.
\begin{align*}
E[(Y-f_k(x))^2]& = E[(f(x)+\epsilon-f_k(x))^2] \\
&= E[(f(x)-f_k(x))^2]+2E[(... | Understanding bias-variance tradeoff derivation
You are not wrong, but you made an error in one step since $E[(f(x)-f_k(x))^2] \ne Var(f_k(x))$. $E[(f(x)-f_k(x))^2]$ is $\text{MSE}(f_k(x)) = Var(f_k(x)) + \text{Bias}^2(f_k(x))$.
\begin{align*}
E[(Y |
7,632 | Understanding bias-variance tradeoff derivation | A few more steps of the Bias - Variance decomposition
Indeed, the full derivation is rarely given in textbooks as it involves a lot of uninspiring algebra. Here is a more complete derivation using notation from the book "Elements of Statistical Learning" on page 223
If we assume that $Y = f(X) + \epsilon$ and $E[\epsi... | Understanding bias-variance tradeoff derivation | A few more steps of the Bias - Variance decomposition
Indeed, the full derivation is rarely given in textbooks as it involves a lot of uninspiring algebra. Here is a more complete derivation using not | Understanding bias-variance tradeoff derivation
A few more steps of the Bias - Variance decomposition
Indeed, the full derivation is rarely given in textbooks as it involves a lot of uninspiring algebra. Here is a more complete derivation using notation from the book "Elements of Statistical Learning" on page 223
If w... | Understanding bias-variance tradeoff derivation
A few more steps of the Bias - Variance decomposition
Indeed, the full derivation is rarely given in textbooks as it involves a lot of uninspiring algebra. Here is a more complete derivation using not |
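The decomposition can also be checked by brute-force simulation; here is a sketch (a toy setup of mine: the "model" is a deliberately shrunk sample mean, so it has both bias and variance):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 2.0, 1.0            # true f(x) and noise sd
n_train, n_rep = 10, 200_000

# One training set per repetition; the estimator 0.8 * mean is biased.
train = rng.normal(mu, sigma, size=(n_rep, n_train))
f_hat = 0.8 * train.mean(axis=1)

# A fresh test observation Y = f(x) + eps for each repetition.
y_new = rng.normal(mu, sigma, size=n_rep)

expected_sq_err = np.mean((y_new - f_hat) ** 2)
bias_sq = (f_hat.mean() - mu) ** 2          # (0.8*mu - mu)^2 = 0.16
variance = f_hat.var()                      # 0.64 * sigma^2 / n_train = 0.064
decomposition = bias_sq + variance + sigma ** 2
```

Up to Monte Carlo error, `expected_sq_err` equals `decomposition`: squared bias plus estimator variance plus the irreducible noise variance σ².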
7,633 | How to handle a "self defeating" prediction model? | There are two possibilities by which an out-of-stock (OOS) detection model might self-derail:
The relationship between inputs and OOS might change over time. For instance, promotions might lead to higher OOS (promotional sales are harder to predict than regular sales, in part because not only average sales increase, b... | How to handle a "self defeating" prediction model? | There are two possibilities by which an out-of-stock (OOS) detection model might self-derail:
The relationship between inputs and OOS might change over time. For instance, promotions might lead to hi | How to handle a "self defeating" prediction model?
There are two possibilities by which an out-of-stock (OOS) detection model might self-derail:
The relationship between inputs and OOS might change over time. For instance, promotions might lead to higher OOS (promotional sales are harder to predict than regular sales,... | How to handle a "self defeating" prediction model?
There are two possibilities by which an out-of-stock (OOS) detection model might self-derail:
The relationship between inputs and OOS might change over time. For instance, promotions might lead to hi |
7,634 | How to handle a "self defeating" prediction model? | If you are using a model to support decisions about intervening in a system, then logically, the model should seek to predict the outcome conditioned on a given intervention. Then separately, you should optimize to choose the intervention with the best expected outcome. You are not trying to predict your own interventi... | How to handle a "self defeating" prediction model? | If you are using a model to support decisions about intervening in a system, then logically, the model should seek to predict the outcome conditioned on a given intervention. Then separately, you shou | How to handle a "self defeating" prediction model?
If you are using a model to support decisions about intervening in a system, then logically, the model should seek to predict the outcome conditioned on a given intervention. Then separately, you should optimize to choose the intervention with the best expected outcome... | How to handle a "self defeating" prediction model?
If you are using a model to support decisions about intervening in a system, then logically, the model should seek to predict the outcome conditioned on a given intervention. Then separately, you shou |
7,635 | How to handle a "self defeating" prediction model? | Presumably you can track when restock events happen. Then it's just a matter of arithmetic to work out when the stock would be depleted had the model not been used to restock inventory.
This assumes that any positive stock level is independent of the level of sales. A commenter says that this assumption doesn't hold in... | How to handle a "self defeating" prediction model? | Presumably you can track when restock events happen. Then it's just a matter of arithmetic to work out when the stock would be depleted had the model not been used to restock inventory.
This assumes t | How to handle a "self defeating" prediction model?
Presumably you can track when restock events happen. Then it's just a matter of arithmetic to work out when the stock would be depleted had the model not been used to restock inventory.
This assumes that any positive stock level is independent of the level of sales. A ... | How to handle a "self defeating" prediction model?
Presumably you can track when restock events happen. Then it's just a matter of arithmetic to work out when the stock would be depleted had the model not been used to restock inventory.
This assumes t |
7,636 | How to handle a "self defeating" prediction model? | Your scenario bears a lot of resemblance to the Lucas Critique in economics. In machine learning, this is called "dataset shift".
You can overcome it, as @Sycorax says, by explicitly modeling it. | How to handle a "self defeating" prediction model? | Your scenario bears a lot of resemblance to the Lucas Critique in economics. In machine learning, this is called "dataset shift".
You can overcome it, as @Sycorax says, by explicitly modeling it. | How to handle a "self defeating" prediction model?
Your scenario bears a lot of resemblance to the Lucas Critique in economics. In machine learning, this is called "dataset shift".
You can overcome it, as @Sycorax says, by explicitly modeling it. | How to handle a "self defeating" prediction model?
Your scenario bears a lot of resemblance to the Lucas Critique in economics. In machine learning, this is called "dataset shift".
You can overcome it, as @Sycorax says, by explicitly modeling it. |
7,637 | How to handle a "self defeating" prediction model? | One thing to remember is that ML is an instrumental goal. Ultimately, we don't want to predict out of stock events, we want to prevent out of stock events. Predicting out of stock events is simply a means to that end. So as far as Type II errors are concerned, this isn't an issue. Either we continue to have OOSE, in wh... | How to handle a "self defeating" prediction model? | One thing to remember is that ML is an instrumental goal. Ultimately, we don't want to predict out of stock events, we want to prevent out of stock events. Predicting out of stock events is simply a m | How to handle a "self defeating" prediction model?
One thing to remember is that ML is an instrumental goal. Ultimately, we don't want to predict out of stock events, we want to prevent out of stock events. Predicting out of stock events is simply a means to that end. So as far as Type II errors are concerned, this isn... | How to handle a "self defeating" prediction model?
One thing to remember is that ML is an instrumental goal. Ultimately, we don't want to predict out of stock events, we want to prevent out of stock events. Predicting out of stock events is simply a m |
7,638 | Is there a boxplot variant for Poisson distributed data? | Boxplots weren't designed to assure low probability of exceeding the ends of the whiskers in all cases: they are intended, and usually used, as simple graphical characterizations of the bulk of a dataset. As such, they are fine even when the data have very skewed distributions (although they might not reveal quite as ... | Is there a boxplot variant for Poisson distributed data? | Boxplots weren't designed to assure low probability of exceeding the ends of the whiskers in all cases: they are intended, and usually used, as simple graphical characterizations of the bulk of a data | Is there a boxplot variant for Poisson distributed data?
Boxplots weren't designed to assure low probability of exceeding the ends of the whiskers in all cases: they are intended, and usually used, as simple graphical characterizations of the bulk of a dataset. As such, they are fine even when the data have very skewe... | Is there a boxplot variant for Poisson distributed data?
Boxplots weren't designed to assure low probability of exceeding the ends of the whiskers in all cases: they are intended, and usually used, as simple graphical characterizations of the bulk of a data |
7,639 | Is there a boxplot variant for Poisson distributed data? | There is a generalization of standard box-plots that I know of in which the lengths of the whiskers are adjusted to account for skewed data. The details are better explained in a very clear & concise white paper (Vandervieren, E., Hubert, M. (2004) "An adjusted boxplot for skewed distributions", see here).
There is a... | Is there a boxplot variant for Poisson distributed data? | There is a generalization of standard box-plots that I know of in which the lengths of the whiskers are adjusted to account for skewed data. The details are better explained in a very clear & concise | Is there a boxplot variant for Poisson distributed data?
There is a generalization of standard box-plots that I know of in which the lengths of the whiskers are adjusted to account for skewed data. The details are better explained in a very clear & concise white paper (Vandervieren, E., Hubert, M. (2004) "An adjusted b... | Is there a boxplot variant for Poisson distributed data?
There is a generalization of standard box-plots that I know of in which the lengths of the whiskers are adjusted to account for skewed data. The details are better explained in a very clear & concise |
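For reference, the Hubert–Vandervieren adjustment rescales the usual 1.5·IQR whiskers by exp(-4·MC) and exp(3·MC), where MC is the medcouple, a robust skewness measure (this form applies for MC ≥ 0; the rule differs for MC < 0). A sketch with a naive O(n²) medcouple on a right-skewed continuous toy sample of mine (ties with the median, which need a special kernel, are ignored here):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=500)    # right-skewed toy sample

q1, med, q3 = np.percentile(x, [25, 50, 75])
iqr = q3 - q1

# Naive O(n^2) medcouple, a robust skewness measure in [-1, 1].
lo, hi = x[x < med], x[x > med]
h = ((hi[None, :] - med) - (med - lo[:, None])) / (hi[None, :] - lo[:, None])
mc = np.median(h)

# Adjusted whisker fences for MC >= 0.
lower = q1 - 1.5 * np.exp(-4 * mc) * iqr
upper = q3 + 1.5 * np.exp(3 * mc) * iqr
```

With positive skew the upper fence moves out and the lower fence moves in, so far fewer legitimate right-tail points get flagged than under the standard 1.5·IQR rule.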
7,640 | Ensemble of different kinds of regressors using scikit-learn (or any other python framework) | Actually, scikit-learn does provide such a functionality, though it might be a bit tricky to implement. Here is a complete working example of such an average regressor built on top of three models. First of all, let's import all the required packages:
from sklearn.base import TransformerMixin
from sklearn.datasets impo... | Ensemble of different kinds of regressors using scikit-learn (or any other python framework) | Actually, scikit-learn does provide such a functionality, though it might be a bit tricky to implement. Here is a complete working example of such an average regressor built on top of three models. Fi | Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
Actually, scikit-learn does provide such a functionality, though it might be a bit tricky to implement. Here is a complete working example of such an average regressor built on top of three models. First of all, let's import al... | Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
Actually, scikit-learn does provide such a functionality, though it might be a bit tricky to implement. Here is a complete working example of such an average regressor built on top of three models. Fi |
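Newer scikit-learn versions (0.21 and later) ship this pattern directly as `VotingRegressor`, which fits the members and predicts their (optionally weighted) average, so a custom wrapper is no longer strictly necessary. A sketch with three arbitrary members and untuned illustrative weights:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)

# VotingRegressor fits each member and predicts their (weighted) average.
ensemble = VotingRegressor(
    estimators=[
        ("lin", LinearRegression()),
        ("gbr", GradientBoostingRegressor(random_state=0)),
        ("knn", KNeighborsRegressor()),
    ],
    weights=[2.0, 1.0, 1.0],   # illustrative, not tuned
)
ensemble.fit(X, y)
pred = ensemble.predict(X[:10])
```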
7,641 | Ensemble of different kinds of regressors using scikit-learn (or any other python framework) | Ok, after spending some time on googling I found out how I could do the weighting in python even with scikit-learn. Consider the below:
I train a set of my regression models (as mentioned SVR, LassoLars and GradientBoostingRegressor). Then I run all of them on training data (same data which was used for training of eac... | Ensemble of different kinds of regressors using scikit-learn (or any other python framework) | Ok, after spending some time on googling I found out how I could do the weighting in python even with scikit-learn. Consider the below:
I train a set of my regression models (as mentioned SVR, LassoLa | Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
Ok, after spending some time on googling I found out how I could do the weighting in python even with scikit-learn. Consider the below:
I train a set of my regression models (as mentioned SVR, LassoLars and GradientBoostingRegr... | Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
Ok, after spending some time on googling I found out how I could do the weighting in python even with scikit-learn. Consider the below:
I train a set of my regression models (as mentioned SVR, LassoLa |
7,642 | Ensemble of different kinds of regressors using scikit-learn (or any other python framework) | If your data has obvious subsets you could run a clustering algorithm like k-means and then associate each classifier with the clusters it performs well on. When a new data point arrives, then determine what cluster it's in and run the associated classifier.
You could also use the inverse distances from the centroids... | Ensemble of different kinds of regressors using scikit-learn (or any other python framework) | If your data has obvious subsets you could run a clustering algorithm like k-means and then associate each classifier with the clusters it performs well on. When a new data point arrives, then determ | Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
If your data has obvious subsets you could run a clustering algorithm like k-means and then associate each classifier with the clusters it performs well on. When a new data point arrives, then determine what cluster it's in an... | Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
If your data has obvious subsets you could run a clustering algorithm like k-means and then associate each classifier with the clusters it performs well on. When a new data point arrives, then determ |
7,643 | Ensemble of different kinds of regressors using scikit-learn (or any other python framework) | I accomplish a type of weighting by doing the following, once all your models are fully trained up and performing well:
Run all your models on a large set of unseen testing data
Store the f1 scores on the test set for each class, for each model
When you predict with the ensemble, each model will give you the most like... | Ensemble of different kinds of regressors using scikit-learn (or any other python framework) | I accomplish a type of weighting by doing the following, once all your models are fully trained up and performing well:
Run all your models on a large set of unseen testing data
Store the f1 scores o | Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
I accomplish a type of weighting by doing the following, once all your models are fully trained up and performing well:
Run all your models on a large set of unseen testing data
Store the f1 scores on the test set for each cla... | Ensemble of different kinds of regressors using scikit-learn (or any other python framework)
I accomplish a type of weighting by doing the following, once all your models are fully trained up and performing well:
Run all your models on a large set of unseen testing data
Store the f1 scores o |
7,644 | Checking assumptions lmer/lme mixed models in R | Q1: Yes - just like any regression model.
Q2: Just like general linear models, your outcome variable does not need to be normally distributed as a univariate variable. However, LME models assume that the residuals of the model are normally distributed. So a transformation or adding weights to the model would be a way o... | Checking assumptions lmer/lme mixed models in R | Q1: Yes - just like any regression model.
Q2: Just like general linear models, your outcome variable does not need to be normally distributed as a univariate variable. However, LME models assume that | Checking assumptions lmer/lme mixed models in R
Q1: Yes - just like any regression model.
Q2: Just like general linear models, your outcome variable does not need to be normally distributed as a univariate variable. However, LME models assume that the residuals of the model are normally distributed. So a transformation... | Checking assumptions lmer/lme mixed models in R
Q1: Yes - just like any regression model.
Q2: Just like general linear models, your outcome variable does not need to be normally distributed as a univariate variable. However, LME models assume that |
7,645 | Checking assumptions lmer/lme mixed models in R | Regarding Q2:
According to Pinheiro and Bates' book you may use the following approach:
"The lme function allows the modeling of heteroscedasticity of the
within-group error via a weights argument. This topic will be
covered in detail in § 5.2, but, for now, it suffices to know that the
varIdent variance functio... | Checking assumptions lmer/lme mixed models in R | Regarding Q2:
According to Pinheiro and Bates' book you may use the following approach:
"The lme function allows the modeling of heteroscedasticity of the
within-group error via a weights argument. | Checking assumptions lmer/lme mixed models in R
Regarding Q2:
According to Pinheiro and Bates' book you may use the following approach:
"The lme function allows the modeling of heteroscedasticity of the
within-group error via a weights argument. This topic will be
covered in detail in § 5.2, but, for now, it suffi... | Checking assumptions lmer/lme mixed models in R
Regarding Q2:
According to Pinheiro and Bates' book you may use the following approach:
"The lme function allows the modeling of heteroscedasticity of the
within-group error via a weights argument.
7,646 | Checking assumptions lmer/lme mixed models in R | You seem quite mislead about the assumptions surrounding multi-level models. There is not an assumption of homogeneity of variance in the data, just that the residuals should be approximately normally distributed. And categorical predictors are used in regression all of the time (the underlying function in R that runs ... | Checking assumptions lmer/lme mixed models in R | You seem quite mislead about the assumptions surrounding multi-level models. There is not an assumption of homogeneity of variance in the data, just that the residuals should be approximately normally | Checking assumptions lmer/lme mixed models in R
You seem quite misled about the assumptions surrounding multi-level models. There is not an assumption of homogeneity of variance in the data, just that the residuals should be approximately normally distributed. And categorical predictors are used in regression all of t... | Checking assumptions lmer/lme mixed models in R
You seem quite misled about the assumptions surrounding multi-level models. There is not an assumption of homogeneity of variance in the data, just that the residuals should be approximately normally
7,647 | Checking assumptions lmer/lme mixed models in R | Q1: Yes, why not?
Q2: I think the requirement is that the errors are normally distributed.
Q3: Can be tested with Levene's test for example. | Checking assumptions lmer/lme mixed models in R | Q1: Yes, why not?
Q2: I think the requirement is that the errors are normally distributed.
Q3: Can be tested with Levene's test for example. | Checking assumptions lmer/lme mixed models in R
Q1: Yes, why not?
Q2: I think the requirement is that the errors are normally distributed.
Q3: Can be tested with Levene's test for example. | Checking assumptions lmer/lme mixed models in R
Q1: Yes, why not?
Q2: I think the requirement is that the errors are normally distributed.
Q3: Can be tested with Levene's test for example.
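For Q3, Levene's test (in its median-centered Brown–Forsythe form) is available in scipy; a sketch on synthetic residual groups of mine, where the second group is deliberately three times as variable:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Residuals from two groups; group_b is deliberately more variable.
group_a = rng.normal(0.0, 1.0, size=200)
group_b = rng.normal(0.0, 3.0, size=200)

# center="median" is the robust Brown-Forsythe variant (scipy's default).
stat, p_value = stats.levene(group_a, group_b, center="median")
```

A small p-value here is evidence against homogeneity of variance across the groups.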
7,648 | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare? | Spearman rho vs Kendall tau. These two are so much computationally different that you cannot directly compare their magnitudes. Spearman is usually higher by 1/4 to 1/3 and this makes one incorrectly conclude that Spearman is "better" for a particular dataset. The difference between rho and tau is in their ideology, pr... | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare? | Spearman rho vs Kendall tau. These two are so much computationally different that you cannot directly compare their magnitudes. Spearman is usually higher by 1/4 to 1/3 and this makes one incorrectly | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
Spearman rho vs Kendall tau. These two are so much computationally different that you cannot directly compare their magnitudes. Spearman is usually higher by 1/4 to 1/3 and this makes one incorrectly conclude that Spearman is "be... | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
Spearman rho vs Kendall tau. These two are so much computationally different that you cannot directly compare their magnitudes. Spearman is usually higher by 1/4 to 1/3 and this makes one incorrectly |
7,649 | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare? | Here's a quote from Andrew Gilpin (1993) advocating Maurice Kendall's $τ$ over Spearman's $ρ$ for theoretical reasons:
[Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$, the sample size, increases; and $τ$ is also more tractable mathematically, particularly when ties are present.
I ca... | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare? | Here's a quote from Andrew Gilpin (1993) advocating Maurice Kendall's $τ$ over Spearman's $ρ$ for theoretical reasons:
[Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$ | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
Here's a quote from Andrew Gilpin (1993) advocating Maurice Kendall's $τ$ over Spearman's $ρ$ for theoretical reasons:
[Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$, the sample size, increases;... | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
Here's a quote from Andrew Gilpin (1993) advocating Maurice Kendall's $τ$ over Spearman's $ρ$ for theoretical reasons:
[Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$ |
7,650 | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare? | These are all good indexes of monotonic association. Spearman's $\rho$ is related to the probability of majority concordance among random triplets of observations, and $\tau$ (Kendall) and $\gamma$ (Goodman-Kruskal) are related to pairwise concordance. The main decision to make in choosing $\gamma$ vs. $\tau$ is whet... | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare? | These are all good indexes of monotonic association. Spearman's $\rho$ is related to the probability of majority concordance among random triplets of observations, and $\tau$ (Kendall) and $\gamma$ ( | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
These are all good indexes of monotonic association. Spearman's $\rho$ is related to the probability of majority concordance among random triplets of observations, and $\tau$ (Kendall) and $\gamma$ (Goodman-Kruskal) are related ... | How do the Goodman-Kruskal gamma and the Kendall tau or Spearman rho correlations compare?
These are all good indexes of monotonic association. Spearman's $\rho$ is related to the probability of majority concordance among random triplets of observations, and $\tau$ (Kendall) and $\gamma$ ( |
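The "Spearman runs higher than Kendall" point is easy to see numerically; a sketch on noisy monotone toy data of mine:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(size=500)
y = x + rng.normal(size=500)    # monotone association plus noise

rho, _ = stats.spearmanr(x, y)
tau, _ = stats.kendalltau(x, y)
```

Both are valid monotonic-association measures, but their magnitudes are not directly comparable: on data like this, rho comes out roughly a third larger than tau, which is the pattern noted in the answers.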
7,651 | What if my linear regression data contains several co-mingled linear relationships? | I think Demetri's answer is a great one if we assume that you have the labels for the different varieties. When I read your question that didn't seem to be the case to me. We can use an approach based on the EM algorithm to basically fit the model that Demetri suggests but without knowing the labels for the variety. ... | What if my linear regression data contains several co-mingled linear relationships? | I think Demetri's answer is a great one if we assume that you have the labels for the different varieties. When I read your question that didn't seem to be the case to me. We can use an approach bas | What if my linear regression data contains several co-mingled linear relationships?
I think Demetri's answer is a great one if we assume that you have the labels for the different varieties. When I read your question that didn't seem to be the case to me. We can use an approach based on the EM algorithm to basically ... | What if my linear regression data contains several co-mingled linear relationships?
I think Demetri's answer is a great one if we assume that you have the labels for the different varieties. When I read your question that didn't seem to be the case to me. We can use an approach bas |
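A minimal version of that EM idea for two co-mingled lines can be sketched in plain NumPy (all data, starting values, and the two-component restriction below are my own toy assumptions, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two well-separated lines with the component labels hidden.
n = 400
x = rng.uniform(0, 10, size=n)
z = rng.integers(0, 2, size=n)                       # hidden labels
true_b0, true_b1 = np.array([0.0, 5.0]), np.array([1.0, 3.0])
y = true_b0[z] + true_b1[z] * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x])
beta = np.array([[0.0, 0.5], [2.0, 2.5]])            # rough starting lines
sigma, pi = np.array([1.0, 1.0]), np.array([0.5, 0.5])

for _ in range(100):
    # E-step: responsibilities from Gaussian residual densities.
    resid = y[None, :] - beta @ X.T                  # shape (2, n)
    logd = (np.log(pi)[:, None] - np.log(sigma)[:, None]
            - 0.5 * (resid / sigma[:, None]) ** 2)
    logd -= logd.max(axis=0, keepdims=True)
    r = np.exp(logd)
    r /= r.sum(axis=0, keepdims=True)

    # M-step: weighted least squares per component, then sigma and weights.
    for k in range(2):
        Xw = X * r[k][:, None]
        beta[k] = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        sigma[k] = np.sqrt((r[k] * (y - X @ beta[k]) ** 2).sum() / r[k].sum())
    pi = r.mean(axis=1)

slopes_est = np.sort(beta[:, 1])                     # recovers roughly 1 and 3
```

In practice you would restart from several initializations and compare solutions, since EM only finds a local optimum; fitting `sklearn.mixture.GaussianMixture` to the (x, y) pairs is a related off-the-shelf alternative.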
7,652 | What if my linear regression data contains several co-mingled linear relationships? | EDIT: I originally thought OP knew which observations came from which species. OP's edit makes it clear that my original approach is not feasible. I'll leave it up for posterity, but the other answer is much better. As a consolation, I've coded up a mixture model in Stan. I'm not saying a Bayesian approach is parti...
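The EM approach mentioned in these answers can be made concrete without labels: treat the variety as a latent class and alternate between soft assignment of points to lines (E-step) and weighted least-squares line fits (M-step). Below is a minimal illustrative sketch in Python/numpy rather than the Stan model the answer refers to; the simulated data, starting values, and iteration count are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: two co-mingled linear relationships, labels unobserved
n = 200
x = rng.uniform(0, 10, 2 * n)
true_label = np.repeat([0, 1], n)
y = np.where(true_label == 0, 1 + 2 * x, 8 - x) + rng.normal(0, 0.5, 2 * n)

X = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]
beta = np.array([[0.0, 1.0], [5.0, -0.5]])  # rough starting lines (guesses)
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])                   # mixing proportions

for _ in range(100):
    # E-step: responsibility of each component (line) for each point
    dens = np.stack([pi[k] / sigma[k] *
                     np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k]) ** 2)
                     for k in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted least squares per component, then scale and weight updates
    for k in range(2):
        w = resp[k]
        beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
        pi[k] = w.mean()

slopes = sorted(beta[:, 1])
print(slopes)   # should recover slopes close to -1 and 2
```

With real data one would add multiple restarts (EM can hit local optima) or reach for an off-the-shelf mixture-of-regressions routine such as `flexmix` in R.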
7,653 | What if my linear regression data contains several co-mingled linear relationships? | The statistical approach is very similar to two of the answers above, but it deals a bit more with how to pick the number of latent classes if you lack prior knowledge. You can use information criteria or parsimony as a guide in choosing the number of latent classes.
Here is a Stata example using a sequence of finite mixtur...
7,654 | What if my linear regression data contains several co-mingled linear relationships? | I'll focus on the question of statistical significance since Dason already covered the modeling part.
I am unfamiliar with any formal tests for this (which I am sure exist), so I'll just throw some ideas out there (and I'll probably add R code and technical details later).
First, it is convenient to infer the classes. ...
7,655 | What if my linear regression data contains several co-mingled linear relationships? | Is it possible that including both in the same chart is an error? Given that the varieties behave completely differently, is there any value in overlapping the data? It seems to me that you are looking for impacts to a species of daffodil, not the impacts of similar environments on different daffodils. If you have lost...
7,656 | What are variational autoencoders and to what learning tasks are they used? | Even though variational autoencoders (VAEs) are easy to implement and train, explaining them is not simple at all, because they blend concepts from Deep Learning and Variational Bayes, and the Deep Learning and Probabilistic Modeling communities use different terms for the same concepts. Thus when explaining VAEs you r...
7,657 | What are variational autoencoders and to what learning tasks are they used? | Variational autoencoders are an intersection of autoencoder neural networks and variational inference (VI). They were introduced as an application of general-purpose VI using the reparameterization trick in a 2014 paper by Kingma and Welling. The main goal is to generate more data - by creating a more regulariz...
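Two of the ingredients named above - the reparameterization trick and the variational (KL) part of the objective - fit in a few lines. Here is a framework-free numpy sketch under the usual diagonal-Gaussian assumptions; it is illustrative only, since a real VAE would wire these into encoder/decoder networks and an optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, diag(exp(log_var))) as z = mu + sigma * eps.
    All randomness lives in eps ~ N(0, I), so gradients can flow
    through the encoder outputs mu and log_var."""
    mu, log_var = np.asarray(mu, float), np.asarray(log_var, float)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ):
    the regularization term of the VAE objective (ELBO)."""
    mu, log_var = np.asarray(mu, float), np.asarray(log_var, float)
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

z = reparameterize(np.zeros(3), np.zeros(3))
print(z, kl_to_standard_normal(np.zeros(3), np.zeros(3)))  # KL is 0 at the prior
```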
7,658 | Checking if two Poisson samples have the same mean | To test the Poisson mean, the conditional method was proposed by Przyborowski and Wilenski (1940). The conditional distribution of X1 given X1+X2 follows a binomial distribution
whose success probability is a function of the ratio of the two lambdas. Therefore,
hypothesis testing and interval estimation procedures can be readi...
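The conditional method described above is simple to carry out directly: under the null, X1 given X1+X2 = n is Binomial(n, t1/(t1+t2)), so an exact binomial test applies. A stdlib-only Python sketch (the function name and example counts are mine, not from the original answer):

```python
from math import comb

def poisson_ratio_test(n1, n2, t1=1.0, t2=1.0):
    """Exact conditional test of H0: lambda1 = lambda2 for Poisson counts
    n1, n2 over exposures t1, t2.  Given n1 + n2 = n, under H0 the first
    count is Binomial(n, t1 / (t1 + t2))."""
    n, p = n1 + n2, t1 / (t1 + t2)
    pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    # Two-sided p-value: total probability of outcomes no more likely
    # than the observed one (the "minimum likelihood" convention).
    cutoff = pmf[n1] * (1 + 1e-9)
    return sum(q for q in pmf if q <= cutoff)

print(poisson_ratio_test(10, 20))  # ~0.0987 with equal exposures
```

With equal exposure times this reduces to `binom.test(n1, n1 + n2, 0.5)` in R, which is also what `poisson.test` does internally for the two-sample case.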
7,659 | Checking if two Poisson samples have the same mean | You're looking for a quick and easy check.
Under the null hypothesis that the rates (lambda values) are equal, say to $\lambda$, you could view the two measurements as observing a single process for time $t = t_1+t_2$ and counting the events during the interval $[0, t_1]$ ($n_1$ in number) and the events during th...
7,660 | Checking if two Poisson samples have the same mean | How about:
poisson.test(c(n1, n2), c(t1, t2), alternative = c("two.sided"))
This is a test which compares the Poisson rates of 1 and 2 with each other, and gives both a p value and a 95% confidence interval.
7,661 | Checking if two Poisson samples have the same mean | I would be more interested in a confidence interval than a p value; here is a bootstrap approximation.
Calculating the lengths of the intervals first, and a check:
Lrec = as.numeric(as.Date("2010-07-01") - as.Date("2007-12-02")) # Length of recession
Lnrec = as.numeric(as.Date("2007-12-01") - as.Date("2001-12-01")) # L...
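Stripped of the recession-date specifics in the R snippet above, the same bootstrap idea can be sketched generically: resample each count from a Poisson with its estimated mean and take percentile limits for the rate difference. A stdlib-only Python sketch (the counts, exposures, and replicate number are invented for illustration):

```python
import math, random

random.seed(42)

def rpois(lam):
    """Poisson draw via Knuth's multiplication method (fine for modest lam)."""
    if lam <= 0:
        return 0
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def boot_ci_rate_diff(n1, t1, n2, t2, B=4000, alpha=0.05):
    """Parametric-bootstrap percentile CI for lambda1 - lambda2:
    resample each count from Poisson(observed count), convert to rates."""
    diffs = sorted(rpois(n1) / t1 - rpois(n2) / t2 for _ in range(B))
    return diffs[int(B * alpha / 2)], diffs[int(B * (1 - alpha / 2)) - 1]

lo, hi = boot_ci_rate_diff(n1=30, t1=10.0, n2=10, t2=10.0)
print(lo, hi)   # interval for the rate difference; excludes 0 for these counts
```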
7,662 | Think like a bayesian, check like a frequentist: What does that mean? | The main difference between the Bayesian and frequentist schools of statistics arises due to a difference in interpretation of probability. A Bayesian probability is a statement about personal belief that an event will (or has) occurred. A frequentist probability is a statement about the proportion of similar events th...
7,663 | Think like a bayesian, check like a frequentist: What does that mean? | Bayesian statistics summarize beliefs whereas frequentist statistics summarize evidence. The Bayesians view probability as a degree of belief. This inclusive and generative type of reasoning is useful for formulating hypotheses. For instance, Bayesians may be able to arbitrarily assign some probability to the notion th...
7,664 | Think like a bayesian, check like a frequentist: What does that mean? | Per Cliff AB's comment to the OP, it sounds like they are heading towards an Empirical Bayesian philosophy. There are three main Bayesian schools of thought, and Empirical Bayes estimates priors from data, often with frequentist methods. That doesn't conform exactly to the quote (which implies Bayes up front, frequenti...
7,665 | Think like a bayesian, check like a frequentist: What does that mean? | In the context of this data science class, my interpretation of "check like a frequentist" is that you evaluate the performance of your prediction function or decision function on held-out validation data. The advice to "think like a Bayesian" expresses the opinion that a prediction function derived from a Bayesian ap...
7,666 | Think like a bayesian, check like a frequentist: What does that mean? | It sounds like "think like a Bayesian, check like a frequentist" refers to one's approach in statistical design and analysis. As I understand it, Bayesian thinking involves some belief about prior situations (experimentally or statistically), let's say for example that the mean reading scores for 4th-graders is 80 wor...
7,667 | Inference vs. estimation? | Statistical inference is made of the whole collection of conclusions one can draw from a given dataset and an associated hypothetical model, including the fit of the said model. To quote from Wikipedia,
Inference is the act or process of deriving logical conclusions from premises known or assumed to be true.
and,
St...
7,668 | Inference vs. estimation? | While estimation per se is aimed at coming up with values of the unknown parameters (e.g., coefficients in logistic regression, or in the separating hyperplane in support vector machines), statistical inference attempts to attach a measure of uncertainty and/or a probability statement to the values of parameters (stand...
7,669 | Inference vs. estimation? | This is an attempt to give an answer for anyone without a background in statistics. For those who are interested in more details, there are many useful references (such as this one for example) on the subject.
Short answer:
Estimation $->$ find unknown values (estimates) for subject of interest
Statistical Inference $-...
7,670 | Inference vs. estimation? | Suppose you have a representative sample of a population.
Inference is when you use that sample to estimate a model and state that the results can be extended to the entire population, with a certain accuracy. To make inference is to make assumptions on a population using only a representative sample.
Estimation is wh...
7,671 | Inference vs. estimation? | In the context of machine learning, inference refers to an act of discovering settings of latent (hidden) variables given your observations. This also includes determining the posterior distribution of your latent variables. Estimation seems to be associated with "point estimation", which is to determine your model pa...
7,672 | Inference vs. estimation? | Well, there are people from different disciplines today who make their career in the area of ML, and it's likely that they speak slightly different dialects.
However, whatever terms they might use, the concepts behind them are distinct. So it's important to get these concepts clear, and then translate those dialects in the...
7,673 | Inference vs. estimation? | I want to add to others' answers by expanding on the "inference" part. In the context of machine learning, an interesting aspect of inference is estimating uncertainty. It's generally tricky with ML algorithms: how do you put a standard deviation on the classification label a neural net or decision tree spits out? In t...
7,674 | Datasets constructed for a purpose similar to that of Anscombe's quartet | Data sets that act as counterexamples to popular misunderstandings* do exist - I've constructed many myself under various circumstances, but most of them wouldn't be interesting to you, I'm sure.
*(which is what the Anscombe data does, since it's a response to people operating under the misunderstanding that the quali...
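Anscombe's quartet itself makes the point concrete: the four datasets share nearly identical summary statistics yet look entirely different when plotted. A quick Python check of the first two datasets, using Anscombe's published values:

```python
from statistics import mean, pvariance

# Anscombe (1973): first two of the four datasets
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def corr(a, b):
    """Pearson correlation from population moments."""
    ma, mb = mean(a), mean(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)
    return cov / (pvariance(a) ** 0.5 * pvariance(b) ** 0.5)

print(mean(y1), mean(y2))        # both ~7.50
print(corr(x, y1), corr(x, y2))  # both ~0.816
# Identical summaries, yet y1 is noisy-linear while y2 traces a smooth
# curve in x - a difference only a plot (or residual check) reveals.
```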
7,675 | Datasets constructed for a purpose similar to that of Anscombe's quartet | With regard to generating (e.g., your own) datasets for similar purposes, you might be interested in:
Chatterjee, S. & Firat, A. (2007). Generating data with identical statistics but dissimilar graphics: A follow up to the Anscombe dataset. The American Statistician, 61, 3, pp. 248–254.
As far as datasets that are ...
7,676 | Datasets constructed for a purpose similar to that of Anscombe's quartet | In the paper "Let's Put the Garbage-Can Regressions and Garbage-Can Probits Where They Belong" (C. Achen, 2004) the author creates a synthetic data set with a non-linearity that is meant to reflect real-life cases when data might have suffered a coding error during measurement (e.g. a distortion in assigning data to ca...
7,677 | Data mining: How should I go about finding the functional form? | To find the best-fitting functional form (so-called free-form or symbolic regression) for the data, try this tool - to the best of my knowledge this is the best one available (at least I am very excited about it)...and it's free :-)
http://creativemachines.cornell.edu/eureqa
EDIT: I gave it a shot with Eureqa and I would go f...
7,678 | Data mining: How should I go about finding the functional form? | $R^2$ alone is not a good measure of goodness of fit, but let's not get into that here except to observe that parsimony is valued in modeling.
To that end, note that standard techniques of exploratory data analysis (EDA) and regression (but not stepwise or other automated procedures) suggest using a linear model in the...
7,679 | Data mining: How should I go about finding the functional form? | Your question needs refining because the function f is almost certainly not uniquely defined by the sample data. There are many different functions which could generate the same data.
That being said, Analysis of Variance (ANOVA) or a "sensitivity study" can tell you a lot about how your inputs (AA..EE) affect your ou...
7,680 | Data mining: How should I go about finding the functional form? | Broadly speaking, there's no free lunch in machine learning:
In particular, if algorithm A outperforms algorithm B on some cost functions, then loosely speaking there must exist exactly as many other functions where B outperforms A
/edit: also, a radial SVM with C = 4 and sigma = 0.206 easily yields an R2 of .99. Ex...
7,681 | Data mining: How should I go about finding the functional form? | All models are wrong but some are useful: G.E.P. Box
Y(T) = - 4709.7
       + 102.60*AA(T) - 17.0707*AA(T-1)
       + 62.4994*BB(T)
       + 41.7453*CC(T)
       + 965.70*ZZ(T)
where ZZ(T) = 0 FOR T=1,10
            = 1 OTHERWISE
There appears to be a "lagged relationship" between Y and AA AND an explained shift in the...
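The fitted form above (current and lagged AA plus a level-shift dummy ZZ) can be sketched as an ordinary least-squares fit. The data below is simulated from the reported coefficients purely to illustrate the model structure; it is not the original series:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 30
AA = rng.normal(10, 2, T)
BB = rng.normal(5, 1, T)
CC = rng.normal(3, 1, T)
ZZ = (np.arange(1, T + 1) > 10).astype(float)  # 0 for T=1..10, 1 otherwise
AAlag = np.r_[AA[0], AA[:-1]]                  # AA(T-1)

# simulate y from the stated form, coefficients taken from the answer above
y = (-4709.7 + 102.60 * AA - 17.0707 * AAlag
     + 62.4994 * BB + 41.7453 * CC + 965.70 * ZZ
     + rng.normal(0, 1.0, T))

# ordinary least squares, dropping t=1 where no real lag exists
D = np.column_stack([np.ones(T), AA, AAlag, BB, CC, ZZ])[1:]
beta, *_ = np.linalg.lstsq(D, y[1:], rcond=None)
print(np.round(beta, 2))
```

With low noise, the OLS estimates land close to the generating coefficients, which is all the sketch is meant to show about the model form.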
7,682 | Data mining: How should I go about finding the functional form? | r square of 97.2
Estimation/Diagnostic Checking for Variable Y Y
X1 AAS
X2 BB
X3 BBS
X4 CC
Number of Res...
7,683 | Why should we shuffle data while training a neural network? | Note: throughout this answer I refer to minimization of training loss and I do not discuss stopping criteria such as validation loss. The choice of stopping criteria does not affect the process/concepts described below.
The process of training a neural network is to find the minimum value of a loss function $ℒ_X(W)$, w...
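A minimal sketch of what "shuffle each epoch" looks like in minibatch SGD, assuming a plain least-squares loss (illustrative code, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)

w = np.zeros(3)
lr, batch = 0.1, 20
for epoch in range(50):
    order = rng.permutation(len(X))        # reshuffle the data every epoch
    for start in range(0, len(X), batch):
        idx = order[start:start + batch]
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch
        w -= lr * grad
print(np.round(w, 2))
```

Reshuffling means each epoch's minibatches are a fresh random partition of the data, so no fixed ordering of examples gets baked into the updates.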
7,684 | Why should we shuffle data while training a neural network? | To try to give another explanation:
One of the most powerful things about neural networks is that they can be very complex functions, allowing one to learn very complex relationships between your input and output data. These relationships can include things you would never expect, such as the order in which data is fed...
7,685 | Why should we shuffle data while training a neural network? | Imagine your last few minibatch labels indeed have more noise. Then these batches will pull the final learned weights in the wrong direction. If you shuffle every time, the chances of the last few batches being disproportionately noisy go down.
7,686 | Why should we shuffle data while training a neural network? | From a very simplistic point of view, the data is fed in sequentially, which suggests that at the very least, it's possible for the data order to have an effect on the output. If the order doesn't matter, randomization certainly won't hurt. If the order does matter, randomization will help to smooth out those random ef...
7,687 | Why should we shuffle data while training a neural network? | When you train your network on a fixed data set, meaning data you never shuffle during training, you are very likely to get weights that are very high and very low, such as 40, 70, -101, 200...etc. This simply means that your network has not learnt the training data but it has learnt the noise of your trainin...
7,688 | Why should we shuffle data while training a neural network? | Here is a more intuitive explanation:
When using gradient descent, we want the loss to be reduced in the direction of the gradient. The gradient is calculated from the data of a single mini-batch for each round of weight updating. What we want is for this mini-batch-based gradient to be roughly the population gradient, be...
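The claim that a randomly drawn mini-batch gradient is an unbiased stand-in for the full-data gradient can be checked numerically; a hedged sketch with synthetic data and a squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))
y = X @ np.array([2.0, -1.0, 0.0, 3.0]) + rng.normal(size=1000)
w = np.zeros(4)  # evaluate gradients at an arbitrary fixed point

def grad(idx):
    # gradient of the mean squared error over the rows in idx
    return 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)

full = grad(np.arange(1000))
# average many randomly drawn mini-batch gradients: agrees with the full gradient
mini = np.mean([grad(rng.choice(1000, 50, replace=False))
                for _ in range(1000)], axis=0)
print(np.round(np.abs(mini - full).max(), 3))
```

Any single mini-batch gradient is noisy, but its expectation is the full-data gradient, which is why random shuffling keeps the updates pointed in roughly the right direction.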
7,689 | Data "exploration" vs data "snooping"/"torturing"? | There is a distinction which sometimes doesn't get enough attention, namely hypothesis generation vs. hypothesis testing, or exploratory analysis vs. hypothesis testing. You are allowed all the dirty tricks in the world to come up with your idea / hypothesis. But when you later test it, you must ruthlessly kill your da...
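The "ruthlessly test on fresh data" point can be simulated: snoop for the strongest of many pure-noise predictors, then re-measure that same predictor on new draws (all data below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 100, 50
X = rng.normal(size=(n, p))   # 50 candidate "predictors" -- all pure noise
y = rng.normal(size=n)

r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
best = int(np.argmax(np.abs(r)))       # the "hypothesis" found by snooping
snooped = abs(r[best])

# ruthless test on fresh data: redraw everything, keep the chosen predictor
X2, y2 = rng.normal(size=(n, p)), rng.normal(size=n)
fresh = abs(np.corrcoef(X2[:, best], y2)[0, 1])
print(round(snooped, 2), round(fresh, 2))
```

The snooped correlation looks impressive only because it was selected as the best of fifty; on fresh data the same predictor reverts to ordinary noise.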
7,690 | Data "exploration" vs data "snooping"/"torturing"? | Herman Friedman, my favorite professor in grad school, used to say that
"if you're not surprised, you haven't learned anything"
Strict avoidance of anything except the most rigorous testing of a priori defined hypotheses severely limits your ability to be surprised.
I think the key thing is that we are honest about...
7,691 | Data "exploration" vs data "snooping"/"torturing"? | Let me add a few points:
first of all, hypothesis generation is an important part of science. And non-predictive (exploratory/descriptive) results can be published.
IMHO the trouble is not per se that data exploration is used on a data set and only parts of those findings are published. The problems are
not describi...
7,692 | Data "exploration" vs data "snooping"/"torturing"? | Sometimes the things you see as "data torture" aren't really. It's not always clear beforehand exactly what you're going to do with data to give what you believe are the genuine results of the experiment until you see it.
For example, with reaction time data for a decision task, you often want to reject times that aren...
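The reaction-time cleaning described can be sketched as fixed cutoffs followed by an SD rule; the thresholds and simulated data below are illustrative only, not a prescription:

```python
import numpy as np

rng = np.random.default_rng(3)
# simulated reaction times in ms: a lognormal bulk plus a few
# anticipations (too fast) and lapses (too slow)
rt = np.concatenate([rng.lognormal(6.2, 0.25, 200),
                     [80.0, 95.0, 4000.0, 5200.0]])

kept = rt[(rt > 150) & (rt < 3000)]      # hard cutoffs (illustrative values)
m, s = kept.mean(), kept.std()
kept = kept[np.abs(kept - m) < 3 * s]    # then trim beyond 3 SD of the rest
n_rejected = len(rt) - len(kept)
print(n_rejected, "of", len(rt), "trials rejected")
```

The point of the answer stands: rules like these are often only settled after seeing the data, which is defensible as long as they are reported honestly.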
7,693 | Data "exploration" vs data "snooping"/"torturing"? | This is really a cultural problem of unbalanced thinking, where publication bias leads to the favouring of positive results and our competitive nature requires editors and researchers to be seen to be producing results of interest that are novel or contentious, for example, in the sense of rebutting someone else's resu...
7,694 | Can anyone explain conjugate priors in simplest possible terms? | A prior for a parameter will almost always have some specific functional form (written in terms of the density, generally). Let's say we restrict ourselves to one particular family of distributions, in which case choosing our prior reduces to choosing the parameters of that family.
For example, consider a normal model ...
7,695 | Can anyone explain conjugate priors in simplest possible terms? | If your model belongs to an exponential family, that is, if the density of the distribution is of the form
$$f(x|\theta)=h(x)\exp\{T(\theta)\cdot S(x)-\psi(\theta)\}\qquad x\in\mathcal{X}\quad\theta\in\Theta$$
with respect to a given dominating measure (Lebesgue, counting, &tc.), where $t\cdot s$ denotes a scalar produ...
7,696 | Can anyone explain conjugate priors in simplest possible terms? | I like using the notion of a "kernel" of a distribution. This is where you only leave in the parts that depend on the parameter. A few simple examples.
Normal kernel
$$p(\mu|a,b) = K^{-1} \times \exp(a\mu^2 +b\mu)$$
Where $K$ is the "normalising constant" $K=\int \exp(a\mu^2 +b\mu)d\mu=\sqrt{\frac{\pi}{-a}}\exp(-\frac{...
7,697 | Can anyone explain conjugate priors in simplest possible terms? | For a given distribution family $D_{lik}$ of the likelihood (e.g. Bernoulli),
if the prior is of the same distribution family $D_{pri}$ as the posterior (e.g. Beta),
then $D_{pri}$ and $D_{lik}$ are conjugate distribution families and the prior is called a conjugate prior for the likelihood function.
Note: $\underbrac...
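For the Bernoulli/Beta pair used as the running example, the conjugate update is Beta(a, b) → Beta(a + k, b + n − k) after k successes in n trials. A quick numeric sanity check against a grid-normalized posterior (the numbers are arbitrary):

```python
import numpy as np

a, b = 2.0, 3.0        # Beta(a, b) prior -- parameters chosen arbitrarily
n, k = 10, 7           # data: 7 successes in 10 Bernoulli trials

theta = np.linspace(1e-6, 1 - 1e-6, 20001)
# unnormalized posterior = Beta prior kernel * Bernoulli likelihood
post = theta ** (a - 1) * (1 - theta) ** (b - 1) * theta ** k * (1 - theta) ** (n - k)
post /= post.sum()                      # normalize on the (uniform) grid

grid_mean = (theta * post).sum()
conj_mean = (a + k) / (a + b + n)       # mean of the conjugate posterior Beta(a+k, b+n-k)
print(round(grid_mean, 4), round(conj_mean, 4))   # both should be 0.6
```

The grid computation never uses conjugacy, yet it reproduces the closed-form posterior mean, which is the whole appeal of a conjugate prior: the update is just parameter arithmetic.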
7,698 | How to use ordinal logistic regression with random effects? | In principle you can make the machinery of any logistic mixed model software perform ordinal logistic regression by expanding the ordinal response variable into a series of binary contrasts between successive levels (e.g. see Dobson and Barnett Introduction to Generalized Linear Models section 8.4.6). However, this is ...
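The expansion described above can be sketched as follows; the cumulative y > c coding shown is one common choice and may differ in detail from the successive-level contrasts in Dobson and Barnett:

```python
import numpy as np

y = np.array([1, 3, 2, 2, 1, 3, 3, 2])   # made-up ordinal response, levels 1..3

# one common expansion: a binary indicator y > c for each interior cut point c
cuts = [1, 2]
expanded = np.column_stack([(y > c).astype(int) for c in cuts])
print(expanded.T)
```

Each resulting binary column can then be fed to ordinary logistic mixed-model software, which is the workaround the answer describes before recommending purpose-built ordinal tools.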
7,699 | How to use ordinal logistic regression with random effects? | Yes, it is possible to include random effects in an ordinal regression model. Conceptually, this is the same as including random effects in a linear mixed model. Although the UCLA site only demonstrates the polr() function in the MASS package, there are a number of facilities for fitting ordinal models in R. There i...
7,700 | The relationship between the gamma distribution and the normal distribution | As Prof. Sarwate's comment noted, the relations between squared normal and chi-square are a very widely disseminated fact - as it should be also the fact that a chi-square is just a special case of the Gamma distribution:
$$X \sim N(0,\sigma^2) \Rightarrow X^2/\sigma^2 \sim \chi^2_1 \Rightarrow X^2 \sim \sigma...
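The chain above continues: since $X^2/\sigma^2 \sim \chi^2_1$ and $\chi^2_1 = \mathrm{Gamma}(1/2,\,2)$ in shape-scale form, $X^2 \sim \mathrm{Gamma}(1/2,\,2\sigma^2)$, with mean $\sigma^2$ and variance $2\sigma^4$. A quick seeded simulation check of those moments (a sketch; sample size and tolerances arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
x2 = rng.normal(0.0, sigma, 1_000_000) ** 2   # squares of N(0, sigma^2) draws

# Gamma(shape 1/2, scale 2*sigma^2): mean = sigma^2, variance = 2*sigma^4
print(round(x2.mean(), 3), round(x2.var(), 2))  # theory: 2.25 and 10.125
```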