Sum of $n$ Poisson random variables with parameter $\frac 1 n$

The central limit theorem applies to a sequence of IID random variables with a fixed distribution. If the underlying sequence of random variables has a fixed distribution with finite variance, then a suitably standardised version of the sample mean converges in distribution to the standard normal.

In the example given in your question, the distribution depends on $n$, so the underlying distribution of each random variable changes as $n \rightarrow \infty$. The example is of course a case where the average converges in distribution to something other than the normal distribution, so it effectively shows that you cannot dispense with the assumption of a fixed distribution in the CLT.
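A quick Monte Carlo check (a Python sketch, not part of the original answer) makes this concrete: a sum of $n$ Poisson$(1/n)$ variables is exactly Poisson$(1)$, whose skewness stays at $1$ no matter how large $n$ gets, so the standardized sum cannot approach a normal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 50_000

# Each row sums n iid Poisson(1/n) variables; such a sum is exactly Poisson(1).
sums = rng.poisson(1.0 / n, size=(reps, n)).sum(axis=1)

mean, var = sums.mean(), sums.var()
skew = ((sums - mean) ** 3).mean() / var ** 1.5

# Poisson(1) has mean 1, variance 1 and skewness 1; the skewness does not
# shrink as n grows, so the limit cannot be the standard normal.
print(mean, var, skew)
```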
Sum of $n$ Poisson random variables with parameter $\frac 1 n$

This 'problem' arises because the number $n$, whose limit is taken in the central limit theorem (CLT), is coupled to properties of the terms in the sum.

Let $$X_i \sim \text{Pois}(m^{-1})$$ and consider the standardized sum $$S_{m,n} = \frac{\sum_{i=1}^n X_i - n/m}{\sqrt{n/m}} = \frac{\sum_{i=1}^n (X_i - 1/m)}{\sqrt{n/m}} $$
The correct expression of the CLT is the limit with $m$ held fixed:
$$S_{m,n} \xrightarrow{d} N(0,1) \quad \text{as } n \to \infty$$
This is not the same as
$$S_{m,n} \xrightarrow{d} N(0,1) \quad \text{as } (m,n) \to (\infty,\infty)$$
In some cases, this other limit may be true as well, but it will depend on how the parameters $m$ and $n$ go to infinity.
So the application in the question is not correct. But why does it not work?

Intuition
The above is the same point that Dilip Sarwate makes in his answer. The central limit theorem does not cover changing the parameters $m$ and $n$ simultaneously.
Still, while the CLT has technically not been applied correctly, we do feel in an intuitive sense that a sum of many little variables should approach a Gaussian distribution. One might still wonder what is happening and why this occurs. Why doesn't the CLT work in this way?
If we did the same with exponentially distributed variables $X_i \sim \text{Exp}(n)$, the standardized sum of $n$ of those variables would converge to a Gaussian distributed variable. Why is this not the case for the Poisson distributed variables?
Summing makes the shape of distribution less important
The reason is closely related to the Poisson distribution being infinitely divisible. This might be seen as similar to the central limit theorem: some distributions can be expressed as a sum of i.i.d. variables from the same distribution family, and when you sum the variables you get a variable from the same family back. The Gaussian distribution is a well-known example. The Lévy distribution and Cauchy distribution have the same property (and they can also arise as limit distributions when summing many variables).
The 'trick' of the central limit theorem is that summing variables (while shifting and scaling to keep the same mean and variance) makes the specific shape of the distribution less dominant in defining the end result. In the proof of the CLT by means of characteristic functions, you can see this in that only the first terms of a Taylor expansion of the characteristic function count, while the rest becomes negligible.
In terms of cumulants
This is seen more easily in terms of cumulants. We can use the following properties of the $k$-th cumulant:
$$\begin{array}{}
\kappa_{1}(c+X) &=& \kappa_{1}(X) + c \\
\kappa_{k}(c+X) &=& \kappa_{k}(X) \quad \quad \text{for $k \geq 2$} \\
\kappa_{k}(cX) &=& c^k \kappa_{k}(X) \\
\kappa_{k}(X+Y) &=& \kappa_{k}(X)+\kappa_{k}(Y) \\
\end{array}$$
such that
$$\begin{array}{}
\kappa_{1}(S_{m,n}) &=& \mu_S = 0\\
\kappa_{2}(S_{m,n}) &=& \sigma_S^2 = 1\\
\kappa_{k}(S_{m,n}) &=& n \left( \sqrt{\frac{1}{n\sigma_{X_{m}}^2}} \right)^k \kappa_k(X_{m}) \quad \quad \text{for $k \geq 3$}
\end{array}$$
With $X_m$ we denote the variable $X$ with parameter $m$.
For the Poisson distribution with mean $1/m$, the cumulant generating function (the log of the moment generating function) is:
$$g(t) = \log M(t) = \frac{e^t-1}{m}$$
The $k$-th cumulant is the $k$-th derivative at the point $t=0$:
$$\kappa_k = g^{(k)}(0) = \frac{1}{m}$$
So when we increase the number of variables $n$, the higher-order cumulants decrease, the shape of the distribution becomes less important, and the result looks more like a Gaussian distribution. The reason this does not happen in the case of the question is that increasing the parameter $m$ along with $n$ makes the shape more pronounced and counters the effect of the central limit theorem.
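As a numeric check (a small Python sketch, not part of the original answer; the helper name kappa_S is mine), the relations above give $\kappa_k(S_{m,n}) = n^{1-k/2}\, m^{k/2-1}$ for $k \geq 3$. With $m$ fixed, the higher cumulants vanish as $n$ grows; with the coupled choice $m = n$, every cumulant equals $1$, which is the cumulant sequence of a Poisson$(1)$ variable.

```python
def kappa_S(k, m, n):
    # kappa_k of the standardized sum of n Poisson(1/m) variables, k >= 3:
    # n * (n/m)**(-k/2) * (1/m) = n**(1 - k/2) * m**(k/2 - 1)
    return n ** (1 - k / 2) * m ** (k / 2 - 1)

# Fixed m = 1: the third cumulant shrinks like n**(-1/2) -> Gaussian limit.
print([kappa_S(3, 1, n) for n in (10, 100, 10_000)])

# Coupled m = n (the setup of the question): every cumulant stays at 1,
# the cumulant sequence of a Poisson(1) variable.
print([kappa_S(k, 1000, 1000) for k in (3, 4, 5)])
```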
Generalizing
In terms of the cumulants, we can see a generalization of the situation. Suppose the increase of the number of variables $n$ in the sum is associated with a change of the parameters of the distribution of the variables in the sum. If this is such that the higher-order cumulants of the standardized variable increase at a rate of at least $n^{k/2-1}$,
$$ \liminf_{n \to \infty} \frac{\kappa_k(X_n/\sigma_{X_n})}{n^{k/2-1}} = \liminf_{n \to \infty} \frac{\kappa_k(X_n)}{\kappa_2(X_n)^{k/2}} \frac{1}{ n^{k/2-1}} > 0$$
then the higher-order cumulants of the summation won't decrease in the limit and the distribution does not approach a Gaussian distribution.
Example 1
Let's try to do the same with a Bernoulli distribution. The ratio of the third cumulant to the $3/2$ power of the second is
$$\frac{\kappa_3(X_n)}{\kappa_2(X_n)^{3/2}} = \frac{1-2p}{ \sqrt{p(1-p)}}$$
and if we set this equal to $\sqrt{n}$, i.e. use $n = \lceil (1-2p)^2/(p(1-p)) \rceil$, then the summation should not approach a Gaussian distribution.
What we get is a sum of $n$ Bernoulli variables with approximately $p \approx 1/n$, and this converges to a Poisson distribution.
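A quick check (a Python sketch using only the standard library; not part of the original answer) compares the pmf of a sum of $n$ Bernoulli$(1/n)$ variables, i.e. a Binomial$(n, 1/n)$, with the Poisson$(1)$ limit:

```python
import math

n = 1000
p = 1 / n

def binom_pmf(k):
    # pmf of the sum of n iid Bernoulli(p) variables
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam=1.0):
    return math.exp(-lam) * lam**k / math.factorial(k)

# The two pmfs already agree to about 3 decimal places at n = 1000.
for k in range(5):
    print(k, round(binom_pmf(k), 4), round(poisson_pmf(k), 4))
```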
Example 2
Let's try it with the exponential distribution.
$$\frac{\kappa_3(X_n)}{\kappa_2(X_n)^{3/2}} = \frac{2\lambda^{-3}}{(\lambda^{-2})^{3/2}} = 2$$
This time there is no dependence on the parameter and no matter how we adjust $\lambda$ as a function of $n$, the normalized sum will always approach a Gaussian distribution.
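A Monte Carlo sketch (Python with numpy, not part of the original answer; the function name mc_skewness is mine) confirms this: whatever the rate, the skewness of the standardized sum behaves like $2/\sqrt{n}$ and vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_skewness(n, reps=10_000):
    # Sum of n iid Exp(n) variables (rate n, i.e. scale 1/n).
    s = rng.exponential(scale=1.0 / n, size=(reps, n)).sum(axis=1)
    z = (s - s.mean()) / s.std()
    return (z ** 3).mean()

# Theory: the skewness of the sum is 2/sqrt(n) whatever the rate is,
# so coupling the rate to n cannot stop the convergence to a Gaussian.
for n in (10, 100, 1000):
    print(n, mc_skewness(n), 2 / np.sqrt(n))
```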
Example 3
Let's try it with the gamma distribution with fixed $\theta = 1$.
$$\frac{\kappa_3(X_n)}{\kappa_2(X_n)^{3/2}} = \frac{2k}{k^{3/2}} = \frac{2}{\sqrt{k}}$$
So if we let $k = 1/n$ then we should get a distribution that approaches something that is not a Gaussian distribution.
Knowing the properties of the gamma distribution (sums of gamma variables with the same scale are gamma as well), this approaches a gamma distribution with parameter $k=1$, i.e. a standard exponential distribution.
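A simulation sketch (Python with numpy, not part of the original answer) checks this: sums of $n$ Gamma$(1/n, 1)$ variables behave like a standard exponential, which is the gamma distribution with $k=1$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 100, 50_000

# A sum of n iid Gamma(shape = 1/n, scale = 1) variables is Gamma(1, 1),
# i.e. a standard exponential distribution -- not a Gaussian.
sums = rng.gamma(shape=1.0 / n, scale=1.0, size=(reps, n)).sum(axis=1)

# Exp(1) has mean 1, variance 1 and median log(2).
print(sums.mean(), sums.var(), (sums <= np.log(2)).mean())
```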
Can Z values be thought of as the number of standard deviations?

No. The z-score is not 'the number of standard deviations'. Rather, the z-score of a value is the number of standard deviations that value lies above the mean. A z-score of 1.7 is 1.7 standard deviations above the mean. A z-score of -1 is one standard deviation below the mean, and so on.

This is not mere nitpicking; it is essential to conveying your meaning correctly. I have seen exactly this imprecision in relation to z-scores lead to errors on numerous occasions. Stats is not the place for woolly thinking and muddled words; it is tricky enough when you say exactly what you mean.
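As a minimal illustration (a Python sketch; the function name z_score is mine), the sign of the z-score carries the direction:

```python
# The z-score is the signed number of standard deviations a value lies
# above (positive) or below (negative) the mean.
def z_score(x, mean, sd):
    return (x - mean) / sd

print(z_score(120, 100, 10))  # 2.0: two standard deviations above the mean
print(z_score(85, 100, 10))   # -1.5: 1.5 standard deviations below the mean
```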
Can Z values be thought of as the number of standard deviations?

Yes. The Z value of a particular data point tells you how many standard deviations it is from its mean. Z=0 means it has the same value as the population mean, Z=-1 means it is 1 standard deviation below its mean, etc. The probability that a normally distributed observation lies within the interval of its population mean plus/minus two standard deviations is approximately 95%. This is the connection between z-scores and confidence intervals.
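The '95%' figure is itself an approximation; for a normal variable the exact probability of falling within two standard deviations of the mean is about 95.45%, which a one-line computation (a Python sketch, not part of the original answer) confirms:

```python
import math

# Exact probability that a normal variable falls within 2 standard
# deviations of its mean: P(-2 < Z < 2) = erf(2 / sqrt(2)).
p = math.erf(2 / math.sqrt(2))
print(round(p, 4))  # 0.9545
```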
Can Z values be thought of as the number of standard deviations?

No.

Sometimes z-score refers to a quantile-normalized z-score, where the quantiles of the distribution are mapped to the quantiles of a standard normal. By construction, such a z-score of one is then bigger than $\Phi(1) \approx 84.13\%$ of the values in the distribution, regardless of how many standard deviations from the mean a value is.
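To see why the distinction matters (a Python sketch, not part of the original answer): for a skewed distribution such as Exp$(1)$, the fraction of values below the point with ordinary z-score $1$ differs from the normal's $\Phi(1)$; quantile normalization forces the two to agree by construction.

```python
import math

# For an Exp(1) variable, mean = sd = 1, so the value with ordinary
# z-score 1 is x = 2; the fraction of the distribution below it is:
p_exp = 1 - math.exp(-2)
# For a normal variable, the fraction below z = 1 is Phi(1):
p_norm = 0.5 * (1 + math.erf(1 / math.sqrt(2)))
print(round(p_exp, 4), round(p_norm, 4))  # 0.8647 vs 0.8413
```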
If the sum of the probabilities of events is equal to the probability of their union, does that imply that the events are disjoint?

No, but you can conclude that the probability of any shared event is zero.

Disjoint means that $A_i \cap A_j=\emptyset$ for any $i\ne j$. You cannot conclude that, but you can conclude that $P(A_i \cap A_j)=0$ for all $i\ne j$: any shared elements must have probability zero. The same goes for all higher-order intersections as well.

In other words, you can say, with probability 1, that none of the events occur together.

I have seen such sets called almost disjoint or almost surely disjoint, but I don't think such terminology is standard.
If the sum of the probabilities of events is equal to the probability of their union, does that imply that the events are disjoint?

Not really. For example, consider the uniform distribution on $[0,1]$.

Let $A_1 = [0,0.5) \cup (\mathbb{Q} \cap [0,1])$ and $A_2=[0.5,1] \cup (\mathbb{Q} \cap [0,1])$ and $A_i =\emptyset$ for $i>2$.

Then $P(A_1)=0.5$ and $P(A_2)=0.5$, and they sum to $1$, but the events are not disjoint: $A_1 \cap A_2 = \mathbb{Q} \cap [0,1] \neq \emptyset$.

They can still intersect in a set of probability measure $0$.
What's the difference between statistics and informatics?

Excellent question!!

I have heard several times that bioinformaticians can go without biostatistics, or even without statistics. That's perfectly true until it becomes false. In my opinion, the general lack of statistical knowledge has a disastrous effect in the field, as shown by Keith Baggerly. I have also observed that a lack of basic knowledge in statistics (and linear algebra) causes stagnation for bioinformaticians in the long run: without a deep knowledge of the theory, they tend to reinvent the wheel and resort to ad hoc solutions that solve nothing but their own problem.
But now, to answer your question: I agree that, overall, statistics can't do without computers these days. Yet one of the major aspects of statistics is inference, which has nothing to do with computers. Statistical inference is actually what makes statistics a science, because it tells you whether or not your conclusions hold up in other contexts.
In short, you can analyze the hell out of your data; you will still need statistics to know the validity of the predictions or decisions you make based on your analyses.
What's the difference between statistics and informatics?

My view is that while there is a fair amount of overlap between the fields, there are also key differences. In general, a statistics student (in the higher degrees) will take more theory classes (math and mathematical statistics) than the informatics student, but the informatics student will learn more of the computing (especially the database) side.
Developing a new statistical test would fall more to the statistician than the informaticist, but designing an interface for a user to enter data and produce tables and plots would fall more to the informaticist than the statistician.
To the statistician the computer is a tool to help with statistics. To the informaticist statistics are a tool to help collect and distribute information (via computer generally).
Edit below here -----
To expand, here is an example. I have worked on projects with informaticists (I am the statistician) where a medical doctor wants a system in which information on patients is used to predict their risk of some condition (developing a blood clot, for example) and wants to receive some form of alert about the risk. My role in the project (the statistician role) is to develop a model that will predict risk given the predictor variables (a logistic regression model is one such model). The informaticist role in the project is to develop the tools that collect the predictor variables, apply my model to them, and then send the results to the doctor. The data may be collected from an electronic medical record or through a data entry screen for a nurse to fill in, among other ways. The alert to the doctor may be a pop-up on the computer, a text message sent to their cell phone, or something else.
Now I (and many other statisticians) know enough of the programming that I could query a database to get the predictors and create some type of alert, but I am happy to leave that to the informaticists (and they are better at it anyway). There are informaticists who know enough statistics to fit the logistic regression model. So a simple version of this project could be done by only a statistician, or only an informaticist, but it is best when both work together. If you look at this project and think the modeling part is the fun part, and the data collection, alerts, and other interfaces are just tools to move the information to and from the model, then you are more of a statistician. If you see designing the interface, optimizing the data retrieval, testing different types of alerts, etc. as the fun part, and the statistical model as just a tool to convert one part of your data into the other part, then you are more of an informaticist.
What's the difference between statistics and informatics?

Statistics infers from data; informatics operates on data. Of course they overlap, but the question of which has the larger scope has no answer.
24,711 | Fitting SIR model with 2019-nCoV data doesn't converge | There are several points that you can improve in the code
Wrong boundary conditions
Your model is fixed to I=1 at time zero. You can either change this point to the observed value or add a parameter to the model that shifts the time accordingly.
init <- c(S = N-1, I = 1, R = 0)
# should be
init <- c(S = N-Infected[1], I = Infected[1], R = 0)
Unequal parameter scales
As other people have noted the equation
$$I' = \beta \cdot S \cdot I - \gamma \cdot I$$
has a very large value for $S \cdot I$. This makes the value of the parameter $\beta$ very small, and the algorithm, which checks whether the step sizes in the iterations reach some threshold, will not vary the steps in $\beta$ and $\gamma$ equally (changes in $\beta$ will have a much larger effect than changes in $\gamma$).
You can change the scale in the call to the optim function to correct for these differences in size (and checking the hessian allows you to see whether it works). This is done by using a control parameter. In addition, you might want to solve the function in segregated steps, making the optimization of the two parameters independent from each other (see more here: How to deal with unstable estimates during curve fitting?; this is also done in the code below, and the result is much better convergence, although you still reach the limits of your lower and upper bounds)
Opt <- optim(c(2*coefficients(mod)[2]/N, coefficients(mod)[2]), RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper,
hessian = TRUE, control = list(parscale = c(1/N,1),factr = 1))
More intuitive might be to scale the parameter inside the model function (note the term beta/N in place of beta):
SIR <- function(time, state, parameters) {
par <- as.list(c(state, parameters))
with(par, { dS <- -beta/N * S * I
dI <- beta/N * S * I - gamma * I
dR <- gamma * I
list(c(dS, dI, dR))
})
}
Starting condition
Because the value of $S$ is more or less constant in the beginning (namely $S \approx N$), the expression for the infected can initially be solved as a single equation:
$$I' \approx (\beta \cdot N - \gamma) \cdot I $$
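This linear approximation has an explicit exponential solution, which is what justifies the fit below:
$$I(t) \approx I(0) \, e^{(\beta \cdot N - \gamma)\, t}$$
so the rate $b$ of an exponential fit $a e^{b \cdot t}$ estimates the initial growth rate $\beta \cdot N - \gamma$.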
So you can find a starting condition using an initial exponential fit:
# get a good starting condition
mod <- nls(Infected ~ a*exp(b*day),
start = list(a = Infected[1],
b = log(Infected[2]/Infected[1])))
Unstable, correlation between $\beta$ and $\gamma$
There is a bit of ambiguity in how to choose $\beta$ and $\gamma$ for the starting condition.
This will also make the outcome of your analysis not so stable. The error in the individual parameters $\beta$ and $\gamma$ will be very large because many pairs of $\beta$ and $\gamma$ will give a more or less similarly low RSS.
The plot below is for the solution $\beta = 0.8310849; \gamma = 0.4137507 $
However the adjusted Opt_par value $\beta = 0.8310849-0.2; \gamma = 0.4137507-0.2$ works just as well:
Using a different parameterization
The optim function allows you to read out the hessian
> Opt <- optim(optimsstart, RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper,
+ hessian = TRUE)
> Opt$hessian
b
b 7371274104 -7371294772
-7371294772 7371315619
The hessian can be related to the variance of the parameters (In R, given an output from optim with a hessian matrix, how to calculate parameter confidence intervals using the hessian matrix?). But note that for this purpose you need the Hessian of the log likelihood, which is not the same as the Hessian of the RSS (it differs by a factor, see the code below).
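For a least-squares fit with (assumed) i.i.d. Gaussian errors, the Hessian of the negative log likelihood is the Hessian of the RSS divided by $2\sigma^2$, so the parameter covariance is estimated as in the code below:
$$\hat\sigma^2 = \frac{\text{RSS}}{n-1}, \qquad \widehat{\text{Cov}}(\hat\beta,\hat\gamma) \approx \left(\frac{1}{2\hat\sigma^2} H_{\text{RSS}}\right)^{-1}$$
with $n$ the number of observations.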
Based on this you can see that the estimate of the sample variance of the parameters is very large (which means that your results/estimates are not very accurate). But also note that the error is a lot correlated. This means that you can change the parameters such that the outcome is not very correlated. Some example parameterization would be:
$$\begin{array}{rcl}
c &=& \beta - \gamma \\
R_0 &=& \frac{\beta}{\gamma}
\end{array}$$
such that the old equations (note a scaling by 1/N is used):
$$\begin{array}{rccl}
S^\prime &=& - \beta \frac{S}{N}& I\\
I^\prime &=& (\beta \frac{S}{N}-\gamma)& I\\
R^\prime &=& \gamma &I
\end{array}
$$
become
$$\begin{array}{rccl}
S^\prime &=& -c\frac{R_0}{R_0-1} \frac{S}{N}& I&\\
I^\prime &=& c\frac{(S/N) R_0 - 1}{R_0-1} &I& \underbrace{\approx c I}_{\text{for $t=0$ when $S/N \approx 1$}}\\
R^\prime &=& c \frac{1}{R_0-1}& I&
\end{array}
$$
which is especially appealing since you get this approximate $I^\prime = cI$ for the beginning. This will make you see that you are basically estimating the first part which is approximately exponential growth. You will be able to very accurately determine the growth parameter, $c = \beta - \gamma$. However, $\beta$ and $\gamma$, or $R_0$, can not be easily determined.
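For reference, inverting this change of variables gives the expressions used inside the SIR2 function in the code below:
$$\beta = \frac{c}{1-1/R_0}, \qquad \gamma = \frac{c}{R_0-1}$$
(so that $\beta - \gamma = c$ and $\beta/\gamma = R_0$).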
In the code below a simulation is made with the same value $c = \beta - \gamma$ but with different values for $R_0 = \beta / \gamma$. You can see that the data does not allow us to differentiate which scenario (which $R_0$) we are dealing with (we would need more information, e.g. the locations of each infected individual, and to try to see how the infection spreads out).
It is interesting that several articles already claim to have reasonable estimates of $R_0$. For instance, this preprint: Novel coronavirus 2019-nCoV: early estimation of epidemiological parameters and epidemic predictions (https://doi.org/10.1101/2020.01.23.20018549)
Some code:
####
####
####
library(deSolve)
library(RColorBrewer)
#https://en.wikipedia.org/wiki/Timeline_of_the_2019%E2%80%9320_Wuhan_coronavirus_outbreak#Cases_Chronology_in_Mainland_China
Infected <- c(45, 62, 121, 198, 291, 440, 571, 830, 1287, 1975, 2744, 4515)
day <- 0:(length(Infected)-1)
N <- 1400000000 #pop of china
###edit 1: use different boundary condition
###init <- c(S = N-1, I = 1, R = 0)
init <- c(S = N-Infected[1], I = Infected[1], R = 0)
plot(day, Infected)
SIR <- function(time, state, parameters) {
par <- as.list(c(state, parameters))
####edit 2; use equally scaled variables
with(par, { dS <- -beta * (S/N) * I
dI <- beta * (S/N) * I - gamma * I
dR <- gamma * I
list(c(dS, dI, dR))
})
}
SIR2 <- function(time, state, parameters) {
par <- as.list(c(state, parameters))
####
#### use as change of variables variable
#### const = (beta-gamma)
#### delta = gamma/beta
#### R0 = beta/gamma > 1
####
#### beta-gamma = beta*(1-delta)
#### beta-gamma = beta*(1-1/R0)
#### gamma = beta/R0
with(par, {
beta <- const/(1-1/R0)
gamma <- const/(R0-1)
dS <- -(beta * (S/N) ) * I
dI <- (beta * (S/N)-gamma) * I
dR <- ( gamma) * I
list(c(dS, dI, dR))
})
}
RSS.SIR2 <- function(parameters) {
names(parameters) <- c("const", "R0")
out <- ode(y = init, times = day, func = SIR2, parms = parameters)
fit <- out[ , 3]
RSS <- sum((Infected - fit)^2)
return(RSS)
}
RSS.SIR <- function(parameters) {
names(parameters) <- c("beta", "gamma")
out <- ode(y = init, times = day, func = SIR, parms = parameters)
fit <- out[ , 3]
RSS <- sum((Infected - fit)^2)
return(RSS)
}
lower = c(0, 0)
upper = c(1, 1) ###adjust limit because different scale 1/N
### edit: get a good starting condition
mod <- nls(Infected ~ a*exp(b*day),
start = list(a = Infected[1],
b = log(Infected[2]/Infected[1])))
optimsstart <- c(2,1)*coef(mod)[2]
set.seed(12)
Opt <- optim(optimsstart, RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper,
hessian = TRUE)
Opt
### estimated covariance matrix of coefficients
### note the large error, but also strong correlation (nearly 1)
## note scaling with estimate of sigma because we need to use Hessian of loglikelihood
sigest <- sqrt(Opt$value/(length(Infected)-1))
solve(1/(2*sigest^2)*Opt$hessian)
####
#### using alternative parameters
#### for this we use the function SIR2
####
optimsstart <- c(coef(mod)[2],5)
lower = c(0, 1)
upper = c(1, 10^3) ### adjust limit because we use R0 now which should be >1
set.seed(12)
Opt2 <- optim(optimsstart, RSS.SIR2, method = "L-BFGS-B",lower=lower, upper=upper,
hessian = TRUE, control = list(maxit = 1000,
parscale = c(10^-3,1)))
Opt2
# now the estimated variance of the 1st parameter is small
# the 2nd parameter is still with large variance
#
# thus we can predict beta - gamma very well
# this beta - gamma is the initial growth coefficient
# but the individual values of beta and gamma are not very well known
#
# also note that hessian is not at the MLE since we hit the lower boundary
#
sigest <- sqrt(Opt2$value/(length(Infected)-1))
solve(1/(2*sigest^2)*Opt2$hessian)
#### We can also estimated variance by
#### Monte Carlo estimation
##
## assuming data to be distributed as mean +/- q mean
## with q such that mean RSS = 52030
##
##
##
### Two functions RSS to do the optimization in a nested way
RSS.SIRMC2 <- function(const,R0) {
parameters <- c(const=const, R0=R0)
out <- ode(y = init, times = day, func = SIR2, parms = parameters)
fit <- out[ , 3]
RSS <- sum((Infected_MC - fit)^2)
return(RSS)
}
RSS.SIRMC <- function(const) {
optimize(RSS.SIRMC2, lower=1,upper=10^5,const=const)$objective
}
getOptim <- function() {
opt1 <- optimize(RSS.SIRMC,lower=0,upper=1)
opt2 <- optimize(RSS.SIRMC2, lower=1,upper=10^5,const=opt1$minimum)
return(list(RSS=opt2$objective,const=opt1$minimum,R0=opt2$minimum))
}
# modeled data that we use to repeatedly generate data with noise
Opt_par <- Opt2$par
names(Opt_par) <- c("const", "R0")
modInfected <- data.frame(ode(y = init, times = day, func = SIR2, parms = Opt_par))$I
# doing the nested model to get RSS
set.seed(1)
Infected_MC <- Infected
modnested <- getOptim()
errrate <- modnested$RSS/sum(Infected)
par <- c(0,0)
for (i in 1:100) {
Infected_MC <- rnorm(length(modInfected),modInfected,(modInfected*errrate)^0.5)
OptMC <- getOptim()
par <- rbind(par,c(OptMC$const,OptMC$R0))
}
par <- par[-1,]
plot(par, xlab = "const",ylab="R0",ylim=c(1,1))
title("Monte Carlo simulation")
cov(par)
###conclusion: the parameter R0 can not be reliably estimated
##### End of Monte Carlo estimation
### plotting different values R0
# use the ordinary exponential model to determine const = beta - gamma
const <- coef(mod)[2]
R0 <- 1.1
# graph
plot(-100,-100, xlim=c(0,80), ylim = c(1,N), log="y",
ylab = "infected", xlab = "days", yaxt = "n")
axis(2, las=2, at=10^c(0:9),
labels=c(expression(1),
expression(10^1),
expression(10^2),
expression(10^3),
expression(10^4),
expression(10^5),
expression(10^6),
expression(10^7),
expression(10^8),
expression(10^9)))
axis(2, at=rep(c(2:9),9)*rep(10^c(0:8),each=8), labels=rep("",8*9),tck=-0.02)
title(bquote(paste("scenario's for different ", R[0])), cex.main = 1)
# time
t <- seq(0,60,0.1)
# plot model with different R0
for (R0 in c(1.1,1.2,1.5,2,3,5,10)) {
fit <- data.frame(ode(y = init, times = t, func = SIR2, parms = c(const,R0)))$I
lines(t,fit)
text(t[601],fit[601],
bquote(paste(R[0], " = ",.(R0))),
cex=0.7,pos=4)
}
# plot observations
points(day,Infected)
How is R0 estimated?
The graph above (which is repeated below) showed that there is not much variation in the number of 'infected' as a function of $R_0$, and the data on the number of infected people does not provide much information about $R_0$ (except whether it is above or below one).
However, for the SIR model there is a large variation in the number of recovered, or the ratio infected/recovered. This is shown in the image below, where the model is plotted not only for the number of infected people but also for the number of recovered people. It is such information (as well as additional data, like detailed information on where and when people got infected and with whom they had contact) that allows the estimation of $R_0$.
Update
In your blog article you write that the fit is leading to a value of $R_0 \approx 2$.
However, that is not the correct solution. You find this value only because optim terminates early once it has found a good enough solution, when the improvements for a given step size in the vector $(\beta, \gamma)$ become small.
When you use the nested optimization, you will find a more precise solution with an $R_0$ very close to 1.
We see this value $R_0 \approx 1$ because that is how the (wrong) model is able to fit this change in the growth rate into the curve.
###
####
####
library(deSolve)
library(RColorBrewer)
#https://en.wikipedia.org/wiki/Timeline_of_the_2019%E2%80%9320_Wuhan_coronavirus_outbreak#Cases_Chronology_in_Mainland_China
Infected <- c(45,62,121,198,291,440,571,830,1287,1975,
2744,4515,5974,7711,9692,11791,14380,17205,20440)
#Infected <- c(45,62,121,198,291,440,571,830,1287,1975,
# 2744,4515,5974,7711,9692,11791,14380,17205,20440,
# 24324,28018,31161,34546,37198,40171,42638,44653)
day <- 0:(length(Infected)-1)
N <- 1400000000 #pop of china
init <- c(S = N-Infected[1], I = Infected[1], R = 0)
# model function
SIR2 <- function(time, state, parameters) {
par <- as.list(c(state, parameters))
with(par, {
beta <- const/(1-1/R0)
gamma <- const/(R0-1)
dS <- -(beta * (S/N) ) * I
dI <- (beta * (S/N)-gamma) * I
dR <- ( gamma) * I
list(c(dS, dI, dR))
})
}
### Two functions RSS to do the optimization in a nested way
RSS.SIRMC2 <- function(R0,const) {
parameters <- c(const=const, R0=R0)
out <- ode(y = init, times = day, func = SIR2, parms = parameters)
fit <- out[ , 3]
RSS <- sum((Infected_MC - fit)^2)
return(RSS)
}
RSS.SIRMC <- function(const) {
optimize(RSS.SIRMC2, lower=1,upper=10^5,const=const)$objective
}
# wrapper to optimize and return estimated values
getOptim <- function() {
opt1 <- optimize(RSS.SIRMC,lower=0,upper=1)
opt2 <- optimize(RSS.SIRMC2, lower=1,upper=10^5,const=opt1$minimum)
return(list(RSS=opt2$objective,const=opt1$minimum,R0=opt2$minimum))
}
# doing the nested model to get RSS
Infected_MC <- Infected
modnested <- getOptim()
rss <- sapply(seq(0.3,0.5,0.01),
FUN = function(x) optimize(RSS.SIRMC2, lower=1,upper=10^5,const=x)$objective)
plot(seq(0.3,0.5,0.01),rss)
optimize(RSS.SIRMC2, lower=1,upper=10^5,const=0.35)
# view
modnested
### plotting different values R0
const <- modnested$const
R0 <- modnested$R0
# graph
plot(-100,-100, xlim=c(0,80), ylim = c(1,6*10^4), log="",
ylab = "infected", xlab = "days")
title(bquote(paste("scenario's for different ", R[0])), cex.main = 1)
### these are your beta and gamma from the blog
### (note: this reuses the SIR function defined in the first script above)
t <- seq(0,50,0.1)
beta = 0.6746089
gamma = 0.3253912
fit <- data.frame(ode(y = init, times = t, func = SIR, parms = c(beta = beta, gamma = gamma)))$I
lines(t,fit,col=3)
# plot model with different R0
for (R0 in c(modnested$R0,1.07,1.08,1.09,1.1,1.11)) {
fit <- data.frame(ode(y = init, times = t, func = SIR2, parms = c(const,R0)))$I
lines(t,fit,col=1+(modnested$R0==R0))
text(t[501],fit[501],
bquote(paste(R[0], " = ",.(R0))),
cex=0.7,pos=4,col=1+(modnested$R0==R0))
}
# plot observations
points(day,Infected, cex = 0.7)
If we use the relation between recovered and infected people $R^\prime = c (R_0-1)^{-1} I$ then we also see the opposite, namely a large $R_0$ of around 18:
I <- c(45,62,121,198,291,440,571,830,1287,1975,2744,4515,5974,7711,9692,11791,14380,17205,20440, 24324,28018,31161,34546,37198,40171,42638,44653)
D <- c(2,2,2,3,6,9,17,25,41,56,80,106,132,170,213,259,304,361,425,490,563,637,722,811,908,1016,1113)
R <- c(12,15,19,25,25,25,25,34,38,49,51,60,103,124,171,243,328,475,632,892,1153,1540,2050,2649,3281,3996,4749)
A <- I-D-R
plot(A[-27],diff(R+D))
mod <- lm(diff(R+D) ~ A[-27])
giving:
> const
[1] 0.3577354
> const/mod$coefficients[2]+1
A[-27]
17.87653
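This estimate follows directly from the recovery equation: the slope of the regression of diff(R+D) on the active cases A estimates the recovery rate $\gamma$, and so
$$(R+D)^\prime = \gamma I = \frac{c}{R_0-1}\, I \quad\Rightarrow\quad R_0 \approx \frac{c}{\text{slope}} + 1$$
which is the computation const/mod$coefficients[2]+1 above.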
This is a restriction of the SIR model, which models $R_0 = \frac{\beta}{\gamma}$ where $\frac{1}{\gamma}$ is the period for which somebody is sick (the time from Infected to Recovered), but that need not be the time that somebody is infectious. In addition, the compartment model is limited since the age of patients (how long one has been sick) is not taken into account; each age should be considered as a separate compartment.
But in any case, if the numbers from wikipedia are meaningful (they may be doubted) then only 2% of the active/infected recover daily, and thus the $\gamma$ parameter seems to be small (no matter what model you use).
Wrong boundary conditions
Your model is fixed to I=1 for time zero. You can either changes this point to the observed value or add a paramete | Fitting SIR model with 2019-nCoV data doesn't conververge
There are several points that you can improve in the code
Wrong boundary conditions
Your model is fixed to I=1 for time zero. You can either changes this point to the observed value or add a parameter in the model that shifts the time accordingly.
init <- c(S = N-1, I = 1, R = 0)
# should be
init <- c(S = N-Infected[1], I = Infected[1], R = 0)
Unequal parameter scales
As other people have noted the equation
$$I' = \beta \cdot S \cdot I - \gamma \cdot I$$
has a very large value for $S \cdot I$ this makes that the value of the parameter $\beta$ very small and the algorithm which checks whether the step sizes in the iterations reach some point will not vary the steps in $\beta$ and $\gamma$ equally (the changes in $\beta$ will have a much larger effect than changes in $\gamma$).
You can change scale in the call to the optim function to correct for these differences in size (and checking the hessian allows you to see whether it works a bit). This is done by using a control parameter. In addition you might want to solve the function in segregated steps making the optimization of the two parameters independent from each others (see more here: How to deal with unstable estimates during curve fitting? this is also done in the code below, and the result is much better convergence; although still you reach the limits of your lower and upper bounds)
Opt <- optim(c(2*coefficients(mod)[2]/N, coefficients(mod)[2]), RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper,
hessian = TRUE, control = list(parscale = c(1/N,1),factr = 1))
more intuitive might be to scale the parameter in the function (note the term beta/N in place of beta)
SIR <- function(time, state, parameters) {
par <- as.list(c(state, parameters))
with(par, { dS <- -beta/N * S * I
dI <- beta/N * S * I - gamma * I
dR <- gamma * I
list(c(dS, dI, dR))
})
}
Starting condition
Because the value of $S$ is in the beginning more or less constant (namely $S \approx N$) the expression for the infected in the beginning can be solved as a single equation:
$$I' \approx (\beta \cdot N - \gamma) \cdot I $$
So you can find a starting condition using an initial exponential fit:
# get a good starting condition
mod <- nls(Infected ~ a*exp(b*day),
start = list(a = Infected[1],
b = log(Infected[2]/Infected[1])))
Unstable, correlation between $\beta$ and $\gamma$
There is a bit of ambiguity how to choose $\beta$ and $\gamma$ for the starting condition.
This will also make the outcome of your analysis not so stable. The error in the individual parameters $\beta$ and $\gamma$ will be very large because many pairs of $\beta$ and $\gamma$ will give a more or less similarly low RSS.
The plot below is for the solution $\beta = 0.8310849; \gamma = 0.4137507 $
However the adjusted Opt_par value $\beta = 0.8310849-0.2; \gamma = 0.4137507-0.2$ works just as well:
Using a different parameterization
The optim function allows you to read out the hessian
> Opt <- optim(optimsstart, RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper,
+ hessian = TRUE)
> Opt$hessian
b
b 7371274104 -7371294772
-7371294772 7371315619
The hessian can be related to the variance of the parameters (In R, given an output from optim with a hessian matrix, how to calculate parameter confidence intervals using the hessian matrix?). But note that for this purpose you need the Hessian of the log likelihood which is not the same as the the RSS (it differs by a factor, see the code below).
Based on this you can see that the estimate of the sample variance of the parameters is very large (which means that your results/estimates are not very accurate). But also note that the error is a lot correlated. This means that you can change the parameters such that the outcome is not very correlated. Some example parameterization would be:
$$\begin{array}{}
c &=& \beta - \gamma \\
R_0 &=& \frac{\beta}{\gamma}
\end{array}$$
such that the old equations (note a scaling by 1/N is used):
$$\begin{array}{rccl}
S^\prime &=& - \beta \frac{S}{N}& I\\
I^\prime &=& (\beta \frac{S}{N}-\gamma)& I\\
R^\prime &=& \gamma &I
\end{array}
$$
become
$$\begin{array}{rccl}
S^\prime &=& -c\frac{R_0}{R_0-1} \frac{S}{N}& I&\\
I^\prime &=& c\frac{(S/N) R_0 - 1}{R_0-1} &I& \underbrace{\approx c I}_{\text{for $t=0$ when $S/N \approx 1$}}\\
R^\prime &=& c \frac{1}{R_0-1}& I&
\end{array}
$$
which is especially appealing since you get this approximate $I^\prime = cI$ for the beginning. This will make you see that you are basically estimating the first part which is approximately exponential growth. You will be able to very accurately determine the growth parameter, $c = \beta - \gamma$. However, $\beta$ and $\gamma$, or $R_0$, can not be easily determined.
In the code below a simulation is made with the same value $c=\beta - \gamma$ but with different values for $R_0 = \beta / \gamma$. You can see that the data is not capable to allow us differentiate which different scenario's (which different $R_0$) we are dealing with (and we would need more information, e.g. the locations of each infected individual and trying to see how the infection spread out).
It is interesting that several articles already pretend to have reasonable estimates of $R_0$. For instance this preprint Novel coronavirus 2019-nCoV: early estimation of epidemiological parameters and epidemic predictions (https://doi.org/10.1101/2020.01.23.20018549)
Some code:
####
####
####
library(deSolve)
library(RColorBrewer)
#https://en.wikipedia.org/wiki/Timeline_of_the_2019%E2%80%9320_Wuhan_coronavirus_outbreak#Cases_Chronology_in_Mainland_China
Infected <- c(45, 62, 121, 198, 291, 440, 571, 830, 1287, 1975, 2744, 4515)
day <- 0:(length(Infected)-1)
N <- 1400000000 #pop of china
###edit 1: use different boundary condiotion
###init <- c(S = N-1, I = 1, R = 0)
init <- c(S = N-Infected[1], I = Infected[1], R = 0)
plot(day, Infected)
SIR <- function(time, state, parameters) {
par <- as.list(c(state, parameters))
####edit 2; use equally scaled variables
with(par, { dS <- -beta * (S/N) * I
dI <- beta * (S/N) * I - gamma * I
dR <- gamma * I
list(c(dS, dI, dR))
})
}
SIR2 <- function(time, state, parameters) {
par <- as.list(c(state, parameters))
####
#### use as change of variables variable
#### const = (beta-gamma)
#### delta = gamma/beta
#### R0 = beta/gamma > 1
####
#### beta-gamma = beta*(1-delta)
#### beta-gamma = beta*(1-1/R0)
#### gamma = beta/R0
with(par, {
beta <- const/(1-1/R0)
gamma <- const/(R0-1)
dS <- -(beta * (S/N) ) * I
dI <- (beta * (S/N)-gamma) * I
dR <- ( gamma) * I
list(c(dS, dI, dR))
})
}
RSS.SIR2 <- function(parameters) {
names(parameters) <- c("const", "R0")
out <- ode(y = init, times = day, func = SIR2, parms = parameters)
fit <- out[ , 3]
RSS <- sum((Infected - fit)^2)
return(RSS)
}
### plotting different values R0
# use the ordinary exponential model to determine const = beta - gamma
const <- coef(mod)[2]
RSS.SIR <- function(parameters) {
names(parameters) <- c("beta", "gamma")
out <- ode(y = init, times = day, func = SIR, parms = parameters)
fit <- out[ , 3]
RSS <- sum((Infected - fit)^2)
return(RSS)
}
lower = c(0, 0)
upper = c(1, 1) ###adjust limit because different scale 1/N
### edit: get a good starting condition
mod <- nls(Infected ~ a*exp(b*day),
start = list(a = Infected[1],
b = log(Infected[2]/Infected[1])))
optimsstart <- c(2,1)*coef(mod)[2]
set.seed(12)
Opt <- optim(optimsstart, RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper,
hessian = TRUE)
Opt
### estimated covariance matrix of coefficients
### note the large error, but also strong correlation (nearly 1)
## note scaling with estimate of sigma because we need to use Hessian of loglikelihood
sigest <- sqrt(Opt$value/(length(Infected)-1))
solve(1/(2*sigest^2)*Opt$hessian)
####
#### using alternative parameters
#### for this we use the function SIR2
####
optimsstart <- c(coef(mod)[2],5)
lower = c(0, 1)
upper = c(1, 10^3) ### adjust limit because we use R0 now which should be >1
set.seed(12)
Opt2 <- optim(optimsstart, RSS.SIR2, method = "L-BFGS-B",lower=lower, upper=upper,
hessian = TRUE, control = list(maxit = 1000,
parscale = c(10^-3,1)))
Opt2
# now the estimated variance of the 1st parameter is small
# the 2nd parameter is still with large variance
#
# thus we can predict beta - gamma very well
# this beta - gamma is the initial growth coefficient
# but the individual values of beta and gamma are not very well known
#
# also note that hessian is not at the MLE since we hit the lower boundary
#
sigest <- sqrt(Opt2$value/(length(Infected)-1))
solve(1/(2*sigest^2)*Opt2$hessian)
#### We can also estimated variance by
#### Monte Carlo estimation
##
## assuming data to be distributed as mean +/- q mean
## with q such that mean RSS = 52030
##
##
##
### Two functions RSS to do the optimization in a nested way
RSS.SIRMC2 <- function(const,R0) {
parameters <- c(const=const, R0=R0)
out <- ode(y = init, times = day, func = SIR2, parms = parameters)
fit <- out[ , 3]
RSS <- sum((Infected_MC - fit)^2)
return(RSS)
}
RSS.SIRMC <- function(const) {
optimize(RSS.SIRMC2, lower=1,upper=10^5,const=const)$objective
}
getOptim <- function() {
opt1 <- optimize(RSS.SIRMC,lower=0,upper=1)
opt2 <- optimize(RSS.SIRMC2, lower=1,upper=10^5,const=opt1$minimum)
return(list(RSS=opt2$objective,const=opt1$minimum,R0=opt2$minimum))
}
# modeled data that we use to repeatedly generate data with noise
Opt_par <- Opt2$par
names(Opt_par) <- c("const", "R0")
modInfected <- data.frame(ode(y = init, times = day, func = SIR2, parms = Opt_par))$I
# doing the nested model to get RSS
set.seed(1)
Infected_MC <- Infected
modnested <- getOptim()
errrate <- modnested$RSS/sum(Infected)
par <- c(0,0)
for (i in 1:100) {
Infected_MC <- rnorm(length(modInfected),modInfected,(modInfected*errrate)^0.5)
OptMC <- getOptim()
par <- rbind(par,c(OptMC$const,OptMC$R0))
}
par <- par[-1,]
plot(par, xlab = "const",ylab="R0",ylim=c(1,1))
title("Monte Carlo simulation")
cov(par)
###conclusion: the parameter R0 can not be reliably estimated
##### End of Monte Carlo estimation
### plotting different values R0
# use the ordinary exponential model to determine const = beta - gamma
const <- coef(mod)[2]
R0 <- 1.1
# graph
plot(-100,-100, xlim=c(0,80), ylim = c(1,N), log="y",
ylab = "infected", xlab = "days", yaxt = "n")
axis(2, las=2, at=10^c(0:9),
labels=c(expression(1),
expression(10^1),
expression(10^2),
expression(10^3),
expression(10^4),
expression(10^5),
expression(10^6),
expression(10^7),
expression(10^8),
expression(10^9)))
axis(2, at=rep(c(2:9),9)*rep(10^c(0:8),each=8), labels=rep("",8*9),tck=-0.02)
title(bquote(paste("scenario's for different ", R[0])), cex.main = 1)
# time
t <- seq(0,60,0.1)
# plot model with different R0
for (R0 in c(1.1,1.2,1.5,2,3,5,10)) {
fit <- data.frame(ode(y = init, times = t, func = SIR2, parms = c(const,R0)))$I
lines(t,fit)
text(t[601],fit[601],
bquote(paste(R[0], " = ",.(R0))),
cex=0.7,pos=4)
}
# plot observations
points(day,Infected)
How is R0 estimated?
The graph above (which is repeated below) showed that there is not much variation in the number of 'infected' as a function of $R_0$, and the data of the number of infected people is not providing much information about $R_0$ (except whether or not it is above or below zero).
However, for the SIR model there is a large variation in the number of recovered or the ratio infected/recovered. This is shown in the image below where the model is plotted not only for the number of infected people but also for the number of recovered people. It is such information (as well additional data like detailed information where and when the people got infected and with whom they had contact) that allows the estimate of $R_0$.
Update
In your blog article you write that the fit is leading to a value of $R_0 \approx 2$.
However that is not the correct solution. You find this value only because the optim is terminating early when it has found a good enough solution and the improvements for given stepsize of the vector $\beta, \gamma$ are getting small.
When you use the nested optimization then you will find a more precise solution with a $R_0$ very close to 1.
We see this value $R_0 \approx 1$ because that is how the (wrong) model is able to get this change in the growth rate into the curve.
###
####
####
library(deSolve)
library(RColorBrewer)
#https://en.wikipedia.org/wiki/Timeline_of_the_2019%E2%80%9320_Wuhan_coronavirus_outbreak#Cases_Chronology_in_Mainland_China
Infected <- c(45,62,121,198,291,440,571,830,1287,1975,
2744,4515,5974,7711,9692,11791,14380,17205,20440)
#Infected <- c(45,62,121,198,291,440,571,830,1287,1975,
# 2744,4515,5974,7711,9692,11791,14380,17205,20440,
# 24324,28018,31161,34546,37198,40171,42638,44653)
day <- 0:(length(Infected)-1)
N <- 1400000000 #pop of china
init <- c(S = N-Infected[1], I = Infected[1], R = 0)
# model function
SIR2 <- function(time, state, parameters) {
par <- as.list(c(state, parameters))
with(par, {
beta <- const/(1-1/R0)
gamma <- const/(R0-1)
dS <- -(beta * (S/N) ) * I
dI <- (beta * (S/N)-gamma) * I
dR <- ( gamma) * I
list(c(dS, dI, dR))
})
}
### Two functions RSS to do the optimization in a nested way
RSS.SIRMC2 <- function(R0,const) {
parameters <- c(const=const, R0=R0)
out <- ode(y = init, times = day, func = SIR2, parms = parameters)
fit <- out[ , 3]
RSS <- sum((Infected_MC - fit)^2)
return(RSS)
}
RSS.SIRMC <- function(const) {
optimize(RSS.SIRMC2, lower=1,upper=10^5,const=const)$objective
}
# wrapper to optimize and return estimated values
getOptim <- function() {
opt1 <- optimize(RSS.SIRMC,lower=0,upper=1)
opt2 <- optimize(RSS.SIRMC2, lower=1,upper=10^5,const=opt1$minimum)
return(list(RSS=opt2$objective,const=opt1$minimum,R0=opt2$minimum))
}
# doing the nested model to get RSS
Infected_MC <- Infected
modnested <- getOptim()
rss <- sapply(seq(0.3,0.5,0.01),
FUN = function(x) optimize(RSS.SIRMC2, lower=1,upper=10^5,const=x)$objective)
plot(seq(0.3,0.5,0.01),rss)
optimize(RSS.SIRMC2, lower=1,upper=10^5,const=0.35)
# view
modnested
### plotting different values R0
const <- modnested$const
R0 <- modnested$R0
# graph
plot(-100,-100, xlim=c(0,80), ylim = c(1,6*10^4), log="",
ylab = "infected", xlab = "days")
title(bquote(paste("scenario's for different ", R[0])), cex.main = 1)
### this is what your beta and gamma from the blog
beta = 0.6746089
gamma = 0.3253912
fit <- data.frame(ode(y = init, times = t, func = SIR, parms = c(beta,gamma)))$I
lines(t,fit,col=3)
# plot model with different R0
t <- seq(0,50,0.1)
for (R0 in c(modnested$R0,1.07,1.08,1.09,1.1,1.11)) {
fit <- data.frame(ode(y = init, times = t, func = SIR2, parms = c(const,R0)))$I
lines(t,fit,col=1+(modnested$R0==R0))
text(t[501],fit[501],
bquote(paste(R[0], " = ",.(R0))),
cex=0.7,pos=4,col=1+(modnested$R0==R0))
}
# plot observations
points(day,Infected, cex = 0.7)
If we use the relation between recovered and infected people $R^\prime = c (R_0-1)^{-1} I$ then we also see the opposite, namely a large $R_0$ of around 18:
I <- c(45,62,121,198,291,440,571,830,1287,1975,2744,4515,5974,7711,9692,11791,14380,17205,20440, 24324,28018,31161,34546,37198,40171,42638,44653)
D <- c(2,2,2,3,6,9,17,25,41,56,80,106,132,170,213,259,304,361,425,490,563,637,722,811,908,1016,1113)
R <- c(12,15,19,25,25,25,25,34,38,49,51,60,103,124,171,243,328,475,632,892,1153,1540,2050,2649,3281,3996,4749)
A <- I-D-R
plot(A[-27],diff(R+D))
mod <- lm(diff(R+D) ~ A[-27])
giving:
> const
[1] 0.3577354
> const/mod$coefficients[2]+1
A[-27]
17.87653
This is a restriction of the SIR model which models $R_0 = \frac{\beta}{\gamma}$ where $\frac{1}{\gamma}$ is the period how long somebody is sick (time from Infected to Recovered) but that may not need to be the time that somebody is infectious. In addition, the compartment models is limited since the age of patients (how long one has been sick) is not taken into account and each age should be considered as a separate compartment.
But in any case, if the numbers from Wikipedia are meaningful (they may be doubted), then only 2% of the active/infected cases recover daily, and thus the $\gamma$ parameter seems to be small (no matter what model you use).
24,712 | Fitting SIR model with 2019-nCoV data doesn't converge | You might be experiencing numerical issues due to the very large population size $N$, which will force the estimate of $\beta$ to be very close to zero. You could re-parameterise the model as
\begin{align}
{\mathrm d S \over \mathrm d t} &= -\beta {S I / N}\\[1.5ex]
{\mathrm d I \over \mathrm d t} &= \beta {S I / N} - \gamma I \\[1.5ex]
{\mathrm d R \over \mathrm d t} &= \gamma I \\
\end{align}
This will make the estimate of $\beta$ larger so hopefully you'll get something more sensible out of the optimisation.
In this context the SIR model is useful, but it only gives a very crude fit to these data (it assumes that the whole population of China mixes homogeneously). It's perhaps not too bad as a first attempt at analysis. Ideally you would want some kind of spatial or network model that would better reflect the true contact structure in the population. For example, a metapopulation model as described in Program 7.2 and the accompanying book (Modeling Infectious Diseases in Humans and Animals, Keeling & Rohani). However, this approach would require much more work and also some data on the population structure. An approximate alternative could be to replace the $I$ in $\beta SI/N$ (in both of the first two equations) with $I^\delta$ where $\delta$, which is probably $<1$, is a third parameter to be estimated. Such a model tries to capture the fact that the force of infection on a susceptible increases less than linearly with the number of infecteds $I$, while avoiding specification of an explicit population structure. For more details on this approach, see e.g. Hochberg, Non-linear transmission rates and the dynamics of infectious disease, Journal of Theoretical Biology 153:301-321.
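To illustrate the point (this is not the original poster's code): a minimal Python sketch, using only the standard library and forward-Euler integration, of the re-parameterised system above. With the $SI/N$ scaling, $\beta$ and $\gamma$ are both $O(1)$ even for a huge $N$, so an optimiser no longer has to search near $\beta \approx 10^{-9}$. The parameter values here are illustrative assumptions, not fitted values.

```python
# Forward-Euler integration of the re-parameterised SIR model:
#   dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
def simulate_sir(beta, gamma, N, I0, days=200, dt=0.01):
    S, I, R = N - I0, float(I0), 0.0
    peak = I
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N * dt   # infections this step
        new_rec = gamma * I * dt          # recoveries this step
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        peak = max(peak, I)
    return S, I, R, peak

# beta/gamma = 2, i.e. R0 = 2 -- illustrative only
S_end, I_end, R_end, peak_I = simulate_sir(beta=0.5, gamma=0.25,
                                           N=1.4e9, I0=40)
# S + I + R is conserved at every step (the flow terms cancel),
# and with R0 > 1 the epidemic rises to a peak and then declines.
print(peak_I > 40, I_end < peak_I)
```

Note how the Euler updates move mass between compartments without creating or destroying it, so the total population stays fixed up to floating-point error.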
24,713 | Fitting SIR model with 2019-nCoV data doesn't converge | Because the population of China is so huge, the parameters will be very small.
Since we are in the early days of the infection, and because $N$ is so big, then $S(t)I(t)/N \ll 1$. It could be more reasonable to assume that at this stage of the infection the number of infected people grows approximately exponentially, and to fit a much simpler model.
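A quick sketch of the simpler model (Python, standard library, synthetic illustrative numbers): if $I(t) \approx I_0 e^{rt}$ in the early phase, then $\log I$ is linear in $t$, and ordinary least squares on the logs recovers the growth rate and the initial count.

```python
import math

# Early-phase counts generated from I(t) = 40 * exp(0.25 t) -- synthetic
# data standing in for the first couple of weeks of case counts.
days = list(range(15))
counts = [40 * math.exp(0.25 * t) for t in days]

# log-linear least squares: log I = log I0 + r * t
logs = [math.log(c) for c in counts]
n = len(days)
mt, ml = sum(days) / n, sum(logs) / n
num = sum((t - mt) * (l - ml) for t, l in zip(days, logs))
den = sum((t - mt) ** 2 for t in days)
r = num / den
I0 = math.exp(ml - r * mt)

print(round(r, 6), round(I0, 3))   # recovers r = 0.25 and I0 = 40 exactly
```

With real, noisy counts the fit would not be exact, but the two-parameter exponential is far better conditioned than a three-parameter SIR fit at this stage.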
24,714 | Fitting SIR model with 2019-nCoV data doesn't converge | This is only marginally related to the detailed coding discussion, but seems highly relevant to the original question concerning modeling of the current 2019-nCoV epidemic. Please see arXiv:2002.00418v1 (paper at https://arxiv.org/pdf/2002.00418v1.pdf ) for a delayed differential equation system (~5 component model), with parameter estimation and predictions using dde23 in MATLAB. These are compared to daily published reports of confirmed cases, number cured, etc. To me, it is quite worthy of discussion, refinement, and updating. It concludes that there is a bifurcation in the solution space dependent upon the efficacy of isolation, thus explaining the strong public health measures recently taken, which have a fair chance of success so far.
24,715 | Fitting SIR model with 2019-nCoV data doesn't converge | What do you think about putting the initial number of infectious people in as an additional parameter in the optimization problem? Otherwise the fitting needs to start with the initial condition.
24,716 | Correlation between sine and cosine | Since
$$\begin{align}
\operatorname{Cov}(Y, Z)
&= E[(Y - E[Y])(Z - E[Z])] \\
&= E[(Y - \tfrac{1}{2\pi}{\textstyle \int}_0^{2\pi} \sin x \;dx)(Z - \tfrac{1}{2\pi}{\textstyle \int}_0^{2\pi} \cos x \;dx)] \\
&= E[(Y - 0)(Z - 0)] \\
&= E[YZ] \\
&= \frac{1}{2\pi}\int_0^{2\pi} \sin x \cos x \;dx \\
&= 0 ,
\end{align}$$
the correlation must also be 0.
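The integrals above are easy to sanity-check numerically. Here is a small Python (standard-library) Riemann-sum check that $\int_0^{2\pi}\sin x\,dx$, $\int_0^{2\pi}\cos x\,dx$, and $\int_0^{2\pi}\sin x\cos x\,dx$ are all $0$; the $U(0,2\pi)$ density contributes a factor $1/2\pi$ to the expectations, which does not change the zeros.

```python
import math

# Left-endpoint Riemann sums for the three integrals used above,
# then scaled by the U(0, 2*pi) density 1/(2*pi).
n = 100_000
dx = 2 * math.pi / n
e_sin = e_cos = e_sincos = 0.0
for i in range(n):
    x = i * dx
    e_sin += math.sin(x) * dx
    e_cos += math.cos(x) * dx
    e_sincos += math.sin(x) * math.cos(x) * dx

scale = 1 / (2 * math.pi)
print(e_sin * scale, e_cos * scale, e_sincos * scale)  # all ~ 0
```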
24,717 | Correlation between sine and cosine | I really like @whuber's argument from symmetry and don't want it to be lost as a comment, so here's a bit of elaboration.
Consider the random vector $(X, Y)$, where $X = \cos(U)$, and $Y = \sin(U)$, for $U \sim U(0, 2 \pi)$. Then, because $\theta \mapsto (\cos(\theta), \sin(\theta))$ parameterizes the unit circle by arc length, $(X, Y)$ is distributed uniformly on the unit circle. In particular, the distribution of $(-X, Y)$ is the same as the distribution of $(X, Y)$. But then
$$ - \text{Cov} (X, Y) = \text{Cov} (-X, Y) = \text{Cov} (X, Y) $$
so it must be that $\text{Cov} (X, Y) = 0$.
Just a beautiful geometric argument.
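A quick Monte-Carlo illustration of the symmetry argument (Python, standard library; the seed is an illustrative choice): points are drawn uniformly on the circle, and the sample covariances of $(X, Y)$ and $(-X, Y)$ are exact negatives of each other, while both are close to zero.

```python
import math
import random

random.seed(0)  # illustrative seed
n = 100_000
pts = [(math.cos(u), math.sin(u))
       for u in (random.uniform(0, 2 * math.pi) for _ in range(n))]

def cov(pairs):
    mx = sum(p[0] for p in pairs) / len(pairs)
    my = sum(p[1] for p in pairs) / len(pairs)
    return sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)

c = cov(pts)                          # Cov(X, Y) estimate
c_flip = cov([(-x, y) for x, y in pts])  # Cov(-X, Y) estimate
print(c, c_flip)   # exact negatives; both near 0
```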
24,718 | Which result to choose when Kruskal-Wallis and Mann-Whitney seem to return contradicting results? | I agree with Michael Chernick's answer, but think that it can be made a little stronger. Ignore the 0.05 cutoff in most circumstances. It is only relevant to the Neyman-Pearson approach which is largely irrelevant to the inferential use of statistics in many areas of science.
Both tests indicate that your data contains moderate evidence against the null hypothesis. Consider that evidence in light of whatever you know about the system and the consequences that follow from decisions (or indecision) about the state of the real world. Argue a reasoned case and proceed in a manner that acknowledges the possibility of subsequent re-evaluation.
I explain more in this paper:
http://www.ncbi.nlm.nih.gov/pubmed/22394284
[Addendum added Nov 2019: I have a new reference that explains the issues in more detail https://arxiv.org/abs/1910.02042v1 ]
24,719 | Which result to choose when Kruskal-Wallis and Mann-Whitney seem to return contradicting results? | The Mann-Whitney or Wilcoxon test compares two groups, while the Kruskal-Wallis test compares three or more. Just like in the ordinary ANOVA with three or more groups, the procedure generally suggested is to do the overall ANOVA F test first and then look at pairwise comparisons in case there is a significant difference. I would do the same here with the nonparametric ANOVA. My interpretation of your result is that there is marginally a significant difference between groups at level 0.05, and if you accept that, then the difference based on the Mann-Whitney test indicates that it could be attributed to g$_1$ and g$_2$ being significantly different.
Don't get hung up on the magic of the 0.05 significance level! Just because the Kruskal-Wallis test gives a p-value slightly over 0.05, don't take that to mean that there is no statistically significant difference between the groups. Also, the fact that the Mann-Whitney test gives a p-value for the difference between g$_1$ and g$_2$ a little below 0.03 does not somehow make the difference between the two groups highly significant. Both p-values are close to 0.05. A slightly different data set could easily change the Kruskal-Wallis p-value by that much.
Any thought you might have that the results are contradictory would have to come from thinking of a 0.05 cutoff as a black-and-white boundary with no gray area in the neighborhood of 0.05. I think these results are reasonable and quite compatible.
24,720 | Which result to choose when Kruskal-Wallis and Mann-Whitney seem to return contradicting results? | Results of the Kruskal-Wallis and Mann-Whitney U tests may differ because:
The ranks used for the Mann-Whitney U test are not the ranks used by the Kruskal-Wallis test; and
The rank sum tests do not use the pooled variance implied by the Kruskal-Wallis null hypothesis.
Hence, it is not recommended to use the Mann-Whitney U test as a post hoc test after the Kruskal-Wallis test.
Other tests like Dunn's test (commonly used), the Conover-Iman test, and the Dwass-Steel-Critchlow-Fligner test can be used as post hoc tests for the Kruskal-Wallis test.
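The first of the two reasons above is easy to see with a toy example (Python, standard library, invented tie-free data): the ranks a pair of groups receives inside the pooled three-group ranking are generally not the ranks obtained by ranking that pair alone.

```python
# Toy data, three groups; no ties, so ranking is unambiguous.
g1, g2, g3 = [1.0, 5.0, 9.0], [2.0, 6.0, 10.0], [3.0, 4.0, 12.0]

def ranks(groups):
    """Rank of every value within the pooled sample, reported per group."""
    pooled = sorted(v for g in groups for v in g)
    pos = {v: i + 1 for i, v in enumerate(pooled)}
    return [[pos[v] for v in g] for g in groups]

kw_ranks = ranks([g1, g2, g3])   # ranks used by Kruskal-Wallis
mw_ranks = ranks([g1, g2])       # ranks used by Mann-Whitney on g1 vs g2

print(kw_ranks[0], mw_ranks[0])  # [1, 5, 7] vs [1, 3, 5]
```

The same observations in `g1` carry different ranks in the two procedures, so the two tests are not computed on the same transformed data.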
24,721 | Which result to choose when Kruskal-Wallis and Mann-Whitney seem to return contradicting results? | This is in answer to @vinesh as well as looking at the general principle in the original question.
There are really 2 issues here with multiple comparisons: as we increase the number of comparisons being made we have more information which makes it easier to see real differences, but the increased number of comparisons also makes it easier to see differences that don't exist (false positives, data dredging, torturing the data until it confesses).
Think of a class with 100 students, each of whom is given a fair coin and told to flip the coin 10 times and use the results to test the null hypothesis that the proportion of heads is 50%. We would expect p-values to range between 0 and 1, and just by chance we would expect to see around 5 of the students get p-values less than 0.05. In fact we would be very surprised if none of them obtained a p-value less than 0.05. If we only look at the few significant values and ignore all the others then we will falsely conclude that the coins are biased, but if we use a technique that takes into account the multiple comparisons then we will likely still judge correctly that the coins are fair (or at least fail to reject that they are fair).
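The coin-flipping class can be worked out exactly (Python, standard library). One caveat to the ballpark above: with only 10 flips the binomial is so discrete that the attainable two-sided rejection rate at the 0.05 level is about 2.1%, not the full 5%, so roughly 2 of the 100 students would be flagged; the qualitative point, that some students look significant by chance alone, is unchanged.

```python
from math import comb

# Exact two-sided binomial test for 10 flips of a fair coin: the
# p-value for k heads sums the probabilities of all outcomes at
# least as extreme (i.e. at least as improbable) as k.
def p_value(k, n=10):
    pk = comb(n, k) / 2 ** n
    return sum(comb(n, j) / 2 ** n for j in range(n + 1)
               if comb(n, j) / 2 ** n <= pk + 1e-12)

reject = [k for k in range(11) if p_value(k) < 0.05]
rate = sum(comb(10, k) for k in reject) / 2 ** 10
print(reject, rate, 100 * rate)  # [0, 1, 9, 10], 22/1024 ~ 2.15 students
```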
On the other hand, consider a similar case where we have 10 students rolling a die and determining if the value is in the set {1,2,3} or the set {4,5,6}, each of which has a 50% chance on each roll if the die is fair (but could be different if the die is rigged). All 10 students compute p-values (null is 50%) and get values between 0.06 and 0.25. Now in this case none of them reached the magic 5% cutoff, so looking at any individual student's results will not result in a non-fair declaration. But all the p-values are less than 0.5; if all the dice are fair then the p-values should be uniformly distributed and have a 50% chance of being above 0.5. The chance of getting 10 independent p-values all less than 0.5 when the nulls are true is less than the magic 0.05, and this suggests that the dice are biased; we just did not have enough power to detect this in the individual trials, but grouping the information shows the null is false.
Now coin flipping and die rolling are a bit contrived, so a different example: I have a new drug that I want to test. My budget allows me to test the drug on 1,000 subjects (this will be a paired comparison with each subject being their own control). I am considering 2 different study designs: in the first I recruit 1,000 subjects, do the study, and report a single p-value. In the second design I recruit 1,000 subjects but break them into 100 groups of 10 each; I do the study on each of the 100 groups of 10 and compute a p-value for each group (100 total p-values). Think about the potential differences between the 2 methodologies and how the conclusions could differ. An objective approach would require that both study designs lead to the same conclusion (given the same 1,000 patients and everything else is the same).
@mljrg, why did you choose to compare g1 and g2? If this was a question of interest before collecting any data, then the MW p-value is reasonable and meaningful. However, if you did the KW test, then looked to see which 2 groups were the most different and did the MW test only on those that looked the most different, then the assumptions for the MW test were violated; the MW p-value is meaningless, and the KW p-value is the only one with potential meaning.
24,722 | kNN and unbalanced classes | In principle, unbalanced classes are not a problem at all for the k-nearest neighbor algorithm.
Because the algorithm is not influenced in any way by the size of the class, it will not favor any on the basis of size. Try to run k-means with an obvious outlier and k+1 and you will see that most of the time the outlier will get its own class.
Of course, with hard datasets it is always advisable to run the algorithm multiple times. This is to avoid trouble due to a bad initialization.
24,723 | kNN and unbalanced classes | I believe Peter Smit's response above is confusing K nearest neighbor (KNN) and K-means, which are very different.
KNN is susceptible to class imbalance, as described well here: https://www.quora.com/Why-does-knn-get-effected-by-the-class-imbalance
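A minimal, deterministic illustration of the susceptibility (Python, standard library; toy numbers of my own): a minority point surrounded by slightly farther majority points is still outvoted once $k$ exceeds the number of nearby minority neighbours.

```python
from collections import Counter

# 1-D toy data: one tight minority cluster, many majority points nearby.
train = [(0.0, "min"), (0.2, "min"),
         (0.6, "maj"), (0.7, "maj"), (0.8, "maj"), (0.9, "maj"), (1.0, "maj")]

def knn_predict(x, k):
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

query = 0.1  # sits inside the minority cluster
print(knn_predict(query, k=3), knn_predict(query, k=7))  # min, then maj
```

With `k=3` the two minority neighbours win; with `k=7` the majority class swamps the vote even though the query is closest to the minority cluster.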
24,724 | kNN and unbalanced classes | Imbalanced class sizes are both a theoretical and practical problem with KNN which has been characterized in machine learning literature since at least 2003. This is particularly vexing when some classes have a low occurrence in your primary dataset (ex: fraud detection, disease screening, spam filtering).
A Google Scholar search shows several papers describing the issue and strategies for mitigating it by customizing the KNN algorithm:
weighting neighbors by the inverse of their class size converts neighbor counts into the fraction of each class that falls in your K nearest neighbors
weighting neighbors by their distances
using a radius-based rule for gathering neighbors instead of the K nearest ones (often implemented in KNN packages)
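As a sketch of the first strategy (Python, standard library; the toy data and exact weighting scheme are my own illustration, not taken from any of the papers): each neighbour's vote is divided by its class's training-set size, which turns raw neighbour counts into per-class fractions.

```python
from collections import Counter, defaultdict

# Toy 1-D training set: 8 "maj" points, 2 "min" points.
train = [(x / 10, "maj") for x in range(3, 11)] + [(0.0, "min"), (0.1, "min")]
class_size = Counter(label for _, label in train)

def plain_knn(x, k):
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(l for _, l in nearest).most_common(1)[0][0]

def weighted_knn(x, k):
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = defaultdict(float)
    for _, label in nearest:
        votes[label] += 1 / class_size[label]   # inverse class size
    return max(votes, key=votes.get)

q = 0.15  # near the minority cluster
print(plain_knn(q, 5), weighted_knn(q, 5))  # maj, then min
```

Among the 5 nearest neighbours of `q` there are 2 of 2 minority points (fraction 1.0) but only 3 of 8 majority points (fraction 0.375), so the weighted vote recovers the minority label that the raw count loses.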
I've also found these two blogs helpful for general background on imbalanced class sizes.
https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/
https://elitedatascience.com/imbalanced-classes
kNN and unbalanced classes
I would like to add one remark - kNN is sensitive to, let's say, the ratio of the number of observations on the boundary of a given class to the total number of observations in that class. If you have three classes with the same number of observations from the same distribution but with different means, and the second class is visibly a cloud between the two others - its expected value is between the two others - then there are more misclassifications in class number two. But something like this holds for every classifier.
Is it true that an estimator will always asymptotically be consistent if it is biased in finite samples?
Consider the estimator $\hat{\theta} = 3$. If this estimator is estimating a parameter that is not equal to three then it is biased in all finite samples. Is this estimator asymptotically consistent?
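A tiny simulation sketch (setup invented for illustration) makes the counterexample concrete: the sample mean of $N(\theta, 1)$ data homes in on the true $\theta = 1$, while the biased estimator $\hat{\theta} = 3$ never moves, at any sample size.

```python
import random

random.seed(0)
theta = 1.0  # true parameter: the mean of the sampling distribution

for n in (10, 1000, 100000):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    mean_hat = sum(sample) / n   # consistent (and unbiased) estimator
    const_hat = 3.0              # biased estimator; it ignores the data
    print(n, round(mean_hat, 3), const_hat)
```

The printed sample means drift toward 1.0 as n grows, while the constant estimator stays at 3 forever - biased in every finite sample and inconsistent in the limit.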
Is it true that an estimator will always asymptotically be consistent if it is biased in finite samples?
Next to the simple and effective example of Ben, here is a more applied and specific one: Imagine you try to estimate a causal effect, but your regression model to estimate this causal parameter is misspecified (often called "omitted variable bias").
A well-known example deals with returns to schooling, i.e., how much more you earn due to (i.e., in the sense of a causal effect of sitting in class, taking courses) additional schooling. If you simply regress earnings on schooling the regression is likely misspecified, because people (after compulsory schooling ends) choose how much schooling they want. Now, more motivated and able students will, as a tendency, find the idea of going to school for longer less daunting than other students. Now, such able and motivated persons will however also be likely to be good at the workplace due to these characteristics, irrespective of how much schooling they have. Hence, they will likely earn more.
Hence, you would need to control for things like ability/motivation - which may not be easy in practice - in your regression (and likely other things, too).
Just collecting more data on your simple regression of earnings on schooling will, in turn, not save you from this problem: the estimation is both biased and inconsistent. For both small and large datasets, the simple regression, as a tendency, compares earnings of students who are both able and have higher schooling to earnings of students who are less able and have less schooling. Assigning the entire difference in earnings to schooling hence will overstate the causal effect of schooling.
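To make this concrete, here is a small simulation sketch (the coefficients and the `ols_slope` helper are invented for illustration): earnings depend on schooling with a true effect of 1.0 and on unobserved ability with effect 3.0, and schooling itself depends on ability. The simple regression slope then settles near $1 + 3 \cdot \mathrm{Cov}(\text{schooling}, \text{ability}) / \mathrm{Var}(\text{schooling}) = 2.2$, no matter how large the sample is.

```python
import random

random.seed(1)

def ols_slope(x, y):
    # Simple-regression slope: cov(x, y) / var(x).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    var = sum((a - mx) ** 2 for a in x) / n
    return cov / var

n = 200_000  # a large sample does NOT remove the bias
ability = [random.gauss(0, 1) for _ in range(n)]
# Schooling depends on ability; earnings depend on both.
schooling = [10 + 2 * a + random.gauss(0, 1) for a in ability]
earnings = [1.0 * s + 3.0 * a + random.gauss(0, 1)
            for s, a in zip(schooling, ability)]

slope = ols_slope(schooling, earnings)
print(slope)  # near 2.2, well above the true schooling effect of 1.0
```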
Is it true that an estimator will always asymptotically be consistent if it is biased in finite samples?
Yes, in some circumstances, no in others. For example, if a bias results from a self-inconsistent assumption, then no. Examples of this latter include omitted variable bias and AIC in the case of censored data, which violates the maximum likelihood assumption. Examples of when it pertains would be AIC in the case of complete support (i.e., without censoring such that the maximum likelihood assumption pertains), and ordinary least squares for equidistant independent axis data. In still others, for example, variance is generally unbiased, but standard deviation is not, see this. Standard deviation would still be asymptotically correct because the small number bias would reduce to zero for $n\to\infty$. Nevertheless, one should not rely on just any asymptotic convergence, if a rather better estimator is available, and see how this was done in this example. Briefly, if you small number correct standard deviations from a large number of 2 sample SD's and then average them, you will obtain a more variable estimate than if you root mean square combine all the variances and then use a much lesser small number correction for the total number of trials. Some people are surprised at how ineffectual AIC can be for small samples. Thus, how fast asymptotic convergence occurs can be critical to interpretation of statistical results, and sometimes, for example for AIC, when we do not have measures that inform us of how precise or accurate statistical results are, it can be problematic.
Thus, the question of whether or not a procedure is asymptotically convergent is not by itself a sufficient criterion of validity of statistical results. We also need confidence intervals for those results.
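A sketch of the standard-deviation bias mentioned above (numbers invented; for normal samples of size $n = 2$, theory gives $E[s] = \sigma\sqrt{2/\pi} \approx 0.80\,\sigma$):

```python
import math
import random
import statistics

random.seed(2)
sigma = 1.0  # true population standard deviation
trials = 100_000

# Average many sample SDs, each computed from just n = 2 observations.
mean_sd = sum(
    statistics.stdev([random.gauss(0, sigma), random.gauss(0, sigma)])
    for _ in range(trials)
) / trials

print(mean_sd)                           # about 0.80, not 1.0: biased low
print(mean_sd / math.sqrt(2 / math.pi))  # small-number corrected: near 1.0
```

Even though the sample variance is unbiased, taking its square root introduces a downward bias that is large for tiny samples and vanishes only as $n\to\infty$.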
Is it true that an estimator will always asymptotically be consistent if it is biased in finite samples?
"Is it true that an estimator will always asymptotically be consistent if it is biased in finite samples?"
The correct reply is trivial and it is NO, as pointed out above.
However, your question immediately suggests a more interesting one:
"Is it true that an estimator will always asymptotically be consistent if it is unbiased in finite samples?"
The reply is: yes, if its variance goes to zero as the sample size diverges.
I add this part here because I suppose it can be interesting for some readers.
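The "unbiased plus vanishing variance implies consistent" claim can be made precise with Chebyshev's inequality; a short sketch:

```latex
% Unbiasedness gives E[\hat\theta_n] = \theta for every n, so Chebyshev's
% inequality yields, for any \varepsilon > 0,
P\!\left(|\hat\theta_n - \theta| \ge \varepsilon\right)
  \le \frac{\operatorname{Var}(\hat\theta_n)}{\varepsilon^2}
  \longrightarrow 0 \quad (n \to \infty)
% whenever Var(\hat\theta_n) -> 0; hence \hat\theta_n -> \theta in
% probability, i.e. the estimator is consistent.
```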
When can I not replace a random variable with its mean?
If you replace a missing value by some point estimate, you disregard all its variability. Thus, you will not propagate all the original variability to your model. Your parameter estimates will appear to have too low standard-errors. If you do inference, your p values will be biased low. Your confidence-intervals will be too narrow. If you do prediction, your prediction-intervals will be too narrow.
Overall: you will be too sure of your conclusions.
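A minimal sketch (invented numbers) of this for the special case of mean imputation: filling gaps with the observed mean visibly deflates the spread of the data, which is exactly the lost variability described above.

```python
import random
import statistics

random.seed(3)

# Full data, and a version with about 40% of values missing at random.
full = [random.gauss(50, 10) for _ in range(10_000)]
observed = [x for x in full if random.random() > 0.4]

# Mean imputation: fill every gap with the observed mean.
fill = statistics.mean(observed)
imputed = observed + [fill] * (len(full) - len(observed))

print(statistics.stdev(full))     # about 10
print(statistics.stdev(imputed))  # noticeably smaller
```

Every downstream quantity computed from the imputed standard deviation (standard errors, confidence intervals, prediction intervals) inherits this shrinkage.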
When can I not replace a random variable with its mean?
In addition to Stephan's points:
In almost any application where you're interested in nonlinear functions of the random variable, substituting the mean will generally introduce bias and possibly contradictory results. The average velocity and average mass of a particle will generally not be consistent with average kinetic energy, because energy scales with V^2.
The mean value may not even be a possible outcome for the random variable. If my possible outcomes are 0 "patient dies" and 1 "patient lives", it's probably not helpful to have a model that describes the patient as 0.1 "mostly dead but slightly alive".
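The kinetic-energy point in a few lines (toy numbers, invented for illustration): because energy scales with $V^2$, the energy computed at the average velocity understates the average energy.

```python
# Velocities of five particles, each with mass m = 2.
velocities = [1.0, 2.0, 3.0, 4.0, 5.0]
m = 2.0

mean_v = sum(velocities) / len(velocities)   # 3.0
ke_of_mean = 0.5 * m * mean_v ** 2           # energy at the mean velocity
mean_ke = sum(0.5 * m * v ** 2 for v in velocities) / len(velocities)

print(ke_of_mean, mean_ke)  # 9.0 11.0 -- substituting the mean loses 2.0
```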
When can I not replace a random variable with its mean?
A real life example (related to the two answers you got), in the financial markets. The price of an option is based on the probability that the price of an asset goes above (or below) a given level.
For example, the price of an option for buying an asset at a price 100 when the expected value of the asset is 80. If you substitute the random variable (the asset price) by its mean, you would get a price of zero (as you would never buy at 100 an asset that costs 80). When you take into account the stochasticity of the asset (and that's the right way of doing it) you get a positive price, as there is some probability that the asset price goes above 100.
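A Monte Carlo sketch of this example (the normal price distribution and all numbers are illustrative, not a real pricing model):

```python
import random

random.seed(4)

strike = 100.0
# Asset price at expiry: random, with expected value 80.
prices = [random.gauss(80, 20) for _ in range(100_000)]

mean_price = sum(prices) / len(prices)
value_at_mean = max(mean_price - strike, 0.0)  # plug in the mean: worthless
# Average payoff of the call option, max(price - strike, 0), over outcomes.
value_mc = sum(max(p - strike, 0.0) for p in prices) / len(prices)

print(value_at_mean)  # 0.0
print(value_mc)       # positive: some outcomes finish above 100
```

Replacing the random asset price by its mean prices the option at zero, while averaging the payoff over the distribution gives the positive value the market actually charges.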
Why we reject the null hypothesis at the 0.05 level and not the 0.5 level (as we do in the Classification)
Say you end up in court and you did not do it. Do you think it is fair that you still have a 50% chance of being found guilty? Is a 50% chance of being innocent "guilty beyond reasonable doubt"? Would you think it is fair that you had a 5% chance of being found guilty even though you did not do it? If I were in court I would consider 5% not conservative enough.
You are right that the 5% is arbitrary. We could just as well choose 2%, or 1%, or if you are nerdy $\pi$% or $e$%. There are people who are willing to accept 10%, but 50% will never be acceptable.
In response to your edit of the question:
Your idea would be reasonable if all hypotheses were created equal. However, that is not the case. We typically care about the alternative hypothesis, so we strengthen our argument if we choose a low $\alpha$. In that sense, the example you chose originally illustrates that point well.
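One way to see why $\alpha = 0.5$ cannot work: under a true null hypothesis, $\alpha$ is exactly the rate at which you "convict the innocent". A simulation sketch (the z-test helper and numbers are invented for illustration):

```python
import math
import random

random.seed(5)

def z_test_p(sample, mu0=0.0, sigma=1.0):
    # Two-sided z-test p-value for H0: population mean == mu0, sigma known.
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# 5,000 simulated studies in which the null hypothesis is TRUE.
pvals = [z_test_p([random.gauss(0, 1) for _ in range(30)])
         for _ in range(5_000)]

rates = {alpha: sum(p < alpha for p in pvals) / len(pvals)
         for alpha in (0.5, 0.05)}
print(rates)  # the false-positive rate matches alpha
```

At $\alpha = 0.5$ about half of all true nulls get rejected; at $\alpha = 0.05$ only about one in twenty.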
Why we reject the null hypothesis at the 0.05 level and not the 0.5 level (as we do in the Classification)
It is like you say - it depends on how important False Positive and False Negative errors are.
In the example you use, as Maarten Buis already answered, being convicted if there is a 50% chance that you were innocent is hardly fair.
When applying it to research, look at it this way: Imagine you want to know if a certain new medication helps against a certain disease. Say that you find a difference between your treatment group and your control group in favour of the treatment. Great! The medicine must work, right? You can reject the null hypothesis that the medication does not work. Your p-value is 0.49! There is a higher chance that the effect you found was based on the truth rather than by chance!
Now consider this: the medication has nasty adverse effects. You only want to take it if you're convinced it works. And are you? No, because there is still a 51% chance that the difference you found between the two groups was purely by chance.
I can imagine that there are domains where you're satisfied with e.g. 10%. I've seen articles where 10% is accepted. I've also seen articles where they chose 2%. It depends on how important you think it is that you're convinced that rejecting the null hypothesis will be based on the truth and not on chance. I can hardly imagine a situation where you're satisfied with a 50% chance that the difference you found was based on pure luck.
Why we reject the null hypothesis at the 0.05 level and not the 0.5 level (as we do in the Classification)
Other answers have pointed out that it all depends on how you relatively value the different possible errors, and that in a scientific context $.05$ is potentially quite reasonable, an even more stringent criterion is also potentially quite reasonable, but that $.50$ is unlikely to be reasonable. That is all true, but let me take this in a different direction and challenge the assumption that lies behind the question.
You take "[h]ypothesis testing [to be] akin to a Classification problem". The apparent similarity here is only superficial; that isn't really true in a meaningful sense.
In a binary classification problem, there really are just two classes; that can be established absolutely and a-priori. Hypothesis testing isn't like that. Your figure displays a null and an alternative hypothesis as they are often drawn to illustrate a power analysis or the logic of hypothesis testing in a Stats 101 class. The figure implies that there is one null hypothesis and one alternative hypothesis. While it is (usually) true that there is only one null, the alternative isn't fixed to be only a single point value of the (say) mean difference. When planning a study, researchers will often select a minimum value they want to be able to detect. Let's say that in some particular study it is a mean shift of $.67$ SDs. So they design and power their study accordingly. Now imagine that the result is significant, but $.67$ does not appear to be a likely value. Well, they don't just walk away! The researchers would nonetheless conclude that the treatment makes a difference, but adjust their belief about the magnitude of the effect according to their interpretation of the results. If there are multiple studies, a meta-analysis will help refine the true effect as data accumulates. In other words, the alternative that is proffered during study planning (and that is drawn in your figure) isn't really a singular alternative such that the researchers must choose between it and the null as their only options.
Let's go about this a different way. You could say that it's quite simple: either the null hypothesis is true or it is false, so there really are just two possibilities. However, the null is typically a point value (viz., $0$) and the null being false simply means that any value other than exactly $0$ is the true value. If we recall that a point has no width, essentially $100\%$ of the number line corresponds to the alternative being true. Thus, unless your observed result is $0.\bar{0}$ (i.e., zero to infinite decimal places), your result will be closer to some non-$0$ value than it is to $0$ (i.e., $p<.5$). As a result, you would always end up concluding the null hypothesis is false. To make this explicit, the mistaken premise in your question is that there is a single, meaningful blue line (as depicted in your figure) that can be used as you suggest.
The above need not always be the case however. It does sometimes occur that there are two theories making different predictions about a phenomenon where the theories are sufficiently well mathematized to yield precise point estimates and likely sampling distributions. Then, a critical experiment can be devised to differentiate between them. In such a case, neither theory needs to be taken as the null and the likelihood ratio can be taken as the weight of evidence favoring one or the other theory. That usage would be analogous to taking $.50$ as your alpha. There is no theoretical reason this scenario couldn't be the most common one in science, it just happens that it is very rare for there to be two such theories in most fields right now.
Other answers have pointed out that it all depends on how you relatively value the different possible errors, and that in a scientific context $.05$ is potentially quite reasonable, an even more stringent criterion is also potentially quite reasonable, but that $.50$ is unlikely to be reasonable. That is all true, but let me take this in a different direction and challenge the assumption that lies behind the question.
You take "[h]ypothesis testing [to be] akin to a Classification problem". The apparent similarity here is only superficial; that isn't really true in a meaningful sense.
In a binary classification problem, there really are just two classes; that can be established absolutely and a-priori. Hypothesis testing isn't like that. Your figure displays a null and an alternative hypothesis as they are often drawn to illustrate a power analysis or the logic of hypothesis testing in a Stats 101 class. The figure implies that there is one null hypothesis and one alternative hypothesis. While it is (usually) true that there only one null, the alternative isn't fixed to be only a single point value of the (say) mean difference. When planning a study, researchers will often select a minimum value they want to be able to detect. Let's say that in some particular study it is a mean shift of $.67$ SDs. So they design and power their study accordingly. Now imagine that the result is significant, but $.67$ does not appear to be a likely value. Well, they don't just walk away! The researchers would nonetheless conclude that the treatment makes a difference, but adjust their belief about the magnitude of the effect according to their interpretation of the results. If there are multiple studies, a meta-analysis will help refine the true effect as data accumulates. In other words, the alternative that is proffered during study planning (and that is drawn in your figure) isn't really a singular alternative such that the researchers must choose between it and the null as their only options.
Let's go about this a different way. You could say that it's quite simple: either the null hypothesis is true or it is false, so there really are just two possibilities. However, the null is typically a point value (viz., $0$) and the null being false simply means that any value other than exactly $0$ is the true value. If we recall that a point has no width, essentially $100\%$ of the number line corresponds to the alternative being true. Thus, unless your observed result is $0.\bar{0}$ (i.e., zero to infinite decimal places), your result will be closer to some non-$0$ value than it is to $0$ (i.e., $p<.5$). As a result, you would always end up concluding the null hypothesis is false. To make this explicit, the mistaken premise in your question is that there is a single, meaningful blue line (as depicted in your figure) that can be used as you suggest.
The above need not always be the case however. It does sometimes occur that there are two theories making different predictions about a phenomenon where the theories are sufficiently well mathematized to yield precise point estimates and likely sampling distributions. Then, a critical experiment can be devised to differentiate between them. In such a case, neither theory needs to be taken as the null and the likelihood ratio can be taken as the weight of evidence favoring one or the other theory. That usage would be analogous to taking $.50$ as your alpha. There is no theoretical reason this scenario couldn't be the most common one in science, it just happens that it is very rare for there to be two such theories in most fields right now.
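To make the likelihood-ratio idea concrete, here is a toy R sketch (an illustration added here, not part of the original answer; the sample size, observed mean, and the two point hypotheses $0$ and $.67$ are all made-up assumptions):

```r
# Toy likelihood ratio between two point hypotheses about a mean shift (in SD units).
# Assumptions: known unit SD, n = 25, observed standardized mean 0.4.
n     <- 25
se    <- 1 / sqrt(n)
x_bar <- 0.4
# Likelihood of the data under the .67-shift theory relative to the null:
LR <- dnorm(x_bar, mean = 0.67, sd = se) / dnorm(x_bar, mean = 0, sd = se)
LR  # > 1 favors the .67-shift theory, < 1 favors the null
```

Here neither hypothesis plays a privileged "null" role; the ratio simply weighs how well each point hypothesis accounts for the observation.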
24,736 | Why we reject the null hypothesis at the 0.05 level and not the 0.5 level (as we do in the Classification) | To add to the very good previous answers: Yes, 5% is arbitrary, but regardless of the specific threshold you pick, it has to be reasonably small, otherwise hypothesis testing makes little sense.
You're looking for an effect and want to make sure your results are not purely due to chance. To that extent, you set a significance level which says basically "If there were actually no effect (null hypothesis is true), this would be the probability to still get such results (or more extreme) by pure chance". Setting this too high will result in lots of false positives, and undermine your ability to get a meaningful answer to your research question.
As always, there's a trade-off involved, so the research community came up with this 5% guideline. But it's different in different fields. In particle physics, it's more like 0.00001% or something.
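A quick simulation sketch (an added illustration, not part of the original answer) makes the trade-off visible: under a true null, p-values are roughly uniform, so the false-positive rate is whatever threshold you pick.

```r
# Two-sample t-tests with no true effect: the fraction of p-values below a
# threshold approximates that threshold.
set.seed(1)
pvals <- replicate(10000, t.test(rnorm(30), rnorm(30))$p.value)
mean(pvals < 0.05)  # roughly 0.05: 5% false positives
mean(pvals < 0.5)   # roughly 0.5: half of all null results declared "significant"
```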
24,737 | Why we reject the null hypothesis at the 0.05 level and not the 0.5 level (as we do in the Classification) | Classification and hypothesis testing are different and are used differently. In most cases, people use
"classification" to perform the task of "classifying something according to shared qualities or characteristics",
and use "hypothesis testing" to verify some "significant discoveries".
Note that, in hypothesis testing, the "null hypothesis" is "common sense", but if we can reject the null hypothesis then we have a breakthrough.
This is why we have stricter criteria in hypothesis testing. Think of the example of developing new drugs: we want to be very careful before saying a drug is significantly effective.
"Classification"" to perform the task of "classifying something according to shared qualities | Why we reject the null hypothesis at the 0.05 level and not the 0.5 level (as we do in the Classification)
Classification and hypothesis testing are different and been used differently. In most cases, people use
"Classification"" to perform the task of "classifying something according to shared qualities or characteristics".
And use "hypothesis testing" to verify some "significant discoveries".
Note that, in hypothesis testing, the "null hypothesis" is "common sense", but if we can reject null hypotheses then we have a break though.
This is why we have a more strict criteria in hypothesis testing. Think example of developing new drags, we want to be very careful to say the is significant and effective. | Why we reject the null hypothesis at the 0.05 level and not the 0.5 level (as we do in the Classific
Classification and hypothesis testing are different and been used differently. In most cases, people use
"Classification"" to perform the task of "classifying something according to shared qualities |
24,738 | Finding the column index by its name in R [closed] | Probably this is the simplest way:
which(names(x)=="bar")
24,739 | Finding the column index by its name in R [closed] | Just to add another possibility:
You can usually use grep and its descendants (e.g., grepl) to do these kinds of jobs in a more sophisticated way using regular expressions.
In your example you could get the column index with:
grep("^bar$", colnames(x)) or grep("^bar$", names(x))
The ^ and $ are metacharacters for the beginning and end of a string, respectively.
Check ?grep and especially ?regex for more info (e.g., you can grab only partial names/matches, or have the return value be the string itself or a logical vector, ...).
For me, grep is more R-ish.
Strongly related is the recent package by Hadley Wickham: stringr, a package for "modern, consistent string processing" including grep-like functions. He recently published a paper on it in the R Journal.
See also my answer on stackoverflow on an identical issue.
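One more base-R option worth adding (shown here on a toy data frame, which is an illustration rather than the asker's data): match() returns the first matching position.

```r
x <- data.frame(foo = 1:3, bar = 4:6, baz = 7:9)  # toy data frame for illustration
which(names(x) == "bar")  # all matching positions: 2
match("bar", names(x))    # first matching position: 2
grep("^bar$", names(x))   # regex-based: 2
```

All three agree here; they differ only when a name occurs more than once or when partial matching is wanted.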
24,740 | If a data set appears to be normal after some transformation is applied, is it really normal? | NO
It means that the transformed distribution is normal (at least roughly). Depending on the transformation, it might suggest a lack of normality of the original distribution. For instance, if a log-transformed distribution is normal, then the original distribution was log-normal, which certainly is not normal.
24,741 | If a data set appears to be normal after some transformation is applied, is it really normal? | _Comment continued:_ Consider lognormal data x, which does become
exactly normal when transformed by taking logs. In this case (with $n=100$),
Q-Q plots and the Shapiro-Wilk normality test agree for original and transformed data.
set.seed(2022)
x = rlnorm(100, 50, 7)
y = log(x)
par(mfrow = c(1,2))
hdr1 = "Lognormal Sample: Norm Q-Q Plot"
qqnorm(x, main=hdr1)
abline(a=mean(x), b=sd(x), col="blue")
hdr2 = "Normal Sample: Norm Q-Q Plot"
qqnorm(y, main=hdr2)
abline(a=mean(y), b=sd(y), col="blue")
par(mfrow = c(1,1))
shapiro.test(x)
Shapiro-Wilk normality test
data: x
W = 0.1143, p-value < 2.2e-16 # Normality strongly rejected
shapiro.test(y)
Shapiro-Wilk normality test
data: y
W = 0.99017, p-value = 0.678 # Does not reject null hyp: normal
24,742 | If a data set appears to be normal after some transformation is applied, is it really normal? | In general, the answer is no. It will be normal only if it was generated by the back transformation that corresponds to your (series of) transformations (see edit below). Nevertheless, there are good chances that the distribution of the transformed data is approximately normal, but keep in mind that not every bell-shaped distribution is a normal distribution. You need more than eyeballing before you start your analyses.
Edit regarding the first sentence in my answer: the transformation must be monotonic. For example, if you take data that was generated by a normal distribution, square it and then apply the square root - you will not end up with a normal distribution.
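A quick R check of that example (an added illustration): squaring and then taking the square root returns the absolute value, whose distribution is half-normal rather than normal.

```r
set.seed(3)
z <- rnorm(1000)            # normal data
back <- sqrt(z^2)           # equals abs(z): the composed transformation is not monotonic
all(back >= 0)              # TRUE: the sign information is lost
shapiro.test(back)$p.value  # tiny p-value: normality clearly rejected
```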
24,743 | Does this code demonstrate the central limit theorem? | Here's a complete study in a few lines.
For a given set of sample sizes n and underlying distribution r, it generates n.sim independent samples of each size from that distribution, standardizes the empirical distribution of their means, plots the histogram, and overplots the standard Normal density in red. The CLT says that when the underlying distribution has finite variance, the red curve more and more closely approximates the histogram.
The first three rows illustrate the process for sample sizes of $5,20,100,500$ and underlying Normal, Gamma, and Bernoulli distributions. As sample size increases the approximation grows noticeably better. The bottom row uses a Cauchy distribution. Because a key assumption of the CLT (finite variance) does not hold in this case, its conclusion doesn't hold, which is pretty clear.
Execution time is about one second.
f <- function(n, r=rnorm, n.sim=1e3, name="Normal", ...) {
sapply(n, function(n) {
x <- scale(colMeans(matrix(r(n*n.sim, ...), n))) # Sample, take mean, standardize
hist(x, sub=name, main=n, freq=FALSE, breaks=30) # Plot distribution
curve(dnorm(x), col="Red", lwd=2, add=TRUE) # Compare to standard Normal
})
}
n <- c(5,20,100,500)
mfrow.old <- par(mfrow=c(4,length(n)))
f(n)
f(n, rgamma, shape=1/2, name="Gamma(1/2)")
f(n, function(n) runif(n) < 0.9, name="Bernoulli(9/10)")
f(n, rt, df=1, name="Cauchy")
par(mfrow=mfrow.old)
24,744 | Does this code demonstrate the central limit theorem? | Here's an example of one of my suggestions from comments. Means of samples of size n=100000 (takes about 20 seconds or so, be patient):
ln.mean = replicate(1000,mean(rlnorm(100000,0,4)))
hist(ln.mean,n=100)
Even at this huge sample size, the distribution of sample means is still really skew -- but the central limit theorem nevertheless applies here - even the "classic" CLT.
24,745 | Does this code demonstrate the central limit theorem? | Maybe use something like the following (simpler, more direct) R code to show that
averages of a dozen standard uniform random variables are
difficult to distinguish from normal.
set.seed(1126)
a = replicate(5000, mean(runif(12)))
shapiro.test(a)
Shapiro-Wilk normality test
data: a
W = 0.99965, p-value = 0.565
plot(qqnorm(a))
Then use R code to show that averages of 50, or even 100, standard exponential
random variables are easy to distinguish from normal. What is the distribution
of $A = \bar X_{100}?$
set.seed(1127)
a = replicate(5000, mean(rexp(100)))
shapiro.test(a)$p.val
[1] 1.675877e-06
However, averages of 1000 standard exponentials are more difficult to distinguish from normal.
set.seed(1127)
a = replicate(5000, mean(rexp(1000)))
shapiro.test(a)$p.val
[1] 0.2413559
24,746 | What regression model is the most appropriate to use with count data? | No, there is no general count data regression model.
(Just as there is no general regression model for continuous data. A linear model with normally distributed homoskedastic noise is most commonly assumed, and fitted using Ordinary Least Squares. However, gamma regression or exponential regression is often used to deal with different error distribution assumptions, or conditional heteroskedasticity models, like ARCH or GARCH in a time series context, to deal with heteroskedastic noise.)
Common models include poisson-regression, as you write, or Negative Binomial Regression. These models are sufficiently widespread to find all kinds of software, tutorials or textbooks. I particularly like Hilbe's Negative Binomial Regression. This earlier question discusses how to choose between different count data models.
If you have "many" zeros in your data, and especially if you suspect that zeros could be driven by a different data-generating process than non-zeros (or that some zeros come from one DGP, and other zeros and non-zeros come from a different DGP), zero-inflation models may be useful. The most common one is zero-inflated Poisson (ZIP) regression.
You could also skim through our previous questions tagged both "regression" and "count-data".
EDIT: @MichaelM raises a good point. This does look like time series of count data. (And the missing data for 1992 and 1994 suggest to me that there should be a zero in each of these years. If so, do include it. Zero is a valid number, and it does carry information.) In light of this, I'd also suggest looking through our previous questions tagged both "time-series" and "count-data".
24,747 | What regression model is the most appropriate to use with count data? | The "default", the most commonly used and described, distribution of choice for count data is the Poisson distribution. Most often it is illustrated using the example of its first practical usage:
A practical application of this distribution was made by Ladislaus
Bortkiewicz in 1898 when he was given the task of investigating the
number of soldiers in the Prussian army killed accidentally by horse
kicks; this experiment introduced the Poisson distribution to the
field of reliability engineering.
Poisson distribution is parametrized by rate $\lambda$ per fixed time interval ($\lambda$ is also its mean and variance). In case of regression, we can use Poisson distribution in generalized linear model with log-linear link function
$$
E(Y|X,\beta) = \lambda = \exp\left( \beta_0 + \beta_1 X_1 + \dots + \beta_k X_k \right)
$$
that is called Poisson regression, since we can assume that $\lambda$ is a rate of Poisson distribution. Notice however that for log-linear regression you do not have to make such assumption and simply use GLM with log link with non-count data. When interpreting the parameters you need to remember that, because of using log transform, changes in independent variable result in multiplicative changes in the predicted counts.
The problem with using Poisson distribution for the real-life data is that it assumes mean to be equal to the variance. Violation of this assumption is called overdispersion. In such cases you can always use quasi-Poisson model, non-Poisson log-linear model (for large counts Poisson can be approximated by normal distribution), negative binomial regression (closely related to Poisson; see Berk and MacDonald, 2008), or other models, as described by Stephan Kolassa.
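In R, models like these can be fit with glm(); here is a minimal sketch on simulated data (the variable names and true coefficients are an illustration, not taken from any real dataset):

```r
set.seed(42)
x <- runif(200)
y <- rpois(200, lambda = exp(0.5 + 1.2 * x))  # log-linear Poisson rate
fit_pois <- glm(y ~ x, family = poisson(link = "log"))
coef(fit_pois)  # estimates should be near the true (0.5, 1.2)
# If the data look overdispersed, quasi-Poisson or negative binomial instead:
fit_qp <- glm(y ~ x, family = quasipoisson)
fit_nb <- MASS::glm.nb(y ~ x)
```

Because of the log link, exponentiated coefficients give the multiplicative change in the expected count per unit change in the predictor.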
For some friendly introduction to Poisson regression you can also check papers by Lavery (2010), or Coxe, West and Aiken (2009).
Lavery, R. (2010). An Animated Guide: An Introduction To Poisson Regression. NESUG paper, sa04.
Coxe, S., West, S.G., & Aiken, L.S. (2009). The analysis of count data: A gentle introduction to Poisson regression and its alternatives. Journal of personality assessment, 91(2), 121-136.
Berk, R., & MacDonald, J. M. (2008). Overdispersion and Poisson regression. Journal of Quantitative Criminology, 24(3), 269-284.
24,748 | What regression model is the most appropriate to use with count data? | Poisson or negative binomial are two widely used models for count data. I'd opt for the negative binomial as it has better assumptions for variance.
24,749 | Diagonal straight lines in residuals vs fitted values plot for multiple regression | It seems that on some subrange your dependent variable is constant or is exactly linearly dependent on the predictor(s). Let's have two correlated variables, X and Y (Y is dependent). The scatterplot is on the left.
Let's return, as example, on the first ("constant") possibility. Recode all Y values from lowest to -0.5 to a single value -1 (see picture in the centre). Regress Y on X and plot residuals scatter, that is, rotate the central picture so that the prediction line is horizontal now. Does it resemble your picture? | Diagonal straight lines in residuals vs fitted values plot for multiple regression | It seems that on some its subrange your dependent variable is constant or is exactly linearly dependent on the predictor(s). Let's have two correlated variables, X and Y (Y is dependent). The scatterp | Diagonal straight lines in residuals vs fitted values plot for multiple regression
It seems that on some its subrange your dependent variable is constant or is exactly linearly dependent on the predictor(s). Let's have two correlated variables, X and Y (Y is dependent). The scatterplot is on the left.
Let's return, as example, on the first ("constant") possibility. Recode all Y values from lowest to -0.5 to a single value -1 (see picture in the centre). Regress Y on X and plot residuals scatter, that is, rotate the central picture so that the prediction line is horizontal now. Does it resemble your picture? | Diagonal straight lines in residuals vs fitted values plot for multiple regression
It seems that on some its subrange your dependent variable is constant or is exactly linearly dependent on the predictor(s). Let's have two correlated variables, X and Y (Y is dependent). The scatterp |
24,750 | Diagonal straight lines in residuals vs fitted values plot for multiple regression | It's not surprising you don't see the pattern in the histogram, the odd pattern spans quite a bit of the range of the histogram and represents only a few data points in each bin. You really need to find out which data points those are and look at them. You could use the predicted values and residuals to find them easy enough. Once you find the values start investigating why those ones might be special.
Having said that, this particular pattern is only special because it's long. If you look carefully at your residuals plot and your quantile plot you'll see it repeats but that it's smaller sequences. Perhaps it really just is an anomaly. Or perhaps it really is a pattern that repeats. But, you're going to have to find where it is in the raw data and examine it in order to have any hope of understanding it at all.
To give you a bit of help, the quantile-quantile plot suggests you have a bunch of identical residuals. It's possible that it could be a coding error. I can generate something similar in R with...
x <- c(rnorm(50), rep(-0.2, 10), rep(0, 4))
qqnorm(x); qqline(x)
Note the two flat spots in the line. However, it seems more complex than that because there's an implication that the identical residuals are coming across a range of predictors.
It's not surprising you don't see the pattern in the histogram, the odd pattern spans quite a bit of the range of the histogram and represents only a few data points in each bin. You really need to f |
24,751 | Diagonal straight lines in residuals vs fitted values plot for multiple regression | It looks like you are using R. If so, note that you can identify points on a scatterplot using ?identify. I think there are several things going on here. First, you have a very influential point on the plot of LN_RT_vol_in ~ LN_AT_vol_in (the highlighted one) at about (.2, 1.5). This is very likely to be the standardized residual that's about -3.7. The effect of that point will be to flatten the regression line, tilting it more horizontal than the sharply upward line you otherwise would have gotten. An effect of that is that all your residuals will be rotated counterclockwise relative to where they would otherwise have been located within the residual ~ predicted plot (at least when thinking in terms of that covariate and ignoring the other one).
Nonetheless, the apparent straight line of residuals that you see would still be there, as they exist somewhere in the 3-dimensional cloud of your original data. They may be hard to find in either of the marginal plots. You can use the identify() function to help, and you can also use the rgl package to create a dynamic 3D scatterplot that you can rotate freely with your mouse. However, note that the straight line residuals are all below 0 in their predicted value, and have below 0 residuals (i.e., they are below the fitted regression line); that gives you a big hint for where to look. Looking again at your plot of LN_RT_vol_in ~ LN_AT_vol_in, I think I may see them. There is a fairly straight cluster of points running diagonally down and to the left from about (-.01, -1.00) at the lower edge of the cloud of points in that region. I suspect those are the points in question.
In other words, the residuals look that way because they are that way somewhere within the data space already. In essence, this is what @ttnphns is suggesting, but I don't think it's quite a constant in any of the original dimensions--it's a constant in a dimension at an angle to your original axes. I further agree with @MichaelChernick that this apparent straightness in the residual plot is probably harmless, but that your data are not really very normal. They are somewhat normal-ish, however, and you seem to have a decent number of data, so the CLT may cover you, but you may want to bootstrap just in case. Finally, I would worry that that 'outlier' is driving your results; a robust approach is probably merited.
24,752 | Diagonal straight lines in residuals vs fitted values plot for multiple regression | I would not necessarily say that the histogram is okay. Visually superimposing the best fitting normal on a histogram can be deceptive and your histogram could be sensitive to the choice of bin width. The normal probability plot seems to indicate a large departure from normal and even looking at the histogram there seems to my eye to be slight skewness (higher frequency in the [0,+0.5] bin compared to the [-0.5,0] bin) and severe kurtosis (too large of a frequency in the intervals [-4,-3.5] and [2.5, 3]).
Regarding the pattern you see, it may be coming from selective exploring through the scatterplot. It looks like if you hunt some more you can find two or three more lines nearly parallel to the one you picked out. I think you are reading too much into this. But the nonnormality is a real concern. You have one very large outlier with a residual of nearly -4. Do these residuals come from a least squares fit? I agree that it might be enlightening to look at the fitted line on a scatter plot of the data.
24,753 | How should you handle cell values equal to zero in a contingency table? | A very nice discussion of structural zeros in contingency tables is provided by
West, L. and Hankin, R. (2008), “Exact Tests for Two-Way Contingency Tables with Structural Zeros,” Journal of Statistical Software, 28(11), 1–19.
URL http://www.jstatsoft.org/v28/i11
As the title implies, they implement Fisher’s exact test for two-way contingency tables
in the case where some of the table entries are constrained to be zero.
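For contrast with the structural-zero machinery, the ordinary Fisher exact test for a complete 2x2 table can be written out directly. This sketch is my own illustration, not from West and Hankin's package; the function name is invented, and it uses the common "sum of tables no more probable than the observed one" two-sided convention:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def hyper(x):  # P(first cell = x) under the hypergeometric null
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = hyper(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum over all achievable tables no more probable than the observed one.
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs + 1e-12)

print(fisher_exact_2x2(8, 2, 1, 5))  # ≈ 0.03497
```

West and Hankin's point is precisely that this calculation needs modification when some cells are structurally zero, since tables that put counts in impossible cells must be excluded from the reference set.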
24,754 | How should you handle cell values equal to zero in a contingency table? | Zeros in tables are sometimes classified as structural, i.e. zero by design or by definition, or as random, i.e. a possible value that was observed. In the case of a study where no instances were observed despite being possible, the question often comes up: What is the one-sided 95% confidence interval above zero? This can be sensibly answered. It is, for instance, addressed in "If Nothing Goes Wrong, Is Everything All Right? Interpreting Zero Numerators" Hanley and Lippman-Hand. JAMA. 1983;249(13):1743-45. Their bottom line was that the upper end of the confidence interval around the observed value of zero was 3/n where n was the number of observations. This "rule of 3" has been further addressed in later analyses and to my surprise I found it even has a Wikipedia page. The best discussion I found was by Jovanovic and Levy in the American Statistician. That does not seem to be available in full-text in the searches, but I can report after looking through it a second time that they modified the formula to be 3/(n+1) after sensible Bayesian considerations, which tightens up the CI a bit. There is a more recent review in International Statistical Review (2009), 77, 2, 266–275.
Addenda: After looking more closely at the last citation, above I also remember finding the extensive discussion in Agresti & Coull "The American Statistician", Vol. 52, No. 2 (May, 1998), pp. 119-126 informative. The "Agresti-Coull" intervals are incorporated into various SAS and R functions. One R function with it is binom.confint {package:binom} by Sundar Dorai-Raj.
There are several methods for dealing with situations where an accumulation of "zero" observations distort an otherwise nice, tractable distribution of say costs or health-care usage patterns. These include zero-inflated and hurdle models as described by Zeileis in "Regression Models for Count Data in R". Searching Google also demonstrates that Stata and SAS have facilities to handle such models.
After seeing the citation to Browne (and correcting the Jovanovic and Levy modification), I am adding this snippet from the even more entertaining rejoinder to Browne:
"But as the sample size becomes smaller, prior information becomes even more important since there are so few data points to “speak for themselves.” Indeed, small sample sizes provide not only the most compelling opportunity to think hard about the prior, but an obligation to do so.
"More generally, we would like to take this opportunity to speak out against
the mindless, uncritical use of simple formulas or rules."
And I add the citation to the Winkler, et al paper that was in dispute.
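The rule of 3 is easy to check against the exact binomial bound: with 0 events in n trials, the one-sided 95% upper limit solves (1 - p)^n = 0.05, and 3/n approximates -ln(0.05)/n ≈ 2.996/n. A small numeric sketch (mine, not from the cited papers):

```python
def exact_upper_95(n):
    """Exact one-sided 95% upper bound on the event probability
    when 0 events are observed in n trials: solve (1 - p)^n = 0.05."""
    return 1 - 0.05 ** (1 / n)

for n in (30, 100, 1000):
    print(n, exact_upper_95(n), 3 / n, 3 / (n + 1))
```

The exact bound always sits just below 3/n, which is why the rule is slightly conservative; Jovanovic and Levy's 3/(n+1) tightens it a little while staying above the exact value.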
24,755 | How should you handle cell values equal to zero in a contingency table? | Thomas Wickens, in his excellent book Multiway Contingency Table Analysis for the Social Sciences, offers a different suggestion from the ones already proposed. He distinguishes between random zeros, "which are accidents of sampling and whose treatment largely consists of adjustments to the degrees of freedom (chapter 5, p. 120, "Empty Cells")," from structural voids or zeros, "which lack a complete factorial structure and whose analysis requires a modification of the concept of independence" (chapter 10, p. 246).
Chapter 10 is titled "Structurally Incomplete Tables" and considers the treatment of data in which certain cells are a priori excluded from consideration. "Examples of this include hospital admissions by gender: although pregnant men may have a cell in the contingency table, none are observed," (p. 247).
Most importantly, "If one treats the impossible cells (structural zeros) as frequencies of zero, they assert themselves as dependencies in a test of independence (p. 246)."
What one wants to do is ignore the impossible cells in any test of independence or association. The way to do this is to estimate the appropriate model on the full contingency table (including the structural zeros) and then subtract the sum of the chi-square values associated with the zero cells from the total chi-square test. This generates a reduced chi-square test of independence only for the reduced contingency table.
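A literal sketch of the subtraction Wickens describes (my own illustration; he additionally adjusts the degrees of freedom for the dropped cells, which this snippet does not do). Since a structural-zero cell has observed count 0, its contribution to the full-table chi-square is simply its expected count:

```python
def reduced_chi_square(observed, structural):
    """Independence chi-square for a two-way table, minus the
    contributions of cells flagged as structural zeros.
    structural[i][j] is True for impossible cells."""
    row_tot = [sum(r) for r in observed]
    col_tot = [sum(c) for c in zip(*observed)]
    n = sum(row_tot)
    total = reduced = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            contrib = (obs - exp) ** 2 / exp
            total += contrib
            if not structural[i][j]:
                reduced += contrib
    return total, reduced

obs = [[10, 0], [5, 20]]                 # made-up counts
mask = [[False, True], [False, False]]   # cell (0, 1) is impossible
total, reduced = reduced_chi_square(obs, mask)
```

For this toy table the dropped contribution equals the expected count of the impossible cell, 10*20/35 ≈ 5.71.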
24,756 | Reasons besides prediction to build models? | Reason 17. Write a paper.
Sort-of just kidding but not really. There seems to be a bit of overlap between some of his points (e.g. 1, 5, 6, 12, 14).
24,757 | Reasons besides prediction to build models? | Save money
I build mathematical/statistical models of cellular mechanisms. For example, how a particular protein affects cellular ageing. The role of the model is mainly prediction, but also to save money. It's far cheaper to employ a single modeller than (say) a few wet-lab biologists with the associated equipment costs. Of course modelling doesn't fully replace the experiment, it just aids the process.
24,758 | Reasons besides prediction to build models? | For fun!
I'm sure most statisticians/modellers do their job because they enjoy it. Getting paid to do something you enjoy is quite nice!
24,759 | Reasons besides prediction to build models? | dimension reduction
Sometimes there can be too much data, so forming an initial model allows for further analysis.
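As a toy illustration of reduction (mine, not from the answer): 2-D points that lie on a line can be replaced by their coordinate along the leading principal axis, computed here from the closed-form eigenvector angle of the 2x2 covariance matrix:

```python
from math import atan2, cos, sin

def leading_direction(xs, ys):
    """Unit vector of the first principal axis of 2-D data
    (closed-form eigenvector of the 2x2 covariance matrix)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    theta = 0.5 * atan2(2 * sxy, sxx - syy)  # angle of leading eigenvector
    return cos(theta), sin(theta)

# Points exactly on y = 2x reduce to one dimension with no information loss.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x for x in xs]
ux, uy = leading_direction(xs, ys)
scores = [x * ux + y * uy for x, y in zip(xs, ys)]  # 1-D representation
```

With real (noisy) data the projection discards the minor-axis variation, which is exactly the point: a smaller representation that still supports further analysis.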
24,760 | Reasons besides prediction to build models? | regulation
Government agencies require firms to provide reports using certain models. This provides for a degree of standardization in oversight. An example is the use of Value-at-Risk in the financial sector.
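A minimal sketch of the kind of calculation such a report standardizes (illustration only; the returns are invented, and real VaR methodology involves many more choices, e.g. the quantile convention, horizon, and portfolio aggregation):

```python
def historical_var(returns, level=0.95):
    """One-day historical Value-at-Risk: a loss threshold exceeded by
    roughly the worst (1 - level) fraction of past returns."""
    losses = sorted(-r for r in returns)       # losses as positive numbers
    k = min(int(level * len(losses)), len(losses) - 1)
    return losses[k]

# Ten invented daily returns.
rets = [0.01, -0.02, 0.003, -0.05, 0.015, -0.01, 0.02, -0.03, 0.005, 0.012]
print(historical_var(rets))  # 0.05
```

The regulatory value lies less in the particular quantile estimator than in every firm reporting a comparably defined number.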
24,761 | Reasons besides prediction to build models? | Control
A major aspect of the dynamic modelling literature is associated with control. This kind of work spans a lot of disciplines from politics/economics (see, e.g. Stafford Beer), biology (see e.g. N. Wiener's 1948 work on Cybernetics) through to contemporary state space control theory (see for an intro Ljung 1999).
Control is kind of related to Epstein's 9 and 10, and Shane's answers about human judgement / regulation, but I thought it made sense to be explicit. Indeed, at the end of my engineering undergraduate career I would have given you a very concise response to the uses of modelling: control, inference and prediction. I guess inference, by which I mean filtering/smoothing/dimension-reduction etc, is maybe similar to Epstein's points 3 and 8.
Of course in my later years I wouldn't be so bold as to limit the purposes of modelling to control, inference and prediction. Maybe a fourth, covering many of Epstein's points, should be "coercion" - the only way you should "educate the public" is to encourage us to make our own models...
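A toy state-space illustration of control as a modelling goal (entirely my own sketch; the names and numbers are invented): a proportional controller driving a simple first-order system to a setpoint.

```python
def simulate_p_control(setpoint, x0, gain, steps):
    """Discrete proportional control: x[t+1] = x[t] + gain * (setpoint - x[t]).
    With 0 < gain < 1 the tracking error shrinks geometrically."""
    x, history = x0, [x0]
    for _ in range(steps):
        x += gain * (setpoint - x)
        history.append(x)
    return history

traj = simulate_p_control(setpoint=10.0, x0=0.0, gain=0.5, steps=20)
# Error after t steps is (1 - gain)**t times the initial error.
```

The model of the plant (here the trivial identity dynamics) is what lets the controller be designed and its convergence guaranteed before anything is built.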
24,762 | Reasons besides prediction to build models? | This is closely related to some of the others, but:
Eliminate human judgement
Human decision making is subject to many different forces and biases. That means that you not only get different answers to the same question, but you can also end up with really suboptimal outcomes. Examples would be the over-confidence bias or anchoring.
24,763 | Reasons besides prediction to build models? | To take (useful) action.
I'm paraphrasing someone else here, but suppose we built a system of public health around the model that infectious diseases are due to malevolent spirits that spread through contact. The science of microbes may be an infinitely better model, but you could prevent a good number of contagions nonetheless. (I think this was on reading a history of cybernetics, but I can't remember who made the point.)
The point is that, along the lines of "all models are wrong, but some are useful", we need to formulate models and refine them in order to undertake any useful actions with lasting consequences. Otherwise, we might as well flip coins.
24,764 | Reasons besides prediction to build models? | Repetitive problems that involve some form of benefit / cost
In my field, we model the same set of variables across different locations, time frames, and magnitudes.
24,765 | Reasons besides prediction to build models? | In my opinion, 16 reasons are too many: the specification is too fine, and the reasons overlap at times. Instead, I would personally streamline them into broad groups. We can classify study objectives into 3 main categories: single hypothesis testing, exploratory study, and prediction.
24,766 | Is the "Bayesian approach" about prior beliefs, viewing parameters as random variables, or both? | The mathematical concept of a "random variable" does not require a belief in randomness
You seem to be reading more into the mathematical concept of a "random variable" than is actually contained in the concept. Mathematically, the concept of a random variable is just a mapping from a sample space in a probability space to the real numbers, complex numbers, or some other set of values for a quantity of interest. This means that every random variable has a probability distribution and every quantity with a probability distribution is a random variable. So in fact these are not very separate concepts; they are identical. Treated purely as a mathematical concept, this does not necessarily entail any particular metaphysical properties of the quantity, and in particular, it does not imply that the quantity is actually random in an aleatory sense. Perhaps this is a bit naughty and confusing on the part of the mathematics community, and I can see why it would lead to misunderstandings, but it arises largely for historical reasons relating to the evolution of probability theory.
The most commonly applied paradigm for Bayesian statistics treats probability as an epistemological concept that describes our own uncertainty in a quantity, but does not take a position on metaphysical issues relating to determinism, indeterminism, and randomness. There is a good treatment of the philosophical and mathematical foundations of Bayesian statistics in Bernardo and Smith (1994), which discusses the epistemological interpretation of probability in the "subjective Bayesian paradigm". This approach is agnostic on the metaphysical issue of whether or not randomness actually exists in nature --- either way we need a tool to describe our own uncertainty about quantities, and probability does this job.
24,767 | Is the "Bayesian approach" about prior beliefs, viewing parameters as random variables, or both? | I like to view Bayes rule intuitively as a situation where we know that some (not directly measured) unknown value B follows a known probability distribution (e.g. it is sampled from a population with a reasonably well know distribution) and there is another known value A that has a statistical relationship with the value of B.
(Image from the question: Bayes' Theorem Intuition)
In this picture of Bayes rule, the parameters A and B are shown to follow a particular joint distribution. In an application of the rule, often A is measured/observed and B is unknown.
So we consider B to follow a certain (known/estimated) distribution, because it can be considered as being sampled from a larger population with a certain distribution, but that doesn't mean that B doesn't have a fixed value. It is just that this value is unknown.
Example: Say, B, could be the 'intelligence' of a person and A could be the score on an intelligence test. The joint distribution expresses that people with a same intelligence B might score differently on the intelligence test A. So people with a particular score A are not to be considered as having intelligence A, but instead they are considered to follow a posterior distribution B|A.
without actually believing that the parameters are "random variables" i.e they do have a single "true" value, but there's just uncertainty about what that value is.
Even though the value of B might be fixed, it is in a certain sense random to us when we don't know what the value actually is. In that case we describe the value B in terms of what we do know: that B follows a particular known probability distribution.
The randomness expresses uncertainty.
That can be either for a single entity having variations all the time (according to some distribution).
Or it is a single entity with a fixed value; having been sampled from a population, that fixed value is random to us because we do not know it.
Having a fixed value, when that value is unknown, we can treat it as random. Maybe the world is deterministic or maybe it is not, but for our analyses does it matter?
24,768 | Is the "Bayesian approach" about prior beliefs, viewing parameters as random variables, or both? | You can find a very good answer to your question in the Who Are The Bayesians? thread and How exactly do Bayesians define (or interpret?) probability? that discusses the specific way how Bayesian approach probability, which is one of the core concepts of the approach.
But making the answer more specific: it is not about latent variables, because there are many non-Bayesian models with latent variables. There is a whole class of latent variable models that can be treated as Bayesian models but do not have to be (e.g. you can find their parameters with maximum likelihood). It's more about using priors and a specific definition of probability, as you can learn from the threads mentioned above.
24,769 | Is the "Bayesian approach" about prior beliefs, viewing parameters as random variables, or both? | There are cases where frequentist and Bayesian inference converge on the same answer, so within these limits, one could argue that the distinctions are only philosophical.
I feel that the wording "incorporating prior beliefs" already implies that the parameter you are inferring is a random variable.
Your "prior belief" has to be a probability distribution rather than a fixed value, otherwise it would be a "present certainty" rather than "prior belief," and it wouldn't make sense to perform inference if you already know what the answer is.
"Incorporating" implies that it needs to contribute to your answer. If I thought 1+1=3 and observe that it is 2, my original 3 is entirely discarded and not incorporated into my observation of 2. Our prior beliefs are "incorporated" via multiplication in Bayes' theorem.
A probability distribution, whether multiplied by another probability distribution or a fixed value, equals another probability distribution. Hence, our answer is in the form of a probability distribution and is therefore a random variable by definition. If we insist that the answer is a fixed value in some way, how can we claim that we've multiplied in our prior beliefs as defined above?
I am having trouble understanding your "concrete scenario" because I don't know how you could actually "use a prior that reflects our uncertainty" in practice, other than via Bayes' theorem. Allow me to ignore that for the time being. You then propose that "the resulting probability distribution of the inferred parameter is simply arising because of uncertainty, not inherent randomness in the parameter." Allow me to be even less rigorous in my mathematical terminology... I would say that "the resulting distribution" you speak of has some variance, and this final variance consists of two components: A) the variance originating from your prior probability distribution, and B) the variance originating from your sampling distribution. Your claim about variance "simply arising because of uncertainty, not inherent randomness in the parameter" can only apply to component B of the final variance, not component A. Since there is some component of variance remaining in your final answer, even assuming your claim is true, your final answer must still be a random variable.
24,770 | Is probability equal about the mean? | No.
In your integral equations, $\lambda$ is the median, not the mean. It may be the case that the median and mean are equal (such as a normal distribution), but they do not have to be.
As a counterexample, consider $X\sim \mathrm{Exp}(1)$: its mean is $1$, but $P(X\le 1)=1-e^{-1}\approx 0.632\ne \tfrac12$. The point with probability one-half on each side is the median, $\ln 2\approx 0.693$.
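A quick numerical check of this counterexample, using only the standard library: the CDF of $\mathrm{Exp}(1)$ is $F(x)=1-e^{-x}$, so the probability of falling below the mean and the location of the median follow directly.

```python
from math import exp, log

# Exp(1): CDF F(x) = 1 - e^{-x}, mean = 1, median = ln 2
mean = 1.0
p_below_mean = 1.0 - exp(-mean)  # P(X <= E[X]) = 1 - 1/e
median = log(2.0)                # the point where F = 1/2
print(round(p_below_mean, 4), round(median, 4))  # 0.6321 0.6931
```

About 63% of the mass lies below the mean; the probability is split evenly about the median, not the mean.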
24,771 | Is probability equal about the mean? | I'm not surprised that you struggle with the proof, because this does not hold.
As a simple counterexample (with support that is really the whole real line), consider a mixture of two normals with different means and unequal weights. For instance, $0.25\times N(0,0.1)+0.75\times N(1,0.1)$ has a mean of $0.75$, but:
> library(EnvStats)
> # p.mix is the weight on the second component, N(1, 0.1)
> pnormMix(q=0.75,mean1=0,sd1=.1,mean2=1,sd2=.1,p.mix=0.75)
[1] 0.2546572
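Computing $P(X\le E[X])$ for the stated mixture directly, with only the standard library (writing the normal CDF via `erf`), shows it is far from $\tfrac12$:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    # Normal CDF via the error function
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

w1, w2 = 0.25, 0.75                 # mixture weights as stated in the text
mean = w1 * 0.0 + w2 * 1.0          # E[X] = 0.75
p = w1 * norm_cdf(mean, 0.0, 0.1) + w2 * norm_cdf(mean, 1.0, 0.1)
print(round(p, 4))  # 0.2547 -- nowhere near 0.5
```

With these weights most of the mass sits in the component centred at 1, so only about a quarter of the probability lies below the mean.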
24,772 | Is probability equal about the mean? | It is not true since, as others have said, the median does not have to be equal to the mean.
What is true with $\mathbb E[X]=\lambda$, if you use the cumulative distribution function $F(x)$, is
$$\int_{-\infty}^{\lambda} F(x)\,dx = \int_\lambda^{\infty}(1-F(x))\,dx $$ so with the density function
$$\int_{x=-\infty}^{\lambda}\int_{y=-\infty}^{x} f(y)\,dy\,dx = \int_{x=\lambda}^{\infty}\int_{y=x}^{\infty} f(y)\,dy\,dx $$
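The identity is easy to check numerically. A rough midpoint-rule sketch for $X\sim\mathrm{Exp}(1)$ (so $\lambda=\mathbb E[X]=1$ and $F(x)=1-e^{-x}$ for $x>0$); both sides should come out near $e^{-1}\approx 0.3679$:

```python
from math import exp

def F(x):
    # CDF of Exp(1); E[X] = 1
    return 1.0 - exp(-x) if x > 0 else 0.0

def integral(f, a, b, n=100_000):
    # crude midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

lam = 1.0
left = integral(F, 0.0, lam)                        # F vanishes below 0
right = integral(lambda x: 1.0 - F(x), lam, 40.0)   # tail beyond 40 is negligible
print(round(left, 4), round(right, 4))  # 0.3679 0.3679
```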
24,773 | Is probability equal about the mean? | No, but it does happen in some cases, for instance the Gaussian.
In fact, you have defined the median: the value above which half of the population (or of the dataset) lies, and below which the other half lies.
You have also pointed to a nice qualitative property of probability distributions:
Consider positive numbers drawn from a "heavy-tailed" distribution, like wealth in a population of individuals. The more inequality there is (that is, a large population of poor people and some very wealthy outliers), the lower this median will be compared to the mean. This defines a shape parameter which is qualitatively important for describing probability distribution functions.
24,774 | Is using deciles to find correlation a statistically valid approach? | 0. The correlation (0.0775) is small but (statistically) significantly different from 0. That is, it looks like there really is correlation, it's just very small/weak (equivalently, there's a lot of noise around the relationship).
1. What averaging within bins does is reduce the variation in the data (the $\sigma/\sqrt{n}$ effect for standard error of a mean), which means that you artificially inflate the weak correlation. Also see this (somewhat) related issue.
2. Sure, fewer bins means more data gets averaged, reducing noise, but the wider they are, the "fuzzier" the average becomes in each bin because the mean isn't quite constant - there's a trade-off. While one might derive a formula to optimize the correlation under an assumption of linearity and the distribution of the $x$'s, it wouldn't take full account of the somewhat exploitable effect of noise in the data. The easy way is to just try a whole variety of different bin boundaries until you get what you like. Don't forget to try varying the bin-widths and bin-origins. That strategy can occasionally prove surprisingly useful with densities, and that kind of occasional advantage can be carried over to functional relationships - perhaps enabling you to get exactly the result you hoped for.
3. Yes. Possibly start with this search, then perhaps try synonyms.
4. This is a good place to start; it's a very popular book aimed at non-statisticians.
5. (more seriously:) I'd suggest smoothing (such as via local polynomial regression/kernel smoothing, say) as one way to investigate relationships. It depends on what you want, exactly, but this can be a valid approach when you don't know the form of a relationship, as long as you avoid the data-dredging issue.
There's a popular quote, whose originator appears to be Ronald Coase:
"If you torture the data enough, nature will always confess." | Is using deciles to find correlation a statistically valid approach? | 0. The correlation (0.0775) is small but (statistically) significantly different from 0. That is, it looks like there really is correlation, it's just very small/weak (equivalently, there's a lot of n | Is using deciles to find correlation a statistically valid approach?
24,775 | Is using deciles to find correlation a statistically valid approach? | Perhaps you would benefit from an exploratory tool. Splitting the data into deciles of the x coordinate appears to have been performed in that spirit. With modifications described below, it's a perfectly fine approach.
Many bivariate exploratory methods have been invented. A simple one proposed by John Tukey (EDA, Addison-Wesley 1977) is his "wandering schematic plot." You slice the x-coordinate into bins, erect a vertical boxplot of the corresponding y data at the median of each bin, and connect the key parts of the boxplots (medians, hinges, etc.) into curves (optionally smoothing them). These "wandering traces" provide a picture of the bivariate distribution of the data and allow immediate visual assessment of correlation, linearity of relationship, outliers, and marginal distributions, as well as robust estimation and goodness-of-fit evaluation of any nonlinear regression function.
To this idea Tukey added the thought, consistent with the boxplot idea, that a good way to probe the distribution of data is to start at the middle and work outwards, halving the amount of data as you go. That is, the bins to use need not be cut at equally-spaced quantiles, but instead should reflect the quantiles at the points $2^{-k}$ and $1-2^{-k}$ for $k=1, 2, 3, \ldots$.
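For concreteness, those halving cut points can be computed directly. Here is an illustrative Python transcription of the same arithmetic (the answer's own implementation is the R code further down), for the example's $n = 1449$:

```python
import numpy as np

n = 1449                                  # sample size from the example
k_max = int(np.floor(np.log2(n / 10)))    # stop once the outer bins hold ~10 points
q = 2.0 ** -np.arange(2, k_max + 1)       # 1/4, 1/8, ..., 2^-k_max
probs = np.concatenate([q[::-1], [0.5], 1 - q])
print(probs)   # symmetric cut points: 1/128, ..., 1/4, 1/2, 3/4, ..., 127/128
```

Each step outward from the median halves the bin population, which is why the extreme boxplots in the figure are so much narrower than the central ones.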
To display the varying bin populations we can make each boxplot's width proportional to the amount of data it represents.
The resulting wandering schematic plot would look something like this. Data, as developed from the data summary, are shown as gray dots in the background. Over this the wandering schematic plot has been drawn, with the five traces in color and the boxplots (including any outliers shown) in black and white.
The nature of the near-zero correlation becomes immediately clear: the data twist around. Near their center, ranging from $x=-4$ to $x=4$, they have a strong positive correlation. At extreme values, these data exhibit curvilinear relationships that tend on the whole to be negative. The net correlation coefficient (which happens to be $-0.074$ for these data) is close to zero. However, insisting on interpreting that as "nearly no correlation" or "significant but low correlation" would be the same error spoofed in the old joke about the statistician who was happy with her head in the oven and feet in the icebox because on average the temperature was comfortable. Sometimes a single number just won't do to describe the situation.
Alternative exploratory tools with similar purposes include robust smooths of windowed quantiles of the data and fits of quantile regressions using a range of quantiles. With the ready availability of software to perform these calculations they have perhaps become easier to execute than a wandering schematic trace, but they do not enjoy the same simplicity of construction, ease of interpretation, and broad applicability.
The following R code produced the figure and can be applied to the original data with little or no change. (Ignore the warnings produced by bplt (called by bxp): it complains when it has no outliers to draw.)
#
# Data
#
set.seed(17)
n <- 1449
x <- sort(rnorm(n, 0, 4))
s <- spline(quantile(x, seq(0,1,1/10)), c(0,.03,-.6,.5,-.1,.6,1.2,.7,1.4,.1,.6),
xout=x, method="natural")
#plot(s, type="l")
e <- rnorm(length(x), sd=1)
y <- s$y + e # ($ interferes with MathJax processing on SE)
#
# Calculations
#
q <- 2^(-(2:floor(log(n/10, 2))))
q <- c(rev(q), 1/2, 1-q)
n.bins <- length(q)+1
bins <- cut(x, quantile(x, probs = c(0,q,1)))
x.binmed <- by(x, bins, median)
x.bincount <- by(x, bins, length)
x.bincount.max <- max(x.bincount)
x.delta <- diff(range(x))
cor(x,y)
#
# Plot
#
par(mfrow=c(1,1))
b <- boxplot(y ~ bins, varwidth=TRUE, plot=FALSE)
plot(x,y, pch=19, col="#00000010",
main="Wandering schematic plot", xlab="X", ylab="Y")
for (i in 1:n.bins) {
invisible(bxp(list(stats=b$stats[,i, drop=FALSE],
n=b$n[i],
conf=b$conf[,i, drop=FALSE],
out=b$out[b$group==i],
group=1,
names=b$names[i]), add=TRUE,
boxwex=2*x.delta*x.bincount[i]/x.bincount.max/n.bins,
at=x.binmed[i]))
}
colors <- hsv(seq(2/6, 1, 1/6), 3/4, 5/6)
temp <- sapply(1:5, function(i) lines(spline(x.binmed, b$stats[i,],
method="natural"), col=colors[i], lwd=2))
24,776 | Is using deciles to find correlation a statistically valid approach? | I do not believe that binning is a scientific approach to the problem. It is information-losing and arbitrary. Rank (ordinal; semiparametric) methods are far better and do not lose information. Even if one were to settle on decile binning, the method is still arbitrary and non-reproducible by others, simply because of the large number of definitions that are used for quantiles in the case of ties in the data. And as alluded to in the nice data torture comment above, Howard Wainer has a nice paper showing how to find bins that can produce a positive association, and find bins that can produce a negative association, from the same dataset:
@Article{wai06fin,
author = {Wainer, Howard},
title = {Finding what is not there through the unfortunate
binning of results: {The} {Mendel} effect},
journal = {Chance},
year = 2006,
volume = 19,
number = 1,
pages = {49-56},
annote = {can find bins that yield either positive or negative
association;especially pertinent when effects are small;``With four
parameters, I can fit an elephant; with five, I can make it wiggle its
trunk.'' - John von Neumann}
}
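To make the rank-based alternative concrete: Spearman's correlation is just Pearson's correlation applied to the ranks, so it is invariant to any monotone transformation of either variable and loses no ordinal information. A small Python/numpy sketch with arbitrary simulated data (the exponential transform is just one example of a monotone nonlinearity):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.exp(x)                     # perfectly monotone, strongly nonlinear

def ranks(a):
    # ranks 0..n-1 (no ties for continuous data)
    return np.argsort(np.argsort(a))

pearson = np.corrcoef(x, y)[0, 1]
spearman = np.corrcoef(ranks(x), ranks(y))[0, 1]
print(pearson, spearman)          # spearman is exactly 1, pearson is not
```

No binning choices enter the calculation, so two analysts applying it to the same data get the same answer.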
24,777 | Is using deciles to find correlation a statistically valid approach? | I found the localgauss package very useful for this.
https://cran.r-project.org/web/packages/localgauss/index.html
The package contains
Computational routines for estimating and visualizing local Gaussian
parameters. Local Gaussian parameters are useful for characterizing
and testing for non-linear dependence within bivariate data.
Example:
library(localgauss)
x=rnorm(n=1000)
y=x^2 + rnorm(n=1000)
lgobj = localgauss(x,y)
plot(lgobj)
Result:
24,778 | Is using deciles to find correlation a statistically valid approach? | Splitting the data into deciles based on the observed X ("Entry Point Quality") appears to be a generalization of an old method first proposed by Wald and later by others for situations wherein both X and Y are subject to error. (Wald split the data into two groups. Nair & Shrivastava and Bartlett split it into three.) It is described in section 5C of Understanding Robust and Exploratory Data Analysis, edited by Hoaglin, Mosteller and Tukey (Wiley, 1983). However, a lot of work on such "Measurement Error" or "Error in Variables Models" has been done since then. The textbooks that I've looked at are Measurement Error: Models, Methods and Applications by John Buonaccorsi (CRC Press, 2010) and Measurement Error Models by Wayne Fuller (Wiley, 1987).
Your situation may be somewhat different because your scatterplot leads me to suspect that both observations are random variables and I don't know whether they each contain measurement error. What do the variables represent?
24,779 | Is there an R implementation to some mixed models quantile regression statistical procedure? | The extent to which one can answer your question depends on what sort of study you have in mind. Roger Koenker has done some work on quantile regression for longitudinal or panel data. Some details, a paper, and an early set of R code is available from Roger's website.
Do note the message on that webpage that it is now easier to do the methods discussed in the paper using qrss() in the quantreg package, shrinking fixed effects using the lasso penalty.
24,780 | Is there an R implementation to some mixed models quantile regression statistical procedure? | Recently, the lqmm package "Linear Quantile Mixed Models" has been uploaded on CRAN. Although I have never used it, the lqmm package seems to do what you want.
This presentation from the useR! 2011 conference shows some examples of the package. Here is a description of the package taken from the useR! 2011 conference abstracts:
Conditional quantile regression (QR) pertains to the estimation of
unknown quantiles of an outcome as a function of a set of covariates
and a vector of fixed regression coefficients. In the last few years,
the need for extending the capabilities of QR for independent data to
deal with clustered sampling designs (e.g., repeated measures) has led
to several and quite distinct approaches. Here, I consider the
likelihood-based approach that hinges on the strict relationship
between the weighted L₁ norm problem associated with a conditional QR
model and the asymmetric Laplace distribution (Geraci and Bottai,
2007).
In this presentation, I will illustrate the use of the R package lqmm
to perform QR with mixed (fixed and random) effects for a two-level
nested model. The estimation of the fixed regression coefficients and
of the random effects' covariance matrix is based on a combination of
Gaussian quadrature approximations and optimization algorithms. The
former include Gauss-Hermite and Gauss-Laguerre quadratures for,
respectively, normal and double-exponential (i.e., symmetric Laplace)
random effects; the latter include a modified compass search algorithm
and general purpose optimizers (optim and optimize). Modelling and
inferential issues are detailed in Geraci and Bottai (2011) (a
preliminary draft is available upon request). The package also
provides commands for the case of independent data.
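The "strict relationship" with the asymmetric Laplace distribution mentioned in the abstract comes down to the check (pinball) loss: the constant that minimizes expected check loss at level $\tau$ is the $\tau$-quantile. A numerical sketch in Python/numpy (the exponential distribution, the grid, and the seed are arbitrary illustrative choices):

```python
import numpy as np

def check_loss(u, tau):
    # the check ("pinball") loss underlying quantile regression
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(2)
y = rng.exponential(size=20_000)
tau = 0.75

# grid-search the constant c minimizing mean check loss;
# the minimizer is the sample tau-quantile
grid = np.linspace(0.0, 5.0, 2001)
c_star = grid[np.argmin([check_loss(y - c, tau).mean() for c in grid])]

print(c_star, np.quantile(y, tau))   # both near -log(1 - 0.75) ~ 1.386
```

Maximizing an asymmetric Laplace likelihood is equivalent to minimizing this loss, which is what makes the likelihood-based mixed-model formulation possible.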
24,781 | Is there an R implementation to some mixed models quantile regression statistical procedure? | I have uploaded to the CRAN a package called qrLMM available here
http://cran.r-project.org/web/packages/qrLMM/index.html
where it does exactly what you are looking for. In a paper to be submitted soon, we prove that we obtain better estimates (lower bias and standard errors) in all scenarios than the package lqmm from Geraci (2014). I hope it will be useful for some future research.
24,782 | Complex regression plot in R | Does the picture below look like what you want to achieve?
Here's the updated R code, following your comments:
do.it <- function(df, type="confidence", ...) {
require(ellipse)
lm0 <- lm(y ~ x, data=df)
xc <- with(df, xyTable(x, y))
df.new <- data.frame(x=seq(min(df$x), max(df$x), 0.1))
pred.ulb <- predict(lm0, df.new, interval=type)
pred.lo <- predict(loess(y ~ x, data=df), df.new)
plot(xc$x, xc$y, cex=xc$number*2/3, xlab="x", ylab="y", ...)
abline(lm0, col="red")
lines(df.new$x, pred.lo, col="green", lwd=1.5)
lines(df.new$x, pred.ulb[,"lwr"], lty=2, col="red")
lines(df.new$x, pred.ulb[,"upr"], lty=2, col="red")
lines(ellipse(cor(df$x, df$y), scale=c(sd(df$x),sd(df$y)),
centre=c(mean(df$x),mean(df$y))), lwd=1.5, col="green")
invisible(lm0)
}
set.seed(101)
n <- 1000
x <- rnorm(n, mean=2)
y <- 1.5 + 0.4*x + rnorm(n)
df <- data.frame(x=x, y=y)
# take a bootstrap sample
df <- df[sample(nrow(df), nrow(df), rep=TRUE),]
do.it(df, pch=19, col=rgb(0,0,.7,.5))
And here is the ggplotized version
produced with the following piece of code:
xc <- with(df, xyTable(x, y))
df2 <- cbind.data.frame(x=xc$x, y=xc$y, n=xc$number)
df.ell <- as.data.frame(with(df, ellipse(cor(x, y),
scale=c(sd(x),sd(y)),
centre=c(mean(x),mean(y)))))
library(ggplot2)
ggplot(data=df2, aes(x=x, y=y)) +
geom_point(aes(size=n), alpha=.6) +
stat_smooth(data=df, method="loess", se=FALSE, color="green") +
stat_smooth(data=df, method="lm") +
geom_path(data=df.ell, colour="green", size=1.2)
It could be customized a little bit more by adding model fit indices, like Cook's distance, with a color shading effect.
24,783 | Complex regression plot in R | For point 1 just use the cex parameter on plot to set the point size.
For instance
x = rnorm(100)
plot(x, pch=20, cex=abs(x))
To have multiple graphs in one plot use par(mfrow=c(numrows, numcols)) to have an evenly spaced layout or layout to make more complex ones.
24,784 | What is the difference between functional data analysis and high dimensional data analysis | Functional data often involves different questions. I've been reading Functional Data Analysis by Ramsay and Silverman, and they spend a lot of time discussing curve registration, warping functions, and estimating derivatives of curves. These tend to be very different questions from those asked by people interested in studying high-dimensional data.
24,785 | What is the difference between functional data analysis and high dimensional data analysis | Yes and no. At the theoretical level, both cases can use similar techniques and frameworks (an excellent example being Gaussian process regression).
The critical difference is the assumptions used to prevent overfitting (regularization):
In the functional case, there is usually some assumption of smoothness, in other words, values occurring close to each other should be similar in some systematic way. This leads to the use of techniques such as splines, loess, Gaussian processes, etc.
In the high-dimensional case, there is usually an assumption of sparsity: that is, only a subset of the dimensions will have any signal. This leads to techniques aiming at identifying those dimensions (Lasso, LARS, slab-and-spike priors, etc.)
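As a concrete instance of the sparsity mechanism, the Lasso's coordinate-wise updates apply a soft-thresholding operator (the proximal map of the L1 penalty), which sets small coefficients exactly to zero. A minimal Python/numpy sketch with arbitrary example values:

```python
import numpy as np

def soft_threshold(z, lam):
    # prox of the L1 penalty: shrink toward 0, exact zeros inside [-lam, lam]
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([-3.0, -0.5, 0.0, 0.2, 2.5])
print(soft_threshold(z, 1.0))   # small entries become exactly 0
```

This exact zeroing is what makes the Lasso a dimension-selection device, in contrast to the smoothness penalties of the functional case, which shrink without selecting.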
UPDATE:
I didn't really think about wavelet/Fourier methods, but yes, the thresholding techniques used for such methods are aiming for sparsity in the projected space. Conversely, some high-dimensional techniques assume a projection onto a lower-dimensional manifold (e.g. principal component analysis), which is a type of smoothness assumption.
Intuition behind the formula for the variance of a sum of two variables
Simple answer:
The variance involves a square: $$Var(X) = E[(X - E[X])^2]$$
So, your question boils down to the factor 2 in the square identity:
$$(a+b)^2 = a^2 + b^2 + 2ab$$
Which can be understood visually as a decomposition of the area of a square of side $(a+b)$ into the area of the smaller squares of sides $a$ and $b$, in addition to two rectangles of sides $a$ and $b$:
More involved answer:
If you want a mathematically more involved answer, the covariance is a bilinear form, meaning that it is linear in both its first and second arguments, this leads to:
$$\begin{aligned}
Var(A+B) &= Cov(A+B, A+B) \\
&= Cov(A, A+B) + Cov(B, A+B) \\
&= Cov(A,A) + Cov(A,B) + Cov(B,A) + Cov(B,B) \\
&= Var(A) + 2 Cov(A,B) + Var(B)
\end{aligned}$$
In the last line, I used the fact that the covariance is symmetrical:
$$Cov(A,B) = Cov(B,A)$$
To sum up:
It is two because you have to account for both $cov(A,B)$ and $cov(B,A)$.
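The identity can be checked numerically (an illustrative simulation, not from the original answer); with matching normalization (ddof=0) the sample version of the identity holds exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=10_000)
B = 0.6 * A + rng.normal(size=10_000)            # B is correlated with A

lhs = np.var(A + B)                              # Var(A + B), ddof=0
rhs = np.var(A) + np.var(B) + 2 * np.cov(A, B, ddof=0)[0, 1]
print(np.isclose(lhs, rhs))                      # True: the identity is exact
```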
Intuition behind the formula for the variance of a sum of two variables
The set of random variables is a vector space, and many of the properties of Euclidean space can be analogized to them. The standard deviation acts much like a length, and the variance like length squared. Independence corresponds to being orthogonal, while perfect correlation corresponds with scalar multiplication. Thus, variances of independent variables follow the Pythagorean Theorem: $var(A+B) = var(A)+var(B)$.
If they are perfectly correlated, then $std(A+B) = std(A)+std(B)$
Note that this is equivalent to $var(A+B) = var(A)+var(B)+2\sqrt{var(A)var(B)}$
If they are not independent, then they follow a law analogous to the law of cosines: $var(A+B) = var(A)+var(B)+2cov(A,B)$
Note that the general case is one in between complete independence and perfect correlation. If $A$ and $B$ are independent, then $cov(A,B)$ is zero. So the general case is that $var(A+B)$ always has a $var(A)$ term and a $var(B)$ term, and then it has some variation on the $2\sqrt{var(A)var(B)}$ term; the more correlated the variables are, the larger this third term will be. And this is precisely what $2cov(A,B)$ is: it's $2\sqrt{var(A)var(B)}$ times the correlation $r$ of $A$ and $B$.
$var(A+B) = var(A)+var(B)+MeasureOfCorrelation*PerfectCorrelationTerm$
where $MeasureOfCorrelation = r$ and $PerfectCorrelationTerm=2\sqrt{var(A)var(B)}$
Put in other terms, if $r = correl(A,B)$, then
$\sigma_{A+B}^2 = \sigma_A^2+\sigma_B^2+ 2r\sigma_A\sigma_B$
Thus, $r$ is analogous to the $\cos$ in the Law of Cosines.
Intuition behind the formula for the variance of a sum of two variables
I would add that what you cited is not the definition of $Var(A+B)$, but rather a consequence of the definitions of $Var$ and $Cov$. So the answer to why that equation holds is the calculation carried out by byouness. Your question may really be why that makes sense; informally:
How much $A+B$ will "vary" depends on four factors:
How much $A$ would vary on its own.
How much $B$ would vary on its own.
How much $A$ will vary as $B$ moves around (or varies).
How much $B$ will vary as $A$ moves around.
Which brings us to $$Var(A+B)=Var(A)+Var(B)+Cov(A,B)+Cov(B,A)$$ $$=Var(A)+Var(B)+2Cov(A,B)$$
because $Cov$ is a symmetric operator.
Combining PCA, feature scaling, and cross-validation without training-test data leakage
You need to think of feature scaling, then PCA, then your regression model as an unbreakable chain of operations (as if it is a single model), upon which the cross-validation is applied. This is quite tricky to code yourself but considerably easy in sklearn via Pipelines. A pipeline object is a cascade of operators on the data that is regarded (and acts) as a seemingly single model conforming to the fit and predict paradigm in the library.
Combining PCA, feature scaling, and cross-validation without training-test data leakage
For the benefit of possible readers who don't use the scikit pipeline:
Centering and scaling the training subset results not only in the centered and scaled training data but also in vectors describing the offset and scaling factor. When predicting new cases, this offset and scale are applied to the new case, and the resulting centered and scaled data are then passed to the principal component prediction,
which in turn applies the rotation determined from fitting the training data.
and so on, until the final prediction is reached.
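A minimal numpy sketch of that bookkeeping (variable names and data are illustrative, not from the original answer): the offset, scale, and rotation are all estimated on the training subset and then replayed, in the same order, on the new cases.

```python
import numpy as np

rng = np.random.default_rng(2)
X_train = rng.normal(size=(50, 4))
X_new = rng.normal(size=(5, 4))

mu = X_train.mean(axis=0)            # offset, from training data only
sigma = X_train.std(axis=0)          # scaling factor, from training data only
Z_train = (X_train - mu) / sigma

# Rotation (principal axes) also determined from the training fit
_, _, Vt = np.linalg.svd(Z_train, full_matrices=False)
scores_train = Z_train @ Vt.T

# New cases receive the *training* offset, scale, and rotation
scores_new = ((X_new - mu) / sigma) @ Vt.T
print(scores_new.shape)              # (5, 4)
```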
Combining PCA, feature scaling, and cross-validation without training-test data leakage
For anyone who might stumble upon this question, I have a solution using scikit-learn's Pipeline, as recommended in the accepted answer. Below is the code I used to get this to work for my problem, chaining together StandardScaler, PCA and Ridge regression into a cross-validated grid-search:
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()),
                 ("reduce_dims", PCA()),
                 ("ridge", Ridge())
                 ])

param_grid = dict(reduce_dims__n_components = [0.5, 0.75, 0.95],
                  ridge__alpha = np.logspace(-5, 5, 10),
                  ridge__fit_intercept = [True, False],
                  )

grid = GridSearchCV(pipe, param_grid=param_grid, cv=10)
grid.fit(X, y)
Combining PCA, feature scaling, and cross-validation without training-test data leakage
I've encountered some problems with pipelines (for example, if I want to apply my own custom function, it is a real hazard), so here is what I use instead:
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

seed, cv = 0, 5  # random seed and number of CV folds

X_train, X_test, y_train, y_test = train_test_split(X, Y, stratify=Y, random_state=seed, test_size=0.2)

# Scaler and PCA are fitted on the training subset only
sc = StandardScaler().fit(X_train)
X_train = sc.transform(X_train)
X_test = sc.transform(X_test)

pca = PCA().fit(X_train)
X_train = pca.transform(X_train)
X_test = pca.transform(X_test)

eclf = SVC()
parameters_grid = {
    'C': (0.1, 1, 10)
}

grid_search = GridSearchCV(eclf, parameters_grid, cv=cv, scoring={'auc': 'roc_auc'},
                           refit='auc', return_train_score=True)
grid_search.fit(X_train, y_train)

best_model = eclf.set_params(**grid_search.best_params_).fit(X_train, y_train)
test_auc_score = roc_auc_score(y_test, best_model.predict(X_test))
I realize it is a bit long, but it is clear what you are doing.
Exchangeability and IID random variables
I think the word "identically distributed" is mostly misleading when not used to discuss independent random variables. Consider the following example:
$$\begin{pmatrix}X_1 \\ X_2 \\ X_3\end{pmatrix} \sim \mathrm{N}\left(\begin{pmatrix}0 \\ 0 \\ 0\end{pmatrix},\begin{pmatrix}1 &0 & 0 \\ 0&1&0.1 \\ 0&0.1&1\end{pmatrix} \right)$$
The components of the vector $(X_1, X_2, X_3)^T$ are neither independent, nor exchangeable, but they are identically distributed: the marginal distributions are all standard normal: $X_i \sim \mathrm{N}(0,1)$, $i = 1,2,3$.
Next example:
$$\begin{pmatrix}Y_1 \\ Y_2 \\ Y_3\end{pmatrix} \sim \mathrm{N}\left(\begin{pmatrix}0 \\ 0 \\ 0\end{pmatrix},\begin{pmatrix}1 &0.1 & 0.1 \\ 0.1&1&0.1 \\ 0.1&0.1&1\end{pmatrix} \right)$$
The components are now not independent but exchangeable. The marginal distributions are again identical, standard normal: $Y_i \sim \mathrm{N}(0,1)$, $i = 1,2,3$.
We have in the end the following implications:
$$ \text{i.i.d. } \Rightarrow \text{ exchangeability } \Rightarrow \text{marginals identical}.$$
The counterexamples above show that the converse implications are all wrong.
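For a zero-mean multivariate normal, exchangeability can be read off the covariance matrix: the distribution is exchangeable exactly when the covariance matrix is invariant under every simultaneous permutation of its rows and columns. A quick check on the two examples (a sketch, not part of the original answer):

```python
import numpy as np
from itertools import permutations

Sigma_X = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.1],
                    [0.0, 0.1, 1.0]])            # first example
Sigma_Y = np.array([[1.0, 0.1, 0.1],
                    [0.1, 1.0, 0.1],
                    [0.1, 0.1, 1.0]])            # second example

def is_exchangeable(S):
    """Invariance of S under every permutation matrix P: P S P^T == S."""
    eye = np.eye(len(S))
    return all(np.allclose(S, eye[list(p)] @ S @ eye[list(p)].T)
               for p in permutations(range(len(S))))

print(is_exchangeable(Sigma_X))   # False: swapping X_1 and X_2 changes the law
print(is_exchangeable(Sigma_Y))   # True
```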
Exchangeability and IID random variables
To answer this question you need to understand the "representation theorem" for exchangeable sequences of random variables (first stated by de Finetti and extended by Hewitt and Savage). This (brilliant) theorem says that every infinite sequence of exchangeable random variables can be considered as a sequence of conditionally IID random variables, with distribution equal to the limiting empirical distribution of the sequence. This means that every sequence of conditionally IID random variables is exchangeable and every infinite sequence of exchangeable random variables is conditionally IID. Conditional independence does not imply marginal independence, and it is common for exchangeable sequences of random variables to be positively correlated (but, in the infinite case, they cannot be negatively correlated).
In regard to your question, this means that independence is not a requirement for exchangeability, but conditional independence is. Most sequences of exchangeable random variables are positively correlated, owing to the fact that conditional independence generally induces an information link between the random variables.
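The mechanism is easy to simulate (an illustrative sketch, not from the original answer): draw a latent success probability once per sequence, then conditionally IID Bernoulli draws given it; unconditionally, the coordinates are exchangeable and positively correlated.

```python
import numpy as np

rng = np.random.default_rng(3)
n_seq = 20_000

p = rng.beta(2, 2, size=n_seq)                    # latent parameter, drawn once
X = rng.binomial(1, p[:, None], size=(n_seq, 2))  # IID coin flips *given* p

corr = np.corrcoef(X[:, 0], X[:, 1])[0, 1]
print(corr)   # ~ 0.2 = Var(p) / (E[p](1 - E[p])) for Beta(2, 2)
```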
Exchangeability and IID random variables
There is no need for exchangeable random variables to be independent. For instance, if the vector $X$ follows a multivariate t distribution with mean zero, identity matrix as a scale matrix, and $q$ degrees of freedom, then its components are exchangeable, uncorrelated, and identically distributed, but not independent. Of course, by the exchangeability theorem its components are conditionally iid (conditionally on a Gamma(q/2, q/2) in fact), but they are not independent.
How to measure dispersion in word frequency data?
For probabilities (proportions or shares) $p_i$ summing to 1, the family $\sum p_i^a [\ln (1/p_i)]^b$ encapsulates several proposals for measures (indexes, coefficients, whatever) in this territory. Thus
$a = 0, b = 0$ returns the number of distinct words observed, which is the simplest to think about, regardless of its ignoring differences among the probabilities. This is always useful if only as context. In other fields, this could be the number of firms in a sector, the number of species observed at a site, and so forth. In general, let's call this the number of distinct items.
$a = 2, b = 0$ returns the Gini-Turing-Simpson-Herfindahl-Hirschman-Greenberg sum of squared probabilities, otherwise known as the repeat rate or purity or match probability or homozygosity. It is often reported as its complement or its reciprocal, sometimes then under other names, such as impurity or heterozygosity. In this context, it is the probability that two words selected randomly are the same, and its complement $1 - \sum p_i^2$ the probability that two words are different. The reciprocal $1 / \sum p_i^2$ has an interpretation as the equivalent number of equally common categories; this is sometimes called the numbers equivalent. Such an interpretation can be seen by noting that $k$ equally common categories (each probability thus $1/k$) imply $\sum p_i^2 = k (1/k)^2 = 1/k$ so that the reciprocal of the probability is just $k$. Picking a name is most likely to betray the field in which you work. Each field honours their own forebears, but I commend match probability as simple and most nearly self-defining.
$a = 1, b = 1$ returns Shannon entropy, often denoted $H$ and already signalled directly or indirectly in previous answers. The name entropy has stuck here, for a mix of excellent and not so good reasons, even occasionally physics envy. Note that $\exp(H)$ is the numbers equivalent for this measure, as seen by noting in similar style that $k$ equally common categories yield $H = \sum^k (1/k) \ln [1/(1/k)] = \ln k$, and hence $\exp(H) = \exp(\ln k)$ gives you back $k$. Entropy has many splendid properties; "information theory" is a good search term.
The formulation is found in I.J. Good. 1953. The population frequencies of species and the estimation of population parameters. Biometrika 40: 237-264. www.jstor.org/stable/2333344.
Other bases for logarithm (e.g. 10 or 2) are equally possible according to taste or precedent or convenience, with just simple variations implied for some formulas above.
Independent rediscoveries (or reinventions) of the second measure are manifold across several disciplines and the names above are far from a complete list.
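The whole family above fits in one small helper function (a sketch, not part of the original answer); for $k$ equally common categories the three special cases return $k$, $1/k$, and $\ln k$, as claimed above.

```python
import numpy as np

def diversity(p, a, b):
    """Sum of p_i^a * (ln(1/p_i))^b over the categories with p_i > 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.sum(p ** a * np.log(1 / p) ** b))

probs = np.full(5, 1 / 5)          # five equally common categories
print(diversity(probs, 0, 0))      # 5.0 -> number of distinct items
print(diversity(probs, 2, 0))      # ~ 0.2 -> match probability (Simpson)
print(diversity(probs, 1, 1))      # ~ ln 5 = 1.609... -> Shannon entropy H
```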
Tying together common measures in a family is not just mildly appealing mathematically. It underlines that there is a choice of measure depending on the relative weights applied to scarce and common items, and so reduces any impression of adhockery created by a small profusion of apparently arbitrary proposals. The literature in some fields is weakened by papers and even books based on tenuous claims that some measure favoured by the author(s) is the best measure that everyone should be using.

My calculations indicate that examples A and B are not so different except on the first measure:
----------------------------------------------------------------------
| Shannon H exp(H) Simpson 1/Simpson #items
----------+-----------------------------------------------------------
A | 0.656 1.927 0.643 1.556 14
B | 0.684 1.981 0.630 1.588 9
----------------------------------------------------------------------
(Some may be interested to note that the Simpson named here (Edward Hugh Simpson, 1922- ) is the same as that honoured by the name Simpson's paradox. He did excellent work, but he wasn't the first to discover either thing for which he is named, which in turn is Stigler's paradox, which in turn....)
For probabilities (proportions or shares) $p_i$ summing to 1, the family $\sum p_i^a [\ln (1/p_i)]^b$ encapsulates several proposals for measures (indexes, coefficients, whatever) in this territory. Thus
$a = 0, b = 0$ returns the number of distinct words observed, which is the simplest to think about, regardless of its ignoring differences among the probabilities. This is always useful if only as context. In other fields, this could be the number of firms in a sector, the number of species observed at a site, and so forth. In general, let's call this the number of distinct items.
$a = 2, b = 0$ returns the Gini-Turing-Simpson-Herfindahl-Hirschman-Greenberg sum of squared probabilities, otherwise known as the repeat rate or purity or match probability or homozygosity. It is often reported as its complement or its reciprocal, sometimes then under other names, such as impurity or heterozygosity. In this context, it is the probability that two words selected randomly are the same, and its complement $1 - \sum p_i^2$ the probability that two words are different. The reciprocal $1 / \sum p_i^2$ has an interpretation as the equivalent number of equally common categories; this is sometimes called the numbers equivalent. Such an interpretation can be seen by noting that $k$ equally common categories (each probability thus $1/k$) imply $\sum p_i^2 = k (1/k)^2 = 1/k$ so that the reciprocal of the probability is just $k$. Picking a name is most likely to betray the field in which you work. Each field honours their own forebears, but I commend match probability as simple and most nearly self-defining.
$a = 1, b = 1$ returns Shannon entropy, often denoted $H$ and already signalled directly or indirectly in previous answers. The name entropy has stuck here, for a mix of excellent and not so good reasons, even occasionally physics envy. Note that $\exp(H)$ is the numbers equivalent for this measure, as seen by noting in similar style that $k$ equally common categories yield $H = \sum^k (1/k) \ln [1/(1/k)] = \ln k$, and hence $\exp(H) = \exp(\ln k)$ gives you back $k$. Entropy has many splendid properties; "information theory" is a good search term.
The formulation is found in I.J. Good. 1953. The population frequencies of species and the estimation of population parameters. Biometrika 40: 237-264.
www.jstor.org/stable/2333344.
Other bases for logarithm (e.g. 10 or 2) are equally possible according to taste or precedent or convenience, with just simple variations implied for some formulas above.
Independent rediscoveries (or reinventions) of the second measure are manifold across several disciplines and the names above are far from a complete list.
Tying together common measures in a family is not just mildly appealing
mathematically. It underlines that there is a choice of measure depending
on the relative weights applied to scarce and common items, and so reduces
any impression of adhockery created by a small profusion of apparently
arbitrary proposals. The literature in some fields is weakened by papers and even books based on tenuous claims that some measure favoured by the author(s) is the best measure that everyone should be using.
My calculations indicate that examples A and B are not so different except on the first measure:

  ------------------------------------------------------------
    |  Shannon H    exp(H)    Simpson    1/Simpson    #items
  --+---------------------------------------------------------
  A |    0.656      1.927      0.643       1.556        14
  B |    0.684      1.981      0.630       1.588         9
  ------------------------------------------------------------
(Some may be interested to note that the Simpson named here (Edward Hugh Simpson, 1922- ) is the same as that honoured by the name Simpson's paradox. He did excellent work, but he wasn't the first to discover either thing for which he is named, which in turn is Stigler's paradox, which in turn....)
How to measure dispersion in word frequency data?
I don't know if there's a common way of doing it, but this looks to me analogous to inequality questions in economics. If you treat each word as an individual and their count as comparable to income, you're interested in comparing where the bag of words is between the extremes of every word having the same count (complete equality), or one word having all the counts and everyone else zero. The complication being that the "zeros" don't show up, you can't have less than a count of 1 in a bag of words as usually defined ...
The Gini coefficient of A is 0.18, and of B is 0.43, which shows that A is more "equal" than B.
library(ineq)
A <- c(3, 2, 2, rep(1, 11))
B <- c(9, 2, rep(1, 7))
Gini(A)
Gini(B)
I'm interested in any other answers too. Obviously the old-fashioned variance in counts would be a starting point too, but you'd have to scale it somehow to make it comparable for bags of different sizes and hence different mean counts per word.
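For anyone without the ineq package to hand, the same numbers can be reproduced with a short Python sketch of the standard Gini formula for sorted counts (an illustration, not the package's implementation):

```python
def gini(counts):
    """Gini coefficient of a list of non-negative counts:
    0 = complete equality, values near 1 = extreme inequality."""
    x = sorted(counts)
    n = len(x)
    total = sum(x)
    # Standard rank-weighted formula on the sorted values.
    weighted = sum(i * xi for i, xi in enumerate(x, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

A = [3, 2, 2] + [1] * 11
B = [9, 2] + [1] * 7
print(round(gini(A), 2))  # 0.18
print(round(gini(B), 2))  # 0.43
```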
How to measure dispersion in word frequency data?
This article has a review of standard dispersion measures used by linguists. They are listed as single-word dispersion measures (they measure the dispersion of words across sections, pages, etc.) but could conceivably be used as word-frequency dispersion measures. The standard statistical ones seem to be:
max-min
standard deviation
coefficient of variation $CV$
chi-squared $\chi^2$
The classics are:
Juilland's $D = 1-\frac{CV}{\sqrt{n-1}}$
Rosengren's $S = N\frac{(\sum_{i=1}^{n}\sqrt{n_i})^2}{n}$
Carroll's $D_2 = (\log_2N - \frac{\sum_{i=1}^n{n_i \log_2 n_i}}{N})/{\log_2(n)}$
Lyne's $D_3 = 1-\frac{\chi^2}{4N}$
Where $N$ is the total number of words in the text, $n$ is the number of distinct words, and $n_i$ the number of occurrences of the i-th word in the text.
The text also mentions two more measures of dispersion, but they rely on the spatial positioning of the words, so this is inapplicable to the bag of words model.
Note: I changed the original notation from the article to make the formulas more consistent with standard notation.
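As a rough Python illustration of two of these measures in the notation above (note: sources differ on whether Juilland's CV uses the population or the sample standard deviation; the population SD is assumed here):

```python
import math

def juilland_d(counts):
    """Juilland's D = 1 - CV / sqrt(n - 1), assuming the population SD
    (conventions for the SD vary between sources)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / n  # population variance
    cv = math.sqrt(var) / mean
    return 1.0 - cv / math.sqrt(n - 1)

def carroll_d2(counts):
    """Carroll's D2 = (log2 N - sum(n_i log2 n_i) / N) / log2 n."""
    N = sum(counts)
    n = len(counts)
    h = math.log2(N) - sum(x * math.log2(x) for x in counts) / N
    return h / math.log2(n)

# Example counts (borrowed from the R answer above).
A = [3, 2, 2] + [1] * 11
print(round(juilland_d(A), 2))  # 0.87
print(round(carroll_d2(A), 2))  # 0.97
```

Both measures run from 0 (maximal concentration) to 1 (perfectly even counts), which makes them comparable across bags of different sizes.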
How to measure dispersion in word frequency data?
The first thing I would do is calculate Shannon's entropy. You can use the R package infotheo, function entropy(X, method="emp"). If you wrap natstobits(H) around it, you will get the entropy of this source in bits.
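For comparison, the same empirical ("emp") plug-in estimate is easy to write by hand, and the nats-to-bits conversion is just division by ln 2 (a Python sketch, not the infotheo implementation):

```python
import math
from collections import Counter

def empirical_entropy_nats(samples):
    """Empirical (plug-in) Shannon entropy of a sample, in nats."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def nats_to_bits(h):
    """Convert entropy from nats to bits (divide by ln 2)."""
    return h / math.log(2)

words = ["a", "a", "b", "b", "c", "c", "d", "d"]
H = empirical_entropy_nats(words)
print(nats_to_bits(H))  # 2.0 bits for four equally common words
```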
How to measure dispersion in word frequency data?
One possible measure of equality you could use is the scaled Shannon entropy. If you have a vector of proportions $\boldsymbol{p} \equiv (p_1, ... , p_n)$ then this measure is given by:
$$\bar{H}(\boldsymbol{p}) \equiv - \frac{\sum p_i \ln p_i}{\ln n}.$$
This is a scaled measure with range $0 \leqslant \bar{H}(\boldsymbol{p}) \leqslant 1$ with extreme values occurring at the extremes of equality or inequality. Shannon entropy is a measure of information, and the scaled version allows comparison between cases with different numbers of categories.
Extreme Inequality: All the count is in some category $k$. In this case we have $p_i = \mathbb{I}(i=k)$ and this gives us $\bar{H}(\boldsymbol{p}) = 0$.
Extreme Equality: All the counts are equal over all categories. In this case we have $p_i = 1/n$ and this gives us $\bar{H}(\boldsymbol{p}) = 1$.