When the Central Limit Theorem and the Law of Large Numbers disagree
I believe it should be clear by now that "the CLT approach" gives the right answer. Let's pinpoint exactly where the "LLN approach" goes wrong.

Starting with the finite statements, it is clear that we can equivalently either subtract $\sqrt{n}$ from both sides, or multiply both sides by $1/\sqrt{n}$. We get $$\mathbb{P}\left(\frac{1}{\sqrt{n}} \sum_{i=1}^n X_i \leq \sqrt{n}\right)=\mathbb{P}\left(\frac{1}{\sqrt{n}} \sum_{i=1}^n(X_i-1) \leq 0\right) = \mathbb{P}\left(\frac{1}{n} \sum_{i=1}^nX_i \leq 1\right)$$ So if the limit exists, it will be identical for all three expressions. Setting $Z_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n(X_i-1)$, we have, using distribution functions, $$\mathbb{P}\left(\frac{1}{\sqrt{n}} \sum_{i=1}^n X_i \leq \sqrt{n}\right)= F_{Z_n}(0) = F_{\bar X_n}(1)$$ ...and it is true that $\lim_{n\to \infty}F_{Z_n}(0)= \Phi(0) = 1/2$.

The thinking in the "LLN approach" goes as follows: "We know from the LLN that $\bar X_n$ converges in probability to a constant. And we also know that convergence in probability implies convergence in distribution. So, $\bar X_n$ converges in distribution to a constant." Up to here we are correct. Then we state: "therefore, limiting probabilities for $\bar X_n$ are given by the distribution function of the constant-at-$1$ random variable", $$F_1(x) = \cases {1 \;\;\;\;x\geq 1 \\ 0 \;\;\;\;x<1} \implies F_1(1) = 1$$ ...so $\lim_{n\to \infty} F_{\bar X_n}(1) = F_1(1) = 1$...

...and we have just made our mistake. Why? Because, as @AlexR.'s answer noted, "convergence in distribution" covers only the points of continuity of the limiting distribution function, and $1$ is a point of discontinuity for $F_1$. This means that $\lim_{n\to \infty} F_{\bar X_n}(1)$ may be equal to $F_1(1)$, but it may not be, without negating the "convergence in distribution to a constant" implication of the LLN. And indeed, from the CLT approach we know what the value of the limit must be ($1/2$); I do not know of a way to prove directly that $\lim_{n\to \infty} F_{\bar X_n}(1) = 1/2$.

Did we learn anything new? I did. The LLN asserts that $$\lim_{n \rightarrow \infty} \mathbb{P} \Big( |\bar{X}_n - 1| \leqslant \varepsilon \Big) = 1 \quad \quad \text{for all } \varepsilon > 0$$ $$\implies \lim_{n \rightarrow \infty} \Big[ \mathbb{P} \Big( 1-\varepsilon <\bar{X}_n \leq 1\Big) + \mathbb{P} \Big( 1 <\bar{X}_n \leq 1+\varepsilon\Big)\Big] = 1$$ $$\implies \lim_{n \rightarrow \infty} \Big[ \mathbb{P} \Big(\bar{X}_n \leq 1\Big) + \mathbb{P} \Big( 1 <\bar{X}_n \leq 1+\varepsilon\Big)\Big] = 1$$ (the last step holds because $\mathbb{P}(\bar X_n \leq 1-\varepsilon) \to 0$, again by the LLN). The LLN does not say how the probability is allocated within the interval $(1-\varepsilon, 1+\varepsilon)$. What I learned is that, in this class of convergence results, the probability is, in the limit, allocated equally on the two sides of the centerpoint of the collapsing interval.

The general statement here is: assume $$X_n\to_p \theta,\;\;\; h(n)(X_n-\theta) \to_d D(0,V)$$ where $D$ is some rv with distribution function $F_D$. Then $$\lim_{n\to \infty} \mathbb P[X_n \leq \theta] = \lim_{n\to \infty}\mathbb P[h(n)(X_n-\theta) \leq 0] = F_D(0)$$ ...which may not be equal to $F_{\theta}(\theta) = 1$ (the value at $\theta$ of the distribution function of the constant rv $\theta$).

Also, this is a strong example of the fact that, when the distribution function of the limiting random variable has discontinuities, "convergence in distribution to a random variable" may describe a situation where "the limiting distribution" disagrees with the "distribution of the limiting random variable" at the discontinuity points. Strictly speaking, the limiting distribution at the continuity points is that of the constant random variable. At the discontinuity points we may be able to calculate the limiting probabilities as "separate" entities.
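A quick numerical check of this limit is easy to run. Here is a Python sketch (Poisson(1) is just one convenient choice of an i.i.d. distribution with mean and variance both equal to $1$; the sum of $n$ such variables is Poisson($n$), which lets us draw the sum directly):

```python
import numpy as np

rng = np.random.default_rng(0)
reps = 100_000

# P(X-bar_n <= 1) = P(S_n <= n), where S_n = X_1 + ... + X_n.
# For X_i ~ Poisson(1), S_n ~ Poisson(n), so we can sample S_n directly.
for n in [10, 100, 1000, 10_000]:
    s_n = rng.poisson(n, size=reps)
    print(n, np.mean(s_n <= n))  # drifts toward 1/2, not toward 1
```

The estimated probabilities drift down toward $1/2$ as $n$ grows, matching the CLT answer rather than the naive "LLN answer" of $1$.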
Example of distribution where large sample size is necessary for central limit theorem
Some books state that a sample size of 30 or higher is necessary for the central limit theorem to give a good approximation for $\bar{X}$.

This common rule of thumb is pretty much completely useless. There are non-normal distributions for which $n=2$ will do okay and non-normal distributions for which a much larger $n$ is insufficient, so without an explicit restriction on the circumstances, the rule is misleading. In any case, even if it were roughly true, the required $n$ would vary depending on what you were doing. Often you get good approximations near the centre of the distribution at small $n$, but need much larger $n$ to get a decent approximation in the tail.

Edit: See the answers to this question for numerous but apparently unanimous opinions on that issue, and some good links. I won't labour the point though, since you already clearly understand it.

I am wanting to see some examples of distributions where even with a large sample size (maybe 100 or 1000 or higher), the distribution of the sample mean is still fairly skewed.

Examples are relatively easy to construct; one easy way is to find an infinitely divisible distribution that is non-normal and divide it up. If you have one that will approach the normal when you average or sum it up, start at the boundary of 'close to normal' and divide it as much as you like. So for example:

Consider a Gamma distribution with shape parameter $\alpha$. Take the scale as 1 (the scale doesn't matter). Let's say you regard $\text{Gamma}(\alpha_0,1)$ as just "sufficiently normal". Then a distribution for which you need 1000 observations to be sufficiently normal has a $\text{Gamma}(\alpha_0/1000,1)$ distribution. So if you feel that a Gamma with $\alpha=20$ is just 'normal enough', then divide $\alpha=20$ by 1000 to get $\alpha = 0.02$: the average of 1000 of those will have the shape of the first pdf (but not its scale).
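To check the arithmetic of the construction, here is a small simulation sketch (the shape $\alpha = 0.02$ and $n = 1000$ are the values from the example above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Average n = 1000 draws from Gamma(0.02, 1): the sum is Gamma(20, 1), so
# the mean keeps the Gamma(20) shape, with skewness 2/sqrt(20) ~ 0.45.
n, reps = 1000, 10_000
means = rng.gamma(0.02, size=(reps, n)).mean(axis=1)
dev = means - means.mean()
skew = np.mean(dev**3) / np.std(means)**3
print(round(skew, 2))  # near 0.45; still clearly skewed after n = 1000
```

The skewness of the sample mean matches that of a $\text{Gamma}(20,1)$ variable, $2/\sqrt{20} \approx 0.45$, i.e. exactly the "just normal enough" shape we started from.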
If you instead choose an infinitely divisible distribution that doesn't approach the normal, like say the Cauchy, then there may be no sample size at which sample means have approximately normal distributions (or, in some cases, they might still approach normality, but you don't have a $\sigma/\sqrt n$ effect for the standard error). @whuber's point about contaminated distributions is a very good one; it may pay to try some simulation with that case and see how things behave across many such samples.
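For the Cauchy case, a one-line simulation makes the failure visible (this is a sketch of the standard fact that the mean of $n$ standard Cauchy draws is again standard Cauchy):

```python
import numpy as np

rng = np.random.default_rng(4)

# The mean of n standard Cauchy draws is again standard Cauchy for every n,
# so the typical size of the sample mean never shrinks; the median of
# |standard Cauchy| is exactly 1.
for n in [10, 100, 10_000]:
    means = rng.standard_cauchy((2_000, n)).mean(axis=1)
    print(n, round(float(np.median(np.abs(means))), 2))  # stays near 1
```

No matter how large $n$ gets, the spread of the sample mean stays put: there is no $\sigma/\sqrt{n}$ shrinkage at all.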
Example of distribution where large sample size is necessary for central limit theorem
In addition to the many great answers provided here, Rand Wilcox has published excellent papers on the subject and has shown that our typical checks of the adequacy of the normal approximation are quite misleading (and underestimate the sample size needed). He makes the excellent point that the mean can be approximately normal, but that is only half the story when we do not know $\sigma$. When $\sigma$ is unknown, we typically use the $t$ distribution for tests and confidence limits. The sample variance may be very, very far from a scaled $\chi^2$ distribution, and the resulting $t$ ratio may look nothing like a $t$ distribution when $n=30$. Simply put, non-normality messes up $s^2$ more than it messes up $\bar{X}$.
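To illustrate the point numerically (my own sketch, not Wilcox's setup, using Exponential(1) data as one convenient skewed example):

```python
import numpy as np

rng = np.random.default_rng(2)

# One-sample t ratio with Exponential(1) data (true mean 1) at n = 30.
# 1.699 is the 5% critical value of t(29), so the lower-tail rejection
# rate would be 5% if the t approximation were good.
n, reps = 30, 100_000
x = rng.exponential(1.0, size=(reps, n))
t = (x.mean(axis=1) - 1.0) / (x.std(axis=1, ddof=1) / np.sqrt(n))
print(np.mean(t < -1.699))  # well above the nominal 0.05
```

The $t$ ratio is itself left-skewed here, precisely because $\bar{X}$ and $s$ are positively correlated for right-skewed data, so the lower tail is much fatter than $t(29)$ predicts.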
Example of distribution where large sample size is necessary for central limit theorem
You might find this paper helpful (or at least interesting): http://www.umass.edu/remp/Papers/Smith&Wells_NERA06.pdf

Researchers at UMass actually carried out a study similar to what you're asking: at what sample size do data from certain distributions come to follow a normal distribution, per the CLT? Apparently a lot of data collected for psychology experiments is not anywhere near normally distributed, so the discipline relies pretty heavily on the CLT to do any inference on its statistics.

First they ran tests on data that was uniform, bimodal, and normal. Using the Kolmogorov-Smirnov test, the researchers counted how many of the replications were rejected for normality at the $\alpha = 0.05$ level.

Table 2. Percentage of replications that departed normality based on the KS-test.

              Sample Size
            5    10    15    20    25    30
Normal    100    95    70    65    60    35
Uniform   100   100   100   100   100    95
Bimodal   100   100   100    75    85    50

Oddly enough, 65 percent of the normally distributed data were rejected with a sample size of 20, and even with a sample size of 30, 35% were still rejected.

They then tested several heavily skewed distributions created using Fleishman's power method: $$Y = a + bX + cX^2 + dX^3$$ where $X$ is drawn from the standard normal distribution and $a$, $b$, $c$, and $d$ are constants (note that $a=-c$). They ran the tests with sample sizes up to 300:

Skew   Kurt      a        b       c        d
1.75   3.75   -0.399   0.930   0.399   -0.036
1.50   3.75   -0.221   0.866   0.221    0.027
1.25   3.75   -0.161   0.819   0.161    0.049
1.00   3.75   -0.119   0.789   0.119    0.062

They found that at the highest levels of skewness and kurtosis (1.75 and 3.75), sample sizes of 300 did not produce sample means that followed a normal distribution.

Unfortunately, I don't think this is exactly what you're looking for, but I stumbled upon it and found it interesting, and thought you might too.
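If you want to reproduce the flavour of their experiment, here is a sketch using the most extreme row of coefficients above (skew 1.75, kurtosis 3.75):

```python
import numpy as np

rng = np.random.default_rng(3)

# Fleishman's power method: Y = a + b*X + c*X**2 + d*X**3, X ~ N(0, 1).
# Coefficients from the row with skew 1.75, kurtosis 3.75.
a, b, c, d = -0.399, 0.930, 0.399, -0.036

n, reps = 300, 20_000
x = rng.standard_normal((reps, n))
y = a + b * x + c * x**2 + d * x**3
means = y.mean(axis=1)
dev = means - means.mean()
print(np.mean(dev**3) / np.std(means)**3)  # sample-mean skewness still positive
```

At $n = 300$ the skewness of the sample mean is roughly $1.75/\sqrt{300} \approx 0.10$: small, but measurably nonzero, which is why the KS test (with enough replications) still rejects normality.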
Is it appropriate to use "time" as a causal variable in a DAG?
As a partial answer to this question, I am going to put forward an argument to the effect that time itself cannot be a proper causal variable, but that it is legitimate to use a "time" variable that represents a particular state-of-nature occurring or existing over a specified period of time (which is actually a state variable). These issues are the impetus for the question itself, since my intuition tells me that "time" in a causal model must be a kind of proxy for some kind of state variable.

Time itself cannot be a causal variable

Time is already a component of the concept of causality: The first hurdle is the fact that the concept of causality involves actions, and actions occur over time. Thus, "time" is already baked into the concept of causality. One might therefore regard causality as a concept in which time is a priori inadmissible as an argument variable. To assert that time is a cause of an effect requires time to be admitted both as the asserted causal variable and as a necessary concept for causality itself. (We will see more of the effects of this below.)

If time causes anything, it causes everything: The second hurdle is that causality is generally regarded as requiring a counterfactual condition, which reduces to triviality when time is asserted as the causal variable. If we say that "precondition X causes action Y", the relevant counterfactual condition is that (1) the presence/occurrence of precondition X means that action Y will occur; and (2) in the absence of another cause, the absence of precondition X means that action Y will not occur. But since "will occur" means "will occur over time", the use of "time" as a causal variable adds nothing to the first requirement, and makes the second a tautology.
If precondition X is "the movement of time" then (1) reduces to "the movement of time means that action Y will occur", which logically reduces to "action Y will occur"; and (2) reduces to "the absence of movement of time means that action Y will not occur" (which is a tautology, since action can only occur over time). Under this counterfactual interpretation of causality, an assertion of the time-causality of an action is logically equivalent to an assertion that the action will occur. Thus, we must either conclude that this condition is too weak to constitute causality (i.e., time is not a cause of anything) or that time is the cause of everything.

Pure time-causality is metaphysically equivalent to randomness: Another hurdle occurs when "time" is the only asserted causal variable (i.e., in the case of pure time-causality). The problem is that a change in a variable occurring over time, in the absence of causality from a non-time variable, has traditionally been regarded as the very definition of aleatory randomness --- i.e., non-causality. Thus, to assert that time is the sole cause of an effect is to banish the notion of non-causality (randomness) entirely from metaphysics, and substitute for it a base "cause" that is always present when there is no other cause. Alternatively, one might reasonably assert that a claim of time-causality is equivalent to an assertion of randomness --- i.e., it is an assertion that there are no causes of the change other than the passage of time. If such is the case, then the presence of "time" as a causal variable in a DAG is equivalent to its absence (and thus parsimony counsels that it be excluded). Moreover, the history of the field counsels in favour of keeping the existing terminology of "randomness".
Problems with causal calculus with time as a causal variable: A final hurdle I will mention (there may be more) is that it is difficult to deal with "time" as a causal variable in the causal calculus. In standard causal calculus, we have a $\text{do}(\cdot)$ operator that operates on a causal variable to reflect intervention in the system to change that variable to a chosen value, which may be different from what it would be under passive observation. It is not entirely clear that it is possible to impose an "intervention" on a time variable without running afoul of other philosophical or statistical principles. One could certainly argue that waiting is an intervention that changes time (forward only), but even if it were so interpreted, it cannot be differentiated from passivity, and so arguably it would not be distinct from passive observation. One might instead argue that we could record a large amount of data over different times, and then the "intervention" would be to choose which time values are included in the data for the analysis. That would indeed involve a choice of time periods (over the available data), and so it would seem to constitute an intervention, but it is an epistemic intervention, not a metaphysical one. (It also gives rise to a secondary problem of failing to use all the available data.)

A state variable accruing over time can be a causal variable

DAGs can include variables representing states-of-nature occurring over a prescribed time: There are a number of legitimate causal variables that represent the occurrence of some state or some event over a prescribed period of time. A simple example (hat tip to Carlos in the answer below) is investment of money over time, which yields interest. In this case, the accrual of interest is caused by the fact that money is invested over a period of time, and the longer the investment period, the higher the interest accrued.
In this case, it is legitimate to have a "time" variable that represents the chosen period of time for the investment, and this variable would have a direct causal impact on the accrued interest. Similarly, the "age" variable for a person is a kind of "time" variable (hat tip to AdamO in the answer below), representing the fact that the person has been alive over a specified period of time. Each of these variables is a legitimate causal variable that can be included in a DAG. These variables do not represent the progression of time itself --- they represent the fact that a certain state-of-nature was present over a specified period of time. In many cases, it is a useful shorthand to label such a variable "time", but it is important to bear in mind that it represents a specific state over a period of time, rather than the progression of time itself.

In some sense, every variable is of this kind: Since every possible event or state-of-nature occurs either at a particular point in time or over a period of time, every variable involves some (often implicit) time specification. Nevertheless, there are variables such as "age" or "time invested" that have a more direct connection to time, insofar as the variable represents the amount of accrued time during which a particular state obtained.

Using "time" in a DAG is a shorthand for a state variable accruing over time: If the above argument is correct, it would appear that any use of a "time" variable in a DAG must be shorthand for a variable representing the occurrence of a particular event, or the existence of a particular state-of-nature, over a specified period of time. The progression of time itself is not subject to control or intervention, and cannot be a causal variable, for the reasons described above. However, the prevalence of a particular state-of-nature over a period of time certainly can be a legitimate causal variable that can be included in a DAG.
These points give some basic idea of why the use of "time" as a causal variable is problematic, and what it means to add "time" to a DAG. As you can see, my view is that time itself cannot be a causal variable, but that you can have a "time" variable that actually represents an event or state-of-nature occurring or existing over a period of time. I am open to being convinced to the contrary, but this seems to me to be a sensible resolution of the issue.
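To make the investment example concrete, here is a minimal sketch (the rate $r = 0.05$ and the annual compounding convention are illustrative assumptions):

```python
# Interest accrued is a deterministic function of the state variable
# "time invested" t, not of the progression of time itself.  Annual
# compounding at rate r = 0.05 is an illustrative assumption.
def accrued_interest(principal: float, r: float, t_years: int) -> float:
    return principal * ((1 + r) ** t_years - 1)

# "Intervening" on the state variable (do(t = 10) rather than t = 5)
# changes the effect:
print(round(accrued_interest(1000.0, 0.05, 5), 2))   # 276.28
print(round(accrued_interest(1000.0, 0.05, 10), 2))  # 628.89
```

The causal variable here is the chosen length of the investment period, a controllable state variable, which is why the $\text{do}(\cdot)$ operator applies to it without difficulty.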
Is it appropriate to use "time" as a causal variable in a DAG?
As a partial answer to this question, I am going to put forward an argument to the effect that time itself cannot be a proper causal variable, but it is legitimate to use a "time" variable that repres
Is it appropriate to use "time" as a causal variable in a DAG? As a partial answer to this question, I am going to put forward an argument to the effect that time itself cannot be a proper causal variable, but it is legitimate to use a "time" variable that represents a particular state-of-nature occurring or existing over a specified period of time (which is actually a state variable). These issues are the impetus for the question itself, since my intuition tells me that "time" in a causal model must be a kind of proxy for some kind of state variable. Time itself cannot be a causal variable Time is already a component of the concept of causality: The first hurdle is the fact that the concept of of causality involves actions, and actions occur over time. Thus, "time" is already baked into the concept of causality. One might therefore regard it as a concept where time is a priori inadmissible as an argument variable in the concept. To assert that time is a cause of an effect requires time to be admitted both as the asserted causal variable, and also as a necessary concept for causality itself. (We will see more of the effects of this below.) If time causes anything, it causes everything: The second hurdle is that causality is generally regarded as requiring a counterfactual condition that reduces to triviality in the case where time is asserted as the causal variable. If we say that "precondition X causes action Y", the relevant counterfactual condition is that (1) the presence/occurrence of precondition X means that action Y will occur; and (2) in the absence of another cause, the absence of precondition X means that action Y will not occur. But since "will occur" means "will occur over time", the use of a "time" as a causal variable adds nothing to the first requirement, and makes the second a tautology. 
If precondition X is "the movement of time" then (1) reduces to "the movement of time means that action Y will occur", which logically reduces to "action Y will occur"; and (2) reduces to "the absence of movement of time means that action Y will not occur" (which is a tautology, since action can only occur over time). Under this counterfactual interpretation of causality, an assertion of the time-causality of an action is logically equivalent to an assertion that this action will occur. Thus, we must either conclude that this condition is too weak to constitute causality (i.e., time is not a cause of anything) or that time is the cause of everything. Pure time-causality is metaphysically equivalent to randomness: Another hurdle here occurs when we have a situation where "time" is the only asserted causal variable (i.e., in the case of pure time-causality). The problem is, if any change in a variable occurs over time, in the absence of causality from a non-time variable, this has traditionally been regarded as the very definition of aleatory randomness ---i.e., non-causality. Thus, to assert that time is the sole cause of an effect is to banish the notion of non-causality (randomness) entirely from metaphysics, and substitute it with a base "cause" that is always present if there is no other cause. Alternatively, one might reasonably assert that a claim of time-causality is equivalent to an assertion of randomness ---i.e., it is an assertion that there are no causes to the change, other than the passage of time. If such is the case, then the presence of "time" as a causal variable in a DAG is equivalent to its absence (and thus parsimony counsels that it be excluded). Moreover, the history of the field counsels in favour of keeping the existing terminology of "randomness". 
Problems with causal calculus with time as a causal variable: A final hurdle I will mention (there may be more) is that it is difficult to deal with "time" as a causal variable in the causal calculus. In standard causal calculus, we have a $\text{do}(\cdot)$ operator that operates on a causal variable to reflect intervention into the system to change that variable to a chosen value that may be different from what it would be under passive observation. It is not entirely clear that it is possible to impose an "intervention" for a time variable, without running afoul of other philosophical or statistical principles. One could certainly argue that waiting is an intervention that changes time (forward only), but even if this were so interpreted, it cannot be differentiated from passivity, and so arguably it would not be distinct from passive observation. One might instead argue that we could record a large amount of data over different times, and then the "intervention" would be to choose which time values are included in the data for the analysis. That would indeed involve a choice of time periods (over the available data), and so it would seem to constitute an intervention, but that is an epistemic intervention, not a metaphysical one. (It also gives rise to a secondary problem of failing to use all the available data.)

A state variable accruing over time can be a causal variable

DAGs can include variables representing states-of-nature occurring over a prescribed time: There are a number of legitimate causal variables that represent the occurrence of some state or some event over a prescribed period of time. A simple example (hat tip to Carlos in the answer below) is investment of money over time, which yields interest. In this case, the accrual of interest is caused by the fact that money is invested over a period of time, and the longer the investment period, the higher the interest accrued.
In this case, it is legitimate to have a "time" variable, that represents the chosen period of time for the investment, and this variable would have a direct causal impact on the accrued interest. Similarly, the "age" variable for a person is a kind of "time" variable (hat tip to AdamO in the answer below), representing the fact that the person has been alive over a specified period of time. Each of these variables is a legitimate causal variable that can be included in a DAG. These variables do not represent the progression of time itself --- they represent the fact that a certain state-of-nature was present over a specified period of time. In many cases, it is a useful shorthand to label a variable like this as "time", but it is important to bear in mind that it represents a specific state over a period of time, rather than the progression of time itself.

In some sense, every variable is of this kind: Since every possible event or state-of-nature occurs either at a particular point in time, or over a period of time, every variable involves some (often implicit) time specification. Nevertheless, there are variables such as "age" or "time invested" that have a more direct connection to time, insofar as the variable represents the amount of accrual of time during which a particular state obtained.

Using "time" in a DAG is a shorthand for a state variable accruing over time: If the above argument is correct, it would appear that any use of a "time" variable in a DAG must be a shorthand for a variable representing the occurrence of a particular event or the existence of a particular state-of-nature over a specified period of time. The progression of time itself is not subject to control or intervention, and cannot be a causal variable for the reasons described above. However, the prevalence of a particular state-of-nature over a period of time certainly can be a legitimate causal variable that can be included in a DAG.
These points give some basic idea of why the use of "time" as a causal variable is problematic, and what it means to add "time" to a DAG. As you can see, my view is that time itself cannot be a causal variable, but that you can have a "time" variable that actually represents an event or state-of-nature occurring or existing over a period of time. I am open to being convinced to the contrary, but this seems to me to be a sensible resolution of the issue.
14,406
Is it appropriate to use "time" as a causal variable in a DAG?
I see no problem with this. A simple example from physics: suppose you are interested in modelling the DAG of the temperature of a glass of water. It might look something like: Time does cause the temperature to change. There are mediators in between, but it doesn't matter from this 10,000 foot view. From this DAG, it is logical to include time as a variable in a regression model, as expected. When I was drawing this, I was thinking "are there any interesting confounders of time and temp I could include?" - but no, because nothing, AFAIK, causes time. Turning to the question of interpretation, that's trickier and it might come down to whether you follow Hernan's "no causation without manipulation" vs Pearl's "anything goes" attitude. See some of their recent papers on the topic including Does obesity shorten life? and Does Obesity Shorten Life? Or is it the Soda? On Non-manipulable Causes.
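To make the glass-of-water example concrete, a plausible (but assumed, not stated in the answer) mechanism linking time to temperature is Newton's law of cooling; the sketch below uses illustrative values for the initial and ambient temperatures and the cooling constant.

```python
import math

def water_temp(t, temp0=90.0, temp_env=20.0, k=0.1):
    """Newton's law of cooling: temperature of the water after t minutes.

    A simplified stand-in for the unnamed mediators between time and
    temperature; temp0, temp_env and k are assumed values.
    """
    return temp_env + (temp0 - temp_env) * math.exp(-k * t)

# In this toy mechanism, elapsed time determines the temperature change:
# the water starts hot and monotonically approaches the ambient temperature.
assert water_temp(0) == 90.0
assert water_temp(10) < water_temp(5) < water_temp(0)
```

In a regression of measured temperature on elapsed time, this is the functional form one would be approximating.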
14,407
Is it appropriate to use "time" as a causal variable in a DAG?
Whether "time" is an appropriate variable in a model depends on the phenomenon you are modeling. Thus, as you posed it, your question is about model misspecification, not a fundamental question about causal modeling per se. In some models, "time" (or "year" or "duration in seconds") will be an "appropriate" variable, in others it may not be. To illustrate my point concretely, and since you believe time cannot be a causal variable, I will give you a simple counter-example in which time (duration) is an appropriate causal variable---a model of earnings in a savings account as a function of the time you leave your money invested. Let $Y$ be your earnings, $I$ be the initial investment, and let $T$ be "time", or more precisely, how long you leave your money invested in the savings account (say, measured in months). Then, $Y = f(I, T)$ is an appropriate structural equation for $Y$, and how long you leave your money in the bank does cause how much money you will make. The action $do(T = 6)$ also has a clear meaning in this model (i.e., leave the money invested for 6 months, regardless of other factors). In sum, with this model we can answer interventional and counterfactual questions regarding the effect of time on earnings (what you want from a causal model), and the model does have a clear (and simple) real world interpretation. You may argue that $T$ in the model above is not "truly" what you mean by "time". But then you need to define what time "really" is, as a variable in the context of a specific causal model. Without defining what "time" stands for, what phenomenon is being modeled, and what the model is going to be used for (predictions of interventions?) we can't judge whether "time" is an appropriate variable, or whether it is being modeled appropriately.

An addendum: on variables as causes

In essence, causality is about the modification of (some) mechanisms, while keeping other mechanisms intact.
Thus, if we wanted to be exact, we would need to describe all mechanisms that an action does and doesn't change. This is too demanding for most practical purposes, both describing the action completely, and all of the action's ramifications. Causal models abstract away this complexity by modeling causality in terms of events or variables. So what does it mean to say that variable $X$ "causes" variable $Y$? This is a shortcut: instead of characterizing an action by everything that it changes, we characterize it by its immediate effect. For instance, $P(Y|do(X =x))$ is a shortcut for stating that "the perturbation needed to bring about the event $X=x$ alters the distribution of $Y$ to $P^*(Y)$" and we define this new distribution $P^*(Y):= P(Y|do(X =x))$. Thus, when we say "time" causes something, this is an abstraction of a more complicated description of the process. In the case of the duration of investment, for instance, $do(T = t)$ really stands for "sustaining a specific process for $t$ units of time".
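The structural equation $Y = f(I, T)$ and the intervention $do(T = 6)$ can be sketched in a few lines; the compound-interest form of $f$ and the 1% monthly rate below are illustrative assumptions, not something the answer commits to.

```python
def earnings(investment, months, monthly_rate=0.01):
    """Structural equation Y = f(I, T) for a savings account.

    The compound-interest mechanism and the rate are assumed for
    illustration; any monotone-in-T mechanism would make the same point.
    """
    return investment * ((1 + monthly_rate) ** months - 1)

def do_T(investment, t_fixed):
    """do(T = t): fix the duration regardless of other factors,
    i.e. 'sustain the investing process for t units of time'."""
    return earnings(investment, t_fixed)

# Interventional question: what do I earn under do(T=6) vs do(T=12)?
y_do_6 = do_T(1000, 6)
y_do_12 = do_T(1000, 12)
assert y_do_12 > y_do_6  # duration causally affects earnings
```

Here "time" is a perfectly ordinary causal variable: intervening on it has a well-defined meaning and a well-defined effect on $Y$.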
14,408
Is it appropriate to use "time" as a causal variable in a DAG?
Time almost necessarily is a factor in any causal analysis. In fact, I would say the majority of DAGs include it without the statistician actually explicitly thinking about it. Most often, it's age. Age is time since birth. We all agree this causes mortality. We also unthinkingly model interactions between age and other factors as a form of adjustment: cumulative pack-years of smoking, telomere length, educational achievement, household income, marital status, left-ventricular hypertrophy, et cetera. Yes, age is a form of time. You can also have calendar year: especially when there are interruptions to a time series, you can find massive forms of temporal confounding because a certain intervention or policy was made available that massively disrupts a planned analysis, especially when treatment is allocated in a stepped-wedge, cross-over, or other non-parallel fashion. Even in clinical trials, time-on-study is reflected in a number of important measures. Some drugs are likely to produce toxic effects at their first administration; others cumulatively overwhelm the liver's or kidney's ability to metabolize and eventually lead to organ failure. The Hawthorne effect can have a diminishing impact on the measured safety and efficacy outcomes, as a consequence of learning or becoming accustomed to the study setting. This is illustrated also with the issues of modeling per-protocol and intent-to-treat effects, where non-compliers and non-responders are dropped from the analysis set; by conditioning on their outcomes, you might say you can estimate a "pristine" effect of treatment in an ideal setting where patients comply with and suitably respond to treatment. These are just the age, period, and cohort effects: the three forms of time that the statistician must account for in analyses. As we learn in time series modeling, when stationarity fails to hold, we cannot presume that measures taken repeatedly over time are the same as many measures taken all at once.
The statistician must identify and interpret a causal estimand and account for time in the appropriate, causal fashion.
14,409
Is it appropriate to use "time" as a causal variable in a DAG?
Gravitational time dilation means that time passes more slowly in the vicinity of a large mass. If time can be thus dependent, then it seems likely that time can also be a cause, as it seems arbitrary to permit time one role but not the other.
14,410
General method for deriving the standard error
What you want to find is the standard deviation of the sampling distribution of the mean. I.e., in plain English, the sampling distribution is what you get when you pick $n$ items from your population, add them together, and divide the sum by $n$. We then find the variance of this quantity and get the standard deviation by taking the square root of its variance. So, let the items that you pick be represented by the random variables $X_i, 1\le i \le n$, each of them identically distributed with variance $\sigma^2$. They are independently sampled, so the variance of the sum is just the sum of the variances. $$ \text{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n\text{Var}\left(X_i\right) = \sum_{i=1}^n\sigma^2 = n\sigma^2 $$ Next we divide by $n$. We know in general that $\text{Var}(kY)=k^2 \text{Var}(Y)$, so putting $k=1/n$ we have $$ \text{Var}\left(\frac{\sum_{i=1}^n X_i}{n}\right) = \frac{1}{n^2} \text{Var}\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2} n\sigma^2 = \frac{\sigma^2}{n} $$ Finally take the square root to get the standard deviation $\dfrac{\sigma}{\sqrt{n}}$. When the population standard deviation isn't available the sample standard deviation $s$ is used as an estimate, giving $\dfrac{s}{\sqrt{n}}$. All of the above is true regardless of the distribution of the $X_i$s, but it raises the question of what you actually want to do with the standard error. Typically you might want to construct confidence intervals, and it is then important to assign a probability to constructing a confidence interval that contains the mean. If your $X_i$s are normally distributed, this is easy, because then the sampling distribution is also normally distributed. You can say 68% of samples of the mean will lie within 1 standard error of the true mean, 95% will be within 2 standard errors, etc.
If you have a large enough sample (or a smaller sample and the $X_i$s are not too abnormal) then you can invoke the central limit theorem and say that the sampling distribution is approximately normally distributed, and your probability statements are also approximate. A case in point is estimating a proportion $p$, where you draw $n$ items each from a Bernoulli distribution. The variance of each $X_i$ is $p(1-p)$ and hence the standard error is $\sqrt{p(1-p)/n}$ (the proportion $p$ is estimated using the data). To then jump to saying that approximately some % of samples are within so many standard deviations of the mean, you need to understand when the sampling distribution is approximately normal. Repeatedly sampling from a Bernoulli distribution is the same as sampling from a binomial distribution, and one common rule of thumb is to approximate only when $np$ and $n(1-p)$ are $\ge5$. (See Wikipedia for a more in-depth discussion on approximating the binomial with the normal. See here for a worked example of standard errors with a proportion.) If, on the other hand, your sampling distribution can't be approximated by a normal distribution, then the standard error is a lot less useful. For example, with a very skewed, asymmetric distribution you can't say that the same % of samples would be $\pm1$ standard deviation either side of the mean, and you might want to find a different way to associate probabilities with samples.
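The proportion case above is easy to check by simulation. The sketch below (pure-Python; the choices $p = 0.3$, $n = 100$ and the number of repetitions are arbitrary) compares the empirical standard deviation of many sample proportions against the formula $\sqrt{p(1-p)/n}$.

```python
import math
import random

random.seed(0)
p, n, reps = 0.3, 100, 20_000

# Draw many samples of n Bernoulli(p) trials; record each sample proportion.
props = []
for _ in range(reps):
    successes = sum(1 for _ in range(n) if random.random() < p)
    props.append(successes / n)

# Empirical sd of the sampling distribution of the proportion.
mean = sum(props) / reps
empirical_se = math.sqrt(sum((x - mean) ** 2 for x in props) / reps)

# Theoretical standard error from the formula in the answer.
theoretical_se = math.sqrt(p * (1 - p) / n)

# The two agree closely (both are about 0.046 here).
assert abs(empirical_se - theoretical_se) < 0.005
```

Note that $np = 30$ and $n(1-p) = 70$ here, so the rule of thumb for the normal approximation is comfortably satisfied.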
14,411
General method for deriving the standard error
The standard error is the standard deviation of the statistic (under the null hypothesis, if you're testing). A general method for finding standard error would be to first find the distribution or moment generating function of your statistic, find the second central moment, and take the square root. For example, if you're sampling from a normal distribution with mean $\mu$ and variance $\sigma^2$, the sample mean $\bar{X}=\frac{1}{n}\sum_{i=1}^{n} X_i$ is normally distributed with mean $\mu$ and variance $\sigma^2/n$. This can be derived from three properties: The sum of independent random variables is normal, $\mathrm{E}\left[\sum_{i=1}^{n} a_i X_i\right] = \sum_{i=1}^{n} a_i \mathrm{E}\left[ X_i \right]$, If $X_1$ and $X_2$ are independent, $\mathrm{Var}\left(a_1 X_1 + a_2 X_2 \right) = a_1^2 \mathrm{Var}\left(X_1\right) + a_2^2 \mathrm{Var}\left( X_2 \right)$. Thus the standard error of the sample mean, which is the square root of its variance, is $\sigma/\sqrt{n}$. There are shortcuts, like you don't necessarily need to find the distribution of the statistic, but I think conceptually it's useful to have the distributions in the back of your mind if you know them.
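The claim that $\bar{X} \sim N(\mu, \sigma^2/n)$, and hence that the standard error is $\sigma/\sqrt{n}$, can be verified numerically; this is a pure-Python sketch with assumed values $\mu = 5$, $\sigma = 2$, $n = 25$.

```python
import math
import random

random.seed(1)
mu, sigma, n, reps = 5.0, 2.0, 25, 10_000

# Simulate the sampling distribution of the mean of n N(mu, sigma^2) draws.
means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
         for _ in range(reps)]

grand_mean = sum(means) / reps
sd_of_means = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / reps)

# Theory: the sample mean is N(mu, sigma^2/n), so its sd is sigma/sqrt(n) = 0.4.
assert abs(grand_mean - mu) < 0.05
assert abs(sd_of_means - sigma / math.sqrt(n)) < 0.02
```

The same recipe (find the statistic's distribution, take the second central moment, square-root it) applies to statistics other than the mean, though the distribution is rarely this convenient.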
14,412
What does Theta mean?
It is not a convention, but quite often $\theta$ stands for the set of parameters of a distribution. That was it for plain English; let's show examples instead.

Example 1. You want to study the throw of an old fashioned thumbtack (the ones with a big circular bottom). You assume that the probability that it falls point down is an unknown value that you call $\theta$. You could call a random variable $X$ and say that $X=1$ when the thumbtack falls point down and $X=0$ when it falls point up. You would write the model $$P(X = 1) = \theta \\ P(X = 0) = 1-\theta,$$ and you would be interested in estimating $\theta$ (here, the probability that the thumbtack falls point down).

Example 2. You want to study the disintegration of a radioactive atom. Based on the literature, you know that the amount of radioactivity decreases exponentially, so you decide to model the time to disintegration with an exponential distribution. If $t$ is the time to disintegration, the model is $$f(t) = \theta e^{-\theta t}.$$ Here $f(t)$ is a probability density, which means that the probability that the atom disintegrates in the time interval $(t, t+dt)$ is $f(t)dt$. Again, you will be interested in estimating $\theta$ (here, the disintegration rate).

Example 3. You want to study the precision of a weighing instrument. Based on the literature, you know that the measurements are Gaussian, so you decide to model the weighing of a standard 1 kg object as $$f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp \left\{ -\frac{(x-\mu)^2}{2\sigma^2} \right\}.$$ Here $x$ is the measure given by the scale, $f(x)$ is the density of probability, and the parameters are $\mu$ and $\sigma$, so $\theta = (\mu, \sigma)$. The parameter $\mu$ is the target weight (the scale is biased if $\mu \neq 1$), and $\sigma$ is the standard deviation of the measure every time you weigh the object. Again, you will be interested in estimating $\theta$ (here, the bias and the imprecision of the scale).
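For the first two examples, estimating $\theta$ has a closed form: the maximum-likelihood estimate is the sample proportion for the thumbtack, and one over the mean time for the exponential model. The sketch below simulates both; the "true" values $\theta = 0.6$ and rate $= 2.0$ are assumed for illustration.

```python
import random

random.seed(2)

# Example 1: thumbtack throws with (assumed) true theta = P(point down) = 0.6.
theta = 0.6
throws = [1 if random.random() < theta else 0 for _ in range(10_000)]
theta_hat = sum(throws) / len(throws)  # MLE: the sample proportion

# Example 2: disintegration times with (assumed) true rate theta = 2.0.
rate = 2.0
times = [random.expovariate(rate) for _ in range(10_000)]
rate_hat = len(times) / sum(times)  # MLE: 1 / (mean time to disintegration)

# Both estimates recover the values used to generate the data.
assert abs(theta_hat - theta) < 0.03
assert abs(rate_hat - rate) < 0.1
```

In Example 3 the same idea applies with $\theta = (\mu, \sigma)$: the MLEs are the sample mean and the (uncorrected) sample standard deviation.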
What does Theta mean?
It is not a convention, but quite often $\theta$ stands for the set of parameters of a distribution. That was it for plain English, let's show examples instead. Example 1. You want to study the throw
What does Theta mean? It is not a convention, but quite often $\theta$ stands for the set of parameters of a distribution. That was it for plain English, let's show examples instead. Example 1. You want to study the throw of an old fashioned thumbtack (the ones with a big circular bottom). You assume that the probability that it falls point down is an unknown value that you call $\theta$. You could call a random variable $X$ and say that $X=1$ when the thumbtack falls point down and $X=0$ when it falls point up. You would write the model $$P(X = 1) = \theta \\ P(X = 0) = 1-\theta,$$ and you would be interested in estimating $\theta$ (here, the proability that the thumbtack falls point down). Example 2. You want to study the disintegration of a radioactive atom. Based on the literature, you know that the amount of radioactivity decreases exponentially, so you decide to model the time to disintegration with an exponential distribution. If $t$ is the time to disintegration, the model is $$f(t) = \theta e^{-\theta t}.$$ Here $f(t)$ is a probability density, which means that the probability that the atom disintegrates in the time interval $(t, t+dt)$ is $f(t)dt$. Again, you will be interested in estimating $\theta$ (here, the disintegration rate). Example 3. You want to study the precision of a weighing instrument. Based on the literature, you know that the measurement are Gaussian so you decide to model the weighing of a standard 1 kg object as $$f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp \left\{ -\left( \frac{x-\mu}{2\sigma} \right)^2\right\}.$$ Here $x$ is the measure given by the scale, $f(x)$ is the density of probability, and the parameters are $\mu$ and $\sigma$, so $\theta = (\mu, \sigma)$. The paramter $\mu$ is the target weight (the scale is biased if $\mu \neq 1$), and $\sigma$ is the standard deviation of the measure every time you weigh the object. Again, you will be interested in estimating $\theta$ (here, the bias and the imprecision of the scale).
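The disintegration example above can be made concrete: for the exponential model $f(t) = \theta e^{-\theta t}$, the maximum-likelihood estimate of the rate is the reciprocal of the sample mean. A minimal sketch with simulated data (the true rate, seed, and sample size are illustrative choices, not from the answer):

```python
import numpy as np

# Simulated disintegration times for the exponential model f(t) = theta * exp(-theta * t).
rng = np.random.default_rng(42)
theta_true = 0.5
t = rng.exponential(scale=1 / theta_true, size=100_000)

# The maximum-likelihood estimate of the rate is the reciprocal of the sample mean.
theta_hat = 1.0 / t.mean()
print(theta_hat)  # close to theta_true
```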
14,413
What does Theta mean?
What $\theta$ refers to depends on what model you are working with. For example, in ordinary least squares regression, you model a dependent variable (usually called $Y$) as a linear combination of one or more independent variables (usually called $X$), getting something like $Y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_p x_{ip}$, where $p$ is the number of independent variables. The parameters to be estimated here are the $\beta$s, and $\theta$ is a name for the collection of all the $\beta$s. But $\theta$ is more general and can apply to any parameters we want to estimate.
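In the regression case, $\theta$ is just the stacked vector of intercept and slopes, and estimating it is an ordinary least squares fit. A sketch with simulated data (the coefficient values and sample size are made up for illustration):

```python
import numpy as np

# Illustrative simulated data; beta_true plays the role of theta = (b0, b1, b2).
rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
beta_true = np.array([2.0, -1.0, 0.5])
y = beta_true[0] + beta_true[1] * x1 + beta_true[2] * x2 + rng.normal(scale=0.1, size=n)

# Ordinary least squares: theta_hat collects the estimated intercept and slopes.
X = np.column_stack([np.ones(n), x1, x2])
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta_hat)  # close to beta_true
```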
14,414
What does Theta mean?
In plain English: a statistical distribution is a mathematical function $f$ that tells you the probability of different values of a random variable $X$ that has the distribution $f$, i.e. $f(x)$ outputs a probability (or a probability density, for continuous variables) for $x$. There are different such functions, but for now let's consider $f$ as some kind of "general" function. However, for $f$ to be universal, that is, one that can be applied to different data (that share similar properties), it needs parameters that change its shape so that it fits different data. A simple example of such a parameter is $\mu$ in the normal distribution, which tells where the center (mean) of the distribution is, so that it can describe random variables with different mean values. The normal distribution has another parameter $\sigma$, and other distributions also have at least one such parameter. The parameters are often called $\theta$, where for the normal distribution $\theta$ is a shorthand for both $\mu$ and $\sigma$ (i.e. it is a vector of the two values). Why is $\theta$ important? Statistical distributions are used to approximate the empirical distributions of data. Say you have a dataset of ages of a group of people, on average they are 50 years old, and you want to approximate the distribution of their ages using a normal distribution. If the normal distribution didn't allow for different values of $\mu$ (e.g. had a fixed value of this parameter, say $\mu=0$), it would be useless for this data. However, since $\mu$ is not fixed, the normal distribution can use different values of $\mu$, with $\mu=50$ being one of them. This is a simple example, but there are more complicated cases where the values of the $\theta$ parameters are not so clear, so you have to use statistical tools to estimate (find the most appropriate) $\theta$ values. So you could say that statistics is about finding the best $\theta$ values given the data (Bayesians would say: given the data and priors).
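The ages example above can be sketched in a few lines: simulate ages with mean 50, then recover $\theta = (\mu, \sigma)$ from the data (the seed, spread, and sample size are illustrative assumptions):

```python
import numpy as np

# Simulated ages with true mean 50, as in the example above.
rng = np.random.default_rng(1)
ages = rng.normal(loc=50, scale=10, size=10_000)

# For the normal model, theta = (mu, sigma); the natural estimates are the
# sample mean and the sample standard deviation.
mu_hat, sigma_hat = ages.mean(), ages.std()
print(mu_hat, sigma_hat)  # close to (50, 10)
```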
14,415
Back-transformation of regression coefficients
One problem is that you've written $$Y=α+β⋅X$$ That is a simple deterministic (i.e. non-random) model. In that case, you could back transform the coefficients to the original scale, since it's just a matter of some simple algebra. But in usual regression you only have $E(Y|X)=α+β⋅X$; you've left the error term out of your model. If the transformation from $Y$ back to $Y_{orig}$ is non-linear, you may have a problem, since $E\big(f(X)\big)≠f\big(E(X)\big)$ in general. I think that may have to do with the discrepancy you're seeing. Edit: Note that if the transformation is linear, you can back transform to get estimates of the coefficients on the original scale, since expectation is linear.
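The point that $E\big(f(X)\big) \neq f\big(E(X)\big)$ for non-linear $f$ is easy to see numerically with a log transform: exponentiating the mean of $\log Y$ recovers the median-type quantity $e^{\mu}$, not $E(Y) = e^{\mu + \sigma^2/2}$. A sketch under an assumed lognormal model (not from the original question):

```python
import numpy as np

# Y is lognormal: log(Y) ~ Normal(0, 1). A model fit to log(Y) estimates
# E[log Y] = 0, but exp(0) = 1 is NOT E[Y] = exp(0.5) ~ 1.65.
rng = np.random.default_rng(2)
log_y = rng.normal(loc=0.0, scale=1.0, size=200_000)
y = np.exp(log_y)

naive_backtransform = np.exp(log_y.mean())  # ~ 1, roughly the median of Y
true_mean = y.mean()                        # ~ 1.65
print(naive_backtransform, true_mean)
```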
14,416
Back-transformation of regression coefficients
I salute your efforts here, but you're barking up the wrong tree. You don't back transform betas. Your model holds in the transformed data world. If you want to make a prediction, for example, you back transform $\hat{y}_i$, but that's it. Of course, you can also get a prediction interval by computing the high and low limit values, and then back transform them as well, but in no case do you back transform the betas.
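The workflow described above can be sketched as follows: fit in the transformed world, then back transform the prediction $\hat{y}_i$ and the interval endpoints, never the betas. All model details here (a log transform, the coefficient values, the interval recipe) are illustrative assumptions:

```python
import numpy as np

# Hypothetical setup: the model holds on the log scale, log(Y) = 1 + 0.3 x + error.
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=300)
log_y = 1.0 + 0.3 * x + rng.normal(scale=0.2, size=300)

# Fit in the transformed-data world.
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, log_y, rcond=None)

# Back transform the prediction (and interval endpoints), never the betas.
x_new = 5.0
log_pred = beta_hat[0] + beta_hat[1] * x_new
resid_sd = np.std(log_y - X @ beta_hat)
y_pred = np.exp(log_pred)
lo, hi = np.exp(log_pred - 1.96 * resid_sd), np.exp(log_pred + 1.96 * resid_sd)
print(y_pred, (lo, hi))
```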
14,417
What does conditioning on a random variable mean?
Conditioning on an event (such as a particular specification of a random variable) means that this event is treated as being known to have occurred. This still allows us to specify conditioning on an event $\{ Y=y \}$ where the actual value $y$ is an algebraic variable that falls within some range.$^\dagger$ For example, we might specify the conditional density: $$p_{X|Y}(x|y) = p(X=x | Y=y) = {y \choose x} \frac{1}{2^y} \quad \quad \quad \text{for all integers } 0 \leqslant x \leqslant y.$$ This refers to the probability density for the random variable $X$ conditional on the known event $\{ Y=y \}$, where we are free to set any $y \in \mathbb{N}$. The use of the variable $y$ in this formulation simply means that the conditional distribution has a form that allows us to substitute a range of values for this variable, so we write it as a function of the conditioning value as well as the argument value for the random variable $X$. Regardless of which particular value $y$ we choose, the resulting density is conditional on that event being treated as known, i.e., no longer random. As I have stated in another answer here, it is also worth noting that many theories of probability regard all probability as conditional on implicit information. This idea is most famously associated with the axiomatic approach of the mathematician Alfréd Rényi (see e.g., Kaminski 1984). Rényi argued that every probability measure must be interpreted as being conditional on some underlying information, and that reference to marginal probabilities was merely a reference to probability where the underlying conditions are implicit, rather than explicit. $^\dagger$ Technically, it's worth noting that if we condition on the value of a continuous random variable (an event with probability zero) then there is an extended definition of the conditional probability. Essentially this is just a function that satisfies the required integral statement for the marginal probability. 
In the present answer we will stick to discrete random variables to keep things simple.
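The conditional density in the example above is the $\mathrm{Binomial}(y, 1/2)$ pmf, so for any fixed $y$ it sums to 1 and has conditional mean $y/2$. A quick check (the choice $y=10$ is arbitrary):

```python
from math import comb

# The conditional density from the example: X | Y=y ~ Binomial(y, 1/2).
def p_x_given_y(x, y):
    return comb(y, x) / 2**y

# For any fixed y this is a valid pmf, and the conditional mean is y/2.
y = 10
probs = [p_x_given_y(x, y) for x in range(y + 1)]
cond_mean = sum(x * p for x, p in zip(range(y + 1), probs))
print(sum(probs), cond_mean)  # 1.0 and 5.0
```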
14,418
What does conditioning on a random variable mean?
Conditioning on a random variable is much more subtle than conditioning on an event. Conditioning on an Event Recall that for an event $B$ with $P(B) > 0$ we define the conditional probability given $B$ by $$ P(A \mid B) = \frac{P(A \cap B)}{P(B)} $$ for every event $A$. This defines a new probability measure $P(\ \cdot\mid B)$ on the underlying probability space, and if $X$ is a random variable which is either non-negative or $P$-integrable on $A$, then we have $$ E[X \mid B] = \int X \, dP(\ \cdot\mid B) = \frac{1}{P(B)} \int X \mathbf{1}_B \, dP. $$ The intuitive interpretation is that $E[X \mid B]$ is the "best guess" for what value $X$ takes, knowing that the event $B$ actually happens. This intuition is justified by the last integral above: we integrate $X$ with respect to $P$, but only on the event $B$ (and dividing by $P(B)$ is due to us concentrating all our attention on $B$ and hence re-weighting $B$ to have probability $1$). That's the easy case. To understand conditioning on a random variable, we need the more general idea of conditioning on information. A probability measure by itself gives us prior probabilities for all possible events. But probabilities that certain events happen change if we know that certain other events do or do not happen. That is, when we have information about whether certain events happen or not, we can update our probabilities for the remaining events. Conditioning on a Collection of Events Formally, suppose $\mathcal{G}$ is a $\sigma$-algebra of events. Assume that it is known whether each event in $\mathcal{G}$ happens or not. We want to define the conditional probability $P(\ \cdot\mid \mathcal{G})$ and the conditional expectation $E[\ \cdot\mid \mathcal{G}]$. 
The conditional probability $P(A \mid \mathcal{G})$ should reflect our updated probability of an event $A$ after knowing the information contained in $\mathcal{G}$, and $E[X \mid\mathcal{G}]$ should be our "best guess" for the value of a random variable $X$ using the information contained in $\mathcal{G}$. (NB: Why should $\mathcal{G}$ be a $\sigma$-algebra and not a more general collection of events? Because if $\mathcal{G}$ weren't a $\sigma$-algebra but we knew whether each event in $\mathcal{G}$ happens or not, then we would know whether each event in the $\sigma$-algebra generated by $\mathcal{G}$ happens or not, so we might as well replace $\mathcal{G}$ with $\sigma(\mathcal{G})$.) Conditional Probability Here's where things get interesting. $P(A \mid\mathcal{G})$ is no longer just a number: it is a random variable! We define $P(A \mid\mathcal{G})$ to be any $\mathcal{G}$-measurable random variable $X$ such that $$ E[X \mathbf{1}_B] = P(A \cap B) $$ for every event $B \in \mathcal{G}$. Moreover, if $X$ and $X^\prime$ are two random variables satisfying this definition, then $X = X^\prime$ almost surely. That is pretty abstract stuff, so hopefully an example can shed some light on the abstraction. Example. Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $B \in \mathcal{F}$ be an event with $0 < P(B) < 1$. Suppose $\mathcal{G} = \{\emptyset, B, B^c, \Omega\}$. That is, $\mathcal{G}$ is the $\sigma$-algebra containing all the information about whether $B$ happens or not. Then for any event $A \in \mathcal{F}$ we have $$ P(A \mid \mathcal{G}) = P(A \mid B) \mathbf{1}_B + P(A \mid B^c) \mathbf{1}_{B^c}. $$ That is, for an outcome $\omega \in \Omega$, we have $$ P(A \mid \mathcal{G})(\omega) = P(A \mid B) $$ if $\omega \in B$ (i.e., if $B$ happens), and $$ P(A \mid \mathcal{G})(\omega) = P(A \mid B^c) $$ if $\omega \notin B$ (i.e., if $B$ doesn't happen). 
It is easy to check that this random variable actually satisfies the definition of the conditional probability $P(A \mid \mathcal{G})$ defined above. Conditional Expectation I mentioned already that conditional probabilities aren't unique, but they are unique almost surely. It turns out that if $X$ is a nonnegative or integrable random variable, $\mathcal{G}$ is a $\sigma$-algebra of events, and $Q$ is the distribution of $X$ (a Borel probability measure on $\mathbb{R}$) then it is possible to choose versions of conditional probabilities $Q(B \mid \mathcal{G})$ for all Borel subsets $B$ of $\mathbb{R}$ such that $Q(\ \cdot \mid \mathcal{G})(\omega)$ is a probability measure for each outcome $\omega$. Given this possibility, we may define $$ E[X\mid\mathcal{G}]=\int_{\mathbb{R}} x \, Q(dx\mid\mathcal{G}), $$ which is again a random variable. It can be shown that this is the almost surely unique random variable $Y$ which is $\mathcal{G}$-measurable and satisfies $$ E[Y \mathbf{1}_A] = E[X \mathbf{1}_A] $$ for all $A \in \mathcal{G}$. Conditioning on a Random Variable Given the general definitions of conditional probability and conditional expectation given above, we may easily define what it means to condition on a random variable $Y$: it means conditioning on the $\sigma$-algebra generated by $Y$: $$ \sigma(Y) = \big\{\{Y \in B\} : \text{$B$ is a Borel subset of $\mathbb{R}$}\big\}. $$ I said "easy to define," but I am aware that that doesn't mean "easy to understand." But at least we can now say what an expression like $E[X \mid Y]$ means: it is a random variable that satisfies $$ E[E[X \mid Y] \mathbf{1}_A] = E[X \mathbf{1}_A] $$ for every event $A$ of the form $A = \{Y \in B\}$ for some Borel subset $B$ of $\mathbb{R}$. Wow, that's abstract! Fortunately, there are easy ways to work with $E[X \mid Y]$ if $Y$ is discrete or absolutely continuous. $Y$ Discrete Suppose $Y$ takes values in a countable set $S \subseteq \mathbb{R}$. 
Then it can be shown that $$ P(A \mid Y)(\omega) = P(A \mid Y = Y(\omega)) $$ for each outcome $\omega$. The right-hand side above is shorthand for the more verbose $$ P(A \mid \{Y = Y(\omega)\}) $$ where $\{Y = Y(\omega)\}$ is the event $$ \{Y = Y(\omega)\} = \{\omega^\prime : Y(\omega^\prime) = Y(\omega)\}. $$ That is, if our outcome is $\omega$, and $Y(\omega) = k$, then $$ P(A \mid Y)(\omega) = P(A \mid Y = k) = \frac{P(A \cap \{Y = k\})}{P(Y = k)}. $$ Similarly, if $X$ is another random variable taking values in $S$, then we have $$ E[X \mid Y](\omega) = E[X \mid Y = Y(\omega)] = \sum_{x \in S} x P(X = x \mid Y = Y(\omega)) $$ $Y$ Absolutely Continuous Suppose now that $Y$ is absolutely continuous with density $f_Y$. Let $X$ be another absolutely continuous random variable, with density $f_X$. Let $f_{X, Y}$ be the joint density of $X$ and $Y$. Then we define the conditional density of $X$ given $Y = y$ by $$ f_{X\mid Y}(x \mid y) = \frac{f_{X, Y}(x, y)}{f_Y(y)} = \frac{f_{X, Y}(x, y)}{\int_{\mathbb{R}} f_{X, Y}(x^\prime, y) \, dx^\prime}. $$ Now we may define a function $g : \mathbb{R} \to \mathbb{R}$ given by $$ g(y) = E[X \mid Y = y] = \int_{\mathbb{R}} x f_{X \mid Y}(x \mid y) \, dx. $$ In particular, $g(y) = E[X \mid Y = y]$ is a real number for each $y$. Using this $g$, we can show that $$ E[X \mid Y] = g(Y), $$ meaning that $$ E[X \mid Y](\omega) = g(Y(\omega)) = E[X \mid Y = Y(\omega)] $$ for each outcome $\omega$. This is just scratching the surface of the theory of conditioning. For a great reference, see chapters 21 and 23 of A Modern Approach to Probability by Fristedt and Gray. Some Takeaways: Conditioning on a random variable is different from conditioning on an event; expressions like $P(A \mid Y)$ and $E[X \mid Y]$ are random variables; expressions like $P(A \mid Y = y)$ and $E[X \mid Y = y]$ are real numbers.
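The discrete case above lends itself to a quick empirical check: $E[X \mid Y]$ is a function $g(Y)$, hence itself a random variable, and averaging it recovers $E[X]$ (the tower property). The toy joint distribution below is an illustrative assumption, not from the answer:

```python
import numpy as np

# Toy joint distribution: Y uniform on {1, 2, 3}; given Y, X = Y + Bernoulli(1/2).
rng = np.random.default_rng(4)
n = 200_000
Y = rng.integers(1, 4, size=n)
X = Y + rng.integers(0, 2, size=n)

# E[X | Y] is a random variable: a function g(Y), estimated here empirically.
g = {y: X[Y == y].mean() for y in (1, 2, 3)}      # g(y) ~ y + 0.5
E_X_given_Y = np.array([g[y] for y in Y])

# Tower property: E[ E[X | Y] ] = E[X].
print(E_X_given_Y.mean(), X.mean())
```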
14,419
What does conditioning on a random variable mean?
It means that the value of the random variable Y is known. For example, suppose $E(X|Y)=10+Y^2$. Then if $Y=2, ~E(X|Y=2)=14.$
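A simulation makes the same point: under a model where $E(X|Y) = 10 + Y^2$ (the noise model here is an illustrative assumption), fixing $Y=2$ and averaging the draws of $X$ recovers 14:

```python
import numpy as np

# Simulate X given Y = 2 under a model where E(X | Y) = 10 + Y^2.
rng = np.random.default_rng(5)
y = 2.0
X = 10 + y**2 + rng.normal(size=100_000)
print(X.mean())  # close to E(X | Y=2) = 14
```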
14,420
Is PCA optimization convex?
No, the usual formulations of PCA are not convex problems. But they can be transformed into a convex optimization problem. The insight and the fun of this is following and visualizing the sequence of transformations rather than just getting the answer: it lies in the journey, not the destination. The chief steps in this journey are: (1) obtain a simple expression for the objective function; (2) enlarge its domain, which is not convex, into one which is; (3) modify the objective, which is not convex, into one which is, in a way that obviously does not change the points at which it attains its optimal values. If you keep close watch, you can see the SVD and Lagrange multipliers lurking--but they're just a sideshow, there for scenic interest, and I won't comment on them further. The standard variance-maximizing formulation of PCA (or at least its key step) is $$\text{Maximize }f(x)=\ x^\prime \mathbb{A} x\ \text{ subject to }\ x^\prime x=1\tag{*}$$ where the $n\times n$ matrix $\mathbb A$ is a symmetric, positive-semidefinite matrix constructed from the data (usually its sum of squares and products matrix, its covariance matrix, or its correlation matrix). (Equivalently, we may try to maximize the unconstrained objective $x^\prime \mathbb{A} x / x^\prime x$. Not only is this a nastier expression--it's no longer a quadratic function--but graphing special cases will quickly show it is not a convex function, either. Usually one observes this function is invariant under rescalings $x\to \lambda x$ and then reduces it to the constrained formulation $(*)$.) Any optimization problem can be abstractly formulated as: find at least one $x\in\mathcal{X}$ that makes the function $f:\mathcal{X}\to\mathbb{R}$ as large as possible. Recall that an optimization problem is convex when it enjoys two separate properties: The domain $\mathcal{X}\subset\mathbb{R}^n$ is convex. This can be formulated in many ways. 
One is that whenever $x\in\mathcal{X}$ and $y\in\mathcal{X}$ and $0 \le \lambda \le 1$, $\lambda x + (1-\lambda)y\in\mathcal{X}$ also. Geometrically: whenever two endpoints of a line segment lie in $\mathcal X$, the entire segment lies in $\mathcal X$. The function $f$ is convex. This also can be formulated in many ways. One is that whenever $x\in\mathcal{X}$ and $y\in\mathcal{X}$ and $0 \le \lambda \le 1$, $$f(\lambda x + (1-\lambda)y) \ge \lambda f(x) + (1-\lambda) f(y).$$ (We needed $\mathcal X$ to be convex in order for this condition to make any sense.) Geometrically: whenever $\bar{xy}$ is any line segment in $\mathcal X$, the graph of $f$ (as restricted to this segment) lies above or on the segment connecting $(x,f(x))$ and $(y,f(y))$ in $\mathbb{R}^{n+1}$. The archetype of a convex function is locally everywhere parabolic with non-positive leading coefficient: on any line segment it can be expressed in the form $y\to a y^2 + b y + c$ with $a \le 0.$ A difficulty with $(*)$ is that $\mathcal X$ is the unit sphere $S^{n-1}\subset\mathbb{R}^n$, which is decidedly not convex. However, we can modify this problem by including smaller vectors. That is because when we scale $x$ by a factor $\lambda$, $f$ is multiplied by $\lambda^2$. When $0 \lt x^\prime x \lt 1$, we can scale $x$ up to unit length by multiplying it by $\lambda=1/\sqrt{x^\prime x} \gt 1$, thereby increasing $f$ but staying within the unit ball $D^n = \{x\in\mathbb{R}^n\mid x^\prime x \le 1\}$. Let us therefore reformulate $(*)$ as $$\text{Maximize }f(x)=\ x^\prime \mathbb{A} x\ \text{ subject to }\ x^\prime x\le1\tag{**}$$ Its domain is $\mathcal{X}=D^n$ which clearly is convex, so we're halfway there. It remains to consider the convexity of the graph of $f$. A good way to think about the problem $(**)$--even if you don't intend to carry out the corresponding calculations--is in terms of the Spectral Theorem. 
It says that by means of an orthogonal transformation $\mathbb P$, you can find at least one basis of $\mathbb{R}^n$ in which $\mathbb A$ is diagonal: that is, $$\mathbb {A = P^\prime \Sigma P}$$ where all the off-diagonal entries of $\Sigma$ are zero. Such a choice of $\mathbb{P}$ can be conceived of as changing nothing at all about $\mathbb A$, but merely changing how you describe it: when you rotate your point of view, the axes of the level hypersurfaces of the function $x\to x^\prime \mathbb{A} x$ (which were always ellipsoids) align with the coordinate axes. Since $\mathbb A$ is positive-semidefinite, all the diagonal entries of $\Sigma$ must be non-negative. We may further permute the axes (which is just another orthogonal transformation, and therefore can be absorbed into $\mathbb P$) to assure that $$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0.$$ If we let $x=\mathbb{P}^\prime y$ be the new coordinates $x$ (entailing $y=\mathbb{P}x$), the function $f$ is $$f(y) = y^\prime \mathbb{A} y = x^\prime \mathbb{P^\prime A P} x = x^\prime \Sigma x = \sigma_1 x_1^2 + \sigma_2 x_2^2 + \cdots + \sigma_n x_n^2.$$ This function is decidedly not convex! Its graph looks like part of a hyperparaboloid: at every point in the interior of $\mathcal X$, the fact that all the $\sigma_i$ are nonnegative makes it curl upward rather than downward. However, we can turn $(**)$ into a convex problem with one very useful technique. Knowing that the maximum will occur where $x^\prime x = 1$, let's subtract the constant $\sigma_1$ from $f$, at least for points on the boundary of $\mathcal{X}$. That will not change the locations of any points on the boundary at which $f$ is optimized, because it lowers all the values of $f$ on the boundary by the same value $\sigma_1$. This suggests examining the function $$g(y) = f(y) - \sigma_1 y^\prime y.$$ This indeed subtracts the constant $\sigma_1$ from $f$ at boundary points, and subtracts smaller values at interior points. 
This will assure that $g$, compared to $f$, has no new global maxima on the interior of $\mathcal X$. Let's examine what has happened with this sleight-of-hand of replacing $-\sigma_1$ by $-\sigma_1 y^\prime y$. Because $\mathbb P$ is orthogonal, $y^\prime y = x^\prime x$. (That's practically the definition of an orthogonal transformation.) Therefore, in terms of the $x$ coordinates, $g$ can be written $$g(y) = \sigma_1 x_1 ^2 + \cdots + \sigma_n x_n^2 - \sigma_1(x_1^2 + \cdots + x_n^2) = (\sigma_2-\sigma_1)x_2^2 + \cdots + (\sigma_n - \sigma_1)x_n^2.$$ Because $\sigma_1 \ge \sigma_i$ for all $i$, each of the coefficients is zero or negative. Consequently, (a) $g$ is convex and (b) $g$ is optimized when $x_2=x_3=\cdots=x_n=0$. ($x^\prime x=1$ then implies $x_1=\pm 1$ and the optimum is attained when $y = \mathbb{P} (\pm 1,0,\ldots, 0)^\prime$, which is--up to sign--the first column of $\mathbb P$.) Let's recapitulate the logic. Because $g$ is optimized on the boundary $\partial D^n=S^{n-1}$ where $y^\prime y = 1$, because $f$ differs from $g$ merely by the constant $\sigma_1$ on that boundary, and because the values of $g$ are even closer to the values of $f$ on the interior of $D^n$, the maxima of $f$ must coincide with the maxima of $g$.
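If you want to see the conclusion numerically, here is a short sketch (my own, using numpy; the random matrix is arbitrary) checking that $f$ is maximized on the unit ball at the top eigenvector and that the modified objective $g$ never exceeds its boundary maximum of $0$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric positive-semidefinite matrix, as would arise from data.
M = rng.standard_normal((50, 5))
A = M.T @ M

# Spectral decomposition with eigenvalues sorted sigma_1 >= ... >= sigma_n >= 0.
sigma, P = np.linalg.eigh(A)
sigma, P = sigma[::-1], P[:, ::-1]
v1 = P[:, 0]                             # candidate maximizer on the unit sphere

f = lambda x: x @ A @ x
g = lambda x: f(x) - sigma[0] * (x @ x)  # the modified (convexified) objective

for _ in range(1000):
    x = rng.standard_normal(5)
    x /= np.linalg.norm(x)               # a random point of the unit sphere
    assert f(x) <= f(v1) + 1e-9          # v1 maximizes f ...
    assert g(x) <= 1e-9                  # ... and g never exceeds 0

print(f(v1), sigma[0])                   # these coincide: max f = sigma_1
```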
14,421
Is PCA optimization convex?
No. Rank-$k$ PCA of a matrix $M$ can be formulated as $\hat{X} = \underset{\operatorname{rank}(X) \leq k}{\operatorname{argmin}} \| M - X\|_F^2$ (where $\|\cdot\|_F$ is the Frobenius norm); for the derivation see the Eckart-Young theorem. Though the norm is convex, the set over which it is optimized is nonconvex. A convex relaxation of PCA's problem is called Convex Low Rank Approximation: $\hat{X} = \underset{\|X\|_* \leq c}{\operatorname{argmin}} \| M - X\|_F^2$, where $\|\cdot\|_*$ is the nuclear norm. It is a convex relaxation of rank, just as $\|\cdot\|_1$ is a convex relaxation of the number of nonzero elements for vectors. See Statistical Learning with Sparsity, ch. 6 (matrix decompositions) for details. If you're interested in more general problems and how they relate to convexity, see Generalized Low Rank Models.
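A quick numerical illustration of the Eckart-Young statement (my own sketch; the random matrix and the choice $k=2$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 6))
k = 2

# Truncating the SVD after k singular values gives the best rank-k
# approximation in Frobenius norm (Eckart-Young).
U, s, Vt = np.linalg.svd(M, full_matrices=False)
X_hat = (U[:, :k] * s[:k]) @ Vt[:k]
best_err = np.linalg.norm(M - X_hat)

# The optimal error equals the norm of the discarded singular values ...
assert np.isclose(best_err, np.sqrt((s[k:] ** 2).sum()))

# ... and no other rank-<=k candidate should beat it.
for _ in range(500):
    B = rng.standard_normal((8, k)) @ rng.standard_normal((k, 6))  # rank <= k
    assert np.linalg.norm(M - B) >= best_err - 1e-9
print(best_err)
```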
14,422
Is PCA optimization convex?
Disclaimer: The previous answers do a pretty good job of explaining how PCA in its original formulation is non-convex but can be converted to a convex optimization problem. My answer is only meant for those poor souls (such as me) who are not so familiar with the jargon of unit spheres and SVDs, which is, btw, good to know. My source is these lecture notes by Prof. Tibshirani. For an optimization problem to be solved with convex optimization techniques, there are two prerequisites: The objective function has to be convex. The constraint functions should also be convex. Most formulations of PCA involve a constraint on the rank of a matrix, and in these formulations condition 2 is violated, because the constraint $\operatorname{rank}(X) = k$ is not convex. For example, let $J_{11}$ and $J_{22}$ be $2 \times 2$ matrices that are zero except for a single 1 in the upper left corner and the lower right corner, respectively. Then each of these has rank 1, but their average has rank 2.
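The counterexample is easy to check directly (a small numpy sketch of my own):

```python
import numpy as np

J11 = np.zeros((2, 2)); J11[0, 0] = 1.0  # single 1 in the upper left corner
J22 = np.zeros((2, 2)); J22[1, 1] = 1.0  # single 1 in the lower right corner

avg = 0.5 * (J11 + J22)                  # a convex combination of the two

print(np.linalg.matrix_rank(J11),        # 1
      np.linalg.matrix_rank(J22),        # 1
      np.linalg.matrix_rank(avg))        # 2: the rank-1 constraint set is not convex
```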
14,423
Does Bayes theorem hold for expectations?
$$E[A\mid B] \stackrel{?}= E[B\mid A]\frac{E[A]}{E[B]} \tag 1$$ The conjectured result $(1)$ is trivially true for independent random variables $A$ and $B$ with nonzero means. If $E[B]=0$, then the right side of $(1)$ involves a division by $0$ and so $(1)$ is meaningless. Note that whether or not $A$ and $B$ are independent is not relevant. In general, $(1)$ does not hold for dependent random variables but specific examples of dependent $A$ and $B$ satisfying $(1)$ can be found. Note that we must continue to insist that $E[B]\neq 0$, else the right side of $(1)$ is meaningless. Bear in mind that $E[A\mid B]$ is a random variable that happens to be a function of the random variable $B$, say $g(B)$ while $E[B\mid A]$ is a random variable that is a function of the random variable $A$, say $h(A)$. So, $(1)$ is similar to asking whether $$g(B)\stackrel{?}= h(A)\frac{E[A]}{E[B]} \tag 2$$ can be a true statement, and obviously the answer is that $g(B)$ cannot be a multiple of $h(A)$ in general. To my knowledge, there are only two special cases where $(1)$ can hold. As noted above, for independent random variables $A$ and $B$, $g(B)$ and $h(A)$ are degenerate random variables (called constants by statistically-illiterate folks) that equal $E[A]$ and $E[B]$ respectively, and so if $E[B]\neq 0$, we have equality in $(1)$. At the other end of the spectrum from independence, suppose that $A=g(B)$ where $g(\cdot)$ is an invertible function and thus $A=g(B)$ and $B=g^{-1}(A)$ are wholly dependent random variables. In this case, $$E[A\mid B] = g(B), \quad E[B\mid A] = g^{-1}(A) = g^{-1}(g(B)) = B$$ and so $(1)$ becomes $$g(B)\stackrel{?}= B\frac{E[A]}{E[B]}$$ which holds exactly when $g(x) = \alpha x$ where $\alpha$ can be any nonzero real number. Thus, $(1)$ holds whenever $A$ is a scalar multiple of $B$, and of course $E[B]$ must be nonzero (cf. Michael Hardy's answer). 
The above development shows that $g(x)$ must be a linear function and that $(1)$ cannot hold for affine functions $g(x) = \alpha x + \beta$ with $\beta \neq 0$. However, note that Alecos Papadopolous in his answer and his comments thereafter claims that if $B$ is a normal random variable with nonzero mean, then for specific values of $\alpha$ and $\beta\neq 0$ that he provides, $A=\alpha B+\beta$ and $B$ satisfy $(1)$. In my opinion, his example is incorrect. In a comment on this answer, Huber has suggested considering the symmetric conjectured equality $$E[A\mid B]E[B] \stackrel{?}=E[B\mid A]E[A]\tag{3}$$ which of course always holds for independent random variables regardless of the values of $E[A]$ and $E[B]$ and for scalar multiples $A = \alpha B$ also. Of course, more trivially, $(3)$ holds for any zero-mean random variables $A$ and $B$ (independent or dependent, scalar multiple or not; it does not matter!): $E[A]=E[B]=0$ is sufficient for equality in $(3)$. Thus, $(3)$ might not be as interesting as $(1)$ as a topic for discussion.
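To make the scalar-multiple versus affine contrast concrete, here is a tiny discrete sketch (the uniform distribution on $\{1,2,3\}$ and the maps $A=2B$ and $A=2B+1$ are my own illustrative choices):

```python
# B uniform on {1, 2, 3}; A is determined by B. The distribution and the
# maps below are illustrative choices, not taken from the question.
vals = [1, 2, 3]
EB = sum(vals) / len(vals)

def both_sides(a_of_b):
    EA = sum(a_of_b(b) for b in vals) / len(vals)
    # A = a(B) is invertible here, so E[A|B=b] = a(b) and E[B|A=a(b)] = b.
    return [(a_of_b(b), b * EA / EB) for b in vals]

print(both_sides(lambda b: 2 * b))       # LHS = RHS for every b: (1) holds
print(both_sides(lambda b: 2 * b + 1))   # LHS != RHS except at b = 2: (1) fails
```

Note that the affine case agrees at the single point $b=2$, illustrating the remark that functions equal at a few points are still not equal.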
14,424
Does Bayes theorem hold for expectations?
The result is untrue in general; let us see that in a simple example. Let $X \mid P=p$ have a binomial distribution with parameters $n,p$ and let $P$ have the beta distribution with parameters $(\alpha, \beta)$, that is, a Bayesian model with conjugate prior. Now just calculate the two sides of your formula: the left hand side is $\DeclareMathOperator{\E}{\mathbb{E}} \E[X \mid P] = nP$, while the right hand side is $$ \E( P\mid X) \frac{\E X}{\E P} = \frac{\alpha+X}{n+\alpha+\beta} \cdot \frac{n\alpha/(\alpha+\beta)}{\alpha/(\alpha+\beta)} = \frac{n(\alpha+X)}{n+\alpha+\beta} $$ and those are certainly not equal.
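A quick numerical check of the two sides (the values of $\alpha,\beta,n$ and the observed $x$ below are arbitrary illustrative choices):

```python
import numpy as np

# Arbitrary illustrative values for the prior parameters, n, and observed x.
alpha, beta, n, x = 2.0, 3.0, 10, 4

# Posterior of P given X = x is Beta(alpha + x, beta + n - x); check its
# mean numerically on a fine grid (the dp factors cancel in the ratio).
p = np.linspace(0.0, 1.0, 200_001)
post = p ** (alpha + x - 1) * (1 - p) ** (beta + n - x - 1)
post_mean = (p * post).sum() / post.sum()
assert abs(post_mean - (alpha + x) / (alpha + beta + n)) < 1e-5

# The right-hand side of the conjectured identity is a function of X alone...
rhs = n * (alpha + x) / (n + alpha + beta)   # here 10 * 6 / 15 = 4
# ...while the left-hand side E[X|P] = n*P varies with P, so they differ.
print(post_mean, rhs)
```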
14,425
Does Bayes theorem hold for expectations?
The conditional expected value of a random variable $A$ given the event that $B=b$ is a number that depends on what number $b$ is. So call it $h(b).$ Then the conditional expected value $\operatorname{E}(A\mid B)$ is $h(B),$ a random variable whose value is completely determined by the value of the random variable $B$. Thus $\operatorname{E}(A\mid B)$ is a function of $B$ and $\operatorname{E}(B\mid A)$ is a function of $A$. The quotient $\operatorname{E}(A)/\operatorname{E}(B)$ is just a number. So one side of your proposed equality is determined by $A$ and the other by $B$, so they cannot generally be equal. (Perhaps I should add that they can be equal in the trivial case when the values of $A$ and $B$ determine each other, as when for example, $A = \alpha B, \alpha \neq 0$ and $E[B]\neq 0$, when $$E[A\mid B] = \alpha B = E[B\mid A]\cdot\alpha = E[B\mid A]\frac{\alpha E[B]}{E[B]} = E[B\mid A]\frac{E[A]}{E[B]}.$$ But functions equal to each other only at a few points are not equal.)
14,426
Does Bayes theorem hold for expectations?
The expression certainly does not hold in general. For the fun of it, I show below that if $A$ and $B$ follow jointly a bivariate normal distribution and have non-zero means, the result will hold if the two variables are linear functions of each other and have the same coefficient of variation (the ratio of standard deviation over mean) in absolute terms. For jointly normal variables we have $$\operatorname{E}(A \mid B) = \mu_A + \rho \frac{\sigma_A}{\sigma_B}(B - \mu_B)$$ and we want to impose $$\mu_A + \rho \frac{\sigma_A}{\sigma_B}(B - \mu_B) = \left[\mu_B + \rho \frac{\sigma_B}{\sigma_A}(A - \mu_A)\right]\frac{\mu_A}{\mu_B}$$ $$\implies \mu_A + \rho \frac{\sigma_A}{\sigma_B}(B - \mu_B) = \mu_A + \rho \frac{\sigma_B}{\sigma_A}\frac{\mu_A}{\mu_B}(A - \mu_A)$$ Cancel $\mu_A$ and then $\rho$, and re-arrange to get $$B = \mu_B +\frac{\sigma^2_B}{\sigma^2_A}\frac{\mu_A}{\mu_B}(A - \mu_A)$$ So this is the linear relationship that must hold between the two variables (so they are certainly dependent, with correlation coefficient equal to unity in absolute terms) in order to get the desired equality. What does it imply? First, it must also be satisfied that $$E(B) \equiv \mu_B = \mu_B+\frac{\sigma^2_B}{\sigma^2_A}\frac{\mu_A}{\mu_B}(E(A) - \mu_A) \implies \mu_B = \mu_B$$ so no other restriction is imposed on the mean of $B$ (or of $A$) except that they be non-zero. Also a relation for the variances must be satisfied, $$\operatorname{Var}(B) \equiv \sigma^2_B = \left(\frac{\sigma^2_B}{\sigma^2_A}\frac{\mu_A}{\mu_B}\right)^2\operatorname{Var}(A)$$ $$\implies \left(\sigma^2_A\right)^2\sigma^2_B = \left(\sigma^2_B\right)^2\sigma^2_A\left(\frac{\mu_A}{\mu_B}\right)^2$$ $$\implies \left(\frac{\sigma_A}{\mu_A}\right)^2 = \left(\frac{\sigma_B}{\mu_B}\right)^2 \implies (\text{cv}_A)^2 = (\text{cv}_B)^2$$ $$\implies |\text{cv}_A| = |\text{cv}_B|$$ which was to be shown. 
Note that equality of the coefficient of variation in absolute terms, allows the variables to have different variances, and also, one to have positive mean and the other negative.
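As a numerical sketch of the derived relation (my own check, with arbitrary illustrative means and standard deviations, for the case where the coefficients of variation have the same sign): with equal coefficients of variation the intercept of the linear relation vanishes, so the two variables end up scalar multiples of each other, consistent with the scalar-multiple case discussed in the other answers.

```python
# Arbitrary illustrative means and standard deviations with |cv_A| = |cv_B|
# (same sign case); this check is mine, not part of the answer above.
mu_A, sigma_A = 4.0, 2.0
cv = sigma_A / mu_A
mu_B = 10.0
sigma_B = cv * mu_B                      # enforce equal coefficients of variation

# The linear relation derived in the answer:
slope = (sigma_B ** 2 / sigma_A ** 2) * (mu_A / mu_B)
b_of_a = lambda a: mu_B + slope * (a - mu_A)

# Under equal cv the intercept vanishes: B is just a scalar multiple of A.
for a in (-3.0, 0.5, 7.0):
    assert abs(b_of_a(a) - (sigma_B / sigma_A) * a) < 1e-12

# Hence E[A|B] = A and E[B|A] * E[A] / E[B] = b_of_a(A) * mu_A / mu_B = A too.
a = 7.0
print(a, b_of_a(a) * mu_A / mu_B)        # both 7.0
```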
Does Bayes theorem hold for expectations?
The expression certainly does not hold in general. For the fun of it, I show below that if $A$ and $B$ follow jointly a bivariate normal distribution, and have non-zero means, the result will hold if
Does Bayes theorem hold for expectations?

The expression certainly does not hold in general. For the fun of it, I show below that if $A$ and $B$ jointly follow a bivariate normal distribution and have non-zero means, the result will hold if the two variables are linear functions of each other and have the same coefficient of variation (the ratio of standard deviation to mean) in absolute terms.

For jointly normal variables we have $$\operatorname{E}(A \mid B) = \mu_A + \rho \frac{\sigma_A}{\sigma_B}(B - \mu_B)$$ and we want to impose $$\mu_A + \rho \frac{\sigma_A}{\sigma_B}(B - \mu_B) = \left[\mu_B + \rho \frac{\sigma_B}{\sigma_A}(A - \mu_A)\right]\frac{\mu_A}{\mu_B}$$ $$\implies \mu_A + \rho \frac{\sigma_A}{\sigma_B}(B - \mu_B) = \mu_A + \rho \frac{\sigma_B}{\sigma_A}\frac{\mu_A}{\mu_B}(A - \mu_A)$$

Simplify $\mu_A$ and then $\rho$, and re-arrange to get $$B = \mu_B +\frac{\sigma^2_B}{\sigma^2_A}\frac{\mu_A}{\mu_B}(A - \mu_A)$$

So this is the linear relationship that must hold between the two variables (so they are certainly dependent, with correlation coefficient equal to unity in absolute terms) in order to get the desired equality. What does it imply?

First, it must also be satisfied that $$E(B) \equiv \mu_B = \mu_B+\frac{\sigma^2_B}{\sigma^2_A}\frac{\mu_A}{\mu_B}(E(A) - \mu_A) \implies \mu_B = \mu_B$$ so no other restriction is imposed on the mean of $B$ (or of $A$) except that both be non-zero. A relation for the variances must also be satisfied: $$\operatorname{Var}(B) \equiv \sigma^2_B = \left(\frac{\sigma^2_B}{\sigma^2_A}\frac{\mu_A}{\mu_B}\right)^2\operatorname{Var}(A)$$ $$\implies \left(\sigma^2_A\right)^2\sigma^2_B = \left(\sigma^2_B\right)^2\sigma^2_A\left(\frac{\mu_A}{\mu_B}\right)^2$$ $$\implies \left(\frac{\sigma_A}{\mu_A}\right)^2 = \left(\frac{\sigma_B}{\mu_B}\right)^2 \implies (\text{cv}_A)^2 = (\text{cv}_B)^2 \implies |\text{cv}_A| = |\text{cv}_B|$$ which was to be shown.

Note that equality of the coefficient of variation in absolute terms allows the variables to have different variances, and also allows one to have a positive mean and the other a negative one.
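A quick numerical sanity check of the result above (the particular means and variances are arbitrary choices of mine satisfying $|\text{cv}_A| = |\text{cv}_B|$, with $\rho = 1$ so that $B$ is the exact linear function of $A$ derived above):

```python
# Check E(A|B) = E(B|A) * mu_A / mu_B under the derived linear relation.
# cv_A = sigma_A/mu_A = 0.5 and cv_B = sigma_B/mu_B = 0.5 by construction.
mu_A, sigma_A = 2.0, 1.0
mu_B, sigma_B = 4.0, 2.0
rho = 1.0

def E_A_given_B(b):
    return mu_A + rho * (sigma_A / sigma_B) * (b - mu_B)

def E_B_given_A(a):
    return mu_B + rho * (sigma_B / sigma_A) * (a - mu_A)

for a in [-3.0, 0.0, 1.7, 5.0]:
    # the linear relation B = mu_B + (sigma_B^2/sigma_A^2)(mu_A/mu_B)(A - mu_A)
    b = mu_B + (sigma_B**2 / sigma_A**2) * (mu_A / mu_B) * (a - mu_A)
    assert abs(E_A_given_B(b) - E_B_given_A(a) * mu_A / mu_B) < 1e-9
```

With these numbers the relation collapses to $B = 2A$, and both sides of the "Bayes for expectations" identity agree at every point.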
14,427
Matrix notation for logistic regression
In linear regression the Maximum Likelihood Estimation (MLE) solution for estimating $x$ has the following closed-form solution (assuming that $A$ is a matrix with full column rank): $$\hat{x}_\text{lin}=\underset{x}{\text{argmin}} \|Ax-b\|_2^2 = (A^TA)^{-1}A^Tb$$ This is read as "find the $x$ that minimizes the objective function, $\|Ax-b\|_2^2$". The nice thing about representing the linear regression objective function in this way is that we can keep everything in matrix notation and solve for $\hat{x}_\text{lin}$ by hand. As Alex R. mentions, in practice we often don't consider $(A^TA)^{-1}$ directly because it is computationally inefficient and $A$ often does not meet the full-rank criterion. Instead, we turn to the Moore-Penrose pseudoinverse. The details of computationally solving for the pseudoinverse can involve the Cholesky decomposition or the Singular Value Decomposition.

Alternatively, the MLE solution for estimating the coefficients in logistic regression is: $$\hat{x}_\text{log} = \underset{x}{\text{argmin}} \sum_{i=1}^{N} y^{(i)}\log(1+e^{-x^Ta^{(i)}}) + (1-y^{(i)})\log(1+e^{x^T a^{(i)}})$$ where (assuming each sample of data is stored row-wise):

$x$ is a vector representing the regression coefficients;

$a^{(i)}$ is a vector representing the $i^{th}$ sample/row in the data matrix $A$;

$y^{(i)}$ is a scalar in $\{0, 1\}$, the label corresponding to the $i^{th}$ sample;

$N$ is the number of data samples (the number of rows in the data matrix $A$).

Again, this is read as "find the $x$ that minimizes the objective function". If you wanted to, you could take it a step further and represent $\hat{x}_\text{log}$ in matrix notation as follows: $$ \hat{x}_\text{log} = \underset{x}{\text{argmin}} \begin{bmatrix} y^{(1)} & (1-y^{(1)}) \\ \vdots & \vdots \\ y^{(N)} & (1-y^{(N)})\\\end{bmatrix} \begin{bmatrix} \log(1+e^{-x^Ta^{(1)}}) & ... & \log(1+e^{-x^Ta^{(N)}}) \\\log(1+e^{x^Ta^{(1)}}) & ... & \log(1+e^{x^Ta^{(N)}}) \end{bmatrix} $$ (summing the diagonal of this product recovers the objective above), but you don't gain anything from doing this.
Logistic regression does not have a closed-form solution and does not gain the same benefits as linear regression does from being represented in matrix notation. To solve for $\hat{x}_\text{log}$, estimation techniques such as gradient descent and the Newton-Raphson method are used. Through some of these techniques (e.g. Newton-Raphson), $\hat{x}_\text{log}$ is approximated and is represented in matrix notation (see the link provided by Alex R.).
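As a concrete illustration (a sketch of my own, not code from the answer): plain gradient descent on the objective above, using the fact that the gradient of the summed objective is $A^T(\sigma(Ax) - y)$, where $\sigma$ is the sigmoid. The toy data, step size, and iteration count are arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_gd(A, y, lr=0.1, n_iter=5000):
    """Gradient descent on the logistic negative log-likelihood.
    The gradient of the summed objective is A^T (sigmoid(A x) - y);
    here it is averaged over the N samples for a stable step size."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= lr * A.T @ (sigmoid(A @ x) - y) / len(y)
    return x

# toy data: an intercept column plus one feature; labels depend on the feature
rng = np.random.default_rng(0)
A = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (A[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(float)

x_hat = fit_logistic_gd(A, y)
acc = np.mean((sigmoid(A @ x_hat) > 0.5) == y)  # training accuracy
```

Because the objective is convex, gradient descent converges to the same $\hat{x}_\text{log}$ that Newton-Raphson would find, just more slowly.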
14,428
Matrix notation for logistic regression
@joceratops' answer focuses on the optimization problem of maximum likelihood for estimation. This is indeed a flexible approach that is amenable to many types of problems. For estimating most models, including linear and logistic regression models, there is another general approach that is based on method-of-moments estimation.

The linear regression estimator can also be formulated as the root of the estimating equation: $$0 = \mathbf{X}^T(Y - \mathbf{X}\beta)$$ In this regard $\beta$ is seen as the value which retrieves an average residual of 0. It needn't rely on any underlying probability model to have this interpretation. It is, however, instructive to derive the score equations for a normal likelihood; you will see that they take exactly the form displayed above.

Maximizing the likelihood of a regular exponential family for a linear model (e.g. linear or logistic regression) is equivalent to obtaining solutions to its score equations: $$0 = \sum_{i=1}^n S_i(\alpha, \beta) = \frac{\partial}{\partial \beta} \log \mathcal{L}( \beta, \alpha, X, Y) = \mathbf{X}^T (Y - g(\mathbf{X}\beta))$$ where $Y_i$ has expected value $g(\mathbf{X}_i \beta)$. In GLM estimation, $g$ is said to be the inverse of a link function. In normal likelihood equations, $g^{-1}$ is the identity function, and in logistic regression $g^{-1}$ is the logit function. A more general approach would be to require $0 = \sum_{i=1}^n Y_i - g(\mathbf{X}_i\beta)$, which allows for model misspecification.

Additionally, it is interesting to note that for regular exponential families, $\frac{\partial g(\mathbf{X}\beta)}{\partial \beta} = \mathbf{V}(g(\mathbf{X}\beta))$, which is called a mean-variance relationship. Indeed for logistic regression, the mean-variance relationship is such that the mean $p = g(\mathbf{X}\beta)$ is related to the variance by $\mbox{var}(Y_i) = p_i(1-p_i)$. This suggests an interpretation of a misspecified GLM as one which gives a 0 average Pearson residual. This further suggests a generalization to allow non-proportional functional mean derivatives and mean-variance relationships.

A generalized estimating equation approach would specify linear models in the following way: $$0 = \frac{\partial g(\mathbf{X}\beta)}{\partial \beta} \mathbf{V}^{-1}\left(Y - g(\mathbf{X}\beta)\right)$$ with $\mathbf{V}$ a matrix of variances based on the fitted value (mean) given by $g(\mathbf{X}\beta)$. This approach to estimation allows one to pick a link function and mean-variance relationship as with GLMs. In logistic regression $g$ would be the inverse logit, and $V_{ii}$ would be given by $g(\mathbf{X}_i \beta)(1-g(\mathbf{X}_i\beta))$. The solutions to this estimating equation, obtained by Newton-Raphson, will yield the $\beta$ obtained from logistic regression. However, a somewhat broader class of models is estimable under a similar framework. For instance, the link function can be taken to be the log of the linear predictor, so that the regression coefficients are relative risks and not odds ratios. Which--given the well documented pitfalls of interpreting ORs as RRs--behooves me to ask why anyone fits logistic regression models at all anymore.
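A sketch of this estimating-equation approach in NumPy (my own illustration, not code from the answer): Fisher scoring on $0 = D^T V^{-1}(Y - \mu)$ with a pluggable mean function and variance function. Plugging in the inverse-logit mean and binomial variance reproduces logistic regression, which we can verify by checking that the logistic score equation is zero at the solution. The data and iteration count are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def solve_estimating_equation(X, y, mean_fn, dmean_fn, var_fn, n_iter=25):
    """Fisher scoring for 0 = D^T V^{-1} (y - mu), where
    mu = mean_fn(X b), D = d mu / d b, V = diag(var_fn(mu)).
    A bare-bones sketch: no convergence check or step halving."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ b
        mu = mean_fn(eta)
        D = X * dmean_fn(eta)[:, None]        # n x p Jacobian of mu
        v = np.clip(var_fn(mu), 1e-10, None)  # guard against zero variance
        b = b + np.linalg.solve(D.T @ (D / v[:, None]),
                                D.T @ ((y - mu) / v))
    return b

# inverse-logit mean + binomial variance recovers logistic regression
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = (rng.random(300) < sigmoid(X @ np.array([-0.5, 1.2]))).astype(float)

b_hat = solve_estimating_equation(
    X, y,
    mean_fn=sigmoid,
    dmean_fn=lambda e: sigmoid(e) * (1.0 - sigmoid(e)),
    var_fn=lambda m: m * (1.0 - m),
)
# at the solution, the logistic score equation X^T (y - mu) is numerically zero
score = X.T @ (y - sigmoid(X @ b_hat))
```

Swapping in a log mean function and a different variance function would estimate the relative-risk models mentioned at the end, under the same iteration.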
14,429
square things in statistics- generalized rationale [duplicate]
$\newcommand{\predicted}{{\rm predicted}}\newcommand{\actual}{{\rm actual}}\newcommand{\Var}{{\rm Var}}$ You're right that one could instead choose to use the absolute error--in fact, the absolute error is often closer to what you "care about" when making predictions from your model. For instance, if you buy a stock expecting its future price to be $P_{\predicted}$ and its future price is $P_{\actual}$ instead, you lose money proportional to $(P_{\predicted} - P_{\actual})$, not its square! The same is true in many other contexts.

So why squared error? The squared error has many nice mathematical properties. Echoing the other answerers here, I would say that many of them are merely "convenient"--we might choose to use the absolute error instead if it didn't pose technical issues when solving problems. For instance:

If $X$ is a random variable, then the estimator of $X$ that minimizes the squared error is the mean, $E(X)$. On the other hand, the estimator that minimizes the absolute error is the median, $m(X)$. The mean has much nicer properties than the median; for instance, $E(X + Y) = E(X) + E(Y)$, but there is no general expression for $m(X + Y)$.

If you have a vector $\vec X = (X_1, X_2)$ estimated by $\vec x = (x_1, x_2)$, then for the squared error it doesn't matter whether you consider the components separately or together: $||\vec X - \vec x||^2 = (X_1 - x_1)^2 + (X_2 - x_2)^2$, so the squared error of the components just adds. You can't do that with absolute error. This means that the squared error is independent of re-parameterizations: for instance, if you define $\vec Y_1 = (X_1 + X_2, X_1 - X_2)$, then the minimum-squared-deviance estimators for $Y$ and $X$ are the same, but the minimum-absolute-deviance estimators are not.

For independent random variables, variances (expected squared errors) add: $\Var(X + Y) = \Var(X) + \Var(Y)$. The same is not true for expected absolute error.
For a sample from a multivariate Gaussian distribution (where probability density is exponential in the squared distance from the mean), all of its coordinates are Gaussian, no matter what coordinate system you use. For a multivariate Laplace distribution (like a Gaussian but with absolute, not squared, distance), this isn't true.

The squared error of a probabilistic classifier is a proper scoring rule. If you had an oracle telling you the actual probability of each class for each item, and you were being scored based on your Brier score, your best bet would be to predict what the oracle told you for each class. This is not true for absolute error. (For instance, if the oracle tells you that $P(Y=1) = 0.9$, then predicting that $P(Y=1) = 0.9$ yields an expected score of $0.9\cdot 0.1 + 0.1 \cdot 0.9 = 0.18$; you should instead predict that $P(Y=1) = 1$, for an expected score of $0.9\cdot 0 + 0.1 \cdot 1 = 0.1$.)

Some mathematical coincidences or conveniences involving the squared error are more important, though. They don't pose technical problem-solving issues; rather, they give us intrinsic reasons why minimizing the squared error might be a good idea:

When fitting a Gaussian distribution to a set of data, the maximum-likelihood fit is that which minimizes the squared error, not the absolute error.

When doing dimensionality reduction, finding the basis that minimizes the squared reconstruction error yields principal component analysis, which is nice to compute, coordinate-independent, and has a natural interpretation for multivariate Gaussian distributions (finding the axes of the ellipse that the distribution makes). There's a variant called "robust PCA" that is sometimes applied to minimizing absolute reconstruction error, but it seems to be less well-studied and harder to understand and compute.

Looking deeper

One might well ask whether there is some deep mathematical truth underlying the many different conveniences of the squared error.
As far as I know, there are a few (which are related in some sense, but not, I would say, the same):

Differentiability

The squared error is everywhere differentiable, while the absolute error is not (its derivative is undefined at 0). This makes the squared error more amenable to the techniques of mathematical optimization. To optimize the squared error, you can just set its derivative equal to 0 and solve; optimizing the absolute error often requires more complex techniques.

Inner products

The squared error is induced by an inner product on the underlying space. An inner product is basically a way of "projecting vector $x$ along vector $y$," or figuring out "how much does $x$ point in the same direction as $y$." In finite dimensions this is the standard (Euclidean) inner product $\langle a, b\rangle = \sum_i a_ib_i$. Inner products are what allow us to think geometrically about a space, because they give a notion of: a right angle ($x$ and $y$ are at right angles if $\langle x, y\rangle = 0$); and a length (the length of $x$ is $||x|| = \sqrt{\langle x, x\rangle}$).

By "the squared error is induced by the Euclidean inner product" I mean that the squared error between $x$ and $y$ is $||x-y||^2$, the square of the Euclidean distance between them. In fact the Euclidean inner product is in some sense the "only possible" axis-independent inner product in a finite-dimensional vector space, which means that the squared error has uniquely nice geometric properties.

For random variables, in fact, you can define a similar inner product: $\langle X, Y\rangle = E(XY)$. This means that we can think of a "geometry" of random variables, in which two variables make a "right angle" if $E(XY) = 0$. Not coincidentally, the "length" of $X$ is $\sqrt{E(X^2)}$, which is related to its variance.
In fact, in this framework, "independent variances add" is just a consequence of the Pythagorean Theorem: \begin{align} \Var(X + Y) &= ||(X - \mu_X)\, + (Y - \mu_Y)||^2 \\ &= ||X - \mu_X||^2 + ||Y - \mu_Y||^2 \\ &= \Var(X)\quad\ \ \, + \Var(Y). \end{align}

Beyond squared error

Given these nice mathematical properties, would we ever not want to use squared error? Well, as I mentioned at the very beginning, sometimes absolute error is closer to what we "care about" in practice. For instance, if your data has tails that are fatter than Gaussian, then minimizing the squared error can place too much weight on outlying points. The absolute error is less sensitive to such outliers. (For instance, if you observe an outlier in your sample, it changes the squared-error-minimizing mean proportionally to the magnitude of the outlier, but hardly changes the absolute-error-minimizing median at all!)

And although the absolute error doesn't enjoy the same nice mathematical properties as the squared error, that just means absolute-error problems are harder to solve, not that they're objectively worse in some sense. The upshot is that as computational methods have advanced, we've become able to solve absolute-error problems numerically, leading to the rise of the subfield of robust statistical methods. In fact, there's a fairly nice correspondence between some squared-error and absolute-error methods:

Squared error           | Absolute error
========================|============================
Mean                    | Median
Variance                | Expected absolute deviation
Gaussian distribution   | Laplace distribution
Linear regression       | Quantile regression
PCA                     | Robust PCA
Ridge regression        | LASSO

As we get better at modern numerical methods, no doubt we'll find other useful absolute-error-based techniques, and the gap between squared-error and absolute-error methods will narrow. But because of the connection between the squared error and the Gaussian distribution, I don't think it will ever go away entirely.
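The mean-vs-median and outlier-sensitivity claims above are easy to check numerically (a small sketch with made-up data, using a brute-force grid over candidate estimates):

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
grid = np.linspace(0.0, 11.0, 11001)  # candidate point estimates

# total squared / absolute error for every candidate on the grid
sq_loss = ((data[:, None] - grid[None, :]) ** 2).sum(axis=0)
abs_loss = np.abs(data[:, None] - grid[None, :]).sum(axis=0)

best_sq = grid[sq_loss.argmin()]    # minimizer of squared error
best_abs = grid[abs_loss.argmin()]  # minimizer of absolute error
assert np.isclose(best_sq, data.mean())       # the mean, 4.0
assert np.isclose(best_abs, np.median(data))  # the median, 3.0

# outlier sensitivity: the mean chases an outlier, the median barely moves
with_outlier = np.append(data, 1000.0)
assert with_outlier.mean() - data.mean() > 100
assert abs(np.median(with_outlier) - np.median(data)) <= 0.5
```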
14,430
square things in statistics- generalized rationale [duplicate]
It's because of the close connection between many statistical methods and geometric concepts such as projections, distances, and the Pythagorean Theorem. For example, suppose that you view the data values $(x_1,x_2,\ldots,x_n)$ as a point in $n$-dimensional space. Then the sample SD is $1/\sqrt {n-1}$ times the distance between this point and the point of means $(\bar x,\bar x,\ldots,\bar x)$. And the sums of squares in one-way anova really do satisfy the Pythagorean Theorem, framed in a similar way.
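This distance interpretation can be verified in a couple of lines (the data values are made up):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
n = len(x)

# Euclidean distance from the data point (x_1, ..., x_n)
# to the point of means (x-bar, ..., x-bar)
dist = np.linalg.norm(x - x.mean())

# the sample SD is 1/sqrt(n-1) times that distance
assert np.isclose(np.std(x, ddof=1), dist / np.sqrt(n - 1))
```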
14,431
square things in statistics- generalized rationale [duplicate]
Because it makes the math easier. One can use other techniques, for example, for linear regression. These other methods tend to be more complicated in implementation details and have less elegant closed-form solutions. Thus they are often ignored until a project demands they be used.
14,432
square things in statistics- generalized rationale [duplicate]
Honestly, it's because it makes the math easier than if absolute value were used. Laplace in fact tried to use absolute value instead of squared differences; it makes things quite annoying. Here's a link to a description of the Laplace distribution: http://en.wikipedia.org/wiki/Laplace_distribution. Before computers, using absolute value instead of squared differences made life hard for the statistician.
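As a small illustration of the Laplace connection (made-up data): the Laplace log-likelihood in the location parameter is, up to constants, $-\sum_i |x_i - \mu|$, so maximizing it amounts to minimizing the absolute error, and the maximizer is the sample median.

```python
import numpy as np

x = np.array([0.5, 1.0, 2.0, 7.0, 9.0])
grid = np.linspace(0.0, 10.0, 10001)  # candidate location parameters

# Laplace log-likelihood in the location parameter (scale fixed at 1):
# log L(mu) = -n*log(2) - sum_i |x_i - mu|
loglik = -np.abs(x[:, None] - grid[None, :]).sum(axis=0) - len(x) * np.log(2)

mu_hat = grid[loglik.argmax()]
assert np.isclose(mu_hat, np.median(x))  # the median, 2.0
```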
How to best visualize differences in many proportions across three groups?
Thanks for making the data accessible and for an interesting dataset and graphical challenge. My main suggestion is a (Cleveland) dot chart. The most important details I would like to emphasise:

Superimposition here allows and eases comparison.

The order of topics in your displays appears quite arbitrary. Absent a natural order (e.g. time, space, an ordered variable) I would always sort on one of the variables to provide a framework. Which to use could be a matter of whether one is particularly interesting or important, a researcher's decision. Another possibility is to order on some measure of the differences between papers, so that topics receiving similar coverage were at one end and those receiving different coverage at the other end.

Open markers or point symbols allow overlap or identity to be resolved better than closed or solid markers or symbols, which in the worst cases obscure or occlude each other. (An alternative that might work quite well here is letters such as A, D and I for the three newspapers.)

There is clearly much scope for improving my design. For example, is the lettering too large and/or too heavy? On the other hand, the headings must be easily readable, or else the graph is a failure.

Some smaller, pickier points:

a. Red and green on your graph is a colour combination to be avoided. When different markers are used, colour choices are a little less crucial.

b. The horizontal ticks on your graph are distracting. In contrast, grid lines on mine are needed, but I try to make them unobtrusive by using thin, light lines.

c. Your graph shows percents and the total is about 20 $\times$ 0.1% or 2%, so 98% of the papers is something else? I used the proportions directly in the .csv provided.

Cleveland dot charts owe most to:

Cleveland, W.S. 1984. Graphical methods for data presentation: full scale breaks, dot charts, and multibased logging. American Statistician 38: 270-80.

Cleveland, W.S. 1985. The Elements of Graphing Data. Monterey, CA: Wadsworth.

Cleveland, W.S. 1994. The Elements of Graphing Data. Summit, NJ: Hobart Press.

One precursor (more famous statistically for quite different work!!!) was:

Pearson, E.S. 1956. Some aspects of the geometry of statistics: the use of visual presentation in understanding the theory and application of mathematical statistics. Journal of the Royal Statistical Society A 119: 125-146.

Another earlier use of the same main idea is in:

Snedecor, G.W. 1937. Statistical Methods Applied to Experiments in Agriculture and Biology. Ames, IA: Collegiate Press. See Figures 2.1, 2.3 (pp. 24, 39), and in each successive edition until 1956. Note that the title and the publisher change intermittently between editions.

For those interested, the graph was prepared in Stata after reading in the .csv with code:

graph dot (asis) prop, over(pub) over(label, sort(1)) asyvars marker(1, ms(Oh)) marker(2, ms(+)) marker(3, ms(Th)) linetype(line) lines(lc(gs12) lw(vthin)) scheme(s1color)
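For readers without Stata, here is a rough matplotlib sketch of the same design (added here, not part of the original answer; the proportions are invented placeholders, not the original .csv, and the topic names are made up):

```python
import matplotlib
matplotlib.use("Agg")               # draw off-screen
import matplotlib.pyplot as plt
import numpy as np

# Invented proportions for three papers across five made-up topics.
topics = np.array(["oil", "culture", "economy", "sports", "politics"])
A = np.array([0.0021, 0.0005, 0.0018, 0.0009, 0.0030])
D = np.array([0.0015, 0.0008, 0.0020, 0.0012, 0.0025])
I = np.array([0.0018, 0.0004, 0.0015, 0.0020, 0.0022])

order = np.argsort(A)               # sort topics on one variable
y = np.arange(len(topics))

fig, ax = plt.subplots()
# Open/distinct markers so overlapping points stay resolvable.
ax.scatter(A[order], y, marker="o", facecolors="none", edgecolors="C0", label="A")
ax.scatter(D[order], y, marker="+", color="C1", label="D")
ax.scatter(I[order], y, marker="^", facecolors="none", edgecolors="C2", label="I")
ax.set_yticks(y)
ax.set_yticklabels(topics[order])
ax.grid(axis="x", color="0.85", linewidth=0.5)  # thin, light grid lines
ax.legend()
fig.savefig("dotchart.png")
```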
How to best visualize differences in many proportions across three groups?
The dot plot from Nick Cox is probably best for the complete picture. If you really want to emphasize the first versus second relationship, here's a modification to your chart that offsets the difference bar with the length of the second bar. And for a different big picture view, you can try something like a slope chart or parallel coordinates plot. The lines may be a bit too crowded here, but it may work if you want to highlight on a subset of the topics. Also, you might try helpmeviz.com which is geared towards very specific data viz questions like this.
How to best visualize differences in many proportions across three groups?
My first instinct was to suggest a Mosaic plot; it graphs each sub-category as a rectangle, where one dimension represents the total count for the main category and the other dimension represents the sub-category's proportionate share. There's an R package to draw them, but it's also fairly straightforward to do with lower-level graphing tools. However, mosaic plots (like percentage-based stacked bar graphs) work best if there are only 2 or 3 categories in the dimension in which you want to compare proportions. So they would work well if you wanted to compare differences between topics in the proportion of articles that were in each of three newspapers, but not so much for your intended use, comparing differences between three newspapers in the proportion of coverage for each topic. A subtle but important distinction! For what you want to emphasize, I think the most effective graph is one of the simplest -- a grouped bar graph. More people understand bar graphs than dot charts; at a glance, you can see that you're comparing quantities of different size, and the values you want to compare are side-by-side.
However, if you really wanted to emphasize the differences in proportion, you could create a custom grouped bar graph, modified to position each group so that the median value per category is aligned with the axis, instead of the zero values:

Difference in proportion of coverage per Newspaper, relative to category median (narrow bars)
    ____-0.1%____0_____0.1%____0.2%_____
  |         |********|*****
A |~~~~~~~~|         |####          |
  |         |****|**********
B |~~      |     |####|            |
  |         |*****    |
C |~~~~~~~|~~~~~      |#######|    |
  |         |***      |
D |~~~~~~~~~~~|       |###########|##
  |
0.2%_____0.1%____0_____
Median proportion of coverage per category, all papers (large bars)

Note that the bars in each group are still aligned for easy comparison of size, and that each group's baseline is now positioned to the left of the axis according to that group's median value, while the bars that project to the right of the axis are equivalent to your second bar graph showing the difference between the top two categories. Regardless of whether you use a standard grouped bar graph or an offset-adjusted graph like the above, you could still take an idea from mosaic plots and make the width of each bar proportional to the total article count for that newspaper (so the size of the bar is proportional to the number of articles in that newspaper in that category). Since your test statistic is a property of each comparison, not of individual values, I don't think it's useful to scale every data point according to the significance. Instead, I would have an icon next to each grouping representing significance. For academic publication, the standard */**/*** has the benefit of familiarity, but you could get creative if you wanted to show the full continuum of the statistic.
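The offset computation behind that design can be sketched numerically (added here with invented proportions, not part of the original answer): each group's baseline is the per-topic median, and the narrow bars are deviations from it.

```python
import numpy as np

# Invented proportions (not the original data): rows = topics,
# columns = newspapers A, B, C.
props = np.array([
    [0.0021, 0.0015, 0.0018],   # topic 1
    [0.0005, 0.0008, 0.0004],   # topic 2
    [0.0018, 0.0020, 0.0015],   # topic 3
])

# Per-topic median = the group's baseline (the large bar drawn
# to the left of the axis in the design above).
medians = np.median(props, axis=1)

# Narrow bars either side of the axis: deviation from the median.
deviations = props - medians[:, None]

for topic, m, d in zip(["t1", "t2", "t3"], medians, deviations):
    print(topic, "median:", m, "deviations:", d)
```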
How to best visualize differences in many proportions across three groups?
Have you tried a bubble chart? https://code.google.com/apis/ajax/playground/?type=visualization#bubble_chart The individual topics could be circles and each circle could be a pie chart of the percentage that each news outlet covers the topic. The size of the circle could indicate the relative coverage of the topic, e.g. if more total articles are written about oil than culture then the oil circle has a bigger diameter.
The linearity of variance
$\DeclareMathOperator{\Cov}{Cov}$ $\DeclareMathOperator{\Corr}{Corr}$ $\DeclareMathOperator{\Var}{Var}$ The problem with your line of reasoning is "I think we can always assume $X$ to be independent from the other $X$s." $X$ is not independent of $X$. The symbol $X$ is being used to refer to the same random variable here. Once you know the value of the first $X$ to appear in your formula, this also fixes the value of the second $X$ to appear. If you want them to refer to distinct (and potentially independent) random variables, you need to denote them with different letters (e.g. $X$ and $Y$) or using subscripts (e.g. $X_1$ and $X_2$); the latter is often (but not always) used to denote variables drawn from the same distribution. If two variables $X$ and $Y$ are independent then $\Pr(X=a|Y=b)$ is the same as $\Pr(X=a)$: knowing the value of $Y$ does not give us any additional information about the value of $X$. But $\Pr(X=a|X=b)$ is $1$ if $a=b$ and $0$ otherwise: knowing the value of $X$ gives you complete information about the value of $X$. [You can replace the probabilities in this paragraph by cumulative distribution functions, or where appropriate, probability density functions, to essentially the same effect.] Another way of seeing things is that if two variables are independent then they have zero correlation (though zero correlation does not imply independence!) but $X$ is perfectly correlated with itself, $\Corr(X,X)=1$ so $X$ can't be independent of itself. 
Note that since the covariance is given by $\Cov(X,Y)=\Corr(X,Y)\sqrt{\Var(X)\Var(Y)}$, then $$\Cov(X,X)=1\sqrt{\Var(X)^2}=\Var(X)$$ The more general formula for the variance of a sum of two random variables is $$\Var(X+Y) = \Var(X) + \Var(Y) + 2 \Cov(X,Y)$$ In particular, $\Cov(X,X) = \Var(X)$, so $$\Var(X+X) = \Var(X) + \Var(X) + 2\Var(X) = 4\Var(X)$$ which is the same as you would have deduced from applying the rule $$\Var(aX) = a^2 \Var(X) \implies \Var(2X) = 4\Var(X)$$ If you are interested in linearity, then you might be interested in the bilinearity of covariance. For random variables $W$, $X$, $Y$ and $Z$ (whether dependent or independent) and constants $a$, $b$, $c$ and $d$ we have $$\Cov(aW + bX, Y) = a \Cov(W,Y) + b \Cov(X,Y)$$ $$\Cov(X, cY + dZ) = c \Cov(X,Y) + d \Cov(X,Z)$$ and overall, $$\Cov(aW + bX, cY + dZ) = ac \Cov(W,Y) + ad \Cov(W,Z) + bc \Cov(X,Y) + bd \Cov(X,Z)$$ You can then use this to prove the (non-linear) results for variance that you wrote in your post: $$\Var(aX) = \Cov(aX, aX) = a^2 \Cov(X,X) = a^2 \Var(X)$$ $$ \begin{align} \Var(aX + bY) &= \Cov(aX + bY, aX + bY) \\ &= a^2 \Cov(X,X) + ab \Cov(X,Y) + ba \Cov (X,Y) + b^2 \Cov(Y,Y) \\ \Var(aX + bY) &= a^2 \Var(X) + b^2 \Var(Y) + 2ab \Cov(X,Y) \end{align} $$ The latter gives, as a special case when $a=b=1$, $$\Var(X+Y) = \Var(X) + \Var(Y) + 2 \Cov(X,Y)$$ When $X$ and $Y$ are uncorrelated (which includes the case where they are independent), then this reduces to $\Var(X+Y) = \Var(X) + \Var(Y)$. So if you want to manipulate variances in a "linear" way (which is often a nice way to work algebraically), then work with the covariances instead, and exploit their bilinearity.
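These identities can be checked by simulation (a sketch added here, not part of the original answer; the particular distributions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.normal(scale=2.0, size=n)
Y = 0.5 * X + rng.normal(size=n)    # deliberately correlated with X

# Var(X + X) = Var(2X) = 4 Var(X), not 2 Var(X):
print(np.var(X + X), 4 * np.var(X))

# Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y):
lhs = np.var(X + Y)
rhs = np.var(X) + np.var(Y) + 2 * np.cov(X, Y, ddof=0)[0, 1]
print(lhs, rhs)
```

Both identities hold exactly for sample moments (with matching degrees-of-freedom conventions), not just in expectation.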
The linearity of variance
Another way of thinking about it is that with random variables $2X \neq X + X$. $2X$ would mean two times the value of the outcome of $X$, while $X + X$ would mean two trials of $X$. In other words, it's the difference between rolling a die once and doubling the result, vs rolling a die twice.
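The die example is easy to simulate (a sketch added here, not part of the original answer):

```python
import random

random.seed(42)
n = 100_000

doubled = [2 * random.randint(1, 6) for _ in range(n)]                    # 2X: one roll, doubled
summed = [random.randint(1, 6) + random.randint(1, 6) for _ in range(n)]  # X1 + X2: two rolls

def var(xs):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

# One die has variance 35/12; doubling multiplies it by 4,
# while summing two independent rolls only doubles it.
print(var(doubled))   # ~ 4 * 35/12 ≈ 11.67
print(var(summed))    # ~ 2 * 35/12 ≈ 5.83
```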
Why kurtosis of a normal distribution is 3 instead of 0
Kurtosis is certainly not the location of where the peak is. As you say, that's already called the mode. Kurtosis is the standardized fourth moment: If $Z=\frac{X-\mu}{\sigma}$ is a standardized version of the variable we're looking at, then the population kurtosis is the average fourth power of that standardized variable; $E(Z^4)$. The sample kurtosis is correspondingly related to the mean fourth power of a standardized set of sample values (in some cases it is scaled by a factor that goes to 1 in large samples). As you note, this fourth standardized moment is 3 in the case of a normal random variable. As Alecos notes in comments, some people define kurtosis as $E(Z^4)-3$; that's sometimes called excess kurtosis (it's also the fourth cumulant). When seeing the word 'kurtosis' you need to keep in mind this possibility that different people use the same word to refer to two different (but closely related) quantities. Kurtosis is usually either described as peakedness (say, how sharply curved the peak is - which was presumably the intent of choosing the word "kurtosis") or heavy-tailedness (often what people are interested in using it to measure), but in actual fact the usual fourth standardized moment doesn't quite measure either of those things. Indeed, the first volume of Kendall and Stuart gives counterexamples that show that higher kurtosis is not necessarily associated with either a higher peak (in a standardized variable) or fatter tails (in a rather similar way to how the third moment doesn't quite measure what many people think it does). However in many situations there's some tendency to be associated with both, in that greater peakedness and heavy-tailedness often tend to be seen when kurtosis is higher -- we should simply beware thinking it is necessarily the case. 
Kurtosis and skewness are strongly related (the kurtosis must be at least 1 more than the square of the skewness); interpretation of kurtosis is somewhat easier when the distribution is nearly symmetric. Darlington (1970) and Moors (1986) showed that the fourth moment measure of kurtosis is in effect variability about "the shoulders" - $\mu\pm\sigma$, and Balanda and MacGillivray (1988) suggest thinking of it in vague terms related to that sense (and consider some other ways to measure it). If the distribution is closely concentrated about $\mu\pm\sigma$, then kurtosis is (necessarily) small, while if the distribution is spread out away from $\mu\pm\sigma$ (which will tend to simultaneously pile it up in the center and move probability into the tails in order to move it away from the shoulders), fourth-moment kurtosis will be large. De Carlo (1997) is a reasonable starting place (after more basic resources like Wikipedia) for reading about kurtosis. Edit: I see some occasional questioning of whether higher peakedness (values near 0) can affect kurtosis at all. The answer is yes, definitely it can. That this is the case is a consequence of it being the fourth moment of a standardized variable -- to increase the fourth moment of a standardized variate you must increase $E(Z^4)$ while holding $E(Z^2)$ constant. This means that movement of probability further into the tail must be accompanied by some movement further in (inside $(-1,1)$); and vice versa -- if you put more weight at the center while holding the variance at 1, you also put some out in the tail. [NB as discussed in comments this is incorrect as a general statement; a somewhat different statement is required here.] This effect of variance being held constant is directly connected to the discussion of kurtosis as "variation about the shoulders" in Darlington and Moors' papers. 
That result is not some handwavy notion, but a plain mathematical equivalence - one cannot hold it to be otherwise without misrepresenting kurtosis. Now it's possible to increase the probability inside $(-1,1)$ without lifting the peak. Equally, it's possible to increase the probability outside $(-1,1)$ without necessarily making the distant tail heavier (by some typical tail-index, say). That is, it's quite possible to raise kurtosis while making the tail lighter (e.g. having a lighter tail beyond 2 sds either side of the mean, say). [My inclusion of Kendall and Stuart in the references is because their discussion of kurtosis is also relevant to this point.] So what can we say? Kurtosis is often associated with a higher peak and with a heavier tail, without either having to occur. Certainly it's easier to lift kurtosis by playing with the tail (since it's possible to get more than 1 sd away) and then adjusting the center to keep variance constant, but that doesn't mean that the peak has no impact; it assuredly does, and one can manipulate kurtosis by focusing on it instead. Kurtosis is largely but not only associated with tail heaviness -- again, look to the variation about the shoulders result; if anything that's what kurtosis is looking at, in an unavoidable mathematical sense. References Balanda, K.P. and MacGillivray, H.L. (1988), "Kurtosis: A critical review." American Statistician 42, 111-119. Darlington, Richard B. (1970), "Is Kurtosis Really "Peakedness?"." American Statistician 24, 19-22. Moors, J.J.A. (1986), "The meaning of kurtosis: Darlington reexamined." American Statistician 40, 283-284. DeCarlo, L.T. (1997), "On the meaning and use of kurtosis." Psychol. Methods, 2, 292-307. Kendall, M. G., and A. Stuart, The Advanced Theory of Statistics, Vol. 1, 3rd Ed. (more recent editions have Stuart and Ord)
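The $E(Z^4)$ definition is easy to verify by simulation (a sketch added here, not part of the original answer; the Laplace comparison is my own choice of a heavier-tailed example):

```python
import numpy as np

rng = np.random.default_rng(7)

# Kurtosis as the mean fourth power of the standardized variable:
def kurt(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4)

normal = rng.normal(size=1_000_000)
laplace = rng.laplace(size=1_000_000)   # heavier-tailed; population kurtosis 6

print(kurt(normal))    # ~ 3
print(kurt(laplace))   # ~ 6
```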
Why kurtosis of a normal distribution is 3 instead of 0
Kurtosis is certainly not the location of where the peak is. As you say, that's already called the mode. Kurtosis is the standardized fourth moment: If $Z=\frac{X-\mu}{\sigma}$, is a standardized vers
Why kurtosis of a normal distribution is 3 instead of 0

Kurtosis is certainly not the location of where the peak is. As you say, that's already called the mode. Kurtosis is the standardized fourth moment: if $Z=\frac{X-\mu}{\sigma}$ is a standardized version of the variable we're looking at, then the population kurtosis is the average fourth power of that standardized variable, $E(Z^4)$. The sample kurtosis is correspondingly related to the mean fourth power of a standardized set of sample values (in some cases it is scaled by a factor that goes to 1 in large samples). As you note, this fourth standardized moment is 3 in the case of a normal random variable. As Alecos notes in comments, some people define kurtosis as $E(Z^4)-3$; that's sometimes called excess kurtosis (it's also the fourth cumulant). When seeing the word 'kurtosis' you need to keep in mind this possibility that different people use the same word to refer to two different (but closely related) quantities.

Kurtosis is usually described either as peakedness (say, how sharply curved the peak is - which was presumably the intent of choosing the word "kurtosis") or as heavy-tailedness (often what people are interested in using it to measure), but in actual fact the usual fourth standardized moment doesn't quite measure either of those things. Indeed, the first volume of Kendall and Stuart gives counterexamples showing that higher kurtosis is not necessarily associated with either a higher peak (in a standardized variable) or fatter tails (in a rather similar way to how the third moment doesn't quite measure what many people think it does). However, in many situations there's some tendency toward both, in that greater peakedness and heavier tails are often seen when kurtosis is higher -- we should simply beware of thinking that is necessarily the case.

Kurtosis and skewness are strongly related (the kurtosis must be at least 1 more than the square of the skewness); interpretation of kurtosis is somewhat easier when the distribution is nearly symmetric. Darlington (1970) and Moors (1986) showed that the fourth-moment measure of kurtosis is in effect variability about "the shoulders", $\mu\pm\sigma$, and Balanda and MacGillivray (1988) suggest thinking of it in vague terms related to that sense (and consider some other ways to measure it). If the distribution is closely concentrated about $\mu\pm\sigma$, then kurtosis is (necessarily) small, while if the distribution is spread out away from $\mu\pm\sigma$ (which will tend to simultaneously pile it up in the center and move probability into the tails in order to move it away from the shoulders), fourth-moment kurtosis will be large. De Carlo (1997) is a reasonable starting place (after more basic resources like Wikipedia) for reading about kurtosis.

Edit: I see some occasional questioning of whether higher peakedness (values near 0) can affect kurtosis at all. The answer is yes, definitely it can. That this is the case is a consequence of it being the fourth moment of a standardized variable -- to increase the fourth moment of a standardized variate you must increase $E(Z^4)$ while holding $E(Z^2)$ constant. This means that movement of probability further into the tail must be accompanied by movement of some probability further in (inside $(-1,1)$), and vice versa -- if you put more weight at the center while holding the variance at 1, you also put some out in the tail. [NB as discussed in comments this is incorrect as a general statement; a somewhat different statement is required here.] This effect of the variance being held constant is directly connected to the discussion of kurtosis as "variation about the shoulders" in Darlington's and Moors' papers.

That result is not some handwavy notion, but a plain mathematical equivalence - one cannot hold it to be otherwise without misrepresenting kurtosis. Now it's possible to increase the probability inside $(-1,1)$ without lifting the peak. Equally, it's possible to increase the probability outside $(-1,1)$ without necessarily making the distant tail heavier (by some typical tail index, say). That is, it's quite possible to raise kurtosis while making the tail lighter (e.g. having a lighter tail beyond 2 sds either side of the mean, say). [My inclusion of Kendall and Stuart in the references is because their discussion of kurtosis is also relevant to this point.]

So what can we say? Kurtosis is often associated with a higher peak and with a heavier tail, without either having to occur. Certainly it's easier to lift kurtosis by playing with the tail (since it's possible to get more than 1 sd away) and then adjusting the center to keep variance constant, but that doesn't mean the peak has no impact; it assuredly does, and one can manipulate kurtosis by focusing on it instead. Kurtosis is largely but not only associated with tail heaviness -- again, look to the variation-about-the-shoulders result; if anything, that's what kurtosis is looking at, in an unavoidable mathematical sense.

References
Balanda, K.P. and MacGillivray, H.L. (1988), "Kurtosis: A critical review," American Statistician 42, 111-119.
Darlington, Richard B. (1970), "Is kurtosis really 'peakedness'?," American Statistician 24, 19-22.
Moors, J.J.A. (1986), "The meaning of kurtosis: Darlington reexamined," American Statistician 40, 283-284.
DeCarlo, L.T. (1997), "On the meaning and use of kurtosis," Psychological Methods 2, 292-307.
Kendall, M. G., and A. Stuart, The Advanced Theory of Statistics, Vol. 1, 3rd Ed. (more recent editions have Stuart and Ord)
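A quick simulation illustrates the fourth standardized moment at work (a sketch using only the standard library; the three distributions are my own illustrative choices, not from the answer above): the normal sits at 3, the lighter-tailed uniform below it, and the heavier-tailed Laplace above it.

```python
import random
import statistics

def sample_kurtosis(xs):
    """Mean fourth power of the standardized sample values
    (no small-sample scaling factor)."""
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    return statistics.fmean(((x - mu) / sigma) ** 4 for x in xs)

random.seed(0)
n = 200_000
normal  = [random.gauss(0, 1) for _ in range(n)]            # kurtosis 3
uniform = [random.uniform(-1, 1) for _ in range(n)]         # kurtosis 1.8
laplace = [random.expovariate(1) * random.choice((-1, 1))   # kurtosis 6
           for _ in range(n)]

for name, xs in [("normal", normal), ("uniform", uniform), ("laplace", laplace)]:
    print(name, round(sample_kurtosis(xs), 2))
```

The Laplace is built here as a symmetrized exponential; its population kurtosis of 6 (excess 3) is a standard textbook value.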
14,440
Why kurtosis of a normal distribution is 3 instead of 0
Here is a direct visualization to understand what the number "3" refers to as regards the kurtosis of the normal distribution. Let $X$ be normally distributed, and let $Z = (X-\mu)/\sigma$. Let $V = Z^4$. Consider the graph of the pdf of $V$, $p_V(v)$. This curve is to the right of zero, and extends to infinity, with 0.999 quantile 117.2, but much of the mass is near zero; e.g., 68% is less than 1.0. The mean of this distribution is the kurtosis. A common way to understand the mean is as the "point of balance" of the pdf graph. If $X$ is normal, this curve $p_V(v)$ balances at 3.0. This representation also explains why kurtosis measures heaviness of tails of a distribution. If $X$ is non-normal, the curve $p_V(v)$ "falls to the right" when the kurtosis is greater than 3.0, and so in this case the density of $X$ can be said to be "heavier-tailed than the normal distribution." Similarly, the curve $p_V(v)$ "falls to the left" when the kurtosis is less than 3.0, and so in this case the density of $X$ can be said to be "lighter-tailed than the normal distribution." It is commonly thought that higher kurtosis refers to more mass near the center (i.e., more mass near 0 in the pdf $p_V(v)$). While in many cases this is true, it is obviously not the (possibly increased) mass near zero that causes the graph to "fall to the right" in the high kurtosis case. It is instead the tail leverage. From this standpoint, the essentially correct "tail weight" interpretation of kurtosis might be more specifically characterized as "tail leverage" to avoid confusing "increased tail weight" with "increased mass in the tail." After all, it is possible that higher kurtosis corresponds to less mass in the tail, but where this diminished mass occupies a more distant position. "Give me the place to stand, and I shall move the earth." -Archimedes
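The quoted facts about $V = Z^4$ are easy to check by simulation (a sketch using only the standard library): the mean (= kurtosis) sits near 3, about 68% of the mass lies below 1, and the 0.999 quantile is near 117.2.

```python
import random
import statistics

# Simulate V = Z^4 for standard normal Z and check the quoted facts.
random.seed(1)
v = sorted(random.gauss(0, 1) ** 4 for _ in range(200_000))

mean_v = statistics.fmean(v)                      # the "balance point": ~3.0
frac_below_1 = sum(x < 1.0 for x in v) / len(v)   # ~0.68 of the mass
q999 = v[int(0.999 * len(v))]                     # ~117.2

print(round(mean_v, 2), round(frac_below_1, 3), round(q999, 1))
```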
14,441
Stats is not maths?
Mathematics deals with idealized abstractions that (almost always) have absolute solutions, or the fact that no such solution exists can generally be described fully. It is the science of discovering complex but necessary consequences from simple axioms. Statistics uses math, but it is not math. It's educated guesswork. It's gambling. Statistics does not deal with idealized abstractions (although it does use some as tools), it deals with real-world phenomena. Statistical tools often make simplifying assumptions to reduce the messy real-world data to something that fits into the problem domain of a solved mathematical abstraction. This allows us to make educated guesses, but that's really all that statistics is: the art of making very well-informed guesses. Consider hypothesis testing with p-values. Let's say we are testing some hypothesis with significance $\alpha = 0.01$, and after gathering data we find a p-value of $0.001$. So we reject the null hypothesis in favor of an alternative hypothesis. But what is this p-value really? What is the significance? Our test statistic was developed such that it conformed to a particular distribution, probably Student's t. Under the null hypothesis, the percentile of our observed test statistic is the p-value. In other words, the p-value gives the probability that we would get a value as far from the expectation of the distribution (or farther) as the observed test statistic. The significance level is a fairly arbitrary rule-of-thumb cutoff: setting it to $0.01$ is equivalent to saying, "it's acceptable if 1 in 100 repetitions of this experiment suggests that we reject the null, even if the null is in fact true." The p-value gives us the probability that we observe the data at hand given that the null is true (or rather, getting a bit more technical, that we observe data under the null hypothesis that gives us at least as extreme a value of the tested statistic as that which we found). 
If we're going to reject the null, then we want this probability to be small, to approach zero. In our specific example, we found that the probability of observing the data we gathered if the null hypothesis were true was just $0.1\%$, so we rejected the null. This was an educated guess. We never really know for sure that the null hypothesis is false using these methods, we just develop a measurement for how strongly our evidence supports the alternative. Did we use math to calculate the p-value? Sure. But math did not give us our conclusion. Based on the evidence, we formed an educated opinion, but it's still a gamble. We've found these tools to be extremely effective over the last 100 years, but the people of the future may wonder in horror at the fragility of our methods.
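To make the "probability of data at least this extreme under the null" reading concrete, here is a small Monte Carlo sketch. The setup is invented for illustration: assume the null says the data are $N(0,1)$, and we observed a sample mean of 0.8 from $n = 16$ points.

```python
import random
import statistics

# Monte Carlo sketch of a p-value: how often does the null hypothesis
# (data ~ N(0, 1)) produce a sample mean at least as extreme as ours?
random.seed(42)
n, observed_mean = 16, 0.8

null_means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
              for _ in range(100_000)]

# Two-sided p-value: fraction of null replications at least as far from 0.
p_value = sum(abs(m) >= observed_mean for m in null_means) / len(null_means)
print(p_value)  # small (well under alpha = 0.01), so we would reject the null
```

Nothing in the output says the null is false; it only says that data this extreme would be rare if it were true, which is exactly the "educated gamble" described above.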
14,442
Stats is not maths?
Tongue firmly in cheek: Einstein apparently wrote As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. so statistics is the branch of maths that describes reality. ;o) I'd say statistics is a branch of mathematics in the same way that logic is a branch of mathematics. It certainly includes an element of philosophy, but I don't think it is the only branch of mathematics where that is the case (see e.g. Morris Kline, "Mathematics - The Loss of Certainty", Oxford University Press, 1980).
14,443
Stats is not maths?
Well, if you say something like "statistics, where you can't build everything on basic axioms", then you should probably read about Kolmogorov's axiomatic theory of probability. Kolmogorov defines probability in an abstract and axiomatic way, as you can see in this pdf on page 42 or here at the bottom of page 1 and the next pages. Just to give you a flavour of his abstract definitions, he defines a random variable as a 'measurable' function, as explained in a more 'intuitive' way here: If a random variable is a function, then how do we define a function of a random variable. With a very limited number of axioms, and using results from (again, maths) measure theory, he can define concepts such as random variables, distributions, conditional probability, ... in an abstract way and derive all the well-known results like the law of large numbers, ... from this set of axioms. I advise you to give it a try and you will be surprised by the mathematical beauty of it. For an explanation of p-values I refer to: Misunderstanding a P-value?
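For reference, the axiomatic core is remarkably small: given a sample space $\Omega$ and a $\sigma$-algebra $\mathcal{F}$ of events, a probability measure $P$ need only satisfy three axioms.

```latex
\begin{align*}
&\text{(1) Non-negativity:}       && P(A) \ge 0 \ \text{for every } A \in \mathcal{F},\\
&\text{(2) Normalization:}        && P(\Omega) = 1,\\
&\text{(3) Countable additivity:} && P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr)
    = \sum_{i=1}^{\infty} P(A_i) \ \text{for pairwise disjoint } A_i \in \mathcal{F}.
\end{align*}
```

Everything else mentioned above - random variables as measurable functions, distributions, conditional probability, the law of large numbers - is built on top of these.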
14,444
Stats is not maths?
I have no rigorous or philosophical basis for answering this, but I've heard the "stats is not math" complaint often from people, usually physics types. I think people want guarantees of certainty from their math, and statistics (usually) offers only probabilistic conclusions with associated p-values. Actually, this is exactly what I love about stats. We live in a fundamentally uncertain world, and we do the best we can to understand it. And we do a great job, all things considered.
14,445
Stats is not maths?
Statistical tests, models, and inference tools are formulated in the language of mathematics, and statisticians have mathematically proven thick books of very important and interesting results about them. In many cases, the proofs provide compelling evidence that the statistical tools in question are reliable and/or powerful. Statistics and its community may not be "pure" enough for mathematicians of a certain taste, but it is definitely invested in math extremely deeply, and theoretical statistics is just as much a branch of mathematics as theoretical physics or theoretical computer science.
14,446
Stats is not maths?
Maybe it's because I'm a plebe and haven't taken any advanced mathematical courses, but I don't see why statistics isn't mathematics. The arguments here and on a duplicate question seem to make two primary points as to why statistics isn't mathematics*. It isn't exact/certain, and as such relies on assumptions. It applies math to problems, and anytime you apply math it is no longer math.

Isn't exact and uses assumptions
Assumptions/approximations are useful for lots of math. The properties of a triangle that I learned about in grade school are, I believe, considered true math, even though they don't hold true in non-Euclidean geometry. So clearly an admission of the limits of a branch of math, or stated another way "assuming XYZ the following is valid", doesn't disqualify the branch from being "true" math. Calculus, I'm certain, would be considered a pure form of math, but limits are the core tool we built it on. We can keep calculating up to the limit, just as we can keep making a sample size larger, but neither gives increased insight past a certain threshold.

Once you apply math it isn't math
The obvious contradiction here is that we use math to prove mathematical theorems, and no one argues that proving mathematical theorems isn't math. The next statement might be that thing x isn't math if you use math to get a result. That doesn't make any sense either. The statement I would agree with is that when you use the results of a calculation to make a decision, the decision isn't math. That doesn't mean that the analysis leading up to the decision isn't math. I think when we use statistical analysis all the math performed is real math. It is only once we hand the results to someone for interpretation that statistics exits mathematics. As such, statistics and statisticians are doing real mathematics and are real mathematicians. It is the interpretation done by the business and/or the translation of the results to the business by the statistician that isn't math.

From the comments: whuber said: If you were to replace "statistics" by "chemistry," "economics," "engineering," or any other field that employs mathematics (such as home economics), it appears none of your argument would change. I think the key difference between "chemistry", "engineering", and "balancing my checkbook" is that those fields just use existing mathematical concepts. It is my understanding that statisticians like Gauss expanded the body of mathematical concepts. I believe (this might be blatantly wrong) that in order to earn a PhD in statistics you have to contribute, in some way, to expanding the body of mathematical concepts. Chemistry/engineering PhD candidates don't have that requirement to my knowledge. The distinction that statistics contributes to the body of mathematical concepts is what sets it apart from the other fields that merely use mathematical concepts.

*: The notable exception is this answer that effectively states the boundaries are artificial due to various social reasons. I think that is the only true answer, but where is the fun in that? ;)
14,447
Stats is not maths?
The "difference" lies in: inductive reasoning vs. deductive reasoning vs. inference. For instance, no mathematical theorem can tell you what distribution or prior you should use for your data/model. By the way, Bayesian statistics is an axiomatised area.
14,448
Stats is not maths?
This may be a very unpopular opinion, but given the history and formulation of concepts of statistics (and probability theory), I consider statistics to be a subbranch of physics. Indeed, Gauss initially formalized the least squares regression model in astronomical predictions. The majority of contributions to statistics before Fisher were from Physicists (or highly applied mathematicians whose work would be called Physics by today's standards): Lyapunov, De Moivre, Gauss, and one or more of the Bernoullis. The overarching principle is the characterization of errors and seeming randomness propagated from an infinite number of unmeasured sources of variation. As experiments became harder to control, experimental errors needed to be formally described and accounted for to calibrate the preponderance of experimental evidence against the proposed mathematical model. Later, as particle physics delved into quantum physics, formalizing particles as random distributions gave a much more concise language to describe the seemingly uncontrollable randomness with photons and electrons. The properties of estimators such as their mean (center of mass) and standard deviation (second moment of deviations) are very intuitive to physicists. The majority of limit theorems can be loosely connected to Murphy's law, i.e. that the limiting normal distribution is maximum entropy. So statistics is a subbranch of physics.
14,449
How to generate a non-integer amount of consecutive Bernoulli successes?
We can solve this via a couple of "tricks" and a little math. Here is the basic algorithm: generate a Geometric random variable with probability of success $p$; the outcome of this random variable determines a fixed known value $f_n \in [0,1]$; then generate a $\mathrm{Ber}(f_n)$ random variable using fair coin flips generated from blockwise paired flips of our $\mathrm{Ber}(p)$ coin. The resulting outcome will be $\mathrm{Ber}(p^a)$ for any $a \in (0,1)$, which is all we need.

To make things more digestible, we'll break things into pieces.

Piece 1: Without loss of generality, assume that $0 < a < 1$. If $a \geq 1$, then we can write $p^a = p^n p^b$ for some positive integer $n$ and some $0 \leq b < 1$. But, for any two independent Bernoullis, we have $$\renewcommand{\Pr}{\mathbb P} \Pr(X_1 = X_2 = 1) = p_1 p_2 \>. $$ We can generate a $p^n$ Bernoulli from our coin in the obvious way. Hence, we need only concern ourselves with generating $\mathrm{Ber}(p^a)$ when $a \in (0,1)$.

Piece 2: Know how to generate an arbitrary $\mathrm{Ber}(q)$ from fair coin flips. There is a standard way to do this. Expand $q = 0.q_1 q_2 q_3 \ldots$ in its binary expansion and then use our fair coin flips to "match" the digits of $q$. The first match determines whether we declare a success ("heads") or failure ("tails"): if $q_n = 1$ and our coin flip is heads, declare heads; if $q_n = 0$ and our coin flip is tails, declare tails. Otherwise, consider the subsequent digit against a new coin flip.

Piece 3: Know how to generate a fair coin flip from unfair ones with unknown bias. This is done, assuming $p \in (0,1)$, by flipping the coin in pairs. If we get $HT$, declare a heads; if we get $TH$, declare a tails; otherwise repeat the experiment until one of the two aforementioned outcomes occurs. They are equally probable, so must have probability $1/2$.

Piece 4: Some math. (Taylor to the rescue.) By expanding $h(p) = p^a$ around $p_0 = 1$, Taylor's theorem asserts that $$ p^a = 1 - a(1-p) - \frac{a(1-a)}{2!} (1-p)^2 - \frac{a(1-a)(2-a)}{3!} (1-p)^3 - \cdots \>. $$ Note that because $0 < a < 1$, each term after the first is negative, so we have $$ p^a = 1 - \sum_{n=1}^\infty b_n (1-p)^n \>, $$ where $0 \leq b_n \leq 1$ are known a priori. Hence $$ 1 - p^a = \sum_{n=1}^{\infty} b_n (1-p)^n = \sum_{n=1}^\infty b_n \Pr(G \geq n) = \sum_{n=1}^\infty f_n \Pr(G = n) = \mathbb E f(G), $$ where $G \sim \mathrm{Geom}(p)$, $f_0 = 0$ and $f_n = \sum_{k=1}^n b_k$ for $n \geq 1$. And, we already know how to use our coin to generate a Geometric random variable with probability of success $p$.

Piece 5: A Monte Carlo trick. Let $X$ be a discrete random variable taking values in $[0,1]$ with $\Pr(X = x_n) = p_n$. Let $U \mid X \sim \mathrm{Ber}(X)$. Then $$ \Pr(U = 1) = \sum_n x_n p_n. $$ Taking $p_n = p(1-p)^n$ and $x_n = f_n$, we see now how to generate a $\mathrm{Ber}(1-p^a)$ random variable, and this is equivalent to generating a $\mathrm{Ber}(p^a)$ one.
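The five pieces can be assembled into a short simulation. The following Python sketch is not part of the original answer (helper names like `fair_bit` and `ber_q` are ours); it assumes $0 < a < 1$ and queries only the biased $\mathrm{Ber}(p)$ coin:

```python
import random

def fair_bit(flip):
    # Piece 3: von Neumann trick -- pair biased flips until they differ;
    # HT -> 1 and TH -> 0 are equally likely, so the result is a fair bit.
    while True:
        a, b = flip(), flip()
        if a != b:
            return a

def ber_q(q, flip):
    # Piece 2: Bernoulli(q) from fair bits -- success iff a uniform
    # binary stream falls below the binary expansion of q.
    while True:
        q *= 2
        d = 1 if q >= 1 else 0
        q -= d
        b = fair_bit(flip)
        if b != d:
            return int(b < d)

def ber_p_pow_a(flip, a):
    # Pieces 1, 4, 5: draw G ~ Geom(p) (tails before the first head),
    # compute f_G from the Taylor coefficients b_1 = a, b_{n+1} = b_n (n-a)/(n+1),
    # and return the complement of a Ber(f_G) draw, which is Ber(p^a).
    g = 0
    while flip() == 0:
        g += 1
    f, b = 0.0, a
    for n in range(1, g + 1):
        f += b
        b *= (n - a) / (n + 1)
    return 1 - ber_q(f, flip)

random.seed(0)
p, a = 0.3, 0.5
flip = lambda: 1 if random.random() < p else 0   # the Ber(p) coin; p is "unknown" to the sampler
est = sum(ber_p_pow_a(flip, a) for _ in range(20000)) / 20000
print(est)   # should be close to p**a = 0.3**0.5, about 0.548
```

With 20,000 draws the Monte Carlo error is a few tenths of a percent, so the estimate lands close to $p^a$.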
14,450
How to generate a non-integer amount of consecutive Bernoulli successes?
Is the following answer silly? If $X_1,\dots,X_n$ are independent $\mathrm{Ber}(p)$ and $Y_n$ has distribution $\mathrm{Ber}\left(\left(\sum_{i=1}^n X_i/n \right)^a\right)$, then $Y_n$ will be approximately distributed as $\mathrm{Ber}(p^a)$, when $n\to\infty$. Hence, if you don't know $p$, but you can toss this coin a lot of times, it is possible to sample (approximately) from a $\mathrm{Ber}(p^a)$ random variable.

Example R code:

n <- 1000000
p <- 1/3  # works for any 0 <= p <= 1
a <- 4
x <- rbinom(n, 1, p)
y <- rbinom(n, 1, mean(x)^a)
cat("p^a =", p^a, "\n")
cat("est =", mean(y))

Results:

p^a = 0.01234568
est = 0.012291
14,451
How to generate a non-integer amount of consecutive Bernoulli successes?
I posted the following exposition of this question and cardinal's answer to the General Discussion forum of the current Analytic Combinatorics class on Coursera, "Application of power series to constructing a random variable." I'm posting a copy here as community wiki to make this publicly and more permanently available.

There was an interesting question and answer on stat.stackexchange.com related to power series: "How to generate a non-integer amount of consecutive Bernoulli successes?" I'll paraphrase the question and the answer by cardinal.

Suppose we have a possibly unfair coin which is heads with probability $p$, and a positive real number $\alpha$. How can we construct an event whose probability is $p^\alpha$? If $\alpha$ were a positive integer, we could just flip the coin $\alpha$ times and let the event be that all tosses were heads. However, if $\alpha$ is not an integer, say $1/2$, then this doesn't make sense, but we can use this idea to reduce to the case that $0 \lt \alpha \lt 1$. If we want to construct an event whose probability is $p^{3.5}$, we take the intersection of independent events whose probabilities are $p^3$ and $p^{0.5}$.

One thing we can do is construct an event with any known probability $p' \in [0,1]$. To do this, we can construct a stream of fair bits by repeatedly flipping the coin twice, reading $HT$ as $1$ and $TH$ as $0$, and ignoring $HH$ and $TT$. We compare this stream with the binary expansion of $p' = 0.a_1a_2a_3..._2$. The event that the first disagreement is where $a_i=1$ has probability $p'$. We don't know $p^\alpha$, so we can't use this directly, but it will be a useful tool.

The main idea is that we would like to use the power series for $p^\alpha = (1-q)^\alpha = 1 - \alpha q - \frac{\alpha(1-\alpha)}{2} q^2 - \frac{\alpha (1-\alpha)(2-\alpha)}{3!}q^3 -...$ where $p=1-q$. We can construct events whose probabilities are $q^n$ by flipping the coin $n$ times and seeing if they are all tails, and we can produce an event with probability $p' q^n$ by comparing the binary digits of $p'$ with a fair bit stream as above and checking whether $n$ tosses are all tails.

Construct a geometric random variable $G$ with parameter $p$. This is the number of tails before the first head in an infinite sequence of coin tosses. $P(G=n) = (1-p)^np = q^n p$. (Some people use a definition which differs by $1$.)

Given a sequence $t_0, t_1, t_2, ...$, we can produce $t_G$: flip the coin until the first head, and if there are $G$ tails before the first head, take the element of the sequence of index $G$. If each $t_n \in [0,1]$, we can compare $t_G$ with a uniform random variable in $[0,1]$ (constructed as above) to get an event with probability $E[t_G] = \sum_n t_n P(G=n) = \sum_n t_n q^n p$.

This is almost what we need. We would like to eliminate that $p$ to use the power series for $p^\alpha$ in $q$. $$1 = p + qp + q^2p + q^3p + ...$$ $$q^n = q^np + q^{n+1}p + q^{n+2}p + ...$$ $$\begin{eqnarray} \sum_n s_n q^n & = & \sum_n s_n (q^n p + q^{n+1}p + q^{n+2}p + ...) \newline & = & \sum_n (s_0 + s_1 + ... + s_n) q^n p \end{eqnarray}$$

Consider $1-p^\alpha = \alpha q + \frac{\alpha(1-\alpha)}{2} q^2 + ... $. Let $t_n$ be the sum of the coefficients of $q$ through $q^n$. Then $1-p^\alpha = \sum_n t_n q^n p$. Each $t_n\in [0,1]$ since the coefficients are positive and sum to $1-0^\alpha = 1$, so we can construct an event with probability $1-p^\alpha$ by comparing a fair bit stream with the binary expansion of $t_G$. The complement has probability $p^\alpha$ as required.

Again, the argument is due to cardinal.
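The rearrangement step above, $\sum_n s_n q^n = \sum_n (s_0+\dots+s_n)\,q^n p$, is easy to check numerically. A small Python sketch (the coefficients $s_n$ below are arbitrary illustrative values, not taken from the answer):

```python
q = 0.4
p = 1 - q
s = [0.1, 0.5, 0.2, 0.7, 0.3]   # arbitrary coefficients; implicitly zero beyond index 4

# left side: sum_n s_n q^n
lhs = sum(sn * q ** n for n, sn in enumerate(s))

# right side: sum_n (s_0 + ... + s_n) q^n p, truncated once q^n is negligible
rhs, partial = 0.0, 0.0
for n in range(1000):
    partial += s[n] if n < len(s) else 0.0   # partial sum s_0 + ... + s_n
    rhs += partial * q ** n * p

print(lhs, rhs)   # the two sums agree
```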
14,452
How to generate a non-integer amount of consecutive Bernoulli successes?
The very complete answer by cardinal and subsequent contributions inspired the following remark/variant. Let PZ stand for "Probability of Zero" and $q:=1-p$. If $X_n$ is an iid Bernoulli sequence with PZ $q$, then $M_n := \max(X_1,\,X_2,\,\dots, X_n)$ is a Bernoulli r.v. with PZ $q^n$. Now making $n$ random, i.e., replacing it by an integer rv $N \geq 1$, leads to a Bernoulli rv $M_N$ with $$ \mathrm{Pr}\{M_N =0\} = \sum_{n=1}^\infty \mathrm{Pr}\{M_N =0 \,\vert\, N =n\} \mathrm{Pr}\{N =n\} = \sum_{n=1}^\infty \mathrm{Pr}\{N =n\} \, q^n. $$ So if $0 < a < 1$ and if we take $\mathrm{Pr}\{N =n\} =b_n$ from cardinal's answer, we find $\mathrm{Pr}\{M_N =0\} = 1- p^a$, and $1-M_N$ is $\mathrm{Ber}(p^a)$ as wanted. This is indeed possible since the coefficients $b_n$ satisfy $b_n \geqslant 0$ and they sum to $1$. The discrete distribution of $N$ depends only on $a$ with $0 < a < 1$; recall $$ \mathrm{Pr}\{N =n\} = \frac{a}{n}\,\prod_{k=1}^{n-1}\left(1 - a/k\right) \qquad (n \geq 1). $$ It has interesting features: it turns out to have an infinite expectation and a heavy-tail behaviour $n \,b_n \sim c/n^a$ with $c = -1/\Gamma(-a) >0$. Though $M_N$ is the maximum of $N$ rvs, determining it requires computing at most $N$ of the $X_k$, since the result is known as soon as one $X_k$ equals $1$. The number of computed $X_k$ is geometrically distributed.
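As a quick check that this mixing distribution does what is claimed, the following Python sketch (ours, not part of the answer) evaluates $\sum_n \mathrm{Pr}\{N=n\}\, q^n$ using the product formula above and compares it to $1 - p^a$:

```python
a, p = 0.5, 0.3
q = 1 - p

total = 0.0
for n in range(1, 400):                # q^400 is negligible, so truncation is safe
    b_n = a / n                        # Pr{N = n} = (a/n) * prod_{k=1}^{n-1} (1 - a/k)
    for k in range(1, n):
        b_n *= 1 - a / k
    total += b_n * q ** n

print(total, 1 - p ** a)   # the mixture hits 1 - p^a, so 1 - M_N is Ber(p^a)
```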
14,453
Advantages and disadvantages of SVM
There are four main advantages: firstly, it has a regularisation parameter, which makes the user think about avoiding over-fitting. Secondly, it uses the kernel trick, so you can build in expert knowledge about the problem via engineering the kernel. Thirdly, an SVM is defined by a convex optimisation problem (no local minima) for which there are efficient methods (e.g. SMO). Lastly, it is an approximation to a bound on the test error rate, and there is a substantial body of theory behind it which suggests it should be a good idea.

The disadvantages are that the theory only really covers the determination of the parameters for a given value of the regularisation and kernel parameters and choice of kernel. In a way the SVM moves the problem of over-fitting from optimising the parameters to model selection. Sadly kernel models can be quite sensitive to over-fitting the model selection criterion; see G. C. Cawley and N. L. C. Talbot, "Over-fitting in model selection and subsequent selection bias in performance evaluation", Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010. (pdf) Note however that this problem is not unique to kernel methods; most machine learning methods have similar problems.

The hinge loss used in the SVM results in sparsity. However, often the optimal choice of kernel and regularisation parameters means you end up with all data being support vectors. If you really want a sparse kernel machine, use something that was designed to be sparse from the outset (rather than sparsity being a useful byproduct), such as the Informative Vector Machine.

The loss function used for support vector regression doesn't have an obvious statistical interpretation; often expert knowledge of the problem can be encoded in the loss function, e.g. Poisson, Beta, or Gaussian. Likewise, in many classification problems you actually want the probability of class membership, so it would be better to use a method like Kernel Logistic Regression, rather than post-process the output of the SVM to get probabilities.

That is about all I can think of off-hand.
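To make the sparsity point concrete: the SVM's hinge loss is exactly zero for points classified correctly with margin, so those points contribute nothing to the solution and only the remaining points become support vectors. A minimal sketch (our own helper function, not library code):

```python
def hinge(y, fx):
    # hinge loss max(0, 1 - y*f(x)): zero whenever y*f(x) >= 1,
    # i.e. for points correctly classified outside the margin
    return max(0.0, 1.0 - y * fx)

print(hinge(+1, 2.0))   # 0.0 -- outside the margin, contributes nothing
print(hinge(+1, 0.5))   # 0.5 -- correct side but inside the margin
print(hinge(-1, 0.5))   # 1.5 -- misclassified, penalised linearly
```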
14,454
Interpreting R's ur.df (Dickey-Fuller unit root test) results
It seems the creators of this particular R command presume one is familiar with the original Dickey-Fuller formulae, so did not provide the relevant documentation for how to interpret the values. I found that Enders was an incredibly helpful resource (Applied Econometric Time Series 3e, 2010, p. 206-209; I imagine other editions would also be fine). Below I'll use data from the urca package, real income in Denmark, as an example.

income <- ts(denmark$LRY)

It might be useful to first describe the 3 different formulae Dickey-Fuller used to get different hypotheses, since these match the ur.df "type" options. Enders specifies that in all of these 3 cases, the consistent term used is gamma, the coefficient for the previous value of y, the lag term. If gamma = 0, then there is a unit root (random walk, nonstationary). Where the null hypothesis is gamma = 0, if p < 0.05, then we reject the null (at the 95% level), and presume there is no unit root. If we fail to reject the null (p > 0.05) then we presume a unit root exists. From here, we can proceed to interpreting the tau's and phi's.

type = "none": $\Delta y_t = \gamma \, y_{t-1} + e_t$ (formula from Enders p. 208), where $e_t$ is the error term, presumed to be white noise; $\gamma = a-1$ from $y_t = a \,y_{t-1} + e_t$; and $y_{t-1}$ refers to the previous value of $y$, so is the lag term.

For type = "none", tau (tau1 in the R output) is the null hypothesis for gamma = 0. Using the Denmark income example, I get "Value of test-statistic is: 0.7944" and "Critical values for test statistics: tau1 -2.6 -1.95 -1.61". Given that the test statistic is within all 3 regions (1%, 5%, 10%) where we fail to reject the null, we should presume the data is a random walk, i.e. that a unit root is present. In this case, the tau1 refers to the gamma = 0 hypothesis. The "z.lag.1" is the gamma term, the coefficient for the lag term (y(t-1)), which has p = 0.431, which we fail to reject as significant, simply implying that gamma isn't statistically significant to this model.

Here is the output from R:

summary(ur.df(y=income, type = "none", lags=1))

###############################################
# Augmented Dickey-Fuller Test Unit Root Test #
###############################################

Test regression none

Call:
lm(formula = z.diff ~ z.lag.1 - 1 + z.diff.lag)

Residuals:
      Min        1Q    Median        3Q       Max
-0.044067 -0.016747 -0.006596  0.010305  0.085688

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
z.lag.1    0.0004636  0.0005836   0.794    0.431
z.diff.lag 0.1724315  0.1362615   1.265    0.211

Residual standard error: 0.0251 on 51 degrees of freedom
Multiple R-squared: 0.04696, Adjusted R-squared: 0.009589
F-statistic: 1.257 on 2 and 51 DF, p-value: 0.2933

Value of test-statistic is: 0.7944

Critical values for test statistics:
      1pct  5pct 10pct
tau1  -2.6 -1.95 -1.61

type = "drift" (your specific question above): $\Delta y_t = a_0 + \gamma \, y_{t-1} + e_t$ (formula from Enders p. 208), where $a_0$ is "a sub-zero" and refers to the constant, or drift term.

Here is where the output interpretation gets trickier. "tau2" is still the $\gamma=0$ null hypothesis. In this case, where the first test statistic = -1.4891 is within the region of failing to reject the null, we should again presume a unit root, that $\gamma=0$.

The phi1 term refers to the second hypothesis, which is a combined null hypothesis of $a_0 = \gamma = 0$. This means that BOTH of the values are tested to be 0 at the same time. If p < 0.05, we reject the null, and presume that AT LEAST one of these is false, i.e. one or both of the terms $a_0$ or $\gamma$ are not 0. Failing to reject this null implies that BOTH $a_0$ AND $\gamma = 0$, implying 1) that $\gamma=0$, therefore a unit root is present, AND 2) that $a_0=0$, so there is no drift term.

Here is the R output:

summary(ur.df(y=income, type = "drift", lags=1))

###############################################
# Augmented Dickey-Fuller Test Unit Root Test #
###############################################

Test regression drift

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)

Residuals:
      Min        1Q    Median        3Q       Max
-0.041910 -0.016484 -0.006994  0.013651  0.074920

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.43453    0.28995   1.499    0.140
z.lag.1     -0.07256    0.04873  -1.489    0.143
z.diff.lag   0.22028    0.13836   1.592    0.118

Residual standard error: 0.0248 on 50 degrees of freedom
Multiple R-squared: 0.07166, Adjusted R-squared: 0.03452
F-statistic: 1.93 on 2 and 50 DF, p-value: 0.1559

Value of test-statistic is: -1.4891 1.4462

Critical values for test statistics:
      1pct  5pct 10pct
tau2 -3.51 -2.89 -2.58
phi1  6.70  4.71  3.86

Finally, for type = "trend": $\Delta y_t = a_0 + \gamma \, y_{t-1} + a_{2}t + e_t$ (formula from Enders p. 208), where $a_{2}t$ is a time trend term.

The hypotheses (from Enders p. 208) are as follows:

tau3: $\gamma=0$
phi3: $\gamma = a_2 = 0$
phi2: $a_0 = \gamma = a_2 = 0$

This is similar to the R output. In this case, the test statistics are -2.4216, 2.1927, and 2.9343. In all of these cases, these fall within the "fail to reject the null" zones (see critical values below). What tau3 implies, as above, is that we fail to reject the null of a unit root, implying a unit root is present.

Failing to reject phi3 implies two things: 1) $\gamma = 0$ (unit root) AND 2) there is no time trend term, i.e., $a_2=0$. If we rejected this null, it would imply that one or both of these terms was not 0.

Failing to reject phi2 implies 3 things: 1) $\gamma = 0$ AND 2) no time trend term AND 3) no drift term, i.e. that $\gamma =0$, that $a_0 = 0$, and that $a_2 = 0$. Rejecting this null implies that one, two, OR all three of these terms was NOT zero.

Here is the R output:

summary(ur.df(y=income, type = "trend", lags=1))

###############################################
# Augmented Dickey-Fuller Test Unit Root Test #
###############################################

Test regression trend

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag)

Residuals:
      Min        1Q    Median        3Q       Max
-0.036693 -0.016457 -0.000435  0.014344  0.074299

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.0369478  0.4272693   2.427   0.0190 *
z.lag.1     -0.1767666  0.0729961  -2.422   0.0192 *
tt           0.0006299  0.0003348   1.881   0.0659 .
z.diff.lag   0.2557788  0.1362896   1.877   0.0665 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.02419 on 49 degrees of freedom
Multiple R-squared: 0.1342, Adjusted R-squared: 0.08117
F-statistic: 2.531 on 3 and 49 DF, p-value: 0.06785

Value of test-statistic is: -2.4216 2.1927 2.9343

Critical values for test statistics:
      1pct  5pct 10pct
tau3 -4.04 -3.45 -3.15
phi2  6.50  4.88  4.16
phi3  8.73  6.49  5.47

In your specific example above, for the d.Aus data, since both of the test statistics are inside of the "fail to reject" zone, it implies that $\gamma=0$ AND $a_0 = 0$, meaning that there is a unit root, but no drift term.
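For intuition about what ur.df is testing, the "none" regression $\Delta y_t = \gamma\, y_{t-1} + e_t$ can be run by hand on simulated data. A rough Python sketch (stdlib only; in practice you would of course use urca::ur.df or an equivalent library, and DF statistics are compared to tau critical values, not to the usual t tables):

```python
import random

random.seed(1)

# simulate a stationary AR(1): y_t = 0.5*y_{t-1} + e_t, so gamma = a - 1 = -0.5
y = [0.0]
for _ in range(500):
    y.append(0.5 * y[-1] + random.gauss(0, 1))

# Dickey-Fuller "none" regression (no constant): dy_t = gamma * y_{t-1} + e_t
x = y[:-1]
dy = [y[t + 1] - y[t] for t in range(len(y) - 1)]
sxx = sum(v * v for v in x)
gamma = sum(xi * di for xi, di in zip(x, dy)) / sxx          # OLS slope through the origin
resid = [di - gamma * xi for xi, di in zip(x, dy)]
s2 = sum(r * r for r in resid) / (len(dy) - 1)               # residual variance, 1 parameter fitted
t_stat = gamma / (s2 / sxx) ** 0.5                            # the "Value of test-statistic"

# gamma is clearly negative and t_stat falls far below the tau1 5% critical
# value of -1.95, so the unit-root null is rejected -- as it should be here
print(gamma, t_stat)
```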
14,455
Interpreting R's ur.df (Dickey-Fuller unit root test) results
As joint-p already pointed out, the significance codes are fairly standard and correspond to p-values, i.e. the statistical significance of a hypothesis test; a p-value of .01 means the coefficient is significant at the 1% level. The Wikipedia article on Dickey-Fuller describes the three versions of the Dickey-Fuller test: the "unit root", "unit root with drift", and "unit root with drift and deterministic time trend", or what is referred to in the urca documentation as type = "none", "drift", and "trend", respectively. Each of these tests is a progressively more complex linear regression. In all of them there is the root, but in the drift version there is also a drift coefficient, and in the trend version there is also a trend coefficient. Each of these coefficients has an associated significance level. While the significance of the root coefficient is the most important and the main focus of the DF test, we might also be interested in knowing whether or not the trend/drift is statistically significant as well. After tinkering around with the different modes and seeing which coefficients appear/disappear in the t-tests, I was able to identify which coefficient corresponded to which t-test. They can be written as follows (from the wiki page):

(unit root) $\Delta y_{t} = \delta y_{t-1} + u_{t}$
(with drift) $\Delta y_{t} = \delta y_{t-1} + a_{0} + u_{t}$
(with trend) $\Delta y_{t} = \delta y_{t-1} + a_{0} + a_{1}t + u_{t}$

In your case, "tau2" corresponds to $\delta$, while "phi1" corresponds to $a_{0}$. You will also see a third coefficient appear in the "trend" test, which would correspond to $a_{1}$ in the third equation above. However, the names of the variables will change when you switch to "trend", so be careful and make sure you do this tinkering yourself to check. I believe in "trend" mode, "tau3" corresponds to $\delta$, "phi2" corresponds to $a_{0}$, and "phi3" corresponds to $a_{1}$.
14,456
Interpreting R's ur.df (Dickey-Fuller unit root test) results
I found Jeramy's answer pretty easy to follow, but I constantly found myself trying to walk through the logic and making mistakes. I coded up an R function that interprets each of the three types of models, and gives warnings if there are inconsistencies or inconclusive results (I don't think there should ever be inconsistencies if I understand the ADF math correctly, but I thought it still a good check in case the ur.df function has any defects). Please take a look. Happy to take comments/corrections/improvements. https://gist.github.com/hankroark/968fc28b767f1e43b5a33b151b771bf9
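To give a flavor of what such an interpreter does, here is a minimal, hypothetical Python sketch of the core decision rule (my own illustration, not the code in the gist): tau statistics reject for large negative values, while the F-type phi statistics reject for large positive values.

```python
def interpret_adf(stats, crit, level="5pct"):
    """stats/crit: dicts keyed by statistic name, e.g. 'tau2', 'phi1'.

    tau statistics reject the null for values below the critical value;
    phi statistics reject for values above it.
    """
    out = {}
    for name, value in stats.items():
        cv = crit[name][level]
        if name.startswith("tau"):
            out[name] = "reject" if value < cv else "fail to reject"
        else:
            out[name] = "reject" if value > cv else "fail to reject"
    return out

# The "drift" output from the earlier answer:
stats = {"tau2": -1.4891, "phi1": 1.4462}
crit = {"tau2": {"5pct": -2.89}, "phi1": {"5pct": 4.71}}
print(interpret_adf(stats, crit))
# both fail to reject -> a unit root, and no evidence of drift
```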
14,457
Interpreting R's ur.df (Dickey-Fuller unit root test) results
More info can be found in Roger Perman's lecture notes on unit root tests. See also Table 4.2 in Enders, Applied Econometric Time Series (4e), which summarizes the different hypotheses to which these test statistics refer; its content agrees with the image provided above.
14,458
Interpreting R's ur.df (Dickey-Fuller unit root test) results
phi1, phi2, and phi3 are equivalent to F-tests in the ADF framework.
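Concretely, each phi statistic has the standard F-type form (as given in Enders), computed from the restricted and unrestricted ADF regressions but compared against Dickey-Fuller's simulated critical values rather than standard F tables:

```latex
\phi_i \;=\; \frac{\left[\,SSR(\text{restricted}) - SSR(\text{unrestricted})\,\right]/r}
                  {SSR(\text{unrestricted})/(T-k)}
```

where $r$ is the number of restrictions under the joint null, $T$ the number of usable observations, and $k$ the number of parameters estimated in the unrestricted regression.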
14,459
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How?
The Bayesian approach to (parametric) statistical inference starts from a statistical model, ie a family of parametrised distributions, $$X\sim F_\theta,\qquad\theta\in\Theta$$ and it introduces a supplementary probability distribution on the parameter $$\theta\sim\pi(\theta)$$ The posterior distribution on $\theta$ is thus defined as the conditional distribution of $\theta$ conditional on $X=x$, the observed data. This construction clearly relies on the assumption that the data is a realisation of a random variable with a well-defined distribution. It would otherwise be impossible to define a conditional distribution like the posterior, since there would be no random variable to condition upon. The possible confusion may stem from the fact that a difference between Bayesian and frequentist approaches is that frequentist procedures are evaluated and compared based on their frequency properties, ie by averaging over all possible realisations, instead of conditional on the actual realisation, as the Bayesian approach does. For instance, the frequentist risk of a procedure $\delta$ for a loss function $L(\theta,d)$ is $$R(\theta,\delta) = \mathbb E_\theta[L(\theta,\delta(X))]$$ while the Bayesian posterior loss of a procedure $\delta$ for the prior $\pi$ is $$\rho(\delta(x),\pi) = \mathbb E^\pi[L(\theta,\delta(x))|X=x]$$
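The point that the posterior is literally a conditional distribution, which requires modelling the data as a realisation of a random variable, can be checked numerically. Here is a small illustrative Python sketch (toy numbers of my own choosing) that computes $\pi(\theta\,|\,X=x)$ by Bayes' rule and confirms it against the brute-force conditional frequency in a joint simulation of $(\theta, X)$:

```python
import numpy as np
from math import comb

# Discrete toy model: theta has a two-point prior, X | theta ~ Binomial(10, theta)
thetas = np.array([0.3, 0.7])
prior = np.array([0.5, 0.5])
n, x = 10, 7  # observed data: X = 7 successes out of 10

# Posterior by Bayes' rule: prior times likelihood, normalised
lik = np.array([comb(n, x) * t**x * (1 - t) ** (n - x) for t in thetas])
posterior = prior * lik / np.sum(prior * lik)

# Brute-force check: simulate the joint distribution of (theta, X)
# and look at the conditional frequency of theta = 0.7 given X = x
rng = np.random.default_rng(1)
draws_theta = rng.choice(thetas, size=200_000, p=prior)
draws_x = rng.binomial(n, draws_theta)
sim_posterior = np.mean(draws_theta[draws_x == x] == 0.7)

print(posterior, sim_posterior)
```

The simulated conditional frequency matches the analytic posterior, which is only meaningful because $X$ was treated as a random variable before being fixed at its observed value.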
14,460
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How?
Maybe the confusion comes from the shorthand $p(\theta|y)$, which actually means $p(\theta|Y=y)$: the random variable $Y$, interpreted as generating the data, takes the fixed value $y$, fixed after actually having observed the data. So the data are random in the sense of having a distribution as long as they're uncertain, i.e., not fully observed, and then they become fixed by observation. (Nothing particularly Bayesian about this, though.) Reading a comment on the original question, "To a subjective Bayesian, nothing is random" - nothing is really/objectively random (to a subjective Bayesian at least), but something can still be random in the sense of being modelled by a random variable. So another source of confusion may be mixing up the use of the term "random" in a "philosophical" manner (referring to something that is "truly random", in the sense of having randomness as an intrinsic property), and in a mathematical/technical manner, referring to something that appears as a random variable in a probability model.
14,461
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How?
Be very careful with the statement you choose. "Nonrandom" is very different from "observed". In Bayesian statistics everything is a random variable; the only difference between these random variables is that some are observed and some are hidden. For example, in your case $y$ is an observed random variable and $\theta$ is a hidden random variable, and your goal is to estimate the posterior distribution of $\theta$ conditioned on the observed $y$. That is, in the Bayesian mindset we shouldn't treat $y$ like a constant in the traditional sense; instead, it's an instance, or realization, of a random variable. (The observed values of the variables are also called "evidence" in most of the Bayesian statistics literature.)
14,462
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How?
To be concrete, consider the simple case of throwing a die. Every face has a probability of being thrown. The outcome of all throws is non-random (it is a fixed pattern determined by throwing the die a lot of times). On this pattern, you can apply a new chance of appearing. If you throw two dice, a new pattern will emerge. This is because different dice will produce different outcomes (only in the case of perfect dice is the chance distribution of each one the same as for the others). The chance that the dice used in separate throws are the same is very small. But there is a chance. And this chance is measured by applying the chance to the non-random chance distributions of a die (for every die this is a different distribution, though they are all quite alike).
14,463
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
There is a way to estimate the consequences for out-of-sample performance, provided that the decision-making process in the modeling can be adequately turned into an automated or semi-automated process. That's to repeat the entire modeling process on multiple bootstrap re-samples of the data set. That's about as close as you can get to estimating out-of-sample performance of the modeling process. Recall the bootstrap principle. The basic idea of bootstrapping is that inference about a population from sample data (sample → population) can be modelled by resampling the sample data and performing inference about a sample from resampled data (resampled → sample). As the population is unknown, the true error in a sample statistic against its population value is unknown. In bootstrap-resamples, the 'population' is in fact the sample, and this is known; hence the quality of inference of the 'true' sample from resampled data (resampled → sample) is measurable. Following that principle, if you repeat the full model building process on multiple bootstrap re-samples of the data, then test each resulting model's performance on the full data set, you have a reasonable estimate of generalizability in terms of how well your modeling process on the full data set might apply to the original population. So, in your example, if there were some quantitative criterion for deciding that quadratic rather than linear modeling of the predictor is to be preferred, then you use that criterion along with all other steps of the modeling on each re-sample. It's obviously best to avoid such data snooping. There's no harm in looking at things like distributions of predictors or outcomes on their own. You can look at associations among predictors, with a view toward combining related predictors into single summary measures. You can use knowledge of the subject matter as a guide. 
For example, if your outcome is strictly positive and has a measurement error that is known to be proportional to the measured value, a log transform makes good sense on theoretical grounds. Those approaches can lead to data transformations that aren't contaminated by looking at predictor-outcome relationships. Another useful approach is to start with a highly flexible model (provided the model isn't at risk of overfitting) and pull back from that toward a more parsimonious model. For example, with a continuous predictor you could start with a spline fit having multiple knots, then do an analysis of variance of nested models having progressively fewer knots to determine how few knots (down to even a simple linear term) can provide statistically indistinguishable results. Frank Harrell's course notes and book provide detailed guidance for ways to model reliably without data snooping. The above process for validating the modeling approach can also be valuable even if you build a model without snooping.
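As a concrete illustration of repeating the whole modeling process on bootstrap re-samples, here is a hedged Python sketch. The toy data and the simple |t| > 2 selection rule are my own stand-ins for whatever the real, automated decision process would be:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data with a mildly quadratic truth
n = 100
x = rng.uniform(-2, 2, n)
y = 1 + 0.5 * x + 0.3 * x**2 + rng.normal(scale=1.0, size=n)

def fit_with_selection(xs, ys):
    """Automated version of the modeling decision: keep the quadratic
    term only if it clears a simple criterion (here, |t| > 2)."""
    X2 = np.column_stack([np.ones_like(xs), xs, xs**2])
    beta, _, _, _ = np.linalg.lstsq(X2, ys, rcond=None)
    resid = ys - X2 @ beta
    sigma2 = resid @ resid / (len(ys) - 3)
    cov = sigma2 * np.linalg.inv(X2.T @ X2)
    t_quad = beta[2] / np.sqrt(cov[2, 2])
    if abs(t_quad) > 2:
        return beta
    b1, _, _, _ = np.linalg.lstsq(X2[:, :2], ys, rcond=None)
    return np.append(b1, 0.0)  # fall back to the linear model

def predict(beta, xs):
    return beta[0] + beta[1] * xs + beta[2] * xs**2

# Repeat the entire selection-plus-fit process on bootstrap resamples,
# then score each resulting model on the full original sample
mses = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    beta = fit_with_selection(x[idx], y[idx])
    mses.append(np.mean((y - predict(beta, x)) ** 2))

print(np.mean(mses))
```

The average of these out-of-resample errors estimates how the *process* (including its data-driven model choice) would perform on new data from the same population, which naive in-sample error cannot.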
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
There is a way to estimate the consequences for out-of-sample performance, provided that the decision-making process in the modeling can be adequately turned into an automated or semi-automated proces
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?

There is a way to estimate the consequences for out-of-sample performance, provided that the decision-making process in the modeling can be adequately turned into an automated or semi-automated process: repeat the entire modeling process on multiple bootstrap re-samples of the data set. That's about as close as you can get to estimating the out-of-sample performance of the modeling process.

Recall the bootstrap principle. The basic idea of bootstrapping is that inference about a population from sample data (sample → population) can be modelled by resampling the sample data and performing inference about a sample from resampled data (resampled → sample). As the population is unknown, the true error in a sample statistic against its population value is unknown. In bootstrap re-samples, the 'population' is in fact the sample, and this is known; hence the quality of inference about the 'true' sample from resampled data (resampled → sample) is measurable.

Following that principle, if you repeat the full model-building process on multiple bootstrap re-samples of the data, then test each resulting model's performance on the full data set, you get a reasonable estimate of generalizability in terms of how well your modeling process on the full data set might apply to the original population. So, in your example, if there were some quantitative criterion for deciding that quadratic rather than linear modeling of the predictor is to be preferred, then you apply that criterion, along with all the other steps of the modeling, to each re-sample.

It's obviously best to avoid such data snooping if you can. There's no harm in looking at things like distributions of predictors or outcomes on their own. You can look at associations among predictors, with a view toward combining related predictors into single summary measures. You can use knowledge of the subject matter as a guide. For example, if your outcome is strictly positive and has a measurement error that is known to be proportional to the measured value, a log transform makes good sense on theoretical grounds. Those approaches can lead to data transformations that aren't contaminated by looking at predictor-outcome relationships.

Another useful approach is to start with a highly flexible model (provided the model isn't at risk of overfitting) and pull back from that toward a more parsimonious model. For example, with a continuous predictor you could start with a spline fit having multiple knots, then do an analysis of variance of nested models having progressively fewer knots to determine how few knots (down to even a simple linear term) can provide statistically indistinguishable results.

Frank Harrell's course notes and book provide detailed guidance on ways to model reliably without data snooping. The above process for validating the modeling approach can also be valuable if you build a model without snooping.
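As a minimal sketch of repeating the entire modeling process on bootstrap re-samples (the toy data-generating process and the crude AIC-style selection rule below are my own illustrative assumptions, not anything from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a mildly quadratic relationship plus noise.
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0, 1, n)

def select_and_fit(x, y):
    """The automated 'modeling process': choose linear vs. quadratic
    by a crude AIC-style criterion (RSS penalized for parameters)."""
    best_deg, best_aic, best_coefs = None, np.inf, None
    for deg in (1, 2):
        coefs = np.polyfit(x, y, deg)
        rss = np.sum((y - np.polyval(coefs, x)) ** 2)
        aic = len(y) * np.log(rss / len(y)) + 2 * (deg + 1)
        if aic < best_aic:
            best_deg, best_aic, best_coefs = deg, aic, coefs
    return best_deg, best_coefs

# Repeat the whole process (model selection included!) on each
# bootstrap re-sample, then score the selected model on the full data.
scores = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    deg, coefs = select_and_fit(x[idx], y[idx])
    scores.append(np.mean((y - np.polyval(coefs, x)) ** 2))

print(f"mean MSE of the modeling process on the full data: {np.mean(scores):.3f}")
```

The key design point is that model selection happens inside the bootstrap loop; selecting once on the full data and only re-fitting on re-samples would understate the optimism of the process.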
14,464
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Here's a basic answer from a machine-learning perspective. The more complex and large the model class you consider, the better you will be able to fit any dataset, but the less confidence you can have in out-of-sample performance. In other words, the more likely you are to overfit to your sample. In data-snooping, one is engaging in a search through a possibly very large and flexible model space, so the chance of finding a model that overfits becomes more likely. We can prove this doesn't happen (with high probability, under conditions) if the model space is limited enough compared to the dataset size.

... So the distinction between data-snooping and principled investigation can be as fine as: the space of models that, a priori, one is willing to consider. For example, suppose the author finds no quadratic fit, so they move on to cubics, quartics, ..., and eventually they find a degree-27 polynomial that is a good fit, and claim this truly models the data-generating process. We would be very skeptical. Similarly if they try log-transforming arbitrary subsets of the variables until a fit occurs. On the other hand, suppose the plan is to give up after cubics and say that the process is not explainable in this way. The space of degree-at-most-3 polynomials is quite restricted and structured, so if a cubic fit is indeed discovered, we can be pretty confident that it is not a coincidence.

... Therefore, one way to generally prevent "false discovery", as we often call it, is to limit oneself a priori to a certain restricted set of models. This is analogous to pre-registering hypotheses in experimental work. In regression, the model space is already quite restricted, so I think one would have to try a lot of different tricks before being at risk of discovering a spurious relationship, unless the dataset is small.
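A quick sketch of the overfitting risk described above, comparing a degree-3 fit with a needlessly flexible degree-10 fit on a toy cubic process (all the numbers here are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data from a genuinely cubic process plus noise.
x_train = np.linspace(-1, 1, 20)
y_train = x_train**3 - x_train + rng.normal(0, 0.1, x_train.size)
x_test = np.linspace(-0.99, 0.99, 200)
y_test = x_test**3 - x_test + rng.normal(0, 0.1, x_test.size)

def train_test_mse(degree):
    """Fit a polynomial of the given degree on the training sample and
    report mean squared error on train and on fresh test data."""
    coefs = np.polyfit(x_train, y_train, degree)
    mse_train = np.mean((y_train - np.polyval(coefs, x_train)) ** 2)
    mse_test = np.mean((y_test - np.polyval(coefs, x_test)) ** 2)
    return mse_train, mse_test

tr3, te3 = train_test_mse(3)
tr10, te10 = train_test_mse(10)

print(f"degree 3:  train {tr3:.4f}, test {te3:.4f}")
print(f"degree 10: train {tr10:.4f}, test {te10:.4f}")
```

The larger model class necessarily matches the training sample at least as well; the test error is what reveals whether that extra flexibility was warranted, which is exactly what restricting the model space a priori is meant to guarantee.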
14,465
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Here is an answer from a physics perspective. If you are doing excessive "fitting," then you might be data snooping. However, if you are "modeling" in the way we mean in physics, then you are actually doing what you are supposed to do. If your response variable is decibels and your explanatory variables are things like power input and material properties, then if you didn't model in log space, you would be doing it wrong. This could be an exponential model, or a log transform.

Many natural phenomena result in non-normal distributions. In these cases, you should either use an analysis method that allows you to incorporate that distributional structure (Poisson regression, negative binomial, log-linear, lognormal, etc.) or transform the data, keeping in mind that this will also transform the variance and covariance structure.

Even if you don't have an example from the literature backing up the use of some particular non-normal distribution, if you can justify your claim with a minimal explanation of why that distribution might make physical sense, or through a preponderance of similarly distributed data reported in the literature, then I think you are justified in choosing that distribution as a model. If you do this, then you are modeling, not fitting, and therefore not data snooping.
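As an illustrative sketch of "modeling in log space" (the generating law, the constants, and the noise level below are all invented for the example), an exponential relationship with multiplicative error becomes a straight line with additive error after a log transform:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical decibel-style data: signal = A * 10**(k * power_in),
# with multiplicative noise, so the model is linear in log10 space.
power_in = np.linspace(0, 5, 50)
true_A, true_k = 2.0, 0.4
signal = true_A * 10 ** (true_k * power_in) * 10 ** rng.normal(0, 0.02, 50)

# Fitting in log space turns the multiplicative error into additive
# error, so ordinary least squares is appropriate there.
slope, intercept = np.polyfit(power_in, np.log10(signal), 1)
print(f"k ≈ {slope:.3f}, A ≈ {10**intercept:.3f}")
```

Note the caveat from the answer above in action: the transform changes the error structure, and here that change is exactly what makes least squares the right tool.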
14,466
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Finding iteratively the best analytical model that fits data that has an error term is acceptable within the constraints nicely explained in the article you quote. But perhaps what you are asking is what the effectiveness of such a model is when you use it to predict out-of-sample data that was not used to generate the model. If it is reasonable to assume that the data-generating mechanism used to calculate the model and the mechanism that generates the new data are the same, there is nothing wrong with using the model you obtained.

But you may have some justifiable scepticism about this assertion, which goes to the essence of frequentist statistics. As you develop the model, you obtain the parameters that best fit the data. To get a better model you add more data. But that does not help if you add data points that you do not know whether they belong to the same data-generating mechanism used to develop the model. Here the issue is one of belief about how likely it is for the new data point(s) to belong to the same mechanism.

This takes you directly to Bayesian analysis, by which you determine the probability distribution of the parameters of the model and see how this distribution changes as you add more data. For an introductory explanation of Bayesian analysis see here. For a nice explanation of Bayesian regression see here.
14,467
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
"We have let the data affect our model." Well, all models are based on data. The issue is whether the model is being constructed from training data or testing data. If you make decisions about what type of model you want to look into based on plots of the training data, that's not data snooping.

Ideally, any metrics describing the accuracy of a model should be derived from completely "clean" data: that is, data that the model-generation process is not in any way dependent on. There's a tension here, as the more data you train your model on, the more accurate it can be, but that also means there is less data to validate it on.

The difference between training a model and choosing between two models based on their validation scores is, in some sense, a matter of degree rather than kind. It can be a very large degree, however. If you're choosing between two different models, then looking at validation scores gives you at most one bit of data leakage. But as you add more and more hyperparameters, the distinction between them and regular parameters can start to blur.

As you build a model, you should gradually transition from exploration, in which you prioritize fitting your model to the training data as much as possible, to validation, where you prioritize estimating out-of-sample accuracy. If you want to be absolutely sure that you aren't engaging in data snooping, you should find someone to run your model on data that you have no access to.
14,468
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Another physics perspective (see also @albalter's nice answer). In the analysis of physics data, understanding the "error" bars in measurements is of paramount importance. If you cannot account for the size of the measured errors, that may reveal an unknown causative phenomenon or, more usually and sadly, some unforeseen problem with the experiment. Physicists are also concerned with recognizing systematic errors as opposed to random errors.

Fitting a model to a dataset whose values have been transformed carries the problem that the statistical distribution of the data measurement error is distorted. If the measurement errors were normally distributed, they are no longer normally distributed after applying the transformation, and so least-squares optimization of the model may not be appropriate. Such a fit is convenient as a data summary, but the fitted function may have no underlying physical basis. Whether this matters or not depends on your use of the fit: data summary or physics.

An outstanding example in astronomy comes from the measurement of the radial distribution of light in elliptical galaxies (roundish objects with no gas), where, during the 1960s, the popular fitting function was $e^{r^{1/4}}$, a "law" proposed by G. de Vaucouleurs in 1953 and still in use this century (doi: 10.1093/mnras/113.2.134). There has been no satisfactory physical explanation for the "de Vaucouleurs law". There are several physically motivated light profiles based on assumptions about internal stellar relaxation processes and the influence of external tidal forces exerted by neighboring galaxies. The irony today is that better data has led to fitting generalizations of the de Vaucouleurs profile, as in $e^{r^{1/n}}$, the Sersic profile, where the value of $n$ reflects a distortion of the profile away from the de Vaucouleurs profile.
There is a fine summary of astronomical light profiles in Mark Whittle's UVa lecture notes at http://people.virginia.edu/~dmw8f/astr5630/Topic07/Lecture_7.html and in Amina Helmi's Kapteyn Institute lectures (with pictures!) at: http://www.astro.rug.nl/~ahelmi/galaxies_course/class_VII-E/ellip-06.pdf
14,469
How to do data augmentation and train-validate split?
First split the data into training and validation sets, then do data augmentation on the training set. You use your validation set to try to estimate how your method works on real world data, thus it should only contain real world data. Adding augmented data will not improve the accuracy of the validation. It will at best say something about how well your method responds to the data augmentation, and at worst ruin the validation results and interpretability.
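A minimal sketch of the recommended order, split first and augment only the training set (the toy arrays and the flip augmentation below are placeholders of my own, not from the question):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical dataset of 100 tiny 8x8 "images" with binary labels.
images = rng.normal(size=(100, 8, 8))
labels = rng.integers(0, 2, 100)

# 1) Split first ...
idx = rng.permutation(100)
train_idx, val_idx = idx[:80], idx[80:]
x_train, y_train = images[train_idx], labels[train_idx]
x_val, y_val = images[val_idx], labels[val_idx]

# 2) ... then augment only the training set (here: horizontal flips).
x_aug = np.concatenate([x_train, x_train[:, :, ::-1]])
y_aug = np.concatenate([y_train, y_train])

print(len(x_aug), len(x_val))  # training set doubled, validation untouched
```

The validation set contains only the original, un-augmented examples, so its score still estimates real-world performance.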
14,470
How to do data augmentation and train-validate split?
Never do option 3, as you will get leakage. For example, assume the augmentation is a 1-pixel shift to the left. If the split is not augmentation-aware, you may get very similar data samples in both the training and validation sets.
14,471
How to do data augmentation and train-validate split?
Data augmentation means adding external data/information to the existing data being analyzed. So, as the entire augmented data would be used for machine learning, the following order would be more suitable: do data augmentation --> split the data.
14,472
Misunderstanding a P-value?
Because of your comments I will make two separate sections:

p-values

In statistical hypothesis testing you can find 'statistical evidence' for the alternative hypothesis. As I explained in What follows if we fail to reject the null hypothesis?, it is similar to 'proof by contradiction' in mathematics. So if we want to find 'statistical evidence' then we assume the opposite, which we denote $H_0$, of what we try to prove, which we call $H_1$.

After this we draw a sample, and from the sample we compute a so-called test statistic (e.g. a t-value in a t-test). Then, as we assume that $H_0$ is true and that our sample is randomly drawn from the distribution under $H_0$, we can compute the probability of observing values that exceed or equal the value derived from our (random) sample. This probability is called the p-value. If this value is ''small enough'', i.e. smaller than the significance level that we have chosen, then we reject $H_0$ and we consider $H_1$ 'statistically proven'.

Several things are important in this procedure:

- we have derived probabilities under the assumption that $H_0$ is true;
- we have taken a random sample from the distribution that was assumed under $H_0$;
- we decide to have found evidence for $H_1$ if the test statistic derived from the random sample has a low probability of being exceeded. So it is not impossible that it is exceeded while $H_0$ is true, and in these cases we make a type I error.

So what is a type I error: a type I error is made when the sample, randomly drawn under $H_0$, leads to the conclusion that $H_0$ is false while in reality it is true. Note that this implies that a p-value is not the probability of a type I error. Indeed, a type I error is a wrong decision by the test, and a decision can only be made by comparing the p-value to the chosen significance level; with a p-value alone one cannot make a decision. It is only after comparing the p-value to the chosen significance level that a decision is made, and as long as no decision is made, a type I error is not even defined.

What then is the p-value? The potentially wrong rejection of $H_0$ is due to the fact that we draw a random sample under $H_0$, so it could be that we have ''bad luck'' in drawing the sample, and that this ''bad luck'' leads to a false rejection of $H_0$. So the p-value (although this is not fully correct) is more like the probability of drawing a ''bad sample''. The correct interpretation of the p-value is that it is the probability that the test statistic exceeds or equals the value of the test statistic derived from a randomly drawn sample under $H_0$.

False discovery rate (FDR)

As explained above, each time the null hypothesis is rejected, one considers this as 'statistical evidence' for $H_1$. So we have found new scientific knowledge; therefore it is called a discovery. Also explained above is that we can make false discoveries (i.e. falsely reject $H_0$) when we make a type I error. In that case we have a false belief of a scientific truth. We only want to discover really true things, and therefore one tries to keep the false discoveries to a minimum, i.e. one will control for type I errors.

It is not so hard to see that the probability of a type I error is the chosen significance level $\alpha$. So in order to control for type I errors, one fixes an $\alpha$-level reflecting one's willingness to accept ''false evidence''. Intuitively, this means that if we draw a huge number of samples, and with each sample we perform the test, then a fraction $\alpha$ of these tests will lead to a wrong conclusion. It is important to note that we're 'averaging over many samples': same test, many samples.

If we use the same sample to do many different tests then we have a multiple-testing problem (see my answer on Family-wise error boundary: Does re-using data sets on different studies of independent questions lead to multiple testing problems?). In that case one can control the $\alpha$ inflation using techniques to control the family-wise error rate (FWER), like e.g. a Bonferroni correction.

A different approach from FWER is to control the false discovery rate (FDR). In that case one controls the number of false discoveries (FD) among all discoveries (D), so one controls $\frac{FD}{D}$, where D is the number of rejected $H_0$.

So the type I error probability has to do with executing the same test on many different samples. For a huge number of samples, the type I error probability will converge to the number of samples leading to a false rejection divided by the total number of samples drawn. The FDR has to do with many tests on the same sample, and for a huge number of tests it will converge to the number of tests where a type I error is made (i.e. the number of false discoveries) divided by the total number of rejections of $H_0$ (i.e. the total number of discoveries).

Note that, comparing the two paragraphs above:

- The context is different: one test and many samples versus many tests and one sample.
- The denominator for computing the type I error probability is clearly different from the denominator for computing the FDR.
- The numerators are similar in a way, but have a different context.

The FDR tells you that, if you perform many tests on the same sample and you find 1000 discoveries (i.e. rejections of $H_0$), then with an FDR of 0.38 you will have $0.38 \times 1000$ false discoveries.
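To make the two long-run frequencies concrete, here is a small simulation (the normal test statistics, the 80/20 mix of true and false nulls, and the effect size of 3 are illustrative assumptions of mine): one test repeated on many samples drawn under $H_0$ gives a rejection rate near $\alpha$, while many tests on one mixed batch give an observed false discovery proportion:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

def p_value(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

alpha = 0.05

# Type I error: the same test on many samples drawn under H0.
# The long-run rejection rate converges to alpha.
z_null = rng.normal(0, 1, 50_000)
type1_rate = np.mean([p_value(z) < alpha for z in z_null])

# FDR setting: many tests on one batch, mixing true nulls
# (z ~ N(0,1), 80%) with false nulls (z ~ N(3,1), 20%).
z = np.concatenate([rng.normal(0, 1, 8000), rng.normal(3, 1, 2000)])
is_null = np.array([True] * 8000 + [False] * 2000)
rejected = np.array([p_value(v) < alpha for v in z])
fdp = (rejected & is_null).sum() / rejected.sum()

print(f"type I error rate ≈ {type1_rate:.3f} (alpha = {alpha})")
print(f"false discovery proportion ≈ {fdp:.3f}")
```

Note the different denominators in action: the type I error rate divides by all samples drawn under $H_0$, while the false discovery proportion divides by the number of rejections.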
Misunderstanding a P-value?
The first statement is not strictly true. From a nifty paper on the misunderstanding of significance (http://myweb.brooklyn.liu.edu/cortiz/PDF%20Files/Misinterpretations%20of%20Significance.pdf): "[This statement] may look similar to the definition of an error of Type I (i.e., the probability of rejecting the H0 although it is in fact true), but having actually rejected the H0, this decision would be wrong if and only if the H0 were true. Thus the probability 'that you are making the wrong decision' is p(H0), and this probability ... cannot be derived with null hypothesis significance testing." More simply, in order to assess the probability that you have incorrectly rejected H0, you require the probability that H0 is true, which you simply cannot obtain using this test.
Misunderstanding a P-value?
The correct interpretation of a p-value is the conditional probability of an outcome at least as conducive to the alternative hypothesis as the observed value (at least as "extreme"), assuming the null hypothesis is true. Incorrect interpretations generally involve either a marginal probability or a switching of the condition: $$\text{p-value} = \mathbb{P}(\text{at least as extreme as observed outcome} \mid H_0) \neq \mathbb{P}(\text{Type I error}).$$
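The distinction can be checked by simulation: under $H_0$ the p-value is uniform, so the long-run rate of the rejection decision — not any single p-value — equals the significance level. A stdlib-Python sketch using a two-sided z-test (the choice of test is illustrative):

```python
import math
import random

random.seed(0)
alpha = 0.05
n_tests = 20_000

rejections = 0
for _ in range(n_tests):
    z = random.gauss(0.0, 1.0)            # test statistic simulated under H0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value of a z-test
    if p < alpha:                         # the decision: compare p to alpha
        rejections += 1

print(rejections / n_tests)  # close to alpha = 0.05
```

The individual p-values are scattered over $[0,1]$; only the decision rule's long-run error rate equals $\alpha$.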
Misunderstanding a P-value?
The p-value allows us to determine whether the null hypothesis (or the claimed hypothesis) can be rejected or not. If the p-value is less than the significance level $\alpha$, then this represents a statistically significant result, and the null hypothesis should be rejected. If the p-value is greater than the significance level $\alpha$, then the null hypothesis cannot be rejected. This is the whole reason for looking up the p-value, whether you use a table or an online p-value calculator to find the p-value from the test statistic. Now I know that you mentioned type I and type II errors. These are not determined by the p-value alone; they depend on the original data, such as the sample size used and the values obtained. If the sample size is too small, for instance, the test has little power, which makes a type II error (failing to reject a false null hypothesis) more likely; the type I error rate, by contrast, is fixed at $\alpha$ by the choice of significance level.
How is it possible that Poisson GLM accepts non-integer numbers?
Of course you are correct that the Poisson distribution technically is defined only for integers. However, statistical modeling is the art of good approximations ("all models are wrong"), and there are times when it makes sense to treat non-integer data as though it were [approximately] Poisson. For example, if you send out two observers to record the same count data, it may happen that the two observers do not always agree on the count -- one might say that something happened 3 times while the other said it happened 4 times. It is nice then to have the option to use 3.5 when fitting your Poisson coefficients, instead of having to choose between 3 and 4. Computationally, the factorial in the Poisson could make it seem difficult to work with non-integers, but a continuous generalization of the factorial exists. Moreover, performing maximum likelihood estimation for the Poisson does not even involve the factorial function, once you simplify the expression.
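To illustrate the continuous generalization: $\Gamma(y+1)$ extends $y!$ to non-integer $y$, so the Poisson log-density $y\log\lambda - \lambda - \log\Gamma(y+1)$ is perfectly well defined at $y = 3.5$; and since the gamma term is constant in $\lambda$, it drops out of maximum likelihood estimation entirely. A sketch in Python (the data values are made up):

```python
import math

def poisson_loglik(y, lam):
    """Poisson log-density with y! generalized via the gamma function."""
    return y * math.log(lam) - lam - math.lgamma(y + 1)

# Well defined for the observers' 'compromise count' of 3.5, and maximized
# (for a single observation) at lam = y:
assert poisson_loglik(3.5, 3.5) > poisson_loglik(3.5, 2.0)

# For an intercept-only model the MLE is the sample mean, non-integer or not,
# because the lgamma term does not depend on lam:
ys = [3.5, 2.0, 4.5, 3.0]
lam_hat = sum(ys) / len(ys)  # solves sum(y_i - lam) = 0
print(lam_hat)  # 3.25
```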
How is it possible that Poisson GLM accepts non-integer numbers?
For a response $y$, if you assume the logarithm of its expectation is a linear combination of predictors $\renewcommand{\vec}[1]{\boldsymbol{#1}}\vec{x}$ $$\operatorname{E}Y_i=\exp\left(\vec\beta^{\mathrm{T}}\vec{x}_i\right)$$ then consistent estimates for the regression coefficients $\vec\beta$ can be obtained by solving the score equations for the Poisson model: $$\sum_{i=1}^n{\vec{x}_i\left(y_i-\exp\left(\vec\beta^{\mathrm{T}}\vec{x}_i\right)\right)}=\vec 0$$ Of course consistency doesn't imply validity of any tests or confidence intervals; the likelihood has not been specified. This follows on from the method-of-moments approach we learnt at school, & leads on to that of generalized estimating equations. @Aaron pointed out you're actually using a quasi-Poisson fit in your code. That means the variance is assumed proportional to the mean $$\operatorname{Var}Y_i=\phi\operatorname{E}Y_i$$ with a dispersion parameter $\phi$ that can be estimated from the data. The coefficient estimates will be the same, but their standard errors will be wider; this is a more flexible & therefore more generally useful approach. (Note also that sandwich estimators for the parameters' variance–covariance matrix are often used in these sorts of situations to give robust standard errors.)
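As a sketch of what "solving the score equations" looks like in practice, here is Fisher scoring (Newton-Raphson) for a log-link model with an intercept and one covariate, on made-up non-integer "counts"; the same coefficient estimates come out whether you call the family Poisson or quasi-Poisson:

```python
import math

# Hypothetical data (not from the question): one covariate, non-integer "counts".
x = [0.0, 1.0, 2.0, 3.0]
y = [1.2, 2.1, 3.9, 8.5]

# Fisher scoring on the score equations sum_i x_i (y_i - exp(b0 + b1 x_i)) = 0
b0, b1 = math.log(sum(y) / len(y)), 0.0   # start at the null (intercept-only) model
for _ in range(200):
    mu = [math.exp(b0 + b1 * xi) for xi in x]
    u0 = sum(yi - mi for yi, mi in zip(y, mu))                # score, intercept
    u1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))  # score, slope
    j00 = sum(mu)                                             # Fisher information
    j01 = sum(xi * mi for xi, mi in zip(x, mu))
    j11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
    det = j00 * j11 - j01 * j01
    b0 += (j11 * u0 - j01 * u1) / det
    b1 += (-j01 * u0 + j00 * u1) / det

print(b0, b1)  # coefficient estimates; quasi-Poisson only widens their SEs
```

No factorial (or gamma) term appears anywhere: the score equations involve only the residuals $y_i - \mu_i$.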
Poisson or quasi poisson in a regression with count data and overdispersion?
When trying to determine what sort of glm equation you want to estimate, you should think about plausible relationships between the expected value of your target variable given the right hand side (rhs) variables and the variance of the target variable given the rhs variables. Plots of the residuals vs. the fitted values from your Normal model can help with this. With Poisson regression, the assumed relationship is that the variance equals the expected value; rather restrictive, I think you'll agree. With a "standard" linear regression, the assumption is that the variance is constant regardless of the expected value. For a quasi-Poisson regression, the variance is assumed to be a linear function of the mean; for negative binomial regression, a quadratic function. However, you aren't restricted to these relationships. The specification of a "family" (other than "quasi") determines the mean-variance relationship. I don't have The R Book, but I imagine it has a table that shows the family functions and corresponding mean-variance relationships. For the "quasi" family you can specify any of several mean-variance relationships, and you can even write your own; see the R documentation. It may be that you can find a much better fit by specifying a non-default value for the mean-variance function in a "quasi" model. You also should pay attention to the range of the target variable; in your case it's nonnegative count data. If you have a substantial fraction of low values - 0, 1, 2 - the continuous distributions probably won't fit well, but if you don't, there's not much value in using a discrete distribution. It's rare that you'd consider Poisson and Normal distributions as competitors.
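For concreteness, the three mean-variance relationships mentioned above, side by side (the $\phi$ and $\theta$ values are purely illustrative assumptions):

```python
def var_poisson(mu):
    return mu                      # variance equals the mean

def var_quasipoisson(mu, phi=2.5):
    return phi * mu                # linear in the mean

def var_negbin(mu, theta=2.0):
    return mu + mu ** 2 / theta    # quadratic in the mean

for mu in [1.0, 5.0, 25.0]:
    print(mu, var_poisson(mu), var_quasipoisson(mu), var_negbin(mu))
```

The quadratic negative binomial variance grows fastest with the mean, which is why it often suits strongly overdispersed counts.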
Poisson or quasi poisson in a regression with count data and overdispersion?
You are right, these data may well be overdispersed. Quasi-Poisson is a remedy: it estimates a scale parameter as well (which is fixed at 1 for Poisson models, since there the variance equals the mean) and will provide a better fit. However, what you are then doing is no longer maximum likelihood, and certain model tests and indices can't be used. A good discussion can be found in Venables and Ripley, Modern Applied Statistics with S (Section 7.5). An alternative is to use a negative binomial model, e.g. the glm.nb() function in package MASS.
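The scale (dispersion) parameter that quasi-Poisson estimates is typically the Pearson chi-square divided by the residual degrees of freedom; a value well above 1 signals overdispersion. A sketch with made-up observed counts and fitted means:

```python
# Hypothetical observed counts and fitted means from a model with p parameters.
y  = [0, 3, 1, 7, 2, 9, 4, 12]
mu = [1.0, 2.0, 2.5, 4.0, 3.0, 6.0, 5.0, 8.0]
p  = 2  # e.g. intercept + one slope

pearson_chi2 = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu))
phi_hat = pearson_chi2 / (len(y) - p)
print(phi_hat)  # > 1 indicates overdispersion relative to plain Poisson
```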
Poisson or quasi poisson in a regression with count data and overdispersion?
You might want to try this: summary(model, dispersion = deviance(model)/df.residual(model)) This corrects the p-values according to the estimated dispersion.
Showing machine learning results are statistically irrelevant
You answered yourself: I made two additional models (mean and last sample) which often match or beat the RMSE of the RF and ANN models published in the paper. The mean model just takes the mean of training and uses that in all predictions. The dataset is a timeseries (time-varying, usually 1-2 samples per week), so the last sample model just uses the previous sample's value. You benchmarked the result with trivial models and they outperform the model. This is enough to discard the model. What you did is a pretty standard procedure for validating time-series models. Negative $R^2$ values are consistent with your benchmarks. In fact, $R^2$ already compares the model to the mean model because it is defined as $$ R^2 = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i (y_i - \bar y)^2} $$ so the numerator is the sum of squared errors of the model and the denominator is the sum of squared errors of the mean model. Your model needs a smaller squared error than the mean model for its $R^2$ to be positive. Maybe the authors of the published paper didn't run this sanity check? Many crappy results somehow get published. I'm afraid that if reasonable arguments like comparing the results to the benchmarks don't convince your colleagues, I doubt "a statistical test" will. They already are willing to ignore the results they don't like, so it seems rather hopeless.
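A tiny numerical illustration of the formula above, with made-up values: a model whose squared error exceeds that of the mean benchmark lands at a negative $R^2$.

```python
y     = [3.0, 5.0, 4.0, 6.0, 7.0]
y_hat = [6.0, 2.0, 7.0, 3.0, 9.0]   # predictions from a (bad) model
y_bar = sum(y) / len(y)

ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # model errors
ss_tot = sum((yi - y_bar) ** 2 for yi in y)               # mean-model errors
r2 = 1 - ss_res / ss_tot
print(r2)  # -3.0: four times the squared error of just predicting the mean
```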
Showing machine learning results are statistically irrelevant
Piggybacking on Tim's answer. You clearly already have trained a better model, so just show your colleagues its results. Here's a note, though: the $R^2$ score could prove to be an unreliable metric depending on the problem. For example, consider a regression model predicting the price of a stock on the following day. Any small amount of correlation beyond a random guess (or convergence to the mean) would make you a millionaire! To summarize, not all low $R^2$ scores are bad.
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
This effect not only occurs in leave-one-out but in k-fold cross-validation (CV) in general. Your training and your validation sets are not independent, because any observation allocated to your validation set obviously influences your training set (since it is taken out of it). To what extent this is the case depends on your data and predictor. To make a very simple example using your task regarding the daily temperature with leave-one-out: if your data only contained a single (the same) value $n$ times, then your mean predictor would always predict the correct value in all $n$ folds. And if you used a predictor taking the maximum value from the training set (for prediction and for calculating the true values), then your model would be correct in $n-1$ folds (only the fold which removes the maximum value from the train dataset would be predicted incorrectly). I.e. there are predictors and datasets for which leave-one-out may be more or less suitable. Specifically, your mean-estimator has two properties: It depends on all examples in the train set (i.e. in real-world, non-trivial datasets (unlike my example above) it will predict a different value in each fold). A maximum-predictor, for example, would not show this behavior. It is sensitive to outliers (i.e. removing an extremely high or low value in one of the folds will have a relatively large impact on your prediction). A median-predictor, for example, would not show this behavior to the same extent. This means your mean-predictor is somewhat unstable by design. You can either accept this (especially in case the observed variance is not large) or choose a different predictor instead. However, as pointed out earlier, this also depends on your dataset. If your dataset is small and of high variance, the instability of the mean-predictor will increase.
Also, I'd keep in mind that there is no perfect method to measure accuracy. The paper A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection is a good starting point for this topic. It focuses on classification but will still be a good read to get more details and further readings on the topic.
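The mean-predictor effect is easy to see numerically: for the mean model, the leave-one-out error of each point is exactly $n/(n-1)$ times its in-sample residual, so the LOOCV MSE is inflated by the factor $(n/(n-1))^2$. A sketch with made-up daily temperatures:

```python
temps = [21.0, 23.0, 19.0, 30.0, 22.0, 20.0]
n = len(temps)

mean_all = sum(temps) / n
in_sample_mse = sum((t - mean_all) ** 2 for t in temps) / n

loo_sq = []
for i, held_out in enumerate(temps):
    train = temps[:i] + temps[i + 1:]
    pred = sum(train) / len(train)   # mean of the remaining n-1 points
    loo_sq.append((held_out - pred) ** 2)
loo_mse = sum(loo_sq) / n

print(in_sample_mse, loo_mse)  # LOOCV MSE = (n/(n-1))^2 * in-sample MSE
```

The outlier (30.0) contributes most of the inflation, matching the sensitivity-to-outliers point above.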
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
This effect not only occurs in leave-one-out but k-fold cross-validation (CV) in general. Your training and your validation sets are not independent because any observation being allocated to your val
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error? This effect not only occurs in leave-one-out but k-fold cross-validation (CV) in general. Your training and your validation sets are not independent because any observation being allocated to your validation set obviously influences your training set (since it is being taken out from it). To which extend this is the case depends on your data and predictor. To make a very simple example using your task regarding the daily temperature using leave-one-out: If your data only contained a single (the same) value $n$ times, then your mean predictor would always predict the correct value in all $n$ folds. And if you used a predictor taking the maximum value from the training set (for prediction and calculating the true values), then your model would be correct in $n-1$ folds (only the fold which removes the maximum value from the train dataset would be predicted incorrectly). I.e. there are predictors and datasets where leave-one-out may be more or less suitable. Specifically your mean-estimator has two properties: It depends on all examples in the train set (i.e. in real world non-trivial datasets (unlike my example above) it will predict a different value in each fold). A maximum-predictor, for example, would not show this behavior. It is sensitive to outliers (i.e. removing an extremely high or low value in one of the folds will have a relatively large impact on your prediction). A median-predictor, for example, would not show this behavior to the same extent. This means your mean-predictor is somewhat unstable per design. Which you can either accept (especially in case the observed variance is not significantly large) or choose a different predictor instead. However, as pointed out earlier this also depends on your dataset. If your dataset is small and of high variance, instability of the mean-predictor will increase. 
So having a sufficiently sized dataset with proper pre-processing (potentially removing outliers) could be another way to approach this. Also, I'd keep in mind that there is no perfect method to measure accuracy. The paper A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection is a good starting point for this topic. It focuses on classification but will still be a good read to get more details and further readings on the topic.
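The two toy cases above can be checked directly. A minimal sketch (hypothetical data values; Python's `statistics.mean` plays the role of the mean predictor):

```python
import statistics

def loocv_abs_errors(data, predictor, target):
    """For each LOO fold, drop one point, fit `predictor` on the rest,
    and compare against `target` evaluated for that fold."""
    errors = []
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        errors.append(abs(predictor(train) - target(data, i)))
    return errors

# Case 1: constant data, mean predictor, target = the held-out value.
constant = [20.0] * 10
errs1 = loocv_abs_errors(constant, statistics.mean, lambda d, i: d[i])
print(errs1)  # all zero: correct in all n folds

# Case 2: max predictor, with the maximum also used "for prediction and
# calculating the true values" as in the answer above.
temps = [18.0, 19.0, 20.0, 21.0, 25.0]
errs2 = loocv_abs_errors(temps, max, lambda d, i: max(d))
print(sum(e > 0 for e in errs2))  # exactly 1 fold (removing 25) is wrong
```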
14,484
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
Given that the size of the training sample ($n_{training}$) is smaller than the size of the entire sample ($n$), $$ n_{training}<n, $$ the parameter estimates based on training subsamples in CV (be it LOO or K-fold) will in expectation be less accurate/precise than those based on the entire sample. This will cause the prediction loss from the model estimated on the entire sample to be overestimated. For LOOCV, the difference will usually be small, since $n_{training}$ and $n$ differ by very little (by $1$); for K-fold CV it will be larger.
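A quick Monte Carlo sketch of this point (hypothetical setup: standard-normal data, the sample mean as predictor, squared-error loss). LOOCV's estimate sits just above the true error $\sigma^2(1+1/n)$ of the full-sample mean, while 2-fold CV, whose training sets are much smaller, sits noticeably higher:

```python
import random

random.seed(0)

def cv_mse(data, k):
    """Mean squared prediction error of the mean-predictor under k-fold CV."""
    folds = [data[i::k] for i in range(k)]  # k = len(data) gives LOOCV
    sse, count = 0.0, 0
    for i in range(k):
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        mu = sum(train) / len(train)
        sse += sum((x - mu) ** 2 for x in folds[i])
        count += len(folds[i])
    return sse / count

n, reps = 20, 4000
loo = two_fold = 0.0
for _ in range(reps):
    data = [random.gauss(0, 1) for _ in range(n)]
    loo += cv_mse(data, n) / reps       # LOOCV
    two_fold += cv_mse(data, 2) / reps  # 2-fold CV

true_err = 1 + 1 / n  # error of the mean fitted on all n points (sigma^2 = 1)
print(round(true_err, 3), round(loo, 3), round(two_fold, 3))
# true ≈ 1.05, LOOCV ≈ 1 + 1/(n-1) ≈ 1.053, 2-fold ≈ 1 + 2/n = 1.1
```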
14,485
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
You aren't adding negative correlation between observation and mean; you're taking out positive correlation between observation and mean. The whole problem with not doing cross-validation is that if you have n data points, then each time you do a prediction for one of the data points, 1/n of the prediction is coming from itself, so you're overestimating accuracy by an amount proportional to 1/n. If you take the point out, you're getting rid of that overestimation, and so the lower accuracy isn't overestimating error, it's getting a more valid estimate of error. Consider rolling a die ten times, and you're trying to predict the roll $x_i$. Let $\bar x$ be the mean of all ten rolls, and $\bar{x'}$ be the mean of the nine rolls excluding $x_i$. Clearly, there's correlation between $x_i$ and $\bar x$. Try simulating 100 trials of rolling ten dice, and comparing the first roll to the overall mean for each trial. While the expected value of $\bar x$ is 3.5, there's going to be some variation, and the trials in which the average is higher than 3.5 are, more likely than not, trials in which the first roll is higher than 3. There is, however, no correlation between $x_i$ and $\bar{x'}$. The other nine rolls are independent of $x_i$. While there is negative correlation between $x_i$ and the change between $\bar x$ versus $\bar {x'}$, this is just reflecting the fact that $\bar x$ had a correlation that you're getting rid of.
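The suggested simulation, sketched in Python (with many more than 100 trials so the estimated correlations are stable):

```python
import random

random.seed(1)
trials = 50_000
firsts, means, loo_means = [], [], []
for _ in range(trials):
    rolls = [random.randint(1, 6) for _ in range(10)]
    firsts.append(rolls[0])
    means.append(sum(rolls) / 10)         # mean including the first roll
    loo_means.append(sum(rolls[1:]) / 9)  # mean excluding it

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(corr(firsts, means))      # about 1/sqrt(10) ≈ 0.316
print(corr(firsts, loo_means))  # about 0
```

The first correlation is exactly the "1/n of the prediction comes from itself" effect; leaving the point out removes it.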
14,486
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
The answer to both questions is yes: yes, LOO does have a pessimistic bias, and yes, the described effect of additional pessimistic bias is well known. Richard Hardy's answer gives a good explanation of the well-known slight pessimistic bias of a correctly performed resampling validation (including all flavors of cross validation). However, the mechanism discussed in the body of the question is a separate, additional effect: removing a case that is in some sense extreme gives a test/training subset split where the training subset is particularly un-representative of the case to be tested. This can cause additional error, as Sammy explained already. So the reason for this high error is that predictive performance deteriorates extremely fast for cases just outside (or at the edge) of training space. What to do against this effect? There are different points of view on such a situation, and it will depend on your judgment of the task at hand which one applies and what to do about it. On the one hand, this may be seen as an indication of the error to be expected for application cases that are similarly extreme (somewhat outside training space): encountering such cases during resampling can be seen as an indication that similarly extreme cases will be encountered during production use of the model built on the whole data set. From this point of view, the additional error is not a bias but an evaluation including slight extrapolation outside training space, which is judged to be representative of production use. On the other hand, it is perfectly valid to set up a model under the additional constraint/requirement/assumption that no prediction should be done outside training space. Such a model should ideally reject prediction of cases outside its training domain. The LOO error for the test cases such a model does predict would not be worse, but it would produce a lot of rejects. 
Now, one can argue that the mechanism of leave-one-out produces an unrepresentatively high proportion of outside-training-space cases due to the described opposite influence on training and test subset populations. This can be shown by studying the bias and variance properties of various $n$ or $k$ for leave-$n$-out and $k$-fold cross-validation, respectively. Doing this, there are situations (data set + model combinations) where leave-one-out exhibits a larger pessimistic bias than would be expected from leave-more-than-one-out (see the Kohavi paper linked by Sammy; there are also other papers reporting such behaviour). I may add that since leave-one-out has other undesirable properties (conflating model stability wrt. training cases with random error of tested cases), I'd recommend against using LOO whenever feasible anyway. Stratified variants of resampling validation produce by design more closely matching training and test subpopulations; they are available for classification as well as regression. Whether it is appropriate or not to employ such a stratification is basically a matter of judgment about the task at hand. However, leave-one-out differs from other resampling validation schemes in that it does not allow stratification. So if stratification should be employed, leave-one-out is not an appropriate validation scheme. When does this particular pessimistic bias occur? This is a small-sample-size problem: in the described model, as soon as there are sufficient cases in each weekday "bin" so that even leaving out an extreme case leads to a fluctuation of the training mean that is << the spread of temperatures for that weekday, the effect on observed error is negligible. A high-dimensional input/feature/training space has more "possibilities" for a case to be extreme in some direction: in high-dimensional spaces, most points tend to be at the "outside". This is related to the curse of dimensionality. 
It is also related to model complexity in the sense that high error for edge cases is an indication that the model is unstable right away outside the training region.
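To illustrate the stratification point, here is a minimal sketch of a stratified k-fold split over hypothetical weekday labels (in practice one would use a library implementation such as scikit-learn's `StratifiedKFold`, which also handles shuffling and uneven classes):

```python
from collections import defaultdict

def stratified_folds(labels, k):
    """Assign indices to k folds so that each fold mirrors the label
    proportions of the full data set."""
    by_label = defaultdict(list)
    for i, lab in enumerate(labels):
        by_label[lab].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_label.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)  # deal each class round-robin
    return folds

weekdays = ["Mon", "Tue", "Wed"] * 6  # 18 observations, balanced labels
folds = stratified_folds(weekdays, 3)
for f in folds:
    print({d: sum(weekdays[i] == d for i in f) for d in ("Mon", "Tue", "Wed")})
# each fold holds 2 of each weekday
```

Such a split cannot be produced by leave-one-out, since a single-case test set can never mirror the label proportions.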
14,487
Most effective use of colour in heat/contour maps
Rainbow color maps, as they're often called, remain popular despite documented perceptual inefficiencies. The main problems with rainbow (and other spectral) color maps are: The colors are not in a perceptual order The luminance bounces around: our eyes are mostly rods for luminance, not cones for color We see hues categorically Hues often have unequal presences (e.g., wide green and narrow yellow) On the plus side: Spectral themes have high resolution (more distinguishable color values in the scale) There's safety in numbers; such themes are still quite common See Rainbow Color Map (Still) Considered Harmful for discussion and alternatives, including black-body radiation and grayscale. If a diverging scheme is suitable, I like the perceptually uniform cool-to-warm scheme derived by Kenneth Moreland in his paper, Diverging Color Maps for Scientific Visualization. It and other schemes are compared with images in the ParaView wiki, though with a perspective of coloring a 3-D surface, which means the color scheme has to survive shading effects. Recent blog post with more links and Matlab alternatives: Rainbow Colormaps – What are they good for? Absolutely nothing! Recommendation: First try grayscale or another monochromatic gradient. If you need more resolution, try black-body radiation. If the extremes are more important than the middle values, try a diverging scheme with gray in the middle, such as the cool-to-warm scheme. Images from the ParaView wiki page: Rainbow: Grayscale: Black-body: Cool-to-warm:
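The recommendation translates directly into matplotlib colormap names (a sketch with a toy scalar field; 'gray' is grayscale, 'inferno' is one of matplotlib's perceptually uniform black-body-like maps, and 'coolwarm' is based on Moreland's diverging scheme):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# A toy scalar field with a peak and a dip to colour.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
z = np.exp(-(x**2 + y**2)) - 0.5 * np.exp(-((x - 1) ** 2 + (y - 1) ** 2))

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))
for ax, cmap in zip(axes, ["gray", "inferno", "coolwarm"]):
    im = ax.imshow(z, cmap=cmap, origin="lower")
    ax.set_title(cmap)
    fig.colorbar(im, ax=ax)
fig.savefig("colormaps.png")
```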
14,488
Most effective use of colour in heat/contour maps
I agree with @xan about the inefficiencies of rainbow color maps. Here is another paper that shows that rainbow/categorical color maps are substantially worse than diverging ones for quantitative tasks, from InfoVis '11: Michelle Borkin, Krzysztof Gajos, Amanda Peters, Dimitrios Mitsouras, Simone Melchionna, Frank Rybicki, Charles Feldman, and Hanspeter Pfister. 2011. Evaluation of Artery Visualizations for Heart Disease Diagnosis. IEEE Transactions on Visualization and Computer Graphics 17, 12 (December 2011), 2479-2488. DOI=10.1109/TVCG.2011.192 Link to PDF, Slides, and Images. The only thing rainbow/categorical color maps are good for is to show separate values of categorical variables. However, the colors you choose matter. If you need a categorical scale, check out this excellent paper from CHI '12 that uses the XKCD survey dataset that talks about how we perceive differences in color. It allows you to rate a color scale by how well humans perceive the differences. Their web-based Color Palette Analyzer will let you evaluate your own color scale, too! Jeffrey Heer and Maureen Stone. 2012. Color naming models for color selection, image editing and palette design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 1007-1016. DOI=10.1145/2207676.2208547 Link to PDF, online demos, etc.
14,489
Maximum value of coefficient of variation for bounded data set
Geometry provides insight and classical inequalities afford easy access to rigor. Geometric solution We know, from the geometry of least squares, that $\mathbf{\bar{x}} = (\bar{x}, \bar{x}, \ldots, \bar{x})$ is the orthogonal projection of the vector of data $\mathbf{x}=(x_1, x_2, \ldots, x_n)$ onto the linear subspace generated by the constant vector $(1,1,\ldots,1)$ and that $\sigma_x$ is directly proportional to the (Euclidean) distance between $\mathbf{x}$ and $\mathbf{\bar{x}}.$ The non-negativity constraints are linear and distance is a convex function, whence the extremes of distance must be attained at the edges of the cone determined by the constraints. This cone is the positive orthant in $\mathbb{R}^n$ and its edges are the coordinate axes, whence it immediately follows that all but one of the $x_i$ must be zero at the maximum distances. For such a set of data, a direct (simple) calculation shows $\sigma_x/\bar{x}=\sqrt{n}.$ Solution exploiting classical inequalities $\sigma_x/\bar{x}$ is optimized simultaneously with any monotonic transformation thereof. In light of this, let's maximize $$\frac{x_1^2+x_2^2+\ldots+x_n^2}{(x_1+x_2+\ldots+x_n)^2} = \frac{1}{n}\left(\frac{n-1}{n}\left(\frac{\sigma_x}{\bar{x}}\right)^2+1\right) = f\left(\frac{\sigma_x}{\bar{x}}\right).$$ (The formula for $f$ may look mysterious until you realize it just records the steps one would take in algebraically manipulating $\sigma_x/\bar{x}$ to get it into a simple looking form, which is the left hand side.) An easy way begins with Holder's Inequality, $$x_1^2+x_2^2+\ldots+x_n^2 \le \left(x_1+x_2+\ldots+x_n\right)\max(\{x_i\}).$$ (This needs no special proof in this simple context: merely replace one factor of each term $x_i^2 = x_i \times x_i$ by the maximum component $\max(\{x_i\})$: obviously the sum of squares will not decrease. Factoring out the common term $\max(\{x_i\})$ yields the right hand side of the inequality.) 
Because the $x_i$ are not all $0$ (that would leave $\sigma_x/\bar{x}$ undefined), division by the square of their sum is valid and gives the equivalent inequality $$\frac{x_1^2+x_2^2+\ldots+x_n^2}{(x_1+x_2+\ldots+x_n)^2} \le \frac{\max(\{x_i\})}{x_1+x_2+\ldots+x_n}.$$ Because the denominator cannot be less than the numerator (which itself is just one of the terms in the denominator), the right hand side is dominated by the value $1$, which is achieved only when all but one of the $x_i$ equal $0$. Whence $$\frac{\sigma_x}{\bar{x}} \le f^{-1}\left(1\right) = \sqrt{\left(1 \times (n - 1)\right)\frac{n}{n-1}}=\sqrt{n}.$$ Alternative approach Because the $x_i$ are nonnegative and cannot sum to $0$, the values $p(i) = x_i/(x_1+x_2+\ldots+x_n)$ determine a probability distribution $F$ on $\{1,2,\ldots,n\}$. Writing $s$ for the sum of the $x_i$, we recognize $$\eqalign{ \frac{x_1^2+x_2^2+\ldots+x_n^2}{(x_1+x_2+\ldots+x_n)^2} &= \frac{x_1^2+x_2^2+\ldots+x_n^2}{s^2} \\ &= \left(\frac{x_1}{s}\right)\left(\frac{x_1}{s}\right)+\left(\frac{x_2}{s}\right)\left(\frac{x_2}{s}\right) + \ldots + \left(\frac{x_n}{s}\right)\left(\frac{x_n}{s}\right)\\ &= p_1 p_1 + p_2 p_2 + \ldots + p_n p_n\\ &= \mathbb{E}_F[p]. }$$ The axiomatic fact that no probability can exceed $1$ implies this expectation cannot exceed $1$, either, but it's easy to make it equal to $1$ by setting all but one of the $p_i$ equal to $0$ and therefore exactly one of the $x_i$ is nonzero. Compute the coefficient of variation as in the last line of the geometric solution above.
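A quick numerical check of the bound. Note the convention: with the sample ($n-1$ denominator) standard deviation used here the maximum is $\sqrt{n}$; with the population ($1/n$) definition it is $\sqrt{n-1}$, the form given in the other answers.

```python
import random
import statistics

def cv(xs):
    """Coefficient of variation with the sample (n-1) standard deviation."""
    return statistics.stdev(xs) / statistics.mean(xs)

n = 6
extreme = [5.0] + [0.0] * (n - 1)  # all mass on one coordinate
print(cv(extreme))                  # sqrt(6) ≈ 2.449

# No random non-negative configuration exceeds the bound.
random.seed(0)
assert all(cv([random.random() for _ in range(n)]) <= n ** 0.5
           for _ in range(10_000))
```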
14,490
Maximum value of coefficient of variation for bounded data set
Some references, as small candles on the cakes of others: Katsnelson and Kotz (1957) proved that so long as all $x_i \ge 0$, then the coefficient of variation cannot exceed $\sqrt{n-1}$. This result was mentioned earlier by Longley (1952). Cramér (1946, p.357) proved a less sharp result, and Kirby (1974) proved a less general result. Cramér, H. 1946. Mathematical methods of statistics. Princeton, NJ: Princeton University Press. Katsnelson, J., and S. Kotz. 1957. On the upper limits of some measures of variability. Archiv für Meteorologie, Geophysik und Bioklimatologie, Series B 8: 103–107. Kirby, W. 1974. Algebraic boundedness of sample statistics. Water Resources Research 10: 220–222. Longley, R. W. 1952. Measures of the variability of precipitation. Monthly Weather Review 80: 111–117. I came across these papers in working on Cox, N.J. 2010. The limits of sample skewness and kurtosis. Stata Journal 10: 482-495. which discusses broadly similar bounds on moment-based skewness and kurtosis.
14,491
Maximum value of coefficient of variation for bounded data set
With two numbers $x_i \ge x_j$, some $\delta \gt 0$ and any $\mu$: $$(x_i+\delta - \mu)^2 + (x_j - \delta - \mu)^2 - (x_i - \mu)^2 - (x_j - \mu)^2 = 2\delta(x_i - x_j +\delta) \gt 0.$$ Applying this to $n$ non-negative data points means that, unless all but one of the $n$ numbers are zero (and so cannot be reduced further), it is possible to increase the variance and standard deviation by widening the gap between any pair of the data points while retaining the same mean, thus increasing the coefficient of variation. So the maximum coefficient of variation for the data set is, as you suggest, $\sqrt{n-1}$. $c$ should not affect the result, as $\frac{\sigma_x}{\bar{x}}$ does not change if all the values are multiplied by any positive constant $k$ (as I said in my comment).
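The bound is easy to check numerically. A Python sketch (mine, not part of the original answer), using the population standard deviation in the coefficient of variation:

```python
import numpy as np

def cv(x):
    # Coefficient of variation: population standard deviation over the mean.
    x = np.asarray(x, dtype=float)
    return np.std(x, ddof=0) / np.mean(x)

n = 10
# The extreme configuration: all mass on a single point, the rest zero.
extreme = np.zeros(n)
extreme[0] = 7.0   # any positive constant gives the same CV

print(cv(extreme))        # equals sqrt(n - 1), i.e. 3.0 for n = 10
print(np.sqrt(n - 1.0))

# Other non-negative configurations stay below the bound.
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(1000, n))
print(all(cv(s) <= np.sqrt(n - 1) for s in samples))   # True
```

The extreme configuration (all mass on one point, the rest zero) attains the bound exactly, and multiplying every value by a positive constant leaves the CV unchanged, matching the argument above.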
14,492
What's the point of asymptotics?
The first reason we look at the asymptotics of estimators is that we want to check that our estimator is sensible. One aspect of this investigation is that we expect a sensible estimator will generally get better as we get more data, and it eventually becomes "perfect" as the amount of data gets to the full population. You are correct that when $n \rightarrow \infty$ we have the whole (super)population, so presumably we should then be able to know any identifiable parameters of interest. If that is the case, it suggests that non-consistent estimators should be ruled out of consideration as failing a basic sensibleness criterion. As you say, if we have the whole population then we should be able to determine parameters of interest perfectly, so if an estimator doesn't do this, it suggests that it is fundamentally flawed. There are a number of other asymptotic properties that are similarly of interest, but less important than consistency.

Another reason we look at the asymptotics of estimators is that if we have a large sample size, we can often use the asymptotic properties as approximations to the finite-sample behaviour of the estimator. For example, if an estimator is known to be asymptotically normally distributed (which is true in a wide class of cases), then we will often perform statistical analysis that uses the normal distribution as an approximation to the true distribution of the estimator, so long as the sample size is large. Many statistical hypothesis tests (e.g., chi-squared tests) are built on this basis, and so are a lot of confidence intervals.
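To see how such a normal approximation works in practice, here is a small simulation (a Python sketch of mine, not part of the original answer): the mean of skewed Exponential(1) data is standardized and its tail probability compared with the N(0, 1) value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 200, 20000
samples = rng.exponential(scale=1.0, size=(reps, n))

# Standardize each sample mean: Exponential(1) has mean 1 and sd 1.
z = (samples.mean(axis=1) - 1.0) * np.sqrt(n)

# Tail probability from the simulation vs. the normal approximation.
print(np.mean(z > 1.96))         # simulated, close to the normal value
print(1 - stats.norm.cdf(1.96))  # 0.0249...
```

Even though each observation is strongly skewed, by $n = 200$ the normal approximation to the distribution of the sample mean is already serviceable, which is what licenses normal-based tests and intervals.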
14,493
What's the point of asymptotics?
It's true, in general, that consistency is not the be-all and end-all of a statistic. But neither is unbiasedness, for the same reason! When you accept biased estimators, you open a whole class of Bayes estimators that beat OLS in many respects. Case in point: the problem of shrinkage and L2 penalization in the estimation of a mean vector for a multivariate normal sample.

For the purposes of hypothesis testing and error estimation, asymptotics go farther than just the limit of the statistic. We're also interested in whether a suitable transformation of the statistic has a known limiting distribution. In that case, we can use the limiting distribution to approximate the sampling distribution of the test statistic in finite sample sizes. The classic example is the central limit theorem, where $\sqrt{n} (\bar{X} - \mu)/\sigma \rightarrow_d \mathcal{N}(0, 1)$. And similar methods are used to identify critical values, p-values, power, and sample size calculations, knowing full well they are only approximations. In some cases, the approximations can be improved (such as with the Student t-distribution, the Agresti correction, or Clopper-Pearson intervals) or exchanged entirely for exact (Fisher's exact test) or empirical (bootstrap) methods. It's my opinion that these methods should be taught alongside standard testing and error estimation methods, and the latter methods used more often in practical data analysis.

The idea of an "infinite" sample size underpins all of frequentist statistics. Consider the "frequentist" interpretation of probability: what do we mean when we say the heads probability of a coin flip is 0.5? If you actually flipped a coin an infinite number of times, it would wear down to nothing. The same holds with frequentist methods for finite sampling, that is, when you sample a substantial fraction of a finite population. The sampling distributions might change somewhat, but I still conceptualize multiple (i.e. an infinite number of) scenarios in which I can replicate a particular result. The "expectation" of my estimator - and its ultimate limit, as a result of the LLN - is defined by that value. Suppose, for instance, I sample 30% of all surviving Siberian tigers for biometrics - say, length. I can produce a CI for my length estimate. That CI is based on sampling 30% of all known Siberian tigers (as of now <400) an infinite number of times.

That said, the implementation of these methods can introduce some methodological issues (bootstrap intervals are not always valid, even with BCa or the double bootstrap), and some problems require analytic simplicity to provide reproducible and communicable results. For instance, when calculating the sample size of a time-to-event analysis, I can base my selection of N on an exponential distribution with a known rate parameter in the control and treatment arms, the duration of follow-up, and a test based on the asymptotic Wald statistic. In that case, it's easy for another statistician to verify my results.

In summary, for a didactic program, I would say learn both methods and understand their limitations. For a practical application, consider your audience and what they need to understand. And when in doubt, be conservative in your approach!
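As one concrete illustration of the contrast between approximate and exact interval methods mentioned above (a Python sketch; the function names and numbers are mine, not from the answer):

```python
import numpy as np
from scipy import stats

def wald_ci(k, n, conf=0.95):
    # Asymptotic (normal-approximation) interval for a binomial proportion.
    p = k / n
    z = stats.norm.ppf(0.5 + conf / 2)
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def clopper_pearson_ci(k, n, conf=0.95):
    # Exact interval built from beta quantiles; no large-sample approximation.
    a = 1 - conf
    lo = stats.beta.ppf(a / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - a / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# With 3 successes in 20 trials, the Wald interval even dips below 0,
# while the exact interval stays inside [0, 1].
print(wald_ci(3, 20))
print(clopper_pearson_ci(3, 20))
```

With small counts the asymptotic interval can be visibly defective (a negative lower bound for a proportion), which is exactly the kind of situation where an exact method earns its keep; for large n the two essentially agree.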
14,494
What's the point of asymptotics?
Most asymptotic results are closely connected to finite first-order results. We'll review this in the non-probabilistic case and then extend to the probabilistic case.

Non-random case: reduced order analysis

Recall the Taylor series of the function $\sin$ around the point $x=0$: \begin{equation*} \sin x - 0 = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots, \end{equation*} which most learn about in introductory calculus courses. The series helps us understand the behavior of the function $\sin$ near $x=0$. The plot below (from Wikipedia) shows that these four terms largely reproduce the behavior of the curve near $x=0$. Thus, to study the behavior of $\sin(x)$ near $x=0$, very little is lost in simply studying the behavior of the polynomial $x \mapsto x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}$. Of course, the polynomial is much more amenable to study, so this is convenient.

Non-random case: first order analysis

The close relationship between the function $\sin$ and its truncated Taylor series also holds at first order, when we approximate \begin{equation*} \sin x - 0 = x - \dots. \end{equation*} For example, most calculus students have shown that \begin{equation*} \lim_{x \to 0} \frac{\sin x}{x} = 1. \end{equation*} This result can be interpreted as saying that the function $\sin$ and the identity function $x \mapsto x$ are indistinguishable near $x=0$. We stress that the limit result, although asymptotic, merely reflects our knowledge gained from the figure above: the two functions are very close near $x=0$.

Probabilistic case

There is an extension of Taylor series which is designed for estimators rather than functions. We'll summarise the extension and illustrate how it is behind most "asymptotic" results. Let $P_0$ denote the true (unknown) distribution, let $\theta(P_0)$ denote the parameter of interest, and let $T(X_1, \dots, X_n)$ be the estimator using i.i.d. data $X_i \sim P_0$. 
Then, the following is a first order expansion of the estimator around the parameter: \begin{align*} T(X_1, \dots, X_n) - \theta(P_0) = \frac{1}{n} \sum_{i=1}^n \varphi(X_i; P_0) + \dots. \end{align*} The term on the left is the estimator error, i.e. the difference between the estimator and its target (the parameter). The term on the right is a first order expansion, analogous to $x$ in the $\sin$ case. The function $\varphi$ is called the "influence function" and, along with the remainder, determines the asymptotic behavior. (More details later.)
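As a concrete special case (my illustration, not part of the original answer): when $T$ is the sample mean and $\theta(P_0)$ the population mean, the influence function is $\varphi(x; P_0) = x - \theta(P_0)$ and the expansion is exact, with zero remainder.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.5                      # true mean of the Uniform(0, 1) distribution
x = rng.uniform(0.0, 1.0, size=500)

estimator_error = x.mean() - theta        # T(X_1, ..., X_n) - theta(P_0)
phi = x - theta                           # influence function values phi(X_i; P_0)
first_order_term = phi.mean()             # (1/n) * sum of phi(X_i; P_0)

# For the sample mean the remainder vanishes: the expansion is exact.
print(estimator_error - first_order_term)   # 0 (up to floating point)
```

For less trivial estimators (quantiles, M-estimators) the remainder is nonzero but of smaller order, which is what drives their asymptotic normality.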
14,495
Can glmnet logistic regression directly handle factor (categorical) variables without needing dummy variables? [closed]
glmnet cannot take factors directly; you need to transform factor variables into dummies. It is only one simple step using model.matrix, for instance:

x_train <- model.matrix( ~ . - 1, train[, features])
lm <- cv.glmnet(x = x_train, y = as.factor(train$y), intercept = FALSE,
                family = "binomial", alpha = 1, nfolds = 7)
best_lambda <- lm$lambda[which.min(lm$cvm)]

alpha=1 will build a LASSO.
14,496
Is there a measure of 'evenness' of spread?
A standard, powerful, well-understood, theoretically well-established, and frequently implemented measure of "evenness" is the Ripley K function and its close relative, the L function. Although these are typically used to evaluate two-dimensional spatial point configurations, the analysis needed to adapt them to one dimension (which usually is not given in references) is simple.

Theory

The K function estimates the mean proportion of points within a distance $d$ of a typical point. For a uniform distribution on the interval $[0,1]$, the true proportion can be computed and (asymptotically in the sample size) equals $1 - (1-d)^2$. The appropriate one-dimensional version of the L function subtracts this value from K to show deviations from uniformity. We might therefore consider normalizing any batch of data to have a unit range and examining its L function for deviations around zero.

Worked Examples

To illustrate, I have simulated $999$ independent samples of size $64$ from a uniform distribution and plotted their (normalized) L functions for shorter distances (from $0$ to $1/3$), thereby creating an envelope to estimate the sampling distribution of the L function. (Plotted points well within this envelope cannot be significantly distinguished from uniformity.) Over this I have plotted the L functions for samples of the same size from a U-shaped distribution, a mixture distribution with four obvious components, and a standard Normal distribution. The histograms of these samples (and of their parent distributions) are shown for reference, using line symbols to match those of the L functions.

The sharp separated spikes of the U-shaped distribution (dashed red line, leftmost histogram) create clusters of closely spaced values. This is reflected by a very large slope in the L function at $0$. The L function then decreases, eventually becoming negative to reflect the gaps at intermediate distances. 
The sample from the normal distribution (solid blue line, rightmost histogram) is fairly close to uniformly distributed. Accordingly, its L function does not depart from $0$ quickly. However, by distances of $0.10$ or so, it has risen sufficiently above the envelope to signal a slight tendency to cluster. The continued rise across intermediate distances indicates the clustering is diffuse and widespread (not confined to some isolated peaks).

The initial large slope for the sample from the mixture distribution (middle histogram) reveals clustering at small distances (less than $0.15$). By dropping to negative levels, it signals separation at intermediate distances. Comparing this to the U-shaped distribution's L function is revealing: the slopes at $0$, the amounts by which these curves rise above $0$, and the rates at which they eventually descend back to $0$ all provide information about the nature of the clustering present in the data. Any of these characteristics could be chosen as a single measure of "evenness" to suit a particular application.

These examples show how an L-function can be examined to evaluate departures of the data from uniformity ("evenness") and how quantitative information about the scale and nature of the departures can be extracted from it. (One can indeed plot the entire L function, extending to the full normalized distance of $1$, to assess large-scale departures from uniformity. Ordinarily, though, assessing the behavior of the data at smaller distances is of greater importance.)

Software

R code to generate this figure follows. It starts by defining functions to compute K and L. It creates a capability to simulate from a mixture distribution. Then it generates the simulated data and makes the plots.

Ripley.K <- function(x, scale) {
  # Arguments:
  #   x is an array of data.
  #   scale (not actually used) is an option to rescale the data.
  #
  # Return value:
  #   A function that calculates Ripley's K for any value between 0 and 1 (or `scale`).
  #
  x.pairs <- outer(x, x, function(a,b) abs(a-b))   # All pairwise distances
  x.pairs <- x.pairs[lower.tri(x.pairs)]           # Distances between distinct pairs
  if(missing(scale)) scale <- diff(range(x.pairs)) # Rescale distances to [0,1]
  x.pairs <- x.pairs / scale
  #
  # The built-in `ecdf` function returns the proportion of values in `x.pairs` that
  # are less than or equal to its argument.
  #
  return (ecdf(x.pairs))
}
#
# The one-dimensional L function.
# It merely subtracts 1 - (1-y)^2 from `Ripley.K(x)(y)`.
# Its argument `x` is an array of data values.
#
Ripley.L <- function(x) {function(y) Ripley.K(x)(y) - 1 + (1-y)^2}
#-------------------------------------------------------------------------------#
set.seed(17)
#
# Create mixtures of random variables.
#
rmixture <- function(n, p=1, f=list(runif), factor=10) {
  q <- ceiling(factor * abs(p) * n / sum(abs(p)))
  x <- as.vector(unlist(mapply(function(y,f) f(y), q, f)))
  sample(x, n)
}
dmixture <- function(x, p=1, f=list(dunif)) {
  z <- matrix(unlist(sapply(f, function(g) g(x))), ncol=length(f))
  z %*% (abs(p) / sum(abs(p)))
}
p <- rep(1, 4)
fg <- lapply(p, function(q) {
  v <- runif(1,0,30)
  list(function(n) rnorm(n,v), function(x) dnorm(x,v), v)
})
f <- lapply(fg, function(u) u[[1]]) # For random sampling
g <- lapply(fg, function(u) u[[2]]) # The distribution functions
v <- sapply(fg, function(u) u[[3]]) # The parameters (for reference)
#-------------------------------------------------------------------------------#
#
# Study the L function.
#
n <- 64              # Sample size
alpha <- beta <- 0.2 # Beta distribution parameters
layout(matrix(c(rep(1,3), 3, 4, 2), 2, 3, byrow=TRUE), heights=c(0.6, 0.4))
#
# Display the L functions over an envelope for the uniform distribution.
#
plot(c(0,1/3), c(-1/8,1/6), type="n",
     xlab="Normalized Distance", ylab="Total Proportion",
     main="Ripley L Functions")
invisible(replicate(999, {
  plot(Ripley.L(x.unif <- runif(n)), col="#00000010", add=TRUE)
}))
abline(h=0, lwd=2, col="White")
#
# Each of these lines generates a random set of `n` data according to a specified
# distribution, calls `Ripley.L`, and plots its values.
#
plot(Ripley.L(x.norm <- rnorm(n)), col="Blue", lwd=2, add=TRUE)
plot(Ripley.L(x.beta <- rbeta(n, alpha, beta)), col="Red", lwd=2, lty=2, add=TRUE)
plot(Ripley.L(x.mixture <- rmixture(n, p, f)), col="Green", lwd=2, lty=3, add=TRUE)
#
# Display the histograms.
#
n.breaks <- 24
h <- hist(x.norm, main="Normal Sample", breaks=n.breaks, xlab="Value")
curve(dnorm(x)*n*mean(diff(h$breaks)), add=TRUE, lwd=2, col="Blue")
h <- hist(x.beta, main=paste0("Beta(", alpha, ",", beta, ") Sample"),
          breaks=n.breaks, xlab="Value")
curve(dbeta(x, alpha, beta)*n*mean(diff(h$breaks)), add=TRUE, lwd=2, lty=2, col="Red")
h <- hist(x.mixture, main="Mixture Sample", breaks=n.breaks, xlab="Value")
curve(dmixture(x, p, g)*n*mean(diff(h$breaks)), add=TRUE, lwd=2, lty=3, col="Green")
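As a supplementary numerical check (a Python sketch of mine, not part of the answer's R code): for uniform data, the empirical K at distance $d$ should be close to $1 - (1-d)^2$, so the one-dimensional L function stays near zero.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, size=2000)

# All pairwise distances between distinct points, normalized to [0, 1].
i, j = np.triu_indices(len(x), k=1)
d_pairs = np.abs(x[i] - x[j])
d_pairs /= d_pairs.max()

for d in (0.1, 0.2, 0.3):
    K_hat = np.mean(d_pairs <= d)      # empirical K at distance d
    L_hat = K_hat - (1 - (1 - d)**2)   # one-dimensional L function
    print(d, round(L_hat, 3))          # all close to 0 for uniform data
```

Clustered data would push these L values well above zero at small distances, which is exactly the diagnostic used in the figure discussed above.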
Is there a measure of 'evenness' of spread?
A standard, powerful, well-understood, theoretically well-established, and frequently implemented measure of "evenness" is the Ripley K function and its close relative, the L function. Although these
Is there a measure of 'evenness' of spread? A standard, powerful, well-understood, theoretically well-established, and frequently implemented measure of "evenness" is the Ripley K function and its close relative, the L function. Although these are typically used to evaluate two-dimensional spatial point configurations, the analysis needed to adapt them to one dimension (which usually is not given in references) is simple. Theory The K function estimates the mean proportion of points within a distance $d$ of a typical point. For a uniform distribution on the interval $[0,1]$, the true proportion can be computed and (asymptotically in the sample size) equals $1 - (1-d)^2$. The appropriate one-dimensional version of the L function subtracts this value from K to show deviations from uniformity. We might therefore consider normalizing any batch of data to have a unit range and examining its L function for deviations around zero. Worked Examples To illustrate, I have simulated $999$ independent samples of size $64$ from a uniform distribution and plotted their (normalized) L functions for shorter distances (from $0$ to $1/3$), thereby creating an envelope to estimate the sampling distribution of the L function. (Plotted points well within this envelope cannot be significantly distinguished from uniformity.) Over this I have plotted the L functions for samples of the same size from a U-shaped distribution, a mixture distribution with four obvious components, and a standard Normal distribution. The histograms of these samples (and of their parent distributions) are shown for reference, using line symbols to match those of the L functions. The sharp separated spikes of the U-shaped distribution (dashed red line, leftmost histogram) create clusters of closely spaced values. This is reflected by a very large slope in the L function at $0$. The L function then decreases, eventually becoming negative to reflect the gaps at intermediate distances. 
The sample from the normal distribution (solid blue line, rightmost histogram) is fairly close to uniformly distributed. Accordingly, its L function does not depart from $0$ quickly. However, by distances of $0.10$ or so, it has risen sufficiently above the envelope to signal a slight tendency to cluster. The continued rise across intermediate distances indicates the clustering is diffuse and widespread (not confined to some isolated peaks). The initial large slope for the sample from the mixture distribution (middle histogram) reveals clustering at small distances (less than $0.15$). By dropping to negative levels, it signals separation at intermediate distances. Comparing this to the U-shaped distribution's L function is revealing: the slopes at $0$, the amounts by which these curves rise above $0$, and the rates at which they eventually descend back to $0$ all provide information about the nature of the clustering present in the data. Any of these characteristics could be chosen as a single measure of "evenness" to suit a particular application. These examples show how an L-function can be examined to evaluate departures of the data from uniformity ("evenness") and how quantitative information about the scale and nature of the departures can be extracted from it. (One can indeed plot the entire L function, extending to the full normalized distance of $1$, to assess large-scale departures from uniformity. Ordinarily, though, assessing the behavior of the data at smaller distances is of greater importance.)

Software

R code to generate this figure follows. It starts by defining functions to compute K and L, then defines a routine to simulate from a mixture distribution, and finally generates the simulated data and makes the plots.

Ripley.K <- function(x, scale) {
  # Arguments:
  #   x is an array of data.
  #   scale (optional) rescales the pairwise distances; it defaults to their range.
  #
  # Return value:
  #   A function that calculates Ripley's K for any value between 0 and 1 (or `scale`).
  #
  x.pairs <- outer(x, x, function(a,b) abs(a-b))   # All pairwise distances
  x.pairs <- x.pairs[lower.tri(x.pairs)]           # Distances between distinct pairs
  if (missing(scale)) scale <- diff(range(x.pairs))# Rescale distances to [0,1]
  x.pairs <- x.pairs / scale
  #
  # The built-in `ecdf` function returns the proportion of values in `x.pairs` that
  # are less than or equal to its argument.
  #
  return (ecdf(x.pairs))
}
#
# The one-dimensional L function.
# It merely subtracts 1 - (1-y)^2 from `Ripley.K(x)(y)`.
# Its argument `x` is an array of data values.
#
Ripley.L <- function(x) {function(y) Ripley.K(x)(y) - 1 + (1-y)^2}
#-------------------------------------------------------------------------------#
set.seed(17)
#
# Create mixtures of random variables.
#
rmixture <- function(n, p=1, f=list(runif), factor=10) {
  q <- ceiling(factor * abs(p) * n / sum(abs(p)))
  x <- as.vector(unlist(mapply(function(y,f) f(y), q, f)))
  sample(x, n)
}
dmixture <- function(x, p=1, f=list(dunif)) {
  z <- matrix(unlist(sapply(f, function(g) g(x))), ncol=length(f))
  z %*% (abs(p) / sum(abs(p)))
}
p <- rep(1, 4)
fg <- lapply(p, function(q) {
  v <- runif(1,0,30)
  list(function(n) rnorm(n,v), function(x) dnorm(x,v), v)
})
f <- lapply(fg, function(u) u[[1]]) # For random sampling
g <- lapply(fg, function(u) u[[2]]) # The distribution functions
v <- sapply(fg, function(u) u[[3]]) # The parameters (for reference)
#-------------------------------------------------------------------------------#
#
# Study the L function.
#
n <- 64                # Sample size
alpha <- beta <- 0.2   # Beta distribution parameters

layout(matrix(c(rep(1,3), 3, 4, 2), 2, 3, byrow=TRUE), heights=c(0.6, 0.4))
#
# Display the L functions over an envelope for the uniform distribution.
#
plot(c(0,1/3), c(-1/8,1/6), type="n",
     xlab="Normalized Distance", ylab="Total Proportion",
     main="Ripley L Functions")
invisible(replicate(999, {
  plot(Ripley.L(x.unif <- runif(n)), col="#00000010", add=TRUE)
}))
abline(h=0, lwd=2, col="White")
#
# Each of these lines generates a random set of `n` data according to a specified
# distribution, calls `Ripley.L`, and plots its values.
#
plot(Ripley.L(x.norm <- rnorm(n)), col="Blue", lwd=2, add=TRUE)
plot(Ripley.L(x.beta <- rbeta(n, alpha, beta)), col="Red", lwd=2, lty=2, add=TRUE)
plot(Ripley.L(x.mixture <- rmixture(n, p, f)), col="Green", lwd=2, lty=3, add=TRUE)
#
# Display the histograms.
#
n.breaks <- 24
h <- hist(x.norm, main="Normal Sample", breaks=n.breaks, xlab="Value")
curve(dnorm(x)*n*mean(diff(h$breaks)), add=TRUE, lwd=2, col="Blue")
h <- hist(x.beta, main=paste0("Beta(", alpha, ",", beta, ") Sample"),
          breaks=n.breaks, xlab="Value")
curve(dbeta(x, alpha, beta)*n*mean(diff(h$breaks)), add=TRUE, lwd=2, lty=2, col="Red")
h <- hist(x.mixture, main="Mixture Sample", breaks=n.breaks, xlab="Value")
curve(dmixture(x, p, g)*n*mean(diff(h$breaks)), add=TRUE, lwd=2, lty=3, col="Green")
Is there a measure of 'evenness' of spread?
I assume that you want to measure how close the distribution is to the uniform. You can look at the distance between the cumulative distribution function of the uniform distribution and the empirical cumulative distribution function of the sample.

Let's assume that the variable is defined on the set $\{1,2,3,4,5\}$. Then the uniform distribution has the cdf $F_u(x)$ given by $$F_u(x) = \sum_{i=1}^{[x]} 1/5 .$$ Now, assume that your sample $X$ is $1,3,5$. Then the empirical distribution of $X$ is $$ F_X(1) = 1/3,\; F_X(2) = 1/3,\; F_X(3) = 2/3,\; F_X(4) = 2/3,\; F_X(5) = 1. $$ And let sample $Y$ be $1,1,5$. Then the empirical distribution of $Y$ is $$ F_Y(1) = 2/3,\; F_Y(2) = 2/3,\; F_Y(3) = 2/3,\; F_Y(4) = 2/3,\; F_Y(5) = 1. $$ Now, as a measure of distance between distributions, let's take the sum of distances at each point, i.e. $$ d(F,G) = \sum_{i=1}^5 | F(i) - G(i)|. $$ You can easily check that $d(F_u,F_X) < d(F_u,F_Y)$. In more complicated cases you need to revise the norm used above, but the main idea remains the same. If you need a testing procedure, it may be good to use norms for which tests have been developed (the ones that @TomMinka pointed out).
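The toy calculation above is easy to reproduce. Here is a quick sketch (Python used purely for illustration; the function names are mine) evaluating the ECDFs over the support $\{1,\dots,5\}$ and comparing the two samples:

```python
def ecdf(sample, support):
    # Empirical CDF of `sample`, evaluated at each point of `support`.
    n = len(sample)
    return [sum(1 for s in sample if s <= x) / n for x in support]

def d(F, G):
    # Sum of absolute CDF differences over the support.
    return sum(abs(f - g) for f, g in zip(F, G))

support = [1, 2, 3, 4, 5]
F_u = [i / 5 for i in support]      # uniform CDF: 0.2, 0.4, ..., 1.0
F_X = ecdf([1, 3, 5], support)      # 1/3, 1/3, 2/3, 2/3, 1
F_Y = ecdf([1, 1, 5], support)      # 2/3, 2/3, 2/3, 2/3, 1
assert d(F_u, F_X) < d(F_u, F_Y)    # X is "more even" than Y
```

Here $d(F_u,F_X)=6/15=0.4$ while $d(F_u,F_Y)=14/15$, confirming the ordering claimed above.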
Is there a measure of 'evenness' of spread?
If I understand your question correctly, the "most even" distribution for you would be one where the random variable takes every observed value once, that is, uniform in a sense. If there are "clusters" of observations at the same value, that would be uneven. Assuming we are talking about discrete observations, perhaps you could look at the average difference between the probability-mass values, the maximum difference, or how many values have a probability mass that differs from the average by more than a certain threshold. If the sample were truly uniform, all probability-mass points would have equal value, and the difference between the maximum and minimum would be 0. The closer the average difference is to 0, the more "even" the bulk of the observations is; likewise, a smaller maximum difference and fewer "peaks" indicate that the empirical observations are "even".

Update

Of course, you can use a chi-square test for uniformity or compare the empirical distribution function with a uniform one, but in those cases you will be penalized by any large "gaps" in observations, even though the distribution of observations may still be "even".
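A minimal sketch of these summaries (Python used purely for illustration; the function name is mine, not part of any library): compute the empirical probability mass over the observed support, then report the average and maximum pairwise difference between mass values, both of which are 0 for a perfectly even sample.

```python
from itertools import combinations

def pmf_evenness(xs):
    # Empirical probability mass over the observed support, then the average
    # and maximum pairwise difference between mass values.
    n = len(xs)
    pmf = [xs.count(v) / n for v in set(xs)]
    diffs = [abs(a - b) for a, b in combinations(pmf, 2)] or [0.0]
    return sum(diffs) / len(diffs), max(diffs)

even = [1, 2, 3, 4, 5]      # every value observed once
uneven = [1, 1, 1, 4, 5]    # mass piled up on 1
assert pmf_evenness(even) == (0.0, 0.0)
assert pmf_evenness(uneven)[1] > 0
```

As the Update notes, this ignores the positions of the values, so large gaps in the support are not penalized.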
Is there a measure of 'evenness' of spread?
Clumpiness detection

Ripley's L function, as noted and nicely illustrated by @whuber, will generate an indicator of "clumpiness" (i.e. amount of clustering) in the distribution for a given target distance (or normalized distance). The same is true for the discrepancy metric outlined in Martin Roberts' answer, which is similar in spirit to Ripley's L function but computes the maximum difference from uniformity observed within a given interval, instead of a proportion for a given distance. A nice practical example of using Ripley's family of functions to detect "clumps" in a 2D scenario is given in this article by Kiskowski et al.

Issues

Both Ripley's L function and the discrepancy require some target distance/interval to be specified. If you need a summary statistic for "clumpiness" (i.e. how even/uneven your distribution is, independently of a specific distance/interval), you can examine Ripley's L function, as proposed by @whuber, and derive different statistics from it, but it is not obvious how to go about it.

Side note

Given a distance value $d$, Ripley's K function $K(d)$ returns the fraction of observed distances below $d$ among all the pairwise distances in your data. Ripley's L is then defined as $L(d)=K(d)-K_u(d)$, where $K_u(d)$ is the value of $K(d)$ for a perfectly uniform distribution. In @whuber’s reply $K_u(d)=1 - (1-d)^2$. This is true for the continuous case, but only asymptotically true for the discrete case. The exact solution for the discrete case is $\frac{M^2-(M-d)(M-d+1)-M}{M^2-M}$, where $M$ is the number of distinct observable positions. Note that here we are assuming that for the discrete case the range of positions is not normalized to $[0,1]$ and can be described as $[0,M)$; $M$ could thus also represent the resolution with which we observe positions. Here is a visual proof of the formula for the discrete case with $10$ distinct positions ($M=10$) and $d=3$. There are $M^2=100$ pairwise distances. Those below $d=3$ are colored in blue.
The number of white cells is $(M-d)(M-d+1)=56$. Therefore, the number of blue cells is $M^2-(M-d)(M-d+1)=44$. Note that it’s common practice to avoid considering the distance of an element with itself, so the $0$-valued cells on the diagonal are not counted, giving $M^2-(M-d)(M-d+1)-M$ distances below $d$ out of a total of $M^2-M$.

Binning-based approaches

Alternative metrics can be devised using a binning approach. By defining regularly spaced bins over the range of observations (or, if known, the domain of the sampled distribution), one can define the distribution of data points across the bins (either as absolute numbers or relative frequencies). The entropy of this distribution can then be computed as $$-\sum_{i=1}^{b}f(x_i)\log_2 f(x_i),$$ where $b$ is the number of bins and $f(x_i)$ is the relative frequency of observations in bin $i$ (i.e. counts in bin $i$ over total counts). In other words, we are splitting our range into bins and then asking how uniform (high entropy) or non-uniform (low entropy) our counts of observations in each bin are. Using the same binning approach, other measures of inequality, like the Gini coefficient, can be used. A practical example of this approach, using both entropy and the Gini coefficient, can be found in this article by Mascolo et al.

Issues

Binning-based approaches directly generate a summary statistic (the overall positional entropy, or Gini coefficient). They do not require that we specify a distance/interval at which to evaluate "clumpiness", but they do require that we define a bin size, which affects the counts and the resulting statistic. Bin size therefore indirectly defines the scale at which one wants to detect clustering, similarly to the choice of $d$ for Ripley’s functions. Binning also introduces artifacts by arbitrarily chunking the sequence at regular intervals, which could result in missing a clump that sits across two bins.
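The binned-entropy computation described above can be sketched in a few lines (Python used purely for illustration; the bin count, sample sizes, and distributions are arbitrary choices of mine). Clumped data concentrate in few bins, giving low entropy, while uniform data approach the theoretical maximum $\log_2(b)$:

```python
import math
import random

def binned_entropy(xs, lo, hi, b):
    # Shannon entropy (in bits) of the relative frequencies of the sample
    # across b equal-width bins spanning [lo, hi].
    counts = [0] * b
    for x in xs:
        i = int((x - lo) / (hi - lo) * b)
        counts[min(max(i, 0), b - 1)] += 1   # clamp edge cases into end bins
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

random.seed(1)
b = 10
uniform = [random.uniform(0, 1) for _ in range(1000)]
clumped = [random.gauss(0.5, 0.05) for _ in range(1000)]  # one tight clump
assert binned_entropy(clumped, 0, 1, b) < binned_entropy(uniform, 0, 1, b) <= math.log2(b)
```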
A simple way to make results less dependent on the binning frame is to use two sets of bins offset by half the bin size, as described here by Mascolo et al. If you are interested in the statistical significance of the observed unevenness of your distribution, keep in mind that this method (like the others) is sensitive to the number of data points: getting low entropy values by chance is easier for small samples. As the sample size increases, the expected value of the entropy approaches the theoretical maximum $-\sum_{i=1}^{b}(1/b)\log_2(1/b)=-\log_2(1/b)=\log_2(b)$, where $b$ is the number of bins. Therefore, remember to control for sample size when comparing experiments. So far there is no exact solution to the problem of finding the expected value of the entropy given the sample size and the number of bins.

Interval variance

If you want a summary statistic that indicates the evenness/unevenness of a one-dimensional distribution, as outlined in the question, and that does not require you to specify a distance/interval/bin size at which to analyze clumpiness, an easy solution is to use the variance of the intervals. That is, we sort our observations and compute the distance between each pair of consecutive observations (i.e. the length of the interval between two consecutive data points). Given a sample $x_1,\ldots,x_N\in{X}$ of size $N$ from a given distribution, we define the interval series $i_{x_2-x_1},\ldots,i_{x_{N}-x_{N-1}}\in{I}$, of length $N-1$. Intuitively, for the "even" distribution $X$ introduced by @Ketan in their question, all the intervals are exactly the same and their variance is therefore $0$. Conversely, for the "uneven" distribution $Y$, interval sizes are either small (within clumps) or large (across clumps), leading to high variance.
Shown below are five $N=64$ samples from a uniform distribution in $[-5,+5]$ [top], and five $N=64$ samples from a "clumped" distribution in the same domain (with four uniformly distributed clumps centered at $-4\ [\pm0.5]$, $-2\ [\pm0.75]$, $1.5\ [\pm0.25]$ and $3.75\ [\pm0.5]$) [bottom]. The corresponding interval histograms are shown below. Notice that samples from the clumped distribution (5 rightmost bars in each bin) have many more small intervals (from within clumps) than the uniform samples (5 leftmost bars in each bin). The samples from the clumped distribution also have several large (inter-clump) interval values that are completely absent in the uniform samples.

The corresponding interval variances are:

Uniform distribution samples: $0.025\pm0.005$
Clumped distribution samples: $0.137\pm0.009$

which recapitulates our intuition that "clumped" distributions should have larger interval variance. A practical example of this measure is illustrated in this article by Philip and Freeland.

Issues

A finite random sample from a uniform distribution will not be perfectly even, regardless of the measure of evenness/unevenness/clumpiness chosen. As the sample size increases, the expected value of the interval variance approaches $0$, just like the Gini coefficient and the value of Ripley’s L function, while the entropy reaches its theoretical maximum $\log_2(b)$. But if the sample size is not large enough, the expected value can be far from that; when comparing results to an ideal, perfectly uniform sample, you should be aware that these metrics are biased. With the binning-based approaches one can increase the bin size, and with Ripley’s functions one can increase the value of $d$, but there is no way to control for this with the variance of the intervals. Keep that in mind if you plan to apply it to small samples, which are very prone to picking up noise in the form of natural variance among the intervals of uniformly distributed points.
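The interval-variance computation is short enough to sketch directly (Python used purely for illustration; the clump centers and half-widths follow the example above, and exact values will differ from the figure since this is a different random draw):

```python
import random

def interval_variance(xs):
    # Variance of the gaps between consecutive sorted observations.
    xs = sorted(xs)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    m = sum(gaps) / len(gaps)
    return sum((g - m) ** 2 for g in gaps) / len(gaps)

random.seed(0)
uniform = [random.uniform(-5, 5) for _ in range(64)]
clumps = [(-4, 0.5), (-2, 0.75), (1.5, 0.25), (3.75, 0.5)]   # (center, half-width)
clumped = [random.uniform(c - w, c + w) for c, w in clumps for _ in range(16)]
# Clumped samples mix many tiny within-clump gaps with a few large
# between-clump gaps, which inflates the gap variance.
assert interval_variance(clumped) > interval_variance(uniform)
```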
Is there a measure of 'evenness' of spread?
The measure you are looking for is formally called the discrepancy. The one-dimensional version is as follows: Let $I=[0,1)$ denote the half-open unit interval and consider a finite sequence $x_1,\ldots,x_N\in{I}$. For a subset $J\subset{I}$, let $A(J,N)$ denote the number of elements of this sequence inside $J$. That is, $$ A(J,N)=\left|\{x_1,\ldots,x_N\}\cap{J}\right|, $$ and let $V(J)$ denote the volume (length) of $J$. The discrepancy of the sequence $x_1,\ldots,x_N$ is defined as $$ D_N=\sup_{J}{\left|A(J,N)-V(J)\cdot{N}\right|}, $$ where the supremum is taken over all half-open subintervals $J=[0,t)$, with $0\leq{t}\leq1$. The discrepancy thus compares the actual number of points in a given volume with the expected number of points in that volume, assuming the sequence $x_1,\ldots,x_N$ is uniformly distributed in $I$. Low-discrepancy sequences are often called quasirandom sequences. A basic overview of low-discrepancy sequences can be found here, and my blog post "The unreasonable effectiveness of quasirandom sequences" compares various methods when applied to numerical integration, mapping points to the surface of a sphere, and quasiperiodic tiling.
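In one dimension this supremum can be computed exactly: $|A([0,t),N)-tN|$ is piecewise linear in $t$, so the maximum occurs at the sample points (approached from either side). A short sketch (Python used purely for illustration; the function name is mine):

```python
def discrepancy(xs):
    # One-dimensional discrepancy, unnormalized as in the definition above:
    # sup over intervals [0, t) of |A([0,t), N) - t*N|, for points in [0, 1).
    xs = sorted(xs)
    N = len(xs)
    worst = 0.0
    for i, x in enumerate(xs, start=1):
        # Just below t = x the interval contains i-1 points; just above, i points.
        worst = max(worst, abs((i - 1) - N * x), abs(i - N * x))
    return worst

evenly_spaced = [(2 * i + 1) / 10 for i in range(5)]   # 0.1, 0.3, 0.5, 0.7, 0.9
clumped = [0.1, 0.11, 0.12, 0.8, 0.81]
assert discrepancy(evenly_spaced) < discrepancy(clumped)
```

The evenly spaced sequence attains the minimum possible discrepancy of $0.5$ for five points, while the clumped sequence scores $2.4$ (driven by the near-empty stretch between $0.12$ and $0.8$).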