9,001
|
For convex problems, does gradient in Stochastic Gradient Descent (SGD) always point at the global extreme value?
|
Steepest descent can be inefficient even if the objective function is strongly convex.
Ordinary gradient descent
I mean "inefficient" in the sense that steepest descent can take steps that oscillate wildly away from the optimum, even if the function is strongly convex or even quadratic.
Consider $f(x)=x_1^2 + 25x_2^2$. This is convex because it is a quadratic with positive coefficients. By inspection, we can see that it has a global minimum at $x=[0,0]^\top$. It has gradient
$$
\nabla f(x)=
\begin{bmatrix}
2x_1 \\
50x_2
\end{bmatrix}
$$
With a learning rate of $\alpha=0.035$, and initial guess $x^{(0)}=[0.5, 0.5]^\top,$ we have the gradient update
$$
x^{(1)} =x^{(0)}-\alpha \nabla f\left(x^{(0)}\right)
$$
which exhibits wildly oscillating progress towards the minimum.
Indeed, the angle $\theta$ formed between $(x^{(i)}, x^*)$ and $(x^{(i)}, x^{(i+1)})$ only gradually decays to 0. What this means is that the direction of the update is sometimes wrong -- at most, it is wrong by almost 68 degrees -- even though the algorithm is converging and working correctly.
Each step oscillates wildly because the function is much steeper in the $x_2$ direction than in the $x_1$ direction. Because of this fact, we can infer that the gradient is not always, or even usually, pointing toward the minimum. This is a general property of gradient descent when the eigenvalues of the Hessian $\nabla^2 f(x)$ are on dissimilar scales. Progress is slow in the directions corresponding to the eigenvectors with the smallest eigenvalues, and fastest in the directions with the largest eigenvalues. It is this property, in combination with the choice of learning rate, that determines how quickly gradient descent progresses.
The direct path to the minimum would be to move "diagonally" instead of in this fashion which is strongly dominated by vertical oscillations. However, gradient descent only has information about local steepness, so it "doesn't know" that strategy would be more efficient, and it is subject to the vagaries of the Hessian having eigenvalues on different scales.
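To see this concretely, here is a small numerical sketch (mine, not the original figure): it runs the update above on $f(x)=x_1^2 + 25x_2^2$ with $\alpha=0.035$ and records the angle between each step and the straight-line direction to the minimum.

```python
import numpy as np

# Gradient of f(x) = x1^2 + 25*x2^2 from the example above
def grad(x):
    return np.array([2 * x[0], 50 * x[1]])

alpha = 0.035                 # learning rate from the example
x = np.array([0.5, 0.5])      # initial guess x^(0)
x_star = np.zeros(2)          # known global minimum

angles = []
for _ in range(200):
    step = -alpha * grad(x)   # direction of the gradient update
    to_min = x_star - x       # direction straight to the minimum
    cos_t = step @ to_min / (np.linalg.norm(step) * np.linalg.norm(to_min))
    angles.append(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
    x = x + step

print(round(max(angles), 1))      # worst direction error, roughly 67 degrees
print(np.linalg.norm(x) < 1e-5)   # yet the iterates still converge: True
```

The worst-case angle matches the "almost 68 degrees" figure quoted above, even though the iterates converge to the minimum.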
Stochastic gradient descent
SGD has the same properties, with the exception that the updates are noisy, implying that the contour surface looks different from one iteration to the next, and therefore the gradients are also different. This implies that the angle between the direction of the gradient step and the optimum will also have noise - just imagine the same plots with some jitter.
More information:
Can we apply analyticity of a neural network to improve upon gradient descent?
Why are second-order derivatives useful in convex optimization?
How can change in cost function be positive?
This answer borrows this example and figure from Neural Networks Design (2nd Ed.) Chapter 9 by Martin T. Hagan, Howard B. Demuth, Mark Hudson Beale, Orlando De Jesús.
|
9,002
|
For convex problems, does gradient in Stochastic Gradient Descent (SGD) always point at the global extreme value?
|
The locally steepest direction is not the same as the direction toward the global optimum. If it were, your gradient direction would never change: if each step took you straight toward the optimum, the direction vector would always point at the optimum. But that's not the case -- and if it were, why bother recomputing the gradient at every iteration?
|
9,003
|
For convex problems, does gradient in Stochastic Gradient Descent (SGD) always point at the global extreme value?
|
The other answers highlight some annoying rate-of-convergence issues for GD/SGD, but your comment "SGD can eventually converge..." isn't always correct (ignoring pedantic usage remarks about the word "can" since it seems you meant "will").
One nice trick for finding counter-examples with SGD is to notice that if every data point is the same, your cost function is deterministic. Imagine the extremely pathological example where we have one data point $$(x_0,y_0)=(1,0)$$ and we have a model for how our system should work based on a single parameter $\alpha$ $$f(x,\alpha)=\sqrt{\alpha^2-\alpha x}.$$
With MSE as our cost function, this simplifies to $$(f(x_0,\alpha)-y_0)^2=\alpha^2-\alpha,$$ a convex function. Suppose we choose our learning rate $\beta$ poorly so that our update rule is as follows: $$\alpha_{n+1}=\alpha_n-\beta(2\alpha_n-1)=\alpha_n-(2\alpha_n-1)=1-\alpha_n.$$ Now, our cost function has a minimum at $\alpha=\frac12$, but if we start literally anywhere other than $p=\frac12$ then SGD will simply cycle between the starting point $p$ and $1-p$ and never converge.
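A two-line check (my sketch, not part of the original answer) makes the cycling visible: iterating $\alpha_{n+1}=1-\alpha_n$ from any start other than $\frac12$ just flips between $p$ and $1-p$ forever.

```python
p = 0.8                         # any starting point other than 1/2
traj = [p]
for _ in range(10):
    traj.append(1 - traj[-1])   # the update rule alpha_{n+1} = 1 - alpha_n

# The iterates form a 2-cycle between p and 1-p; they never reach 1/2
print(traj[0], traj[2], traj[4])               # all equal to p
print(min(abs(a - 0.5) for a in traj) > 0.25)  # True: never near the minimum
```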
I'm not sure whether convexity alone is enough to rule out some worse behavior that exists for general SGD, but if you allow functions even as complex as cubics for your cost function then SGD can bounce around on a dense subset of the domain and never converge anywhere or approach any cycle.
SGD can also approach/obtain cycles of any finite length, diverge toward $\infty$, oscillate toward $\pm\infty$ (excuse the notation), and have tons of other pathological behavior.
One interesting thing about the whole situation is that there exist uncountably many functions (like SGD) which take arbitrary convex functions as inputs and then output an update rule which always quickly converges to the global minimum (if one exists). Even though conceptually there exist loads of them, our best attempts at convex optimization all have pathological counterexamples. Somehow the idea of a simple/intuitive/performant update rule runs counter to the idea of a provably correct update rule.
|
9,004
|
For convex problems, does gradient in Stochastic Gradient Descent (SGD) always point at the global extreme value?
|
Maybe the answers to this question need a quick update. It seems like SGD yields a global minimum also in the non-convex case (convex is just a special case of that):
SGD Converges To Global Minimum In Deep Learning via Star-Convex Path, Anonymous authors, Paper under double-blind review at ICLR 2019
https://openreview.net/pdf?id=BylIciRcYQ
The authors establish the convergence of SGD to a global minimum for nonconvex optimization problems that are commonly encountered in neural network training. The argument exploits the following two important properties: 1) the training loss can achieve zero value (approximately); 2) SGD follows a star-convex path. In such a context, although SGD has long been considered a randomized algorithm, the paper reveals that it converges in an intrinsically deterministic manner to a global minimum.
This should be taken with a grain of salt though. The paper is still under review.
The notion of a star-convex path gives a hint about where the gradient would point at each iteration.
|
9,005
|
Sample two numbers from 1 to 10; maximize the expected product
|
Hint: Note the relationship between $E[XY]$ and the covariance. Now think about the sign of the covariance - or if you prefer it in those terms, the sign of the correlation will work - under the two sampling schemes (it's zero under one but clearly not under the other, noting that we're here taking $X$ and $Y$ as the values on the two draws). The solution to maximizing $E[XY]$ is immediate.
This sounds like the sort of thing one might encounter in one of those interviews where they ask you some odd question and see what you do with it -- there's typically a shortcut that avoids explicit calculation, and this one definitely has a shortcut. Realizing the connection between $E[XY]$ and $\text{Cov}[X,Y]$, and then the connection to the two sampling methods, one should be able to answer it in a matter of seconds, and justify it.
It seems further explanation may be helpful. These are the ideas involved.
Unconditional expectations are unchanged whether you use with replacement or without replacement. That is $E[Y]=E[X]$* under both schemes.
If you sample without replacement, the covariance must be negative: when you draw a value below the population mean, there are more values above the population mean than below it left for the second draw, and vice versa. That is, the second value is more likely to be on the opposite side of the mean from the first value than on the same side, in a way that makes the covariance negative in this case.
$E[XY] = E[X]E[Y]+\text{Cov}[X,Y]$
With replacement, covariance is 0 and $E[XY]=E[X]E[Y]$
Without replacement, covariance is <0 and $E[XY] < E[X]E[Y]$
* If this doesn't seem obvious, consider the following thought experiment: I take a deck of cards numbered 1 to 10 and thoroughly shuffle them, placing the deck face down on the table. Person A will take the first card and person B the second card. But now person B asks that we extend the shuffling a little further and interchange the positions of the top two cards. Clearly that last step doesn't change the distributions experienced by A and B (the additional mixing step doesn't make it any less random). So B must experience the same distribution under both schemes, and thereby B has the same (unconditional) distribution as A does -- it doesn't matter if you take the first card or the second card. Plainly, then, $E[Y]=E[X]$.
(Naturally, however, if B observes what A got before drawing, B's conditional distribution and hence the conditional expectation $E[Y|X=x]$ is impacted by the specific value of $x$. This is not the situation we're dealing with, since we were trying to find the unconditional expectation $E[Y]$.)
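These facts are easy to verify by enumeration; the following sketch (mine, not part of the original answer) computes $E[XY]$ under both schemes for the values 1 to 10.

```python
from itertools import product

vals = range(1, 11)
# All ordered pairs: with replacement keeps the x == y pairs, without drops them
with_repl = [x * y for x, y in product(vals, repeat=2)]
without_repl = [x * y for x, y in product(vals, repeat=2) if x != y]

e_x = sum(vals) / 10                               # E[X] = E[Y] = 5.5
e_with = sum(with_repl) / len(with_repl)           # E[XY] with replacement
e_without = sum(without_repl) / len(without_repl)  # E[XY] without replacement

print(e_with)               # 30.25 = E[X] * E[Y]: covariance is zero
print(round(e_without, 4))  # 29.3333, smaller: covariance is negative
print(round(e_without - e_x * e_x, 4))  # the (negative) covariance
```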
|
9,006
|
Sample two numbers from 1 to 10; maximize the expected product
|
If you don't get to the smart covariance trick by Glen B, then you could also consider the following approach, which is one level of abstraction lower.
Step 1: consider computing the hard way by adding all 10 by 10 terms from a table $$\frac{1}{100}\sum_{x_1=1}^{10}\sum_{x_2=1}^{10} x_1 \cdot x_2 = E[X]^2 = 5.5^2$$
$$\begin{array}{cccccccccc}
1&2&3&4&5&6&7&8&9&10\\
2&4&6&8&10&12&14&16&18&20\\
3&6&9&12&15&18&21&24&27&30\\
4&8&12&16&20&24&28&32&36&40\\
5&10&15&20&25&30&35&40&45&50\\
6&12&18&24&30&36&42&48&54&60\\
7&14&21&28&35&42&49&56&63&70\\
8&16&24&32&40&48&56&64&72&80\\
9&18&27&36&45&54&63&72&81&90\\
10&20&30&40&50&60&70&80&90&100
\end{array}$$
Step 2: without replacement you get a similar sum but without the terms where $x_1=x_2$.
$$\begin{array}{cccccccccc}
&2&3&4&5&6&7&8&9&10\\
2&&6&8&10&12&14&16&18&20\\
3&6&&12&15&18&21&24&27&30\\
4&8&12&&20&24&28&32&36&40\\
5&10&15&20&&30&35&40&45&50\\
6&12&18&24&30&&42&48&54&60\\
7&14&21&28&35&42&&56&63&70\\
8&16&24&32&40&48&56&&72&80\\
9&18&27&36&45&54&63&72&&90\\
10&20&30&40&50&60&70&80&90&
\end{array}$$
Then a trick to answer the question is to analyse only how this diagonal differs from the average of the previous step. The question is whether the average of the diagonal, $E[X^2]$, is larger or smaller than $E[X]^2 = 5.5^2$. Since $E[X^2] = E[X]^2 + \text{Var}[X] > E[X]^2$, the diagonal makes a more-than-average contribution, and leaving it out reduces the expectation of the outcome (thus you get the higher result when you use replacement and keep the diagonal in the draw).
Here is a simple way to compute it with brute force in R
n = 10
### generate a matrix of the outer product
M = outer(1:n,1:n, FUN = "*")
### make a second matrix, with the diagonal dropped (set to NA)
M2 = M
diag(M2) = NA
### with replacement 30.25
mean(M)
### without replacement (no diagonal) 29.3333
mean(M2, na.rm = TRUE)
|
9,007
|
Sample two numbers from 1 to 10; maximize the expected product
|
Here is an intuitive approach to the problem. Suppose the first number we pick is a 1 - we'd obviously be better off picking the second number without replacement, in order to eliminate the chance of getting another 1. Suppose the first number we pick is a 10 - we'd obviously be better off picking the second number with replacement, to allow for the possibility of getting another 10. You can extend this logic to see that if we pick a number above the average of 5.5, we would prefer to pick with replacement, but if we pick below the average, we would prefer to pick without replacement.
There are 5 numbers above average and 5 below, so an equal number of instances where we'd prefer one strategy versus the other, all of which are equiprobable. But the difference is in the value of the potential product. If the first number is a 1, it doesn't really matter a whole lot what the second number is, since the score is multiplied by just 1 - with replacement, the score will be between 1 and 10, instead of between 2 and 10 without. But if the first number is a 10, there is a greater impact due to the fact that it is multiplied by 10, giving us a possible score between 10 and 100 with replacement, instead of 10 to 90 without. If we adopt the "with replacement" strategy, the "gain" from getting double 10's is larger than the "loss" we get from picking double 1's. You can extend this symmetrically to see that the gain from double 9's is greater than the loss from double 2's, and so forth. We always gain more at the upper end with the "with replacement" strategy than we lose at the lower end, so the "with replacement" strategy is preferable overall.
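The pairing argument can be made quantitative (my derivation, not from the original answer). Conditional on the first draw being $k$, switching to "with replacement" changes the expected product by $k \cdot 5.5 - k(55-k)/9 = k(k-5.5)/9$: a loss for $k < 5.5$ and a gain for $k > 5.5$. Summed over the pair $(k, 11-k)$ this is $2(k-5.5)^2/9 \ge 0$, so each gain at the top outweighs the matching loss at the bottom.

```python
# Change in E[product | first draw = k] when switching to "with replacement":
# second draw is uniform on 1..10 instead of uniform on the other 9 values.
def gain(k):
    with_repl = k * 5.5               # E[k * Y], Y uniform on 1..10
    without_repl = k * (55 - k) / 9   # E[k * Y], Y uniform on the rest
    return with_repl - without_repl

for low in range(1, 6):
    high = 11 - low
    # loss at the low number, gain at the paired high one, and their positive sum
    print(low, gain(low), high, gain(high), gain(low) + gain(high))
```

Averaging `gain(k)` over all ten values of $k$ recovers the overall difference between the two schemes, $30.25 - 29.33\ldots \approx 0.917$.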
|
9,008
|
Sample two numbers from 1 to 10; maximize the expected product
|
If you're better at programming than maths, I believe there's a reasonably simple way to get this by brute force as well.
It provides a slightly different intuition: $E[XY]$ is higher without replacement when $X < \text{mean}(X)$, because removing a low value of $X$ increases $E[Y]$; it is lower, and by a greater amount, when $X > \text{mean}(X)$. A little thought shows that this asymmetry occurs because we're looking at the product, e.g. it wouldn't happen for $E[X + Y]$.
library(tidyverse)
vals = 1:10
# Calculate E[XY|x] conditional on each possible value of X
no_replace = map_dbl(vals, function(x) mean(x * vals[vals != x]))
with_replace = map_dbl(vals, function(x) mean(x * vals))
# Take averages over X values to get E[XY]
mean(no_replace) # 29.33333
mean(with_replace) # 30.25
# Plot
df = data.frame(x = vals,
                `Replacement` = with_replace,
                `Without replacement` = no_replace,
                check.names = FALSE)  # keep the column name with a space
df %>% pivot_longer(-x) %>%
ggplot(aes(x, value, color = name)) +
geom_point() +
geom_path() +
geom_vline(linetype = 'dashed', xintercept = mean(vals)) +
scale_x_continuous(breaks = vals) +
labs(x = 'X', y = 'E[XY]', color = 'Method')
|
9,009
|
Sample two numbers from 1 to 10; maximize the expected product
|
There are three facts that led me to the conclusion that replacement gives a higher expected value:
1. Products result in right-skewed distributions. When you multiply numbers together, you tend to get results clustered mostly in small numbers, with a few large results.
2. Among right-skewed distributions, increasing the variance tends to increase the mean. Since extremes above the median are more extreme than those below it, getting more extreme increases the result on average. For instance, in the question you present, we can quickly estimate the median as being somewhere around $5\times6$ (the product of the two middle numbers). The minimum is $1\times1$, and the maximum is $10\times10$. So getting an extremely low number costs us at most $30$, while an extremely high number can give us as much as $70$ over the median.
3. Allowing replacement increases the variance. With replacement, you can get something consisting entirely of the most extreme numbers. Without replacement, you can only get one instance of the maximum and the other one can be at most second highest, and similarly for the minimum.
This isn't a rigorous proof, but understanding how asymmetry, variance, and replacement interact is important for working with statistics.
As a side note, I don't think this is really game theory. If simply phrasing it as a choice between options made it game theory, then all optimization problems would be game theory (and pretty much everything can be phrased as an optimization problem).
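The three facts can be checked numerically over all ordered pairs. A quick Python sketch (my own verification, not part of the original answer):

```python
from itertools import product
from statistics import mean, median, pvariance

vals = range(1, 11)
with_rep = [i * j for i, j in product(vals, repeat=2)]          # replacement
no_rep = [i * j for i, j in product(vals, repeat=2) if i != j]  # no replacement

# Fact 1: right skew -- the mean of the products sits above the median.
print(mean(with_rep) > median(with_rep))        # True

# Fact 3: replacement widens the spread of possible products...
print(pvariance(with_rep) > pvariance(no_rep))  # True

# ...and, per fact 2, that extra spread lifts the mean.
print(mean(with_rep), mean(no_rep))             # 30.25 vs about 29.33
```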
|
9,010
|
Sample two numbers from 1 to 10; maximize the expected product
|
This is a different shortcut than Glen_B's -- but of course they must be related at some level. It all comes down to the fact that squares of numbers are non-negative.
Let there be $n$ ($10$ in this instance) objects $i_1, i_2, \ldots, i_n$ in the box bearing the numbers $x_1, x_2, \ldots, x_n,$ respectively. Write $j$ for the index of the first object drawn and $k$ for the index of the second; and let $X_1 = x_j,$ $X_2 = x_k$ be the corresponding random values.
The expectation of the product $X_1X_2$ with replacement is easy to find because the two values $X_1$ and $X_2$ are independent and identically distributed (that's what "with replacement" means) and therefore, writing $\overline{x} = (x_1 + \cdots + x_n)/n,$
$$E[X_1X_2] = E[X_1]E[X_2] = \overline{x}^2.$$
The key idea is that the joint distribution of $(X_1,X_2)$ with replacement can be expressed as a mixture of the distribution conditional on the events $\mathcal{E}_\lt: j \lt k,$ $\mathcal{E}_\gt: j \gt k,$ and $\mathcal{E}_=: j = k.$
The conditional expectation of $X_1X_2$ for the first two events (red and blue in the figure) is the same as sampling without replacement, because in both cases the probability is uniform on the set of distinct unordered pairs $\{j, k\}.$ The conditional expectation in the third case (the white diagonal strip in the figure) is that of $X_1^2$ because $X_1 = X_2.$
Many famous inequalities -- Cauchy-Schwarz, Jensen's, etc. -- tell us that for any random variable $X,$
$$E[X^2] \ge E[X]^2$$
with equality if and only if $X$ is almost surely constant. A statistical proof observes that the expectation of any squared random variable, such as the residual $X - \overline{X},$ cannot be negative:
$$0 \le E[(X - \overline{X})^2] = \operatorname{Var}(X) = E[X^2] - E[X]^2,$$
which is Glen_B's point of departure.
Since, when sampling with replacement, the expectation of $X_1X_2$ is the probability-weighted sum of these three conditional expectations, and one of them exceeds the without-replacement expectation, sampling with replacement has a greater expectation than sampling without replacement.
The inequality reduces to an equality if either (a) $X_1$ or (equivalently) $X_2$ is almost surely constant or (b) $\Pr(X_1 = X_2) = 0,$ which occurs only when the $X_i$ have a continuous distribution (never the case for finite $n$).
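The mixture decomposition can be verified exactly for $n = 10$. A small Python check in exact rational arithmetic (my verification, not whuber's code):

```python
from itertools import product
from fractions import Fraction

n = 10
vals = [Fraction(v) for v in range(1, n + 1)]

e_with = sum(a * b for a, b in product(vals, repeat=2)) / n**2
e_without = sum(a * b for a, b in product(vals, repeat=2) if a != b) / (n * (n - 1))
e_sq = sum(v * v for v in vals) / n  # E[X_1^2], the j = k case

# With replacement, P(j != k) = (n-1)/n and P(j = k) = 1/n.
mixture = Fraction(n - 1, n) * e_without + Fraction(1, n) * e_sq

print(mixture == e_with)            # True: the mixture identity holds exactly
print(e_sq > (sum(vals) / n) ** 2)  # True: E[X^2] > E[X]^2, hence the strict gap
```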
|
9,011
|
Sample two numbers from 1 to 10; maximize the expected product
|
I think Glen_b's answer is wonderful intuition that you need only consider the covariance, as the unconditional expectations are the same. Below I prove the unconditional expectations are the same (i.e. $EX = EY$):
$$
\begin{align*}
EX &= \dfrac{1}{n}\sum_{i=1}^n i = \dfrac{n(n+1)}{2n} = \dfrac{(n+1)}{2}\\
E[Y|X=j] &= \dfrac{1}{n-1}\sum_{i \in \{1,\dots,n\} \setminus \{j\}} i = \dfrac{1}{n-1}\left(\dfrac{n(n+1)}{2} - j\right)\\
EY &= E[E[Y|X]]\\
&= \dfrac{1}{n}\sum_{j=1}^n \dfrac{1}{n-1}\left(\dfrac{n(n+1)}{2} - j\right)\\
&= \dfrac{n(n+1)}{2(n-1)} - \dfrac{n+1}{2(n-1)} \\
&= \dfrac{(n+1)(n-1)}{2(n-1)} \\
&= \dfrac{(n+1)}{2}
\end{align*}
$$
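A quick numerical check of the tower-property computation (a Python sketch of my own, using exact fractions):

```python
from fractions import Fraction

n = 10

def e_y_given_x(j):
    # Y is uniform on {1, ..., n} \ {j} once X = j has been removed.
    return Fraction(sum(i for i in range(1, n + 1) if i != j), n - 1)

# Tower property: E[Y] = E[ E[Y | X] ], averaging over the n values of X.
e_y = sum(e_y_given_x(j) for j in range(1, n + 1)) / n
print(e_y)  # 11/2, i.e. (n + 1)/2
```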
|
9,012
|
Sample two numbers from 1 to 10; maximize the expected product
|
Don't know if this counts as simpler than doing direct computation (you have to know Gauss's formula for the sum of the first $n$ numbers), but you can prove your guess by induction on the number of randomly drawn numbers $n$ (and then, your case follows from the special case $n=10$).
So, we have to prove that for each $n \in \mathbb{N}$ with $n \ge 2$ it holds that
$$\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i\neq j}}^n i j < \frac{1}{n^2}\sum_{i,j=1}^n ij\;,$$
and since, by straightforward computations, we have that
$$ \frac{1}{n^2}\sum_{i,j=1}^n ij = \frac{(n+1)^2}{4} \;, $$
everything boils down to proving that
$$\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i\neq j}}^n i j < \frac{(n+1)^2}{4}\;.$$
The base case $n=2$ is clear. Assuming we have proven the claim for $n \in \mathbb{N}$ with $n \ge2$, we can prove that the claim is true also for $n+1$ in the following way:
\begin{align*}
\frac{1}{(n+1)n}\sum_{\substack{i,j=1 \\ i\neq j}}^{n+1} i j
&=
\frac{1}{(n+1)n}\sum_{\substack{i,j=1 \\ i\neq j}}^n ij + \frac{2}{n} \sum_{k=1}^n k
=
\frac{1}{(n+1)n}\sum_{\substack{i,j=1 \\ i\neq j}}^n ij + (n+1)
\\
&=
\frac{n-1}{n+1} \frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i\neq j}}^n ij + (n+1)
\\
&<
\frac{n-1}{n+1} \frac{(n+1)^2}{4} + (n+1) = \frac{(n+1)(n+3)}{4} < \frac{(n+2)^2}{4} \;,
\end{align*}
where the first inequality follows from the induction hypothesis and the second from the fact that, among rectangles of a given perimeter, the square is the one with the largest area.
As a side note, observe that as a corollary of the previous result we can recycle the proof above to establish the stronger claim that, for each $n \in \mathbb{N}$ such that $n \ge 2$, we have
$$ \frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i\neq j}}^n i j \le \frac{n(n+2)}{4} \;,$$
with strict inequality as long as $n \ge 3$ (base case $n =2$ by direct computation, and then the same proof as before ending just before the last inequality).
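Both inequalities are easy to confirm by machine for small $n$. A Python sketch (mine, using Gauss's formula for the double sum):

```python
from fractions import Fraction

def avg_product_without(n):
    # (1 / (n(n-1))) * sum_{i != j} i*j = ((sum i)^2 - sum i^2) / (n(n-1))
    s = n * (n + 1) // 2
    sq = sum(i * i for i in range(1, n + 1))
    return Fraction(s * s - sq, n * (n - 1))

for n in range(2, 50):
    lhs = avg_product_without(n)
    assert lhs < Fraction((n + 1) ** 2, 4)               # the main claim
    assert lhs <= Fraction(n * (n + 2), 4)               # the corollary
    assert (lhs < Fraction(n * (n + 2), 4)) == (n >= 3)  # strict from n = 3
print("both bounds hold for n = 2, ..., 49")
```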
|
9,013
|
Sample two numbers from 1 to 10; maximize the expected product
|
I would pick 10 twice; this results in 100, which is the largest possible product of two numbers between 1 and 10.
|
9,014
|
Sample two numbers from 1 to 10; maximize the expected product
|
There are two states for each draw: either it is the new (largest) number $n$ or it is not. With replacement there are four combinations; without replacement there are three: $(0, 0), (0, 1), (1, 0)$.
The last two each have probability $\frac{1\cdot(n-1)}{n(n-1)} = \frac{1}{n}$, and conditional on either of them the expected product is $n\cdot\frac{(n-1)+1}{2}$; multiplying and simplifying gives $n/2$ for each, or $n$ in total.
Let $e_n$ be the replacement expectation and $r_n$ the expectation without replacement:
$$
r_n = \frac{(n-1)(n-2)}{n(n-1)} r_{n-1} + 2\frac{n}{2} = (1 - \frac{2}{n}) r_{n-1} + n
$$
so $r_n = \frac{1}{12} (n + 1) (3n + 2)$. Since $e_n = \frac{(n+1)^2}{4}$,
$$ e_n - r_n = \frac{n^2 + 2n + 1}{4} - \frac{n^2}{4} -\frac{5n}{12} - \frac{1}{6} = \frac{n+1}{12}>0.
$$
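The recurrence, its closed form, and the gap $\frac{n+1}{12}$ can all be checked exactly. A Python sketch (mine):

```python
from fractions import Fraction

def r(n):
    # E[XY] without replacement from {1, ..., n}, by direct summation.
    s = n * (n + 1) // 2
    sq = sum(i * i for i in range(1, n + 1))
    return Fraction(s * s - sq, n * (n - 1))

for n in range(3, 30):
    assert r(n) == (1 - Fraction(2, n)) * r(n - 1) + n              # recurrence
    assert r(n) == Fraction((n + 1) * (3 * n + 2), 12)              # closed form
    assert Fraction((n + 1) ** 2, 4) - r(n) == Fraction(n + 1, 12)  # e_n - r_n
print("recurrence, closed form, and gap verified for n = 3, ..., 29")
```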
|
9,015
|
What's a good way to use R to make a scatterplot that separates the data by treatment?
|
Large clusters: if overprinting is a problem, you could either use a lower alpha, so single points are dim but overprinting makes a more intense colour, or you could switch to 2d histograms or density estimates.
require ("ggplot2")
ggplot (iris, aes (x = Sepal.Length, y = Sepal.Width, colour = Species)) + stat_density2d ()
You'd probably want to facet this...
ggplot (iris, aes (x = Sepal.Length, y = Sepal.Width, fill = Species)) + stat_binhex (bins=5, aes (alpha = ..count..)) + facet_grid (. ~ Species)
While you can produce this plot without facets as well, the printing order of the Species influences the final picture.
You can avoid this if you're willing to get your hands a bit dirty (= link to explanation & code) and calculate mixed colours for the hexagons:
Another useful thing is to use (hex)bins for high density areas, and plot single points for other parts:
ggplot (df, aes (x = date, y = t5)) +
stat_binhex (data = df [df$t5 <= 0.5,], bins = nrow (df) / 250) +
geom_point (data = df [df$t5 > 0.5,], aes (col = type), shape = 3) +
scale_fill_gradient (low = "#AAAAFF", high = "#000080") +
scale_colour_manual ("response type",
values = c (normal = "black", timeout = "red")) +
ylab ("t / s")
For the sake of completeness of the plotting packages, let me also mention lattice:
require ("lattice")
xyplot(Sepal.Width ~ Sepal.Length | Species, iris, pch= 20)
xyplot(Sepal.Width ~ Sepal.Length, iris, groups = iris$Species, pch= 20)
xyplot(Sepal.Width ~ Sepal.Length | Species, iris, groups = iris$Species, pch= 20)
|
9,016
|
What's a good way to use R to make a scatterplot that separates the data by treatment?
|
This is one of the classic problems for the 'Iris' data set. This is a link to a whole set of plotting projects based on that data set with R code, which you may be able to adapt to your problem.
Here is an approach that uses base R rather than an add-on package.
plot(iris$Petal.Length, iris$Petal.Width, pch=21,
bg=c("red","green3","blue")[unclass(iris$Species)],
main="Edgar Anderson's Iris Data")
which produces this figure:
From there, depending on your plot, you can start messing about with alpha/transparency levels to allow for overplotting, etc. but I would build up from a very basic graph first.
While there are many reasons to stick with base R, other packages simplify plotting. Separating out data by a distinguishing feature is one of the strengths of the ggplot2 and lattice packages. ggplot2 makes particularly visually appealing plots. Both packages are demonstrated in the answer by @cbeleites.
|
9,017
|
What's a good way to use R to make a scatterplot that separates the data by treatment?
|
Or with ggplot2:
ggplot(iris, aes(x = Sepal.Length, y = Sepal.Width, colour = Species)) + geom_point()
ggplot(iris, aes(x = Sepal.Length, y = Sepal.Width)) + geom_point() + facet_grid(~Species)
Which produces
|
9,018
|
What theories should every statistician know?
|
Frankly, I don't think the law of large numbers has a huge role in industry. It is helpful to understand the asymptotic justifications of the common procedures, such as maximum likelihood estimates and tests (including the all-important GLMs and logistic regression in particular) and the bootstrap, but these are distributional issues rather than issues of the probability of hitting a bad sample.
Beyond the topics already mentioned (GLM, inference, bootstrap), the most common statistical model is linear regression, so a thorough understanding of the linear model is a must. You may never run ANOVA in your industry life, but if you don't understand it, you should not be called a statistician.
There are different kinds of industries. In pharma, you cannot make a living without randomized trials and logistic regression. In survey statistics, you cannot make a living without Horvitz-Thompson estimator and non-response adjustments. In computer science related statistics, you cannot make a living without statistical learning and data mining. In public policy think tanks (and, increasingly, education statistics), you cannot make a living without causality and treatment effect estimators (which, increasingly, involve randomized trials). In marketing research, you need to have a mix of economics background with psychometric measurement theory (and you can learn neither of them in a typical statistics department offerings). Industrial statistics operates with its own peculiar six sigma paradigms which are but remotely connected to mainstream statistics; a stronger bond can be found in design of experiments material. Wall Street material would be financial econometrics, all the way up to stochastic calculus. These are VERY disparate skills, and the term "industry" is even more poorly defined than "academia". I don't think anybody can claim to know more than two or three of the above at the same time.
The top skills, however, that would be universally required in "industry" (whatever that may mean for you) would be time management, project management, and communication with less statistically-savvy clients. So if you want to prepare yourself for industry placement, take classes in business school on these topics.
UPDATE: The original post was written in February 2012; these days (March 2014), you probably should call yourself "a data scientist" rather than "a statistician" to find a hot job in industry... and better learn some Hadoop to follow with that self-proclamation.
|
What theories should every statistician know?
|
Frankly, I don't think the law of large numbers has a huge role in industry. It is helpful to understand the asymptotic justifications of the common procedures, such as maximum likelihood estimates an
|
What theories should every statistician know?
Frankly, I don't think the law of large numbers has a huge role in industry. It is helpful to understand the asymptotic justifications of the common procedures, such as maximum likelihood estimates and tests (including the omniimportant GLMs and logistic regression, in particular), the bootstrap, but these are distributional issues rather than probability of hitting a bad sample issues.
Beyond the topics already mentioned (GLM, inference, bootstrap), the most common statistical model is linear regression, so a thorough understanding of the linear model is a must. You may never run ANOVA in your industry life, but if you don't understand it, you should not be called a statistician.
There are different kinds of industries. In pharma, you cannot make a living without randomized trials and logistic regression. In survey statistics, you cannot make a living without Horvitz-Thompson estimator and non-response adjustments. In computer science related statistics, you cannot make a living without statistical learning and data mining. In public policy think tanks (and, increasingly, education statistics), you cannot make a living without causality and treatment effect estimators (which, increasingly, involve randomized trials). In marketing research, you need to have a mix of economics background with psychometric measurement theory (and you can learn neither of them in a typical statistics department offerings). Industrial statistics operates with its own peculiar six sigma paradigms which are but remotely connected to mainstream statistics; a stronger bond can be found in design of experiments material. Wall Street material would be financial econometrics, all the way up to stochastic calculus. These are VERY disparate skills, and the term "industry" is even more poorly defined than "academia". I don't think anybody can claim to know more than two or three of the above at the same time.
The top skills, however, that would be universally required in "industry" (whatever that may mean for you) would be time management, project management, and communication with less statistically-savvy clients. So if you want to prepare yourself for industry placement, take classes in business school on these topics.
UPDATE: The original post was written in February 2012; these days (March 2014), you probably should call yourself "a data scientist" rather than "a statistician" to find a hot job in industry... and better learn some Hadoop to follow with that self-proclamation.
|
9,019
|
What theories should every statistician know?
|
I think a good understanding of the issues relating to the bias-variance tradeoff. Most statisticians will end up, at some point, analysing a dataset that is small enough for the variance of an estimator or the parameters of the model to be sufficiently high that bias is a secondary consideration.
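A small Python sketch (my illustration; the shrinkage estimator and all numbers are invented) makes the tradeoff concrete: on a small sample, a biased estimator of a mean can beat the unbiased one on mean squared error because its variance reduction outweighs its squared bias.

```python
import random
import statistics

random.seed(0)
mu, sigma, n, reps = 2.0, 5.0, 10, 20000

def mse(shrink):
    """Monte Carlo MSE of the estimator shrink * sample_mean."""
    errs = []
    for _ in range(reps):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        est = shrink * statistics.mean(sample)
        errs.append((est - mu) ** 2)
    return statistics.mean(errs)

# With n this small, the shrunk (biased) estimator wins on MSE:
# the variance saving outweighs the squared bias it introduces.
print(f"unbiased (shrink=1.0): MSE ~ {mse(1.0):.2f}")  # ~ sigma^2/n = 2.5
print(f"biased   (shrink=0.7): MSE ~ {mse(0.7):.2f}")
```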
|
9,020
|
What theories should every statistician know?
|
To point out the super obvious one:
Central Limit Theorem
since it allows practitioners to approximate $p$-values in many situations where getting exact $p$-values is intractable. Along those same lines, any successful practitioner would be well served to be familiar, in general, with
Bootstrapping
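As a sketch of the bootstrap in practice (a Python illustration; the data and statistic are my choices, not from the answer), here is a percentile bootstrap interval for a median, a statistic whose exact sampling distribution is awkward to derive:

```python
import random
import statistics

random.seed(1)
# A skewed sample whose median has no convenient exact sampling distribution.
data = [random.expovariate(1.0) for _ in range(50)]

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    reps = sorted(
        stat([random.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

lo, hi = bootstrap_ci(data, statistics.median)
print(f"95% bootstrap CI for the median: ({lo:.3f}, {hi:.3f})")
```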
|
9,021
|
What theories should every statistician know?
|
I wouldn't say this is very similar to something like the law of large numbers or the central limit theorem, but because making inferences about causality is often central, understanding Judea Pearl's work on using structured graphs to model causality is something people should be familiar with. It provides a way to understand why experimental and observational studies differ with respect to the causal inferences they afford, and offers ways to deal with observational data. For a good overview, his book is here.
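A hypothetical simulation (the model and numbers are mine, not Pearl's) of why the two kinds of study differ: a confounder drives both treatment and outcome, so the naive observational contrast overstates the true effect that randomization recovers.

```python
import random
import statistics

random.seed(2)
TRUE_EFFECT = 1.0
N = 50000

def outcome(treated, z):
    return TRUE_EFFECT * treated + 2.0 * z + random.gauss(0, 1)

# Observational data: confounder z raises BOTH treatment probability and outcome.
obs = []
for _ in range(N):
    z = random.random()
    t = 1 if random.random() < z else 0
    obs.append((t, outcome(t, z)))
naive = (statistics.mean(y for t, y in obs if t == 1)
         - statistics.mean(y for t, y in obs if t == 0))

# Experimental data: randomizing t cuts the z -> treatment arrow in the graph.
exp = []
for _ in range(N):
    z = random.random()
    t = random.randint(0, 1)
    exp.append((t, outcome(t, z)))
randomized = (statistics.mean(y for t, y in exp if t == 1)
              - statistics.mean(y for t, y in exp if t == 0))

print(f"naive observational contrast: {naive:.2f}")       # biased upward
print(f"randomized contrast:          {randomized:.2f}")  # ~ TRUE_EFFECT
```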
|
9,022
|
What theories should every statistician know?
|
A solid understanding of the substantive problem to be addressed is as important as any particular statistical approach. A good scientist in the industry is more likely than a statistician without such knowledge to come to a reasonable solution to their problem. A statistician with substantive knowledge can help.
|
9,023
|
What theories should every statistician know?
|
The Delta-Method, how to calculate the variance of bizarre statistics and find their asymptotic relative efficiency, to recommend changes of variable and explain efficiency boosts by "estimating the right thing". In conjunction with that, Jensen's Inequality for understanding GLMs and strange kinds of bias which arise in transformations like above. And, now that bias and variance are mentioned, the concept of the bias-variance trade-off and MSE as an objective measure of predictive accuracy.
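A quick numeric check of the delta method (a Python sketch with invented parameters): for $g = \log$, it predicts $\operatorname{Var}(\log \bar X) \approx g'(\mu)^2 \operatorname{Var}(\bar X) = \sigma^2/(n\mu^2)$, which a simulation confirms.

```python
import math
import random
import statistics

random.seed(3)
mu, sigma, n, reps = 10.0, 2.0, 50, 10000

# Delta method with g = log: Var(g(Xbar)) ~ g'(mu)^2 Var(Xbar) = sigma^2/(n mu^2).
delta_approx = sigma**2 / (n * mu**2)

logs = []
for _ in range(reps):
    xbar = statistics.mean(random.gauss(mu, sigma) for _ in range(n))
    logs.append(math.log(xbar))
simulated = statistics.variance(logs)

print(f"delta-method variance: {delta_approx:.6f}")
print(f"simulated variance:    {simulated:.6f}")
```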
|
9,024
|
What theories should every statistician know?
|
In my view, statistical inference is the most important thing for a practitioner. Inference has two parts: 1) estimation and 2) hypothesis testing. Hypothesis testing is the important one, since estimation mostly follows a single procedure, maximum likelihood estimation, which is available in most statistical packages (so there is little confusion).
Practitioners' questions frequently concern significance testing of differences or causation analysis. Important hypothesis tests can be found in this link.
Knowing about linear models, GLMs, or statistical modelling in general is required for causal interpretation. I assume the future of data analysis includes Bayesian inference.
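As a concrete illustration of hypothesis testing (my example, not from the answer), here is a simple permutation test of a difference in means in Python; it needs no distributional tables, only the exchangeability of labels under the null:

```python
import random
import statistics

random.seed(4)
# Two hypothetical groups; is the difference in their means significant?
a = [random.gauss(0.0, 1.0) for _ in range(30)]
b = [random.gauss(0.8, 1.0) for _ in range(30)]
observed = statistics.mean(b) - statistics.mean(a)

# Permutation test: under H0 the group labels are exchangeable, so we
# compare the observed difference with differences under random relabelings.
pooled = a + b
n_perm = 5000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[30:]) - statistics.mean(pooled[:30])
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / n_perm

print(f"observed difference: {observed:.2f}, two-sided p-value: {p_value:.4f}")
```

Whether the p-value comes out small depends on the particular draw; the point is the mechanics of the test.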
|
9,025
|
What theories should every statistician know?
|
Causal inference is a must, as is knowing how to address its fundamental problem: you can't go back in time and not give someone a treatment. Read articles about Rubin, and about Fisher, the founder of modern statistics.
What to learn to address this problem: proper randomisation and how the law of large numbers says things are properly randomised; hypothesis testing; potential outcomes (which hold up under heteroscedasticity and handle missingness well);
matching (also good for missingness, but potential outcomes are better because they are more general — why learn a ton of complicated things when you can learn just one complicated thing);
the bootstrap;
Bayesian statistics, of course (Bayesian regression, naive Bayes, Bayes factors); and
nonparametric alternatives.
In practice, just follow these general steps:
Regarding a previous comment, you should generally start with an ANOVA (random effects or fixed effects, binning continuous variables), then use a regression (which, with suitable transformations, can sometimes be as good as an ANOVA but never beat it) to see which specific treatments are significant, as opposed to doing multiple t-tests with some correction like Holm's method.
In cases where you have to predict things, use Bayesian regression.
With more than 5% missingness, use potential outcomes.
Another branch of data analytics that must be mentioned is supervised machine learning.
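A toy potential-outcomes calculation in Python (all numbers invented): each unit carries both $Y(0)$ and $Y(1)$, we only ever observe one, and randomization lets the difference in observed group means estimate the average treatment effect.

```python
import random
import statistics

random.seed(5)
N = 20000

# Potential outcomes: each unit carries both Y(0) and Y(1), but we can
# only ever observe one of them -- the fundamental problem above.
units = []
for _ in range(N):
    y0 = random.gauss(0, 1)
    y1 = y0 + 2.0            # constant treatment effect of 2
    units.append((y0, y1))
true_ate = statistics.mean(y1 - y0 for y0, y1 in units)

# Randomization: a coin flip decides which outcome we get to see.
treated, control = [], []
for y0, y1 in units:
    if random.random() < 0.5:
        treated.append(y1)   # observe Y(1)
    else:
        control.append(y0)   # observe Y(0)
estimate = statistics.mean(treated) - statistics.mean(control)

print(f"true ATE: {true_ate:.3f}, randomized estimate: {estimate:.3f}")
```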
|
9,026
|
If X and Y are uncorrelated, are X^2 and Y also uncorrelated?
|
No. A counterexample:
Let $X$ be uniformly distributed on $[-1, 1]$, $Y = X^2$.
Then $E[X]=0$ and also $E[XY]=E[X^3]=0$ ($X^3$ is an odd function), so $X,Y$ are uncorrelated.
But $E[X^2Y] = E[X^4] = E[(X^2)^2] > E[X^2]^2 = E[X^2]E[Y]$
The last inequality follows from Jensen's inequality. It also follows from the fact that
$E[(X^2)^2] - E[X^2]^2 = \operatorname{Var}(X^2) > 0$ since $X^2$ is not constant.
The problem with your reasoning is that $f_X$ might depend on $y$ and vice versa, so your penultimate equality is invalid.
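The counterexample can be checked numerically (a Python sketch; the hand-rolled correlation keeps it dependency-free): for $X$ uniform on $[-1,1]$ and $Y = X^2$, the sample correlation of $X$ and $Y$ is near zero while that of $X^2$ and $Y$ is exactly one.

```python
import math
import random

random.seed(6)
n = 100000
x = [random.uniform(-1, 1) for _ in range(n)]
y = [xi**2 for xi in x]

def corr(u, v):
    """Sample Pearson correlation of two equal-length sequences."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    suv = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    suu = sum((a - mu) ** 2 for a in u)
    svv = sum((b - mv) ** 2 for b in v)
    return suv / math.sqrt(suu * svv)

print(f"corr(X, Y)   ~ {corr(x, y):.3f}")                   # near 0: uncorrelated
print(f"corr(X^2, Y) = {corr([xi**2 for xi in x], y):.3f}")  # exactly 1
```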
|
9,027
|
If X and Y are uncorrelated, are X^2 and Y also uncorrelated?
|
Even if $\operatorname{Corr}(X,Y)=0$, not only is it possible that $X^2$ and $Y$ are correlated, but they may even be perfectly correlated, with $\operatorname{Corr}(X^2,Y)=1$:
> x <- c(-1,0,1); y <- c(1,0,1)
> cor(x,y)
[1] 0
> cor(x^2,y)
[1] 1
Or $\operatorname{Corr}(X^2,Y)=-1$:
> x <- c(-1,0,1); y <- c(-1,0,-1)
> cor(x,y)
[1] 0
> cor(x^2,y)
[1] -1
In case you cannot read R code, the first example is equivalent to considering two random variables $X$ and $Y$ with a joint distribution such that $(X,Y)$ is equally likely to be $(-1,1)$, $(0,0)$ or $(1,1)$. In the perfectly negatively correlated example, $(X,Y)$ is equally likely to be $(-1,-1)$, $(0,0)$ or $(1,-1)$.
Nevertheless, we can also construct $X$ and $Y$ such that $\operatorname{Corr}(X^2,Y)=0$, so all extremes are possible:
> x <- c(-1,-1,0,1,1); y <- c(1,-1,0,1,-1)
> cor(x,y)
[1] 0
> cor(x^2,y)
[1] 0
|
9,028
|
If X and Y are uncorrelated, are X^2 and Y also uncorrelated?
|
The error in your reasoning is that you write the following about $E[h(X,Y)]$:
$$E[h(X,Y)]=\int h(x,y) f_X(x)f_Y(y)dxdy$$
while in general
$$E[h(X,Y)]=\int h(x,y) f_{XY}(x,y)dxdy.$$
The two coincide if $f_{XY}(x,y)=f_X(x)f_Y(y)$, i.e. if $X$ and $Y$ are independent. Being uncorrelated is a necessary but not sufficient condition for being independent. So if two variables $X$ and $Y$ are uncorrelated but dependent, then $f(X)$ and $g(Y)$ may be correlated.
|
9,029
|
Bayes' Theorem Intuition
|
Although there are four components listed in Bayes' law, I prefer to think in terms of three conceptual components:
$$
\underbrace{P(B|A)}_2 = \underbrace{\frac{P(A|B)}{P(A)}}_3 \underbrace{P(B)}_1
$$
The prior is what you believed about $B$ before having encountered a new and relevant piece of information (i.e., $A$).
The posterior is what you believe (or ought to, if you are rational) about $B$ after having encountered a new and relevant piece of information.
The quotient of the likelihood divided by the marginal probability of the new piece of information indexes the informativeness of the new information for your beliefs about $B$.
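A numeric sketch of the three components (all numbers invented for illustration): the informativeness ratio is greater than one here, so observing $A$ pulls the belief in $B$ up from its prior.

```python
def posterior(prior, likelihood, marginal):
    """Bayes' rule: P(B|A) = P(A|B) P(B) / P(A)."""
    return likelihood * prior / marginal

# Invented numbers: prior P(B) = 0.3, likelihood P(A|B) = 0.8, and
# P(A) = P(A|B)P(B) + P(A|not B)P(not B) with P(A|not B) = 0.2.
prior = 0.3
likelihood = 0.8
marginal = 0.8 * 0.3 + 0.2 * 0.7   # = 0.38

ratio = likelihood / marginal       # component 3: informativeness of A
post = posterior(prior, likelihood, marginal)

print(f"informativeness ratio P(A|B)/P(A): {ratio:.3f}")  # > 1, so belief rises
print(f"posterior P(B|A): {post:.3f}")                    # up from the prior 0.3
```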
|
9,030
|
Bayes' Theorem Intuition
|
There are several good answers already, but perhaps this can add something new ...
I always think of Bayes rule in terms of the component probabilities, which can be understood geometrically in terms of the events $A$ and $B$ as pictured below.
The marginal probabilities $P(A)$ and $P(B)$ are given by the areas of the corresponding circles. All possible outcomes are represented by $P(A \cup B)=1$, corresponding to the set of events "$A$ or $B$". The joint probability $P(A \cap B)$ corresponds to the event "$A$ and $B$".
In this framework, the conditional probabilities in Bayes theorem can be understood as ratios of areas. The probability of $A$ given $B$ is the fraction of $B$ occupied by $A \cap B$, expressed as
$$P(A\vert B)=\frac{P(A \cap B)}{P(B)}$$
Similarly, the probability of $B$ given $A$ is the fraction of $A$ occupied by $A \cap B$, i.e.
$$P(B\vert A)=\frac{P(A \cap B)}{P(A)}$$
Bayes theorem is really just a mathematical consequence of the above definitions, which can be restated as
$$P(B\vert A)P(A)=P(A \cap B)=P(A\vert B)P(B)$$
I find this symmetric form of Bayes theorem to be much easier to remember. That is, the identity holds regardless of which of $P(A)$ or $P(B)$ is labelled "prior" vs. "posterior".
(Another way of understanding the above discussion is given in my answer to this question, from a more "accounting spreadsheet" point of view.)
|
9,031
|
Bayes' Theorem Intuition
|
@gung has a great answer. I would add one example to connect the intuition to a real-world case.
For a better connection with real-world examples, I would like to change the notation: I use $H$ to represent the hypothesis (the $A$ in your equation) and $E$ to represent the evidence (the $B$ in your equation).
So the formula is
$$P(H|E) = \frac{P(E|H)P(H)}{P(E)}$$
Note the same formula can be written as
$$P(H|E) \propto {P(E|H)P(H)}$$
where $\propto$ means "proportional to", $P(E|H)$ is the likelihood, and $P(H)$ is the prior. This equation means the posterior is larger when the right side of the equation is larger. You can think of $P(E)$ as a normalization constant that turns the number into a probability (the reason it is a constant is that the evidence $E$ is already given).
For a real-world example, suppose we are doing fraud detection on credit card transactions. The hypothesis is $H \in \{0,1\}$, representing whether the transaction is normal or fraudulent. (I picked an extremely imbalanced case to show the intuition.)
From domain knowledge, we know most transactions are normal and only very few are fraudulent. Suppose an expert told us that $1$ in $1000$ is fraudulent. Then the prior is $P(H=1)=0.001$ and $P(H=0)=0.999$.
The ultimate goal is calculating $P(H|E)$: we want to know whether a transaction is a fraud or not, based on the evidence in addition to the prior. Looking at the right side of the equation, we decompose it into the likelihood and the prior.
Having explained the prior, we now explain the likelihood. Suppose the evidence is $E\in\{0,1\}$, representing whether the geographical location of the transaction is normal or strange.
The likelihood $P(E=1|H=0)$ may be small, meaning that given a normal transaction, it is very unlikely the location is strange. On the other hand, $P(E=1|H=1)$ can be large.
Suppose we observed $E=1$ and want to decide whether it is a fraud or not; we need to consider both the prior and the likelihood. Intuitively, the prior tells us there are very few fraudulent transactions, so we should be very conservative about classifying anything as fraud unless the evidence is very strong. The product of the two considers both factors at the same time.
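Plugging numbers into this setup (only the 1-in-1000 prior comes from the example; the two likelihood values below are my assumptions) shows how the tiny prior tempers even strong evidence:

```python
# Prior from the example above: 1 in 1000 transactions is fraudulent.
p_fraud, p_normal = 0.001, 0.999

# Assumed likelihoods (illustrative values, not from the example):
p_e_given_fraud = 0.90    # a strange location is likely under fraud...
p_e_given_normal = 0.01   # ...and rare under a normal transaction

p_e = p_e_given_fraud * p_fraud + p_e_given_normal * p_normal
posterior_fraud = p_e_given_fraud * p_fraud / p_e

print(f"P(H=1 | E=1) = {posterior_fraud:.4f}")
# Despite a 90x likelihood ratio, the tiny prior keeps the posterior modest.
```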
|
9,032
|
Bayes' Theorem Intuition
|
An Aug 7, 2015 Medium article explains this with many pictures. Suppose 1 in 10 people is sick.
To simplify the example, we assume we know which ones are sick and which ones are healthy, but in a real test you don't know that information. Now we test everybody for the disease:
The true positives = the number of positive results among the sick population = #(Positive | Sick) = 9.
Now the interesting question: what is the probability of being sick if you test positive? In math, $\Pr(Sick \mid Positive)$?
|
9,033
|
Bayes' Theorem Intuition
|
Here's a super straightforward and intuitive explanation.
https://metagrokker.medium.com/an-intuitive-interpretation-of-bayes-theorem-d3e43b05bb2a
Here's a summary; think of Bayes in the following form:
P(A|B) = P(A) * (P(B|A) / P(B))
P(A) is our confidence in hypothesis A being true before accounting for the new evidence B. The equation multiplies P(A) by the ratio (P(B|A) / P(B)) to turn P(A) into the posterior probability P(A|B): our updated confidence in hypothesis A being true after accounting for evidence B. This is what Bayes' theorem does: it updates our knowledge based on new evidence, turning prior knowledge into posterior.
Now, what does the ratio mean? P(B) is the probability of our observation happening in general, while P(B|A) is the probability of the observation happening if A were true. The ratio therefore captures how much more likely observing B becomes when A is true versus in general. If A being true makes B more likely, then B is a good sign that A is true, so P(B|A) will be greater than P(B), the ratio will exceed one, and it will increase our confidence in A. So the ratio captures how significant observing B is for A being true, and the equation adjusts our knowledge in proportion to it.
|
9,034
|
Bayes' Theorem Intuition
Note that Bayes' rule is
$P(a|b)=\frac{P(b,a)}{P(b)}=\frac{P(b,a)}{P(b)P(a)}P(a)$.
Note the ratio
$$\frac{P(b,a)}{P(b)P(a)}.$$
If $B \perp A$, then $P(b,a)=P(b)P(a)$. So it’s almost like telling us how far the joint deviates from full independence, or how much information the variables have in common.
Of course this is also evident from the ratio $\frac{P(b|a)}{P(b)}$, but the symmetry of the expression above is nice, and more interestingly, the log of $\frac{P(b,a)}{P(b)P(a)}$ is the pointwise mutual information!
The full mutual information is the expectation of this quantity under the joint:
$$I(A;B) = \sum_{a,b}P(a,b)\log\frac{P(a,b)}{P(a)P(b)}.$$
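This connection is easy to check numerically. The 2x2 joint distribution below is invented for illustration:

```python
import math

# An invented 2x2 joint distribution P(a, b) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginals P(a) and P(b)
p_a = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
p_b = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}

# Pointwise mutual information: log of P(a,b) / (P(a) P(b))
pmi = {ab: math.log(p / (p_a[ab[0]] * p_b[ab[1]])) for ab, p in joint.items()}

# The full mutual information is the expectation of the PMI under the joint
mi = sum(joint[ab] * pmi[ab] for ab in joint)
print(round(mi, 4))  # → 0.1927 (positive, so the variables are dependent)
```

If the joint were exactly the product of its marginals, every PMI term would be zero and the mutual information would vanish, matching the independence remark above.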
Bayes' Theorem Intuition
I often find it helpful to view the theorem as a table, with the possible outcomes for "B" as the rows and the possible outcomes for "A" as the columns. The joint probabilities $P(A,B)$ are the values for each cell. In this table we have
likelihood = row proportions
posterior = column proportions
The prior and marginal are analogously defined, but based on "totals" instead of a particular column
marginal = row total proportions
prior = column total proportions
I find this helps me.
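The table view can be sketched in a few lines; the 2x2 joint distribution below is invented, and conditioning is just dividing a cell by the appropriate row or column total:

```python
# A sketch of the table view with an invented 2x2 joint distribution.
# Rows are the outcomes of B, columns are the outcomes of A; each cell
# holds the joint probability P(A, B).
joint = [[0.3, 0.2],   # B = 0
         [0.1, 0.4]]   # B = 1

row_total = [sum(row) for row in joint]                       # marginal of B
col_total = [sum(row[a] for row in joint) for a in range(2)]  # marginal of A

# Conditioning is just dividing a cell by the relevant total:
posterior = joint[0][1] / row_total[0]   # P(A=1 | B=0)
likelihood = joint[0][1] / col_total[1]  # P(B=0 | A=1)

# Bayes' theorem falls out of the table automatically:
assert abs(posterior - likelihood * col_total[1] / row_total[0]) < 1e-12
print(posterior)  # → 0.4
```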
Bayes' Theorem Intuition
Here is an additional way to picture the intuition behind Bayes' rule, in terms of $A$ and $B$ following some joint distribution.
$$P(B|A)\cdot P(A) = P(A,B) = P(A|B) \cdot P(B)$$
or
$$P(B|A) = \frac{P(A,B)}{P(A)} = \frac{P(A|B) \cdot P(B)}{P(A) }$$
The prior (independent of the observation of A), is the marginal distribution in this joint distribution. The posterior (dependent on the observation of A), is the conditional distribution in this joint distribution.
The view is often ${P(A|B) \cdot P(B)}$ instead of ${P(A,B)}$. This is, I believe, because often $B$ is some model parameter and $A$ is an observation, and we can describe $P(A|B)$ well in terms of some theoretical model for the observation as a function of the parameter.
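That parameter/observation framing can be sketched with a toy example: a coin whose bias $B$ is unknown and an observed flip $A$. The two candidate biases and the uniform prior are invented for illustration.

```python
# Toy sketch: B is a model parameter (a coin's bias), A is an observation.
prior = {0.3: 0.5, 0.7: 0.5}    # P(B): which coin are we holding?

def likelihood(a, b):
    # theoretical model P(A|B): the observation as a function of the parameter
    return b if a == "heads" else 1 - b

obs = "heads"
p_obs = sum(likelihood(obs, b) * p for b, p in prior.items())  # marginal P(A)

# Posterior P(B|A) via the factorization P(A|B) * P(B) / P(A)
posterior = {b: likelihood(obs, b) * p / p_obs for b, p in prior.items()}
print(posterior)  # the heads-leaning bias becomes more probable
```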
When can correlation be useful without causation?
Correlation (or any other measure of association) is useful for prediction regardless of causation. Suppose that you measure a clear, stable association between two variables. What this means is that knowing the level of one variable also provides you with some information about another variable of interest, which you can use to help predict one variable as a function of the other and, most importantly, take some action based on that prediction. Taking action involves changing one or more variables, such as when making an automated recommendation or employing some medical intervention. Of course, you could make better predictions and act more effectively if you had more insight into the direct or indirect relationships between two variables. This insight may involve other variables, including spatial and temporal ones.
When can correlation be useful without causation?
There are a lot of good points here already. Let me unpack your claim that "it seems that if X is a predictor of Y, it would be useful in predicting future values of Y based on X, regardless of causality" a little bit. You are correct: if all you want is to be able to predict an unknown Y value from a known X value and a known, stable relationship, the causal status of that relationship is irrelevant. Consider that:
You can predict an effect from a cause. This is intuitive and uncontroversial.
You can also predict a cause from knowledge of an effect. Some, but very few, people who get lung cancer never smoked. As a result, if you know someone has lung cancer, you can predict with good confidence that they are / were a smoker, despite the fact that smoking is causal and cancer is the effect. If the grass in the yard is wet, and the sprinkler hasn't been running, you can predict that it has rained, even though rain is the cause and wet grass is just the effect. Etc.
You can also predict an unknown effect from a known effect of the same cause. For example, if Billy and Bobby are identical twins, and I've never met Billy, but I know that Bobby is 5'10" (178 cm), I can predict with good confidence that Billy is also 178 cm, despite the fact that neither twin's height causes the other's.
When can correlation be useful without causation?
They aren't poopooing the importance of correlation. It's just that the tendency is to interpret correlation as causation.
Take breastfeeding as the perfect example. Mothers almost always interpret the findings of observational studies about breastfeeding as a suggestion as to whether or not they should actually breastfeed. It's true that, on average, babies who are breastfed tend to be healthier in older age, even after controlling longitudinally for maternal and paternal age, socioeconomic status, etc. This does not imply that breastfeeding alone is responsible for the difference, though it may partially play a role in the early development of appetite regulation. The relationship is very complex, and one can easily speculate about a whole host of mediating factors that could underlie the differences observed.
Plenty of studies look to associations to warrant a deeper understanding of what's going on. Correlation is not useless, it just is several steps below causation and one needs to be mindful of how to report findings to prevent misinterpretation from nonexperts.
When can correlation be useful without causation?
You're right that correlation is useful. The reason that causal models are better than associational models is that — as Pearl says — they are oracles for interventions. In other words, they allow you to reason hypothetically. A causal model answers the question "if I were to make X happen, what would happen to Y?"
But you do not always need to reason hypothetically. If your model is only going to be used to answer questions like "if I observe X, what do I know about Y?", then an associational model is all you need.
When can correlation be useful without causation?
You are correct that correlation is useful for prediction. It is also useful for getting a better understanding of the system under study.
One case where knowledge about the causal mechanism is necessary is when the target distribution has been manipulated (e.g. some variables have been "forced" to take certain values). A model based on correlations alone will perform poorly, whereas a model that uses causal information should perform much better.
When can correlation be useful without causation?
As you stated, correlation alone has plenty of utility, mainly prediction.
The reason this phrase is used (or misused, see my comment up top to the post) so often is that causation is often the much more interesting question. That is to say, if we've spent a lot of effort to examine the relation between $A$ and $B$, it is very likely because, back in the real world, we are curious whether we can use $A$ to influence $B$.
For example, all these studies showing that heavy coffee consumption in senior citizens is correlated with healthier cardiovascular systems are, in my mind, undoubtedly motivated by people wanting to justify their heavy coffee habits. However, saying drinking coffee is only correlated with healthier hearts, rather than causal, does nothing to answer our real question of interest: are we going to be healthier if we drink more coffee or if we cut down? It can be very frustrating to find very interesting results (coffee's linked to healthier hearts!) but not be able to use that information to make decisions (we still don't know if you should drink coffee to be healthier), and so there's almost always a temptation to interpret correlation as causation.
Unless maybe all you care about is gambling (i.e. you want to predict but not influence).
When can correlation be useful without causation?
Correlation is a useful tool if you have an underlying model that explains causality.
For example, if you know that applying a force to an object influences its movement, you can measure the correlation between force and velocity, and between force and acceleration. The stronger correlation (with acceleration) will be explanatory by itself.
In observational studies, correlation can reveal certain common patterns (as with breastfeeding and later health) which might give grounds for further scientific exploration via proper experimental design that can confirm or reject causality (e.g. maybe instead of breastfeeding being the cause, it might be the consequence of a certain cultural framework).
So, correlation can be useful, but it can rarely be conclusive.
When can correlation be useful without causation?
Correlation is an observable phenomenon. You can measure it. You can act on those measurements. On its own, it can be useful.
However, if all you have is a correlation, you do not have any guarantee that a change you make will actually have an effect (see the famous graphs tying the rise of iPhones to overseas slavery and such). It just shows that there is a correlation there, and if you tweak the environment (by acting), that correlation may still be there.
However, this is a very subtle approach. In many scenarios we want to have a less subtle tool: causality. Causality is a correlation combined with a claim that if you tweak your environment by acting in one way or another, one should expect the correlation to still be there. This allows for longer term planning, such as the chaining of 20 or 50 causal events in a row to identify a useful outcome. Doing so with 20 or 50 correlations often leaves a very fuzzy and murky result.
As an example of how they have been useful in the past, consider western science vs. Traditional Chinese Medicine (TCM). Western science focuses primarily on "Develop a theory, isolate a test which can demonstrate the theory, run the test and document the results." This starts with "develop a theory," which is highly tied to causality. TCM spun it around, starting with "devise a test which may provide useful results, run the test, identify correlations in the answer." The focus is more on correlations.
Nowadays westerners tend to think almost entirely in terms of causality, so the value of studying correlation is harder to see. However, we find it lurking in every corner of our lives. And never forget that even in western science, correlations are an important tool for identifying which theories are worth exploring!
When can correlation be useful without causation?
There's value in correlation, but one should look at more evidence to conclude causation.
Years ago, there was a study resulting in "coffee causes cancer." As soon as I heard this on the news I told my wife "false correlation." It turned out I was correct. The 2-3 cup per day coffee population had a higher rate of smoking than the non-coffee drinkers. Once the data collectors figured this out, they retracted their results.
Another interesting study before the housing boom and bust claimed to show racism in the processing of mortgages: black applicants were being rejected at a higher rate than whites. But another study looked at default rates. Black homeowners were defaulting at the same rate as whites. If black applications were being held to a higher standard, their default rate would actually be far lower. (Note: this anecdote was shared by author Thomas Sowell in his book The Housing Boom and Bust.)
Data mining can easily produce two sets of data that show high correlation, but for events that couldn't possibly be related. In the end, it's best to look at studies that are sent your way with a very critical eye. Finding false correlations isn't always easy, it's an acquired talent.
Probability of being born on a leap day?
To accurately predict that probability using statistics, it would be helpful to know where the birth took place.
This page http://chmullig.com/2012/06/births-by-day-of-year/ has a graph showing a subset of the number of births per day in the United States (it multiplies the 29th by 4, which is incorrect and undesirable for this question, but it also links to the original data and gives a rough indication of what you can expect). I would assume that this curve doesn't hold true for other countries, and especially not for other continents. In particular, the southern hemisphere and equatorial region may show a substantial deviation from these results, assuming that climate is a determining factor.
Furthermore, there's the issue of "elective birth" (touched upon by the authors of http://bmjopen.bmj.com/content/3/8/e002920.full ) - in poorer regions of the globe, I would expect a different distribution of births, simply because (non-emergency-) cesarian sections or induced birth are rarer than in developed countries. This skews the final distribution of births.
Using the American data, assuming ~71 Million births (rough graphed mean * 366) and 46.000 births on February 29ths, not correcting for the distribution of leap years in the data, because the precise period is not indicated, I arrive at a probability of around ~0.000648. This is slightly below the value one would expect given a flat distribution of births, and thus in line with the general impression give by the graph.
I'll leave a significance test of this rough estimation to a motivated reader. But given that the 29th (though uncorrected - the year 2000 injects a below average bias into the data) scores low even for the already low February standards, I assume a relatively high confidence that the null-hypthosesis of equal distribution can be rejected.
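As a sanity check, the rough estimate above can be reproduced in a couple of lines (the 71 million and 46,000 figures are the rough readings from the linked graph, not exact counts):

```python
births_feb29 = 46_000        # Feb 29 births, rough reading from the linked graph
total_births = 71_000_000    # rough graphed daily mean * 366
p = births_feb29 / total_births
print(round(p, 6))           # 0.000648
```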
|
9,047
|
Probability of being born on a leap day?
|
Sure. See here for a more detailed explanation: http://www.public.iastate.edu/~mlamias/LeapYear.pdf.
But essentially the author concludes, "There are 485 leap years in 2 millennia. So, in 2 millennia, there are $485(366) + (2000-485)(365)= 730485$ total days. Of those days, February 29 occurs in 485 of them (the leap years), so the probability is $485/730485=0.0006639424$"
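The quoted count can be checked mechanically. A short sketch of the same calculation, applying the Gregorian leap-year rule over a 2-millennia window as in the quote:

```python
# leap years in 2000 years under the Gregorian rule: /4, except /100, except /400
leap_years = 2000 // 4 - 2000 // 100 + 2000 // 400
total_days = leap_years * 366 + (2000 - leap_years) * 365
p = leap_years / total_days
print(leap_years, total_days, p)   # 485 730485 ~0.0006639424
```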
|
9,048
|
Probability of being born on a leap day?
|
I think the answer to this question can only be empirical. Any theoretical answer would be flawed without accounting for birthday selection phenomena, seasonality, etc. These things are impossible to deal with theoretically.
Birthday data is hard to find in the US for privacy reasons. There's one anonymous data set here. It's from insurance applications in the USA. The difference from other reports, such as a popular, often-cited NYT article, is that it lists the frequency of births by date, instead of a simple ranking of the days in a year. The weak point is of course the sampling bias, since it comes from insurance: uninsured people are not included, etc.
According to the data there were 325 births on Feb 29 out of 481,040 total. According to Roy Murphy, the sample spans 1981 through 1994. It includes 3 leap years out of 14 years. Without any adjustments, the probability of being born on Feb 29 between 1981 and 1994 would be 0.0675%.
You can adjust the probability by accounting for the frequency of leap years, which is close to 1/4 (though not exactly), e.g. by multiplying this number by $14/12$ to arrive at a 0.079% estimate. Here, the conditional probability $p$ of being born on Feb 29 in a leap year is linked to the observed frequency $F_o=325$ by the frequency $f_L=3$ of leap years in the sample:
$$F_o=f_L/N\cdot F\cdot p,$$
where $N=14$ is the number of years in a sample, and $F=481040$ is the total frequency of births.
Normally, the probability of leap years is $p_L\approx 1/4$, hence, the long run average probability $P_L$ of being born on Feb 29 is:
$$P_L=p_L\cdot p\approx \frac{p_L\cdot N}{f_L} \frac{F_o}{F} \approx 0.079\%$$
You might be interested in the conditional probability $p$ of being born on Feb 29 given that you were born on leap year:
$$p= \frac{N}{f_L} \frac{F_o}{F}\approx 0.32\%$$
So, the link between $P_L$ and $p$ rests on a couple of assumptions, e.g. that the probability of being born in any given year is uniform and doesn't change.
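A quick numeric sketch of the two estimates above, plugging in the sample figures ($F_o=325$, $f_L=3$, $N=14$, $F=481040$) and the approximate leap-year share $p_L \approx 1/4$:

```python
F_o, f_L, N, F = 325, 3, 14, 481040
p = (N / f_L) * (F_o / F)     # P(born Feb 29 | born in a leap year)
P_L = 0.25 * p                # long-run P(born Feb 29), using p_L ~ 1/4
print(f"{p:.2%}, {P_L:.3%}")  # 0.32%, 0.079%
```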
Of course, this discussion was US centric. Who knows what are the patterns in other countries.
UPDATE: We automatically assumed that the OP means the Gregorian calendar. It gets even more interesting if you consider different calendars, such as the lunar Hijri calendar, whose leap years follow a 30-year cycle.
UPDATE 2:
What's surprising is that the estimated probability $p$ leads to this expected occurrence of Feb 29 birthdays in the sample: $F\cdot p\approx 1{,}517$. This is lower only than Jan 1 and Dec 25, which is consistent with the NYT's ranking above! They don't describe the source of their data, referring only to Amitabh Chandra, Harvard University, but it's either the same data or the finding is robust.
Now, how likely is it that the very peculiar days in the Gregorian calendar - Jan 1, Dec 25 and Feb 29 - would randomly come out as the least common birthdays? I say it's highly unlikely to be a random occurrence. Hence, it's even more interesting to see what's going on in other calendars such as Hijri.
UPDATE 3:
Note that both $P_L,p$ are higher than naive theoretical estimates:
$$\hat p\approx 1/366\approx 0.27\%$$
$$\hat P_L\approx \hat p\cdot\frac{366}{365\cdot 4+1}\approx 0.068\%$$
UPDATE 4:
Ben Millwood commented that the distribution of births by day of year is non-uniform. Can we test this statement? Using my data set we can run a $\chi^2$ test against the theoretical distribution, with the null hypothesis that the distribution is uniform. The result is rejection, i.e. the distribution doesn't appear to be uniform.
The theoretical distribution is built like this. We assume that the birth frequency is uniform across all calendar days, i.e. over the 14-year span of $14\cdot 365+3$ days. Then we roll the days up into days of the year, of which there are 366. Note that each leap day occurs only 3 times in the sample, while every other date occurs 14 times. Below is my MATLAB code and the distribution plot comparing the theoretical and empirical distributions.
d=[0101 1482
...
1231 1352];
%%
tc = sum(d(:,2)); % total obs
idL = 60; % index of Feb 29
% theor frequency, assuming uniform
ny = 1994 - 1981 + 1; % num of years
nL = 3; % # of leap years: 1984, 1988, 1992
nd = 365*ny + nL; % total # of days
fc = tc/nd; % expected freq for calendar date in sample
td = ones(366,1)*fc*ny; % roll the dates into day of year
td(idL) = fc*nL;
fprintf(1,'non-leap day expected freq: %f\n',td(end))
fprintf(1,'leap day expected freq: %f\n',td(idL))
fprintf(1,'non-leap day average freq: %f\n',mean(d([1:idL-1 idL+1:end],2)))
fprintf(1,'non-leap day freq std dev: %f\n',std(d([1:idL-1 idL+1:end],2)))
fprintf(1,'leap day observed freq: %f\n',d(idL,2))
% plots
bar(d(:,2))
hold on
plot(td,'r')
legend('empirical','theoretical')
title('Distribution of birth dates 1981-1994')
set(gca,'XTick',1:30:366)
set(gca,'XTickLabels',[num2str(floor(d(1:30:366,1)/100)) repmat('/',13,1) num2str(rem(d(1:30:366,1),100))])
grid on
% chi^2 test
[h p]=chi2gof(d(:,2),'Expected',td)
OUTPUT:
non-leap day expected freq: 1317.144534
leap day expected freq: 282.245257
non-leap day average freq: 1317.027397
non-leap day freq std dev: 69.960227
leap day observed freq: 325.000000
h =
1
p =
0
|
9,049
|
Probability of being born on a leap day?
|
My all-time favorite cover to a book provides some highly relevant evidence against the assumption of a uniform allocation of births to dates. Specifically, it shows that births in the US since 1970 exhibit several trends superimposed on each other: a long, multi-decade trend, a non-periodic trend, day-of-week trends, day-of-year trends, and holiday trends (because procedures like Cesarean section allow one to effectively schedule the birthdate, and doctors often don't do those on holidays). The result is that the probability of being born on a randomly-chosen day in a year is not uniform, and because the birth rate varies between years, not all years are equally likely, either. So the answer that just checks how many leap years there are in some interval and reckons from the calendar is making very strong assumptions which have little utility in describing the real world in any reasonable way!
This also provides evidence that Aksakal's solution, while a very strong contender, is incomplete. A small number of leap days will be "contaminated" by all of the effects at play here, so Aksakal's estimate is also capturing (quite by accident) the effect of day-of-week and long-term trends along with the Feb. 29 effect. Which effects are and are not appropriate to include is not clearly defined by your question.
And this analysis only has bearing on the US, which has demographic trends that might be quite different from those of other nations or populations. Japan's birth rate has been declining for decades, for example. China's birth rate was regulated by the state, with some consequences for its gender composition and hence birth rates in subsequent generations.
Likewise, Gelman's analysis only describes several recent decades, and it's not necessarily clear that this is even the era of interest to your question.
For those who get excited about this kind of thing, the material in the cover is discussed at length in the chapter on Gaussian processes.
|
9,050
|
Probability of being born on a leap day?
|
February 29th is a date that occurs each year that is a multiple of 4.
However, years that are multiples of 100 but not of 400 are not leap years (e.g. 1900 is not a leap year, while 2000 and 1600 are). Therefore, nowadays, the pattern repeats every 400 years.
So let's do the maths on a [0;400[ interval:
In a 400-year period, there are exactly 400/4 = 100 years that are multiples of 4. But we have to subtract the 3 years that are multiples of 100 but not of 400, and we get 100 - 3 = 97 leap years.
Now we multiply 97 by 366: 97 x 366 = 35,502 days falling in leap years. The remaining (400 - 97) x 365 = 110,595 days fall in non-leap years.
Adding these two numbers gives the total number of days in a 400-year period: 110,595 + 35,502 = 146,097.
Finally, our probability is the number of February 29ths in a 400-year period - 97, since there are 97 leap years - divided by the total number of days in our interval:
p = 97 / 146097 ≈ 0.0006639424492
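The same 400-year count can be verified directly from the Gregorian leap-year rule:

```python
# count leap years in one 400-year Gregorian cycle
leap = sum(1 for y in range(400) if (y % 4 == 0 and y % 100 != 0) or y % 400 == 0)
days = leap * 366 + (400 - leap) * 365
p = leap / days
print(leap, days, p)   # 97 146097 ~0.0006639424
```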
Hope this is right and clear.
|
9,051
|
Probability of being born on a leap day?
|
I believe there are two questions being mixed up here. The first is "What is the probability of any given day being a Feb. 29th?". The second (and the one actually asked) is "What is the probability of being born on a leap day?"
The approach of simply counting days seems to be misleading, as Aksakal points out. Counting days and calculating the frequency of Feb. 29th addresses the question "What is the probability that any given day is a Feb. 29th?" (Imagine waking up after a coma with no clue what day it is. The probability of it being a Feb. 29th is, as pointed out above, $p = \frac{97}{146097}\approx 0.00066394$.)
Following Aksakal's answer, the probability can instead be based on empirical studies of the distribution of births across the days of the year. Different data sets will come to different conclusions (e.g. due to effects of seasonality, long-term trends in birth rates, or cultural differences). Aksakal pointed out one study. (One comment: to account for the unrepresentative occurrence of leap years in that data (i.e. $\frac{3}{14}$) compared to the long-term frequency of leap years (i.e. $\frac{97}{400}$), you would have to multiply the frequency of births on Feb. 29th from the sample by $\frac{97}{400}\cdot\frac{14}{3} = \frac{679}{600} \approx 1.131667$.)
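The correction factor mentioned above can be verified exactly with rational arithmetic:

```python
from fractions import Fraction

# long-run leap-year share (97/400) vs. the share observed in the sample (3/14)
adjustment = Fraction(97, 400) * Fraction(14, 3)
print(adjustment, float(adjustment))   # 679/600 ~1.131667
```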
Finally, there is a third possible interpretation of the question, which I believe was not intended though: "What is the probability of a specific person being born on a leap day?" Well, for anyone already born that is easy. It is either $0$ or $1$. For anyone not born but already conceived it also can be estimated using empirical studies on the length of pregnancy (see Wikipedia for an overview). For anyone not conceived yet, see above.
|
9,052
|
Probability of being born on a leap day?
|
I've noticed that most of the answers above work this out by counting the number of leap days in a particular period. There is a simpler way to get the answer, directly from the definition:
We use leap years to adjust the regular (365-day) calendar to the mean tropical year (aka mean solar year). The mean tropical year "is the time that the Sun takes to return to the same position in the cycle of seasons, as seen from Earth" (Wikipedia). The tropical year varies slightly, but the mean tropical year is about 365.24219 days.
If our leap days are correct, then the chance of a randomly selected day being a leap day is ((tropical year) - (non-leap year)) / (tropical year).
Plugging in the approximate number we have, it's (365.24219 - 365)/365.24219, or 0.24219/365.24219, about 663 per million (0.0663%).
This, however, is for a randomly selected day. I imagine that this is substantially skewed by parents who would rather not have to explain to their kids, "your actual birthday only comes once per 4 years".
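The ratio above is easy to compute for any assumed year length. As a sketch (365.2425 days is the Gregorian calendar's own long-run average year, which reproduces the 97/146097 figure from the other answers):

```python
def leap_day_share(year_length):
    # fraction of days that are leap days, given an average year length in days
    return (year_length - 365) / year_length

print(leap_day_share(365.2425))   # ~0.000664, i.e. the Gregorian 97/146097
```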
|
9,053
|
Probability of being born on a leap day?
|
I asked my sister, whose birthday is February 29, and she said, "The result of my own empirical study was that it is 1.00, obviously."
|
9,054
|
Why not use the third derivative for numerical optimization?
|
I am interpreting the question as being "Why does Newton's method only use first and second derivatives, not third or higher derivatives?"
Actually, in many cases, going to the third derivative does help; I've done it with custom stuff before. However, in general, going to higher derivatives adds computational complexity - you have to find and calculate all those derivatives, and for multivariate problems, there are a lot more third derivatives than there are first derivatives! - that far outweighs the savings in step count you get, if any. For example, if I have a 3-dimensional problem, I have 3 first derivatives, 6 second derivatives, and 10 third derivatives, so going to a third-order version more than doubles the number of evaluations I have to do (from 9 to 19), not to mention increased complexity of calculating the step direction / size once I've done those evaluations, but will almost certainly not cut the number of steps I have to take in half.
Now, in the general case with $k$ variables, the collection of $n^{\text{th}}$ partial derivatives will number $\binom{k+n-1}{k-1}$, so for a problem with five variables, the total number of third, fourth, and fifth partial derivatives will equal 231, a more than 10-fold increase over the number of first and second partial derivatives (20). You would have to have a problem that is very, very close to a fifth-order polynomial in the variables to see a large enough reduction in iteration counts to make up for that extra computational burden.
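The derivative counts quoted above follow from the stars-and-bars formula $\binom{k+n-1}{k-1}$; a quick check:

```python
from math import comb

def n_partials(k, n):
    # number of distinct n-th order partial derivatives of a smooth function of k variables
    return comb(k + n - 1, k - 1)

print([n_partials(3, n) for n in (1, 2, 3)])     # [3, 6, 10]
print(sum(n_partials(5, n) for n in (3, 4, 5)))  # 231
```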
|
9,055
|
Why not use the third derivative for numerical optimization?
|
I don't really see what the statistical aspect of this question is, so I'll answer the optimization part.
There are 2 parts to convergence: iteration cost & iteration count
Pretty much every answer here is focusing on just the iteration cost and ignoring the iteration count. But both of them matter. A method that iterates in 1 nanosecond but takes $10^{20}$ iterations to converge won't do you any good. And a method that blows up won't help either, no matter how cheap its iteration cost.
Let's figure out what's going on.
So: Why not use >2nd-order derivatives?
Partly because (and this is true for 2nd-order too, but more on that in a bit):
Higher-order methods generally only converge faster when near the optimum.
On the other hand, they blow up more easily when they are farther from the optimum!
(Of course, this isn't always true; e.g. a quadratic will converge in 1 step with Newton's method. But for arbitrary functions in the real world that don't have nice properties, this is generally true.)
This means that when you are farther away from the optimum, you generally want a low-order (read: first-order) method. Only when you are close do you want to increase the order of the method.
So why stop at 2nd order when you are near the root?
Because "quadratic" convergence behavior really is "good enough"!
To see why, you first have to understand what "quadratic convergence" means.
Mathematically, quadratic convergence means that, if $\epsilon_k$ is your error at iteration $k$, then the following eventually holds true for some constant $c$:
$$\lvert\epsilon_{k+1}\rvert \leq c\ \lvert\epsilon_{k}\rvert^2$$
In plain English, this means that, once you are near the optimum (important!), every extra step doubles the number of digits of accuracy.
Why? It's easy to see with an example: for $c = 1$ and $\lvert\epsilon_1\rvert = 0.1$, you have $\lvert\epsilon_2\rvert \leq 0.01$, $\lvert\epsilon_3\rvert \leq 0.0001$, etc. which is ridiculously fast. (It's super-exponential!)
Why not stop at 1st order rather than 2nd-order?
Actually, people often do this when second-order derivatives become too expensive. But linear convergence can be very slow. E.g. with a linear rate of $c = 0.9999999$ and initial error $\lvert\epsilon_1\rvert = 0.9999999$, you'd need roughly 10,000,000 iterations to get $\lvert\epsilon\rvert < 0.5$, but only 23 iterations with quadratic convergence. So you can see there's a drastic difference between linear and quadratic convergence. The same gap does not exist between 2nd- and 3rd-order convergence (see next paragraph).
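To make the linear-vs-quadratic gap concrete, here is a toy 1D sketch in Python (my own example, not from the thread): Newton's method versus fixed-step gradient descent for minimizing $f(x) = x^2 + e^x$, counting iterations until the error drops below $10^{-12}$:

```python
from math import exp

def fp(x):   # gradient of f(x) = x**2 + exp(x)
    return 2 * x + exp(x)

def fpp(x):  # second derivative
    return 2 + exp(x)

# high-accuracy reference minimizer via many Newton steps
x_star = 0.0
for _ in range(50):
    x_star -= fp(x_star) / fpp(x_star)

def iters_to_tol(step, x0=0.0, tol=1e-12, max_iter=10**6):
    # count iterations of the update rule `step` until |x - x_star| < tol
    x, k = x0, 0
    while abs(x - x_star) > tol and k < max_iter:
        x, k = step(x), k + 1
    return k

newton_iters = iters_to_tol(lambda x: x - fp(x) / fpp(x))  # quadratic convergence
gd_iters = iters_to_tol(lambda x: x - 0.1 * fp(x))         # linear convergence

print(newton_iters, gd_iters)  # a handful of Newton steps vs. dozens of GD steps
```

Newton converges in a handful of iterations here, while gradient descent with a fixed learning rate needs roughly an order of magnitude more; the learning rate 0.1 is an arbitrary choice.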
At this point, if you know any computer science, you understand that with 2nd-order convergence, the problem is already solved. If you don't see why, here's why: there is nothing practical to gain from tripling the number of digits every iteration instead of doubling it—what's it going to buy you? After all, in a computer, even a double-precision number has 52 bits of precision, which is around 16 decimal digits.
Maybe it will decrease the number of steps you require from 16 to 3... which sounds great, until you realize it comes at the price of having to compute third derivatives at each iteration, which is where the curse of dimensionality hits you hard. For a $6$-dimensional problem, you just paid a factor of $6$ to gain a factor of $\approx 5$, which is dumb. And in the real world problems have at least hundreds of dimensions (or even thousands or even millions), not merely $6$! So you gain a factor of maybe 20 by paying a factor of, say, 20,000... hardly a wise trade-off.
But again: remember the curse of dimensionality is half the story.
The other half is that you generally get worse behavior when you're far from the optimum, which generally adversely affects the number of iterations you have to do.
Conclusion
In a general setting, higher-order methods than 2 are a bad idea. Of course, if you can bring additional helpful assumptions to the table (e.g. perhaps your data does resemble a high-degree polynomial, or you have ways of bounding the location of the optimum, etc.), then maybe you can find that they are a good idea—but that will be a problem-specific decision, and not a general rule of thumb to live by.
|
9,056
|
Why not use the third derivative for numerical optimization?
|
Even calculating Hessians is quite a bit of work:
$$H = \begin{bmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\[2.2ex]
\dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\[2.2ex]
\vdots & \vdots & \ddots & \vdots \\[2.2ex]
\dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}.$$
Now see what the third derivative looks like:
$$\partial H/\partial x=\begin{bmatrix}
\dfrac{\partial H}{\partial x_1}\\
\dfrac{\partial H}{\partial x_2}\\
\vdots\\
\dfrac{\partial H}{\partial x_n}
\end{bmatrix}$$
This is a three-dimensional array (a tensor). Here's what its elements look like:
$$(\partial H/\partial x)_{ijk}=\dfrac{\partial^3 f}{\partial x_i\partial x_j\partial x_k}$$
The sixth derivative would be a six-dimensional array:
$$\dfrac{\partial^6 f}{\partial x_i\partial x_j\partial x_k\partial x_l\partial x_m\partial x_n}$$
Usually, the trade-off is not favorable for going higher than the Hessian: I mean the trade-off between the potential gain in speed from higher-order approximations vs. the noise amplification. You always have noise in the inputs because we're talking about statistical applications, and this noise is amplified by the derivatives.
If you play golf, then the analogy in optimization is to first swing trying to get onto the green, without worrying too much about the hole. Once on the green, we putt, aiming at the hole.
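The noise-amplification point can be shown numerically. The sketch below (my illustration, with an arbitrary noise level) differentiates noisy samples of $\sin(x)$ with central finite differences; the error of the second-derivative estimate is far larger than that of the first:

```python
import numpy as np

rng = np.random.default_rng(0)
h = 0.01
x = np.arange(0.0, 2 * np.pi, h)
f = np.sin(x) + rng.normal(0.0, 1e-6, x.size)  # samples with tiny input noise

# central finite differences of increasing order
d1 = (f[2:] - f[:-2]) / (2 * h)             # estimates cos(x)
d2 = (f[2:] - 2 * f[1:-1] + f[:-2]) / h**2  # estimates -sin(x)

err1 = np.abs(d1 - np.cos(x[1:-1])).max()
err2 = np.abs(d2 + np.sin(x[1:-1])).max()
print(err1, err2)  # the same input noise hurts the 2nd derivative far more
```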
|
9,057
|
Why not use the third derivative for numerical optimization?
|
Typically, when you analyze the effectiveness of such algorithms, you'll find results such as one step of a fourth order algorithm having roughly the same effectiveness as two steps of a second order algorithm.
So the choice of which algorithm to use is relatively simple: if one step of the fourth order algorithm takes twice as much work or more than one step of the second order algorithm, you should use the latter instead.
That is the typical situation for these sorts of methods: the classical algorithm has the optimal work-to-effectiveness ratio for general problems. While there are occasional problems where a higher order approach is unusually easy to compute and can outperform the classical variant, they are relatively uncommon.
|
9,058
|
Why not use the third derivative for numerical optimization?
|
You can think of the order of derivatives as the order of a polynomial approximation to the function. Most optimization routines rely on convexity. A quadratic polynomial will be convex/concave everywhere, whereas a 3rd-order or higher polynomial will not be convex everywhere. For this reason, most optimization routines rely on successive approximation of the function by convex quadratics. For the quadratic approximation to be convex, a positive-definiteness condition has to be imposed on its Hessian.
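A minimal numerical check of this (my illustration): the second derivative of a cubic changes sign, so a cubic model cannot be convex everywhere, while a quadratic with a positive leading coefficient has a constant positive second derivative:

```python
def second_derivative(f, x, h=1e-4):
    # central finite-difference estimate of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

cubic = lambda x: x**3             # f''(x) = 6x: changes sign at 0
quad = lambda x: 2 * x**2 + x + 1  # f''(x) = 4 > 0 everywhere

print([round(second_derivative(cubic, x)) for x in (-1, 1)])  # [-6, 6]
print([round(second_derivative(quad, x)) for x in (-1, 1)])   # [4, 4]
```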
|
9,059
|
Why not use the third derivative for numerical optimization?
|
Let me be the only one here defending 3rd-order methods for SGD convergence, though definitely not in the entire space, which would need $\approx \mathrm{dim}^3/6$ coefficients, but e.g. in just a single direction, which needs only a single additional coefficient if you already have a 2nd-order model in that direction.
Why can a single-direction 3rd-order model be beneficial? For example, because a second derivative close to zero in this direction basically means one of two alternative scenarios: a plateau or an inflection point. Only the former calls for a larger step size, and the 3rd derivative lets you distinguish them.
I believe we will go toward hybrid multi-order methods: a 2nd-order method in a low-dimensional subspace, e.g. from PCA of recent gradients, which still allows essentially free simultaneous 1st-order gradient descent along the part of the gradient orthogonal to this subspace ... and additionally I would add e.g. a 3rd-order model for the single most relevant direction.
|
9,060
|
Where does the offset go in Poisson/negative binomial regression? [duplicate]
|
Recall that an offset is just a predictor variable whose coefficient is fixed at 1. So, using the standard setup for a Poisson regression with a log link, we have:
$$\log \mathrm{E}(Y) = \beta' \mathrm{X} + \log \mathcal{E}$$
where $\mathcal{E}$ is the offset/exposure variable. This can be rewritten as
$$\log \mathrm{E}(Y) - \log \mathcal{E} = \beta' \mathrm{X}$$
$$\log \mathrm{E}(Y/\mathcal{E}) = \beta' \mathrm{X}$$
Your underlying random variable is still $Y$, but by dividing by $\mathcal{E}$ we've converted the LHS of the model equation to be a rate of events per unit exposure. But this division also alters the variance of the response, so we have to weight by $\mathcal{E}$ when fitting the model.
Example in R:
library(MASS) # for Insurance dataset
# modelling the claim rate, with exposure as a weight
# use quasipoisson family to stop glm complaining about nonintegral response
glm(Claims/Holders ~ District + Group + Age,
family=quasipoisson, data=Insurance, weights=Holders)
Call: glm(formula = Claims/Holders ~ District + Group + Age, family = quasipoisson,
data = Insurance, weights = Holders)
Coefficients:
(Intercept) District2 District3 District4 Group.L Group.Q Group.C Age.L Age.Q Age.C
-1.810508 0.025868 0.038524 0.234205 0.429708 0.004632 -0.029294 -0.394432 -0.000355 -0.016737
Degrees of Freedom: 63 Total (i.e. Null); 54 Residual
Null Deviance: 236.3
Residual Deviance: 51.42 AIC: NA
# with log-exposure as offset
glm(Claims ~ District + Group + Age + offset(log(Holders)),
family=poisson, data=Insurance)
Call: glm(formula = Claims ~ District + Group + Age + offset(log(Holders)),
family = poisson, data = Insurance)
Coefficients:
(Intercept) District2 District3 District4 Group.L Group.Q Group.C Age.L Age.Q Age.C
-1.810508 0.025868 0.038524 0.234205 0.429708 0.004632 -0.029294 -0.394432 -0.000355 -0.016737
Degrees of Freedom: 63 Total (i.e. Null); 54 Residual
Null Deviance: 236.3
Residual Deviance: 51.42 AIC: 388.7
|
9,061
|
Where does the offset go in Poisson/negative binomial regression? [duplicate]
|
The offset does act similarly for both Poisson and NB. The offset has two functions. For Poisson models, the actual number of events defines the variance, so that's needed. It also provides the denominator, so you can compare rates. It's unit-less.
Just using a ratio will mess up the standard errors. Having a model that deals with the offset, as most Poisson regression functions do, takes care of both the standard errors AND comparing rates.
|
9,062
|
Sufficient statistics for layman
|
A sufficient statistic summarizes all the information contained in a sample so that you would make the same parameter estimate whether we gave you the sample or just the statistic itself. It's reduction of the data without information loss.
Here's one example. Suppose $X$ has a symmetric distribution about zero. Instead of giving you a sample, I hand you a sample of absolute values instead (that's the statistic). You don't get to see the sign. But you know that the distribution is symmetric, so for a given value $x$, $-x$ and $x$ are equally likely (the conditional probability is $0.5$). So you can flip a fair coin. If it comes up heads, make that $x$ negative. If tails, make it positive. This gives you a sample from $X'$, which has the same distribution as the original data $X$. You basically were able to reconstruct the data from the statistic. That's what makes it sufficient.
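That construction can be sketched in Python (my illustration, using a standard normal as the symmetric distribution): from the absolute values alone, fair-coin signs reproduce a sample whose summary statistics match the original:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, 100_000)  # symmetric about zero

stat = np.abs(x)                                 # the statistic: magnitudes only
signs = rng.choice([-1.0, 1.0], size=stat.size)  # fair coin for each value
x_reconstructed = signs * stat                   # same distribution as x

print(x.mean(), x_reconstructed.mean())  # both close to 0
print(x.std(), x_reconstructed.std())    # both close to 1
```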
|
9,063
|
Sufficient statistics for layman
|
In Bayesian terms, you have some observable property $X$ and a parameter $\Theta$. The joint distribution for $X,\Theta$ is specified, but factored as the conditional distribution of $X\mid \Theta$ and the prior distribution of $\Theta$. A statistic $T$ is sufficient for this model if and only if the posterior distribution of $\Theta\mid X$ is the same as that of $\Theta\mid T(X)$, for every prior distribution of $\Theta$. In words, your updated uncertainty about $\Theta$ after knowing the value of $X$ is the same as your updated uncertainty about $\Theta$ after knowing the value of $T(X)$, whatever prior information you have about $\Theta$. Keep in mind that sufficiency is a model dependent concept.
Somewhat surprisingly, this Bayesian definition of sufficiency is due to Kolmogorov (see the second paragraph on page 1012 of Rikhin (1990)).
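A concrete Beta-Bernoulli sketch (my illustration): updating a Beta prior with the full Bernoulli sample gives exactly the same posterior as updating with the sufficient statistic $T = \sum_i x_i$ alone:

```python
# Beta(a0, b0) prior on theta; Bernoulli observations.
a0, b0 = 2.0, 5.0
data = [1, 1, 0, 1, 0, 0, 1, 1]

# posterior from the full sample, one observation at a time
a, b = a0, b0
for y in data:
    a, b = a + y, b + (1 - y)

# posterior from only the sufficient statistic T = sum(data)
t, n = sum(data), len(data)
a_T, b_T = a0 + t, b0 + n - t

print((a, b), (a_T, b_T))  # identical posteriors: (7.0, 8.0) (7.0, 8.0)
```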
|
9,064
|
Sufficient statistics for layman
|
Say you have a coin, and you don't know whether it's fair or not. In other words, it has probability $p$ of coming up heads ($H$) and $1 - p$ of coming up tails ($T$), and you don't know the value of $p$.
You try to get an idea of the value of $p$ by tossing the coin several times, say $n$ times.
Let's say $n = 5$ and the outcome you happen to get is the sequence $(H, H, T, H, T)$.
Now you want your statistician friend to estimate the value of $p$ for you, and perhaps tell you if the coin is likely to be fair or not. What information do you need to tell them so that they can do their calculations and make their conclusions?
You could tell them all of the data, i.e. $(H, H, T, H, T)$. Is this necessary though? Could you summarise this data without losing any relevant information?
It is clear that the order of the coin tosses is irrelevant, because you were doing the same thing for each coin toss, and the coin tosses didn't influence each other. If the outcome were $(H, H, T, T, H)$ instead, for example, our conclusions won't be any different. It follows that all you really need to tell your statistician friend is the count of how many heads there were.
We express this by saying the number of heads is a sufficient statistic for $p$.
This example gives the flavour of the concept. Read on if you'd like to see how it connects with the formal definition.
Formally, a statistic is sufficient for a parameter if, given the value of the statistic, the probability distribution of the outcomes doesn't involve the parameter.
In this example, before we know the number of heads, the probability of any particular outcome sequence is $p^{\text{number of heads}}(1 - p)^{n - \text{number of heads}}$. Obviously this depends on $p$.
But once we know that the number of heads is 3 (or any other value), all the outcomes with 3 heads ($(H, H, T, H, T)$, $(H, H, T, T, H)$, $...$) are equally likely (in fact there are ten possibilities so they all have probability $1/10$). So the conditional distribution of the outcomes no longer has anything to do with $p$. Intuitively this means whichever specific outcome we observe won't tell us any more information about $p$, because the outcomes aren't affected by $p$.
As an aside, note that the probability before we know the number of heads only depends on $p$ through the $\text{number of heads}$. It turns out that this is equivalent to the $\text{number of heads}$ being sufficient for $p$.
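The coin-toss example above can be checked numerically. The sketch below (a minimal illustration, not part of the original answer) enumerates every length-5 sequence with 3 heads and shows that the conditional probability of each one, given the count of heads, is $1/10$ no matter what $p$ is:

```python
from itertools import product

def conditional_probs(p, n=5, k=3):
    """P(sequence | number of heads = k) for every length-n H/T sequence with k heads."""
    seqs = [s for s in product("HT", repeat=n) if s.count("H") == k]
    # each sequence with exactly k heads has the same unconditional probability
    probs = [p ** k * (1 - p) ** (n - k) for _ in seqs]
    total = sum(probs)
    return [q / total for q in probs]

# The conditional distribution is uniform (1/10 each) whatever p is:
for p in (0.2, 0.5, 0.9):
    assert all(abs(q - 0.1) < 1e-12 for q in conditional_probs(p))
```

Because the parameter cancels out of the conditional distribution, observing the full sequence adds nothing beyond the head count — which is exactly the formal definition of sufficiency.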
|
9,065
|
What is the definition of Top-n accuracy?
|
In top-5 accuracy you give yourself credit for having the right answer if the right answer appears in your top five guesses.
|
9,066
|
What is the definition of Top-n accuracy?
|
I found this explanation by one Nathan Yan on Quora
Top-N accuracy means that the correct class gets to be in the Top-N probabilities for it to count as “correct”. As an example, suppose I have a data set of images, and the images are of a:
Dog
Cat
Dog
Bird
Cat
Cat
Mouse
Penguin
For each of these input images, the model will predict a corresponding class.
Input image: Dog -- Predicted class: Dog ✔
Input image: Cat -- Predicted class: Bird ✘
Input image: Dog -- Predicted class: Dog ✔
Input image: Bird -- Predicted class: Bird ✔
Input image: Cat -- Predicted class: Cat ✔
Input image: Cat -- Predicted class: Cat ✔
Input image: Mouse -- Predicted class: Penguin ✘
Input image: Penguin -- Predicted class: Dog ✘
The Top-1 accuracy for this is (5 correct out of 8), 62.5%. Now suppose I also list the rest of the classes the model predicted, in descending order of their probabilities (the further right the class appears, the less likely the model thinks the image is that class)
- Dog => [Dog, Cat, Bird, Mouse, Penguin]
- Cat => [Bird, Mouse, Cat, Penguin, Dog]
- Dog => [Dog, Cat, Bird, Penguin, Mouse]
- Bird => [Bird, Cat, Mouse, Penguin, Dog]
- Cat => [Cat, Bird, Mouse, Dog, Penguin]
- Cat => [Cat, Mouse, Dog, Penguin, Bird]
- Mouse => [Penguin, Mouse, Cat, Dog, Bird]
- Penguin => [Dog, Mouse, Penguin, Cat, Bird]
If we take the top-3 accuracy for this, the correct class only needs to be in the top three predicted classes to count. As a result, despite the model not perfectly getting every problem, its top-3 accuracy is 100%!
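The calculation described above is easy to express in code. Here is a small sketch (my own illustration, using the example rankings from the quote) that computes top-$k$ accuracy from ranked prediction lists:

```python
def top_k_accuracy(true_labels, ranked_predictions, k):
    """Fraction of examples whose true label appears in the top k ranked predictions."""
    hits = sum(truth in ranked[:k]
               for truth, ranked in zip(true_labels, ranked_predictions))
    return hits / len(true_labels)

truth = ["Dog", "Cat", "Dog", "Bird", "Cat", "Cat", "Mouse", "Penguin"]
ranked = [
    ["Dog", "Cat", "Bird", "Mouse", "Penguin"],
    ["Bird", "Mouse", "Cat", "Penguin", "Dog"],
    ["Dog", "Cat", "Bird", "Penguin", "Mouse"],
    ["Bird", "Cat", "Mouse", "Penguin", "Dog"],
    ["Cat", "Bird", "Mouse", "Dog", "Penguin"],
    ["Cat", "Mouse", "Dog", "Penguin", "Bird"],
    ["Penguin", "Mouse", "Cat", "Dog", "Bird"],
    ["Dog", "Mouse", "Penguin", "Cat", "Bird"],
]
print(top_k_accuracy(truth, ranked, 1))  # 0.625
print(top_k_accuracy(truth, ranked, 3))  # 1.0
```

This reproduces the numbers in the example: 5/8 = 62.5% at top-1, and 100% at top-3.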
|
9,067
|
Time taken to hit a pattern of heads and tails in a series of coin-tosses
|
Think about what happens the first time you get an H followed by a T.
Case 1: you're looking for H-T-H, and you've seen H-T for the first time. If the next toss is H, you're done. If it's T, you're back to square one: since the last two tosses were T-T you now need the full H-T-H.
Case 2: you're looking for H-T-T, and you've seen H-T for the first time. If the next toss is T, you're done. If it's H, this is clearly a setback; however, it's a minor one since you now have the H and only need -T-T. If the next toss is H, this makes your situation no worse, whereas T makes it better, and so on.
Put another way, in case 2 the first H that you see takes you 1/3 of the way, and from that point on you never have to start from scratch. This is not true in case 1, where a T-T erases all progress you've made.
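The difference is easy to see by simulation. This quick sketch (my addition, not part of the original argument) estimates the mean waiting time for each pattern; the theoretical values are 10 tosses for HTH and 8 for HTT:

```python
import random

def waiting_time(pattern, rng):
    """Flip a fair coin until the sequence so far ends with the target pattern."""
    recent, tosses = "", 0
    while not recent.endswith(pattern):
        recent = (recent + rng.choice("HT"))[-len(pattern):]
        tosses += 1
    return tosses

rng = random.Random(0)  # fixed seed so the run is reproducible
n = 20000
means = {pat: sum(waiting_time(pat, rng) for _ in range(n)) / n
         for pat in ("HTH", "HTT")}
print(means)  # roughly {'HTH': 10.0, 'HTT': 8.0}
```

The roughly two-toss gap is exactly the cost of the T-T setback described in case 1.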
|
9,068
|
Time taken to hit a pattern of heads and tails in a series of coin-tosses
|
I like to draw pictures.
These diagrams are finite state automata (FSAs). They are tiny children's games (like Chutes and Ladders) that "recognize" or "accept" the HTT and HTH sequences, respectively, by moving a token from one node to another in response to the coin flips. The token begins at the top node, pointed to by an arrow (line i). After each toss of the coin, the token is moved along the edge labeled with that coin's outcome (either H or T) to another node (which I will call the "H node" and "T node," respectively). When the token lands on a terminal node (no outgoing arrows, indicated in green) the game is over and the FSA has accepted the sequence.
Think of each FSA as progressing vertically down a linear track. Tossing the "right" sequence of heads and tails causes the token to progress towards its destination. Tossing a "wrong" value causes the token to back up (or at least stand still). The token backs up to the most advanced state corresponding to the most recent tosses. For instance, the HTT FSA at line ii stays put at line ii upon seeing a head, because that head could be the initial sequence of an eventual HTT. It does not go all the way back to the beginning, because that would effectively ignore this last head altogether.
After verifying that these two games indeed correspond to HTT and HTH as claimed, and comparing them line by line, it should now be obvious that HTH is harder to win. They differ in their graphical structure only on line iii, where an H takes HTT back to line ii (and a T accepts) but, in HTH, a T takes us all the way back to line i (and an H accepts). The penalty at line iii in playing HTH is more severe than the penalty in playing HTT.
This can be quantified. I have labeled the nodes of these two FSAs with the expected number of tosses needed for acceptance. Let us call these the node "values." The labeling begins by
(1) writing the obvious value of 0 at the accepting nodes.
Let the probability of heads be p(H) and the probability of tails be 1 - p(H) = p(T). (For a fair coin, both probabilities equal 1/2.) Because each coin flip adds one to the number of tosses,
(2) the value of a node equals one plus p(H) times the value of the H node plus p(T) times the value of the T node.
These rules determine the values. It's a quick and informative exercise to verify that the labeled values (assuming a fair coin) are correct. As an example, consider the value for HTH on line ii. The rule says 8 must be 1 more than the average of 8 (the value of the H node on line i) and 6 (the value of the T node on line iii): sure enough, 8 = 1 + (1/2)*8 + (1/2)*6. You can just as readily check the remaining five values in the illustration.
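Rules (1) and (2) can be checked mechanically. The sketch below (my own illustration of the described procedure, not code from the answer) builds the FSA states as the proper prefixes of the pattern, applies the back-up rule ("retreat to the most advanced state matching the recent tosses"), and solves the value equations by iterating rule (2) to a fixed point:

```python
def step(state, c, pattern):
    """Next state after tossing c; None means the pattern is accepted."""
    s = state + c
    if s == pattern:
        return None
    # back up to the longest suffix of the recent tosses that is a prefix of the pattern
    while s and not pattern.startswith(s):
        s = s[1:]
    return s

def expected_tosses(pattern, iters=2000):
    """Expected tosses to acceptance from the start state, for a fair coin."""
    states = [pattern[:i] for i in range(len(pattern))]
    E = {s: 0.0 for s in states}
    for _ in range(iters):
        E = {s: 1.0 + sum(0.5 * E[nxt]
                          for c in "HT"
                          if (nxt := step(s, c, pattern)) is not None)
             for s in states}
    return E[""]

print(round(expected_tosses("HTH")), round(expected_tosses("HTT")))  # 10 8
```

This reproduces the labeled values: from the start state, HTH takes 10 expected tosses and HTT takes 8, with intermediate values 8 and 6 on lines ii and iii of the HTH game, exactly as in the illustration.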
|
9,069
|
Time taken to hit a pattern of heads and tails in a series of coin-tosses
|
Suppose you toss the coin $8n+2$ times and count the number of times you see a "HTH" pattern (including overlaps). The expected number is $n$. But it is also $n$ for "HTT". Since $HTH$ can overlap itself and "HTT" cannot, you would expect more clumping with "HTH", which increases the expected time for the first appearance of $HTH$.
Another way of looking at it is that after reaching "HT", a "T" will send "HTH" back to the start, while an "H" will start progress to a possible "HTT".
You can work out the two expected times using Conway's algorithm [I think], by looking at the overlaps: if the first $k$ tosses of the pattern match the last $k$, then add $2^k$. So for "HTH" you get $2+0+8=10$ as the expectation and for "HTT" you get $0+0+8=8$, confirming your simulation.
The oddness does not stop there. If you have a race between the two patterns, they have an equal probability of appearing first, and the expected time until one of them appears is $5$ (one more than expected time to get "HT", after which one of them must appear).
It gets worse: in Penney's game you choose a pattern to race and then I choose another. If you choose "HTH" then I will choose "HHT" and have 2:1 odds of winning; if you choose "HTT" then I will choose "HHT" again and still have 2:1 odds in my favour. But if you choose "HHT" then I will choose "THH" and have 3:1 odds. The second player can always bias the odds, and the best choices are not transitive.
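The overlap rule mentioned above is a one-liner. This sketch (my addition; attribution of the algorithm to Conway follows the hedge in the answer) sums $2^k$ over every $k$ for which the length-$k$ prefix of the pattern equals its length-$k$ suffix:

```python
def conway_expected_time(pattern, sides=2):
    """Expected tosses until the pattern first appears, via the overlap sum."""
    n = len(pattern)
    return sum(sides ** k
               for k in range(1, n + 1)
               if pattern[:k] == pattern[n - k:])

print(conway_expected_time("HTH"))  # 2 + 8 = 10
print(conway_expected_time("HTT"))  # 8
print(conway_expected_time("HH"))   # 2 + 4 = 6
```

For HTH, the length-1 prefix "H" matches the length-1 suffix "H" (contributing 2) and the whole pattern trivially matches itself (contributing 8), giving 10; HTT has only the trivial overlap, giving 8.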
|
9,070
|
Time taken to hit a pattern of heads and tails in a series of coin-tosses
|
Some great answers. I'd like to take a slightly different tack, and address the question of counter-intuitivity. (I quite agree, BTW)
Here's how I make sense of it. Imagine a column of random sequential coin-toss results printed on a paper tape, consisting of the letters "H" and "T".
Arbitrarily tear off a section of this tape, and make an identical copy.
On a given tape, the sequence HTH and the sequence HTT will each occur as often, if the tape is long enough.
But occasionally the HTH instances will run together, ie HTHTH. (or even very occasionally HTHTHTH)
This overlap cannot happen with HTT instances.
Use a highlighter to pick out the "stripes" of successful outcomes, HTH on one tape and HTT on the other. A few of the HTH stripes will be shorter due to the overlap. Consequently the gaps between them, on average, will be slightly longer than on the other tape.
It's a bit like waiting for a bus, where on average there's one every five minutes. If the buses are allowed to overlap each other, the interval will be slightly longer than five minutes, on average, because sometimes two will go past together.
If you arrive at an arbitrary time, you'll be waiting slightly longer for the next (to you, first) bus, on average, if they're allowed to overlap.
|
9,071
|
Time taken to hit a pattern of heads and tails in a series of coin-tosses
|
I was looking for the intuition to this in the integer case (as I'm slogging through Ross' Intro. to Probability Models), and I found the following helped:
Let $A$ be the symbol needed to begin the pattern I'm waiting for.
Let $B$ be the symbol needed to complete the pattern I'm waiting for.
In the case of an overlap, $A = B$ (roughly speaking), and so $P(A \cap \tilde{B}) = 0$.
Whereas in the case of no overlap, $A \ne B$, and so $P(A \cap \tilde{B}) > 0$.
So, let me imagine that I have a chance to finish the pattern on the next draw. I draw the next symbol and it doesn't finish the pattern. In the case my pattern doesn't overlap, the symbol drawn might still allow me to begin building the pattern from the beginning again.
In the case of an overlap, the symbol I needed to finish my partial pattern was the same as the symbol I would need to start rebuilding. So I can't do either, and therefore will definitely need to wait until the next draw for a chance to start building again.
|
9,072
|
Who is Gail Gasram?
|
It looks like "Gail Gasram" is "Marsaglia G" (George Marsaglia's surname and first initial) spelled backwards.
|
9,073
|
Who is Gail Gasram?
|
Diehard Code
After some extensive digging, it appears that Gail Gasram participated in developing the Diehard code, a suite of programs for testing random number generators. Furthermore, the project was developed at Florida State University, supported by a grant from the U.S. National Science Foundation. The exact contribution of Gasram to the development of the Diehard code is unclear, as Peterson (2003) noted in his book that Diehard was developed by Gasram, but Sidorenko (2007) wrote that the well-known Diehard battery was, in fact, developed by Marsaglia. Yet, in another report it is written: "Gail Gasram wrote a serious of computer functions for various operating systems to test random numbers" [sic]. Despite some conflicting information, it is most likely that Gasram was involved in the Diehard project.
Fun facts
Indeed, just googling Gail Gasram provides almost no information, except for this quote. I managed to get onto the right trail after using a foreign language and uncommon search engines. If anyone provides more information on Gail Gasram, it would be interesting to learn.
|
9,074
|
Estimating a distribution based on three percentiles
|
Using a purely statistical method to do this work will provide absolutely no additional information about the distribution of school spending: the result will merely reflect an arbitrary choice of algorithm.
You need more data.
This is easy to come by: use data from previous years, from comparable districts, whatever. For example, federal spending on 14866 school districts in 2008 is available from the Census site. It shows that across the country, total per-capita (enrolled) federal revenues were approximately lognormally distributed, but breaking it down by state shows substantial variation (e.g., log spending in Alaska has negative skew while log spending in Colorado has strong positive skew). Use those data to characterize the likely form of distribution and then fit your quantiles to that form.
If you're even close to the right distributional form, then you should be able to reproduce the quantiles accurately by fitting one or at most two parameters. The best technique for finding the fit will depend on what distributional form you use, but--far more importantly--it will depend on what you intend to use the results for. Do you need to estimate an average spending amount? Upper and lower limits on spending? Whatever it is, you want to adopt some measure of goodness of fit that will give you the best chance of making good decisions with your results. For example, if your interest is focused in the upper 10% of all spending, you will want to fit the 95th percentile accurately and you might care little about fitting the 5th percentile. No sophisticated fitting technique will make these considerations for you.
Of course no one can legitimately guarantee that this data-informed, decision-oriented method will perform any better (or any worse) than some statistical recipe, but--unlike a purely statistical approach--this method has a basis grounded in reality, with a focus on your needs, giving it some credibility and defense against criticism.
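To make the "fit your quantiles to that form" step concrete: if the data suggest a lognormal, then the log of each quantile value is linear in the corresponding standard normal quantile, so two parameters can be fit by least squares. A minimal sketch, using entirely hypothetical per-pupil spending percentiles (the numbers are mine, for illustration only):

```python
from math import exp, log
from statistics import NormalDist

def fit_lognormal_to_quantiles(quantiles):
    """Least-squares fit of log(value) = mu + sigma * z_p over {probability: value} pairs."""
    z = [NormalDist().inv_cdf(p) for p in quantiles]   # standard normal quantiles
    y = [log(v) for v in quantiles.values()]           # log of the observed quantile values
    n = len(z)
    zbar, ybar = sum(z) / n, sum(y) / n
    sigma = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
             / sum((zi - zbar) ** 2 for zi in z))
    mu = ybar - sigma * zbar
    return mu, sigma

# Hypothetical 5th/50th/95th percentiles of spending per pupil:
mu, sigma = fit_lognormal_to_quantiles({0.05: 800, 0.50: 1500, 0.95: 3200})
print(round(exp(mu)), round(sigma, 3))  # fitted median and log-scale sd
```

With three quantiles and two parameters the fit is overdetermined, which is useful: a large residual on one quantile is itself evidence that the assumed distributional form is wrong, and a weighted version of the same regression lets you prioritize whichever tail your decision depends on.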
|
9,075
|
Estimating a distribution based on three percentiles
|
As @whuber pointed out, statistical methods do not exactly work here. You need to infer the distribution from other sources. When you know the distribution you have a non-linear equation solving exercise. Denote by $f$ the quantile function of your chosen probability distribution with parameter vector $\theta$. What you have is the following nonlinear system of equations:
\begin{align*}
q_{0.05}&=f(0.05,\theta) \\
q_{0.5}&=f(0.5,\theta) \\
q_{0.95}&=f(0.95,\theta)
\end{align*}
where $q$ are your quantiles. You need to solve this system to find $\theta$. For practically any 3-parameter distribution you will find parameter values satisfying these three equations. For 2-parameter and 1-parameter distributions the system is overdetermined, so in general there are no exact solutions. In this case you can search for a set of parameters which minimizes the discrepancy:
\begin{align*}
(q_{0.05}-f(0.05,\theta))^2 + (q_{0.5}-f(0.5,\theta))^2 + (q_{0.95}-f(0.95,\theta))^2
\end{align*}
Here I chose the quadratic function, but you can choose whatever you want. Following @whuber's comments, you can also assign weights, so that more important quantiles are fitted more accurately.
For four or more parameters the system is underdetermined, so an infinite number of solutions exists.
Here is some sample R code illustrating this approach. For demonstration purposes I generate the quantiles from the Singh-Maddala distribution in the VGAM package. This distribution has 3 parameters and is used in income distribution modelling.
library(VGAM)  # provides qsinmad/dsinmad
q <- qsinmad(c(0.05, 0.5, 0.95), 2, 1, 4)
plot(x <- seq(0, 2, by = 0.01), dsinmad(x, 2, 1, 4), type = "l")
points(p <- c(0.05, 0.5, 0.95), dsinmad(p, 2, 1, 4))
Now form the function which evaluates the non-linear system of equations:
fn <- function(x,q) q-qsinmad(c(0.05, 0.5, 0.95), x[1], x[2], x[3])
Check whether true values satisfy the equation:
> fn(c(2,1,4),q)
[1] 0 0 0
For solving the non-linear equation system, I use the function nleqslv from package nleqslv.
> sol <- nleqslv(c(2.4,1.5,4.3),fn,q=q)
> sol$x
[1] 2.000000 1.000000 4.000001
As we see we get the exact solution. Now let us try to fit a log-normal distribution to these quantiles. For this we will use the optim function.
> ofn <- function(x,q)sum(abs(q-qlnorm(c(0.05,0.5,0.95),x[1],x[2]))^2)
> osol <- optim(c(1,1),ofn)
> osol$par
[1] -0.905049 0.586334
Now plot the result
plot(x,dlnorm(x,osol$par[1],osol$par[2]),type="l",col=2)
lines(x,dsinmad(x,2,1,4))
points(p,dsinmad(p,2,1,4))
From this we immediately see that the log-normal fit obtained with the quadratic criterion is not so good.
Hope this helps.
|
9,076
|
Estimating a distribution based on three percentiles
|
Try the rriskDistributions package, and -- if you are sure about the lognormal distribution family -- use the command
get.lnorm.par(p=c(0.05,0.5,0.95),q=c(8.135,11.259,23.611))
which should solve your problem. Use fit.perc instead if you do not want to restrict to one known pdf.
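If installing a package feels like overkill, the lognormal case also has a direct base-R solution. This sketch matches the median and the 95th percentile exactly (a two-parameter family cannot hit all three quantiles at once, so something has to give):

```r
q <- c(8.135, 11.259, 23.611)                     # 5th, 50th, 95th percentiles
meanlog <- log(q[2])                              # hit the median exactly
sdlog   <- (log(q[3]) - log(q[2])) / qnorm(0.95)  # hit the 95th exactly
qlnorm(c(0.05, 0.5, 0.95), meanlog, sdlog)
## the implied 5th percentile comes out near 5.4, well below 8.135 --
## a hint that these quantiles are not really lognormal
```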
|
9,077
|
Estimating a distribution based on three percentiles
|
For a lognormal the ratio of the 95th percentile to the median is the same as the ratio of the median to the 5th percentile. That's not even nearly true here so lognormal wouldn't be a good fit.
You have enough information to fit a distribution with three parameters, and you clearly need a skew distribution. For analytical simplicity, I'd suggest the shifted log-logistic distribution, as its quantile function (i.e. the inverse of its cumulative distribution function) can be written in a reasonably simple closed form, so you should be able to get closed-form expressions for its three parameters in terms of your three quantiles with a bit of algebra (I'll leave that as an exercise!). This distribution is used in flood frequency analysis.
This isn't going to give you any indication of the uncertainty in the estimates of the other quantiles though. I don't know if you need that, but as a statistician I feel I should be able to provide it, so I'm not really satisfied with this answer. I certainly wouldn't use this method, or probably any method, to extrapolate (much) outside the range of the 5th to 95th percentiles.
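For what it's worth, the exercise works out cleanly under one common parameterization of the shifted log-logistic, $Q(p) = \xi + \frac{\alpha}{\kappa}\left[1 - \left(\frac{1-p}{p}\right)^{\kappa}\right]$ (other parameterizations exist, so treat this as a sketch): the median gives $\xi$ directly, and the ratio of the upper half-spread to the lower half-spread equals $19^{-\kappa}$, from which $\kappa$ and then $\alpha$ follow. In R, with the question's quantiles:

```r
q <- c(8.135, 11.259, 23.611)                            # 5th, 50th, 95th percentiles
kappa <- -log((q[3] - q[2]) / (q[2] - q[1])) / log(19)   # 19 = 0.95/0.05
alpha <- kappa * (q[3] - q[2]) / (1 - 19^(-kappa))       # (symmetric data, kappa -> 0, needs the logistic limit)
xi    <- q[2]                                            # the median is the location parameter
Qsll  <- function(p) xi + (alpha / kappa) * (1 - ((1 - p) / p)^kappa)
Qsll(c(0.05, 0.5, 0.95))                                 # reproduces all three quantiles
```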
|
9,078
|
Estimating a distribution based on three percentiles
|
About the only thing you can infer from the data is that the distribution is nonsymmetric. You can't even tell whether those quantiles came from a fitted distribution or just the ecdf.
If they came from a fitted distribution, you could try all the distributions you can think of and see if any match. If not, there's not nearly enough information. You could interpolate a 2nd degree polynomial or a 3rd degree spline for the quantile function and use that, or come up with a theory as to the distribution family and match quantiles, but any inferences you would make with these methods would be deeply suspect.
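The spline variant of that idea can be sketched in a few lines of R (quantile values borrowed from the question for illustration); `method = "hyman"` keeps the interpolant monotone, which any quantile function must be:

```r
p <- c(0.05, 0.5, 0.95)
q <- c(8.135, 11.259, 23.611)
Qhat <- splinefun(p, q, method = "hyman")  # monotone cubic interpolant through the 3 points
Qhat(c(0.25, 0.75))                        # e.g. interpolated quartiles
```

Anything outside [0.05, 0.95] is extrapolation and, as said above, deeply suspect.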
|
9,079
|
Estimating a distribution based on three percentiles
|
The use of quantiles to estimate parameters of a priori distributions is discussed in the literature on human response time measurement as "quantile maximum probability estimation" (QMPE, though originally erroneously dubbed "quantile maximum likelihood estimation", QMLE), discussed at length by Heathcote and colleagues. You could fit a number of different a priori distributions (ex-Gaussian, shifted Lognormal, Wald, and Weibull) then compare the sum log likelihoods of the resulting best fits for each distribution to find the distribution flavor that seems to yield the best fit.
|
9,080
|
Estimating a distribution based on three percentiles
|
You can use your percentile information to simulate the data in some way and use the R package "logspline" to estimate the distribution nonparametrically. Below is my function that employs a method like this.
calc.dist.from.median.and.range <- function(m, r)
{
## PURPOSE: Return a Log-Logspline Distribution given (m, r).
## It may be necessary to call this function multiple times in order to get a satisfying distribution (from the plot).
## ----------------------------------------------------------------------
## ARGUMENT:
## m: Median
## r: Range (a vector of two numbers)
## ----------------------------------------------------------------------
## RETURN: A log-logspline distribution object.
## ----------------------------------------------------------------------
## AUTHOR: Feiming Chen, Date: 10 Feb 2016, 10:35
if (m < r[1] || m > r[2] || r[1] > r[2]) stop("Misspecified Median and Range")
mu <- log10(m)
log.r <- log10(r)
## Simulate data that will have median of "mu" and range of "log.r"
## Distribution on the Left/Right: Simulate a Normal Distribution centered at "mu" and truncate the part above/below the "mu".
## May keep sample size intentionally small so as to introduce uncertainty about the distribution.
d1 <- rnorm(n=200, mean=mu, sd=(mu - log.r[1])/3) # Assumes 3*SD informs the bound
d2 <- d1[d1 < mu] # Simulated Data to the Left of "mu"
d3 <- rnorm(n=200, mean=mu, sd=(log.r[2] - mu)/3)
d4 <- d3[d3 > mu] # Simulated Data to the Right of "mu"
d5 <- c(d2, d4) # Combined Simulated Data for the unknown distribution
require(logspline)
ans <- logspline(x=d5)
plot(ans)
return(ans)
}
if (F) { # Unit Test
calc.dist.from.median.and.range(m=1e10, r=c(3.6e5, 3.1e12))
my.dist <- calc.dist.from.median.and.range(m=1e7, r=c(7e2, 3e11))
dlogspline(log10(c(7e2, 1e7, 3e11)), my.dist) # Density
plogspline(log10(c(7e2, 1e7, 3e11)), my.dist) # Probability
10^qlogspline(c(0.05, 0.5, 0.95), my.dist) # Quantiles
10^rlogspline(10, my.dist) # Random Sample
}
|
9,081
|
R: Random Forest throwing NaN/Inf in "foreign function call" error despite no NaN's in dataset [closed]
|
There must be some features in your training set with class "character".
Please check this:
> a <- c("1", "2",letters[1:5], "3")
> as.numeric(a)
[1] 1 2 NA NA NA NA NA 3
Warning message:
NAs introduced by coercion
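A quick way to locate the offending columns before calling randomForest (a toy `df` stands in for your data frame):

```r
df <- data.frame(a = c("1", "2"), b = c(1, 2), stringsAsFactors = FALSE)
names(df)[sapply(df, is.character)]   # columns that need converting
# [1] "a"
```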
|
9,082
|
R: Random Forest throwing NaN/Inf in "foreign function call" error despite no NaN's in dataset [closed]
|
Probably the cause is that you have some character variables in your data frame.
Convert all character variables into factors in one line:
library(dplyr)
data_fac=data_char %>% mutate_if(is.character, as.factor)
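The same conversion in base R, for anyone avoiding the dplyr dependency (a toy data frame for illustration):

```r
df <- data.frame(x = c("a", "b"), y = 1:2, stringsAsFactors = FALSE)
df[] <- lapply(df, function(col) if (is.character(col)) factor(col) else col)
sapply(df, class)   # x is now a factor, y is untouched
```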
|
9,083
|
R: Random Forest throwing NaN/Inf in "foreign function call" error despite no NaN's in dataset [closed]
|
As shown in the warning, there were 28 errors, which happened to match the number of columns with character data types ("chr"). Coercing these columns to factors allowed the run to proceed.
|
9,084
|
If teachers account for 30% of variance of student achievement, can a teacher have 30% increase in achievement by teaching better?
|
You are right in suspecting that your professor misunderstood.
The correct answer is that we cannot say anything whatsoever about the percentage improvement in student achievement driven by teacher expertise. Nothing at all.
Why is this so? The quote is in terms of variance explained. Variance explained has nothing to do with the actual values on which the scales are measured - which any percentage improvement in student achievement would be accounted in. The two are completely separate.
Let's look at an example. Here is some simulated data:
R code:
nn <- 1e2
set.seed(1) # for reproducibility
teaching_expertise <- runif(nn)
student_achievement <- 5+0.1*teaching_expertise+rnorm(nn,0,0.05)
model <- lm(student_achievement~teaching_expertise)
plot(teaching_expertise,student_achievement,pch=19,las=1,
xlab="Teaching Expertise",ylab="Student Achievement")
abline(model,col="red")
Note that the model is correctly specified: student achievement depends linearly on teaching expertise, and that is what I am modeling. No cheap tricks here.
We have $R^2=0.30$, so teaching expertise indeed accounts for 30% of the variance in student achievement:
> summary(model)
Call:
lm(formula = student_achievement ~ teaching_expertise)
... snip ...
Multiple R-squared: 0.304, Adjusted R-squared: 0.2969
However, here is the student achievement we would predict for teachers at the very bottom (teaching expertise of 0) vs. at the very top of the range (1):
> (foo <- predict(model,newdata=data.frame(teaching_expertise=c(0,1))))
1 2
4.991034 5.106651
The improvement is on the order of $\frac{5.11-4.99}{4.99}\approx 2.4\%$.
> diff(foo)/foo[1]
2
0.02316497
(Plus, this is expected achievement. Actual achievement will be different. With regression to the mean typically being stronger at the extremes, the actual difference will be even smaller.)
And you know what? We could change this percentage change to pretty much any number we want. Even a negative percentage improvement! How? Simply by changing that single innocuous number 5 in the data simulation above, i.e., the intercept.
What's going on? Variance explained measures the amount by which the (sum of squared) residuals are reduced by a model, i.e., the difference between the residuals to the regression line and the residuals to the overall average. By changing the intercept (the 5), we can shift everything up and down. Including the overall average. So changing the intercept will leave variance explained completely unchanged. (If you have R, try this. We'll wait.)
However, shifting everything up and down will change the concrete scores. In particular the percentage improvement of a "good" vs. a "bad" teacher. If we shift everything down far enough, we get a negative student achievement for the "bad" teacher. And a positive change for the "good" teacher against a negative baseline will give you a negative percentage improvement. (Again, try this. An intercept of -1 works.)
Yes, of course such negative percentage improvements make no sense here. This is just an illustration of the fact that there is zero relationship between variance explained and percentage improvement in measurements.
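For the impatient, here is the "try this" experiment in runnable form (the same simulation as above, with three different intercepts): the $R^2$ is identical in every case, while the percentage improvement swings from small to large to negative.

```r
set.seed(1)                      # same simulation as above
x <- runif(100)                  # teaching expertise
noise <- rnorm(100, 0, 0.05)
res <- sapply(c(5, 0.5, -1), function(b0) {
  y <- b0 + 0.1 * x + noise      # only the intercept changes
  m <- lm(y ~ x)
  f <- predict(m, newdata = data.frame(x = c(0, 1)))
  c(r.squared = summary(m)$r.squared,
    pct.improvement = as.numeric(100 * diff(f) / f[1]))
})
colnames(res) <- c("b0=5", "b0=0.5", "b0=-1")
round(res, 3)   # identical R^2 in every column; % improvement swings wildly
```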
|
9,085
|
If teachers account for 30% of variance of student achievement, can a teacher have 30% increase in achievement by teaching better?
|
The Hattie 2003 paper mentions a simple form of hierarchical linear modelling ignoring interactions. The paper’s description of the 30% isn’t particularly thorough, with broken links in the references making it difficult to see where the number even came from. I assume his approach relied on partial R-squared.
The answer is no, going from a bad teacher to a good teacher can’t be expected to increase performance by 30%. The two 30%'s are measured completely differently.
For example, suppose performance followed this equation:$$\text{performance} = \beta_0 + \beta_1 ~\text{studentEffort} + \beta_2 ~\text{teacherEffort} + \text{noise}$$If the $\beta_2$ is small, the performance graph would be nearly flat as teacherEffort changed. This can happen no matter what the $R^2$ is or how it might divide up into partial $R^2$'s.
In other words, saying that teacherEffort accounts for 30% of the variation doesn't tell you what that variation is over the dataset, i.e. how much performance changes.
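A small simulation makes the distinction concrete. The coefficients below are invented purely for illustration; teacherEffort is arranged to account for roughly 30% of the variance in both scenarios, yet its absolute effect on performance differs a hundredfold:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
student = rng.normal(0, 1, n)
teacher = rng.normal(0, 1, n)
noise = rng.normal(0, 1, n)

b1, b2, s = np.sqrt(0.4), np.sqrt(0.3), np.sqrt(0.3)   # variance shares 40/30/30
for scale in (1.0, 0.01):
    performance = 50 + scale * (b1 * student + b2 * teacher + s * noise)
    share = np.var(scale * b2 * teacher) / np.var(performance)
    swing = scale * b2 * 2                              # teacherEffort moved by +/- 1 sd
    print(f"scale {scale}: teacher variance share = {share:.0%}, "
          f"2-sd teacher change moves performance by {swing:.3f} points")
```

Both runs report a ~30% variance share, but the change in performance units shrinks by a factor of 100; the variance share alone says nothing about the size of the change.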
|
9,086
|
If teachers account for 30% of variance of student achievement, can a teacher have 30% increase in achievement by teaching better?
|
You write '"Teaching expertise accounts for about 30 percent of the variance in student achievement" means that a teacher is responsible for 30% of what a student achieves.'
A better formulation would be "A teacher is responsible for 30% of the difference in performance between students".
In other words, if the average performance of some group of students with teacher A is 80 points and the average performance of another group with teacher B is 70 points, the performance of teachers A and B can account for around 3 points (30% of the 10-point difference in performance).
|
9,087
|
Why do researchers use 10-fold cross validation instead of testing on a validation set?
|
This is not a problem if the CV is nested, i.e. all optimisations, feature selections and model selections, whether they themselves use CV or not, are wrapped in one big CV.
How does this compare to having an extra validation set? While the validation set is usually just a more or less randomly selected part of the whole data, it is simply the equivalent of one iteration of CV. In this sense, it is actually a worse method, because it can easily be biased by a (hopefully) luckily/unluckily selected or cherry-picked validation set.
The only exception to this are time-series and other data where the object order matters; but they require special treatment either way.
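As a sketch of what "wrapped in one big CV" can look like in practice (scikit-learn, toy data; the model and parameter grid are invented for the example):

```python
# Nested cross-validation: the inner GridSearchCV handles model selection,
# the outer loop provides the performance estimate.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)  # inner CV: tuning
scores = cross_val_score(inner, X, y, cv=10)            # outer CV: evaluation
print(f"nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Because the tuning happens inside each outer fold, the outer score is not contaminated by the model selection.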
|
9,088
|
Why do researchers use 10-fold cross validation instead of testing on a validation set?
|
The main reason is that the k-fold cross-validation estimator has a lower variance than a single hold-out set estimator, which can be very important if the amount of data available is limited. If you have a single hold out set, where 90% of data are used for training and 10% used for testing, the test set is very small, so there will be a lot of variation in the performance estimate for different samples of data, or for different partitions of the data to form training and test sets. k-fold validation reduces this variance by averaging over k different partitions, so the performance estimate is less sensitive to the partitioning of the data. You can go even further by repeated k-fold cross-validation, where the cross-validation is performed using different partitionings of the data to form k sub-sets, and then taking the average over that as well.
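A quick simulation of that variance difference (scikit-learn, synthetic data; the sample sizes are illustrative only). Each seed gives one hold-out estimate on 12 test cases and one 10-fold CV estimate on a re-shuffled partition of the same data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=120, random_state=0)
model = LogisticRegression(max_iter=1000)
holdout, kfold = [], []
for seed in range(30):
    # single 90/10 hold-out: only 12 test cases, so the estimate is noisy
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.1, random_state=seed)
    holdout.append(model.fit(Xtr, ytr).score(Xte, yte))
    # 10-fold CV on the same data: averages over 10 partitions
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    kfold.append(cross_val_score(model, X, y, cv=cv).mean())
print(f"std of hold-out estimates:   {np.std(holdout):.3f}")
print(f"std of 10-fold CV estimates: {np.std(kfold):.3f}")
```

Across re-partitionings, the spread of the single hold-out estimate is typically several times that of the 10-fold average.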
Note however, all steps of the model fitting procedure (model selection, feature selection etc.) must be performed independently in each fold of the cross-validation procedure, or the resulting performance estimate will be optimistically biased.
|
9,089
|
Why do researchers use 10-fold cross validation instead of testing on a validation set?
|
[EDITED in light of the comment]
I think there is a problem if you use CV results to select among multiple models.
CV allows you to use the entire dataset to train and test one model/method, while being able to have a reasonable idea of how well it will generalize. But if you're comparing multiple models, my instinct is that the model comparison uses up the extra level of train-test isolation that CV gives you, so the final result will not be a reasonable estimate of the chosen model's accuracy.
So I'd guess that if you create several models and choose one based on its CV, you're being overly-optimistic about what you've found. Another validation set would be needed to see how well the winner generalizes.
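A toy simulation of that optimism (no real models needed; every "candidate model" below is identical, so the winning CV score is pure selection noise):

```python
import numpy as np

rng = np.random.default_rng(0)
true_acc = 0.70                     # every candidate truly has 70% accuracy
n_models, n_cases = 20, 100
# noisy CV-style estimate for each candidate model
cv_scores = rng.binomial(n_cases, true_acc, n_models) / n_cases
# pick the winner by CV score, then validate it on fresh cases
winner_cv = cv_scores.max()
fresh = rng.binomial(n_cases, true_acc) / n_cases
print(f"winner's CV score: {winner_cv:.2f}")
print(f"winner's score on fresh data: {fresh:.2f}")
```

The winner's CV score is the maximum of 20 noisy estimates and is therefore biased upward, while the fresh-data score stays near the true 0.70.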
|
9,090
|
Why do researchers use 10-fold cross validation instead of testing on a validation set?
|
In my experience, the main reason is usually that you don't have enough samples.
In my field (classification of biological/medical samples), sometimes a test set is kept separate, but often it comprises only few cases. In that case confidence intervals are usually too wide to be of any use.
Another advantage of repeated/iterated cross validation or out-of-bootstrap validation is that you build a bunch of "surrogate" models. These are assumed to be equal. If they are not, the models are unstable. You can actually measure this instability (with respect to exchanging a few training cases) by comparing either the surrogate models themselves or the predictions different surrogate models make for the same case.
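One way to sketch that instability measurement: repeated CV produces many surrogate models, and we can count how often they disagree on the same held-out case (scikit-learn, toy data, a deliberately high-variance learner; all names are invented for the example):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100, random_state=0)
preds = np.full((10, len(y)), -1)                 # rows: CV repetitions
for rep in range(10):
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=rep)
    for train, test in cv.split(X, y):
        surrogate = DecisionTreeClassifier(random_state=0).fit(X[train], y[train])
        preds[rep, test] = surrogate.predict(X[test])
# fraction of cases whose held-out prediction differs between repetitions
instability = (preds.min(axis=0) != preds.max(axis=0)).mean()
print(f"{instability:.0%} of cases get conflicting predictions from surrogate models")
```

A stable modeling procedure would give the same held-out prediction for each case in every repetition; the flip rate is a direct instability diagnostic.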
This paper by Esbensen & Geladi gives a nice discussion of some limitations of cross validation.
You can take care of most of them, but one important point that cannot be tackled by resampling validation is drift, which is related to mbq's point:
The only exception to this are time-series and other data where the object order matters
Drift means that e.g. an instrument's response/true calibration changes slowly over time. So the generalization error for unknown cases may not be the same as for unknown future cases. You arrive at instructions like "redo calibration daily/weekly/..." if you find drift during validation, but this needs test sets systematically acquired later than the training data.
(You could do "special" splits that take into account acquisition time, if your experiment is planned accordingly, but usually this will not cover as much time as you'd want to test for drift detection)
|
9,091
|
Why do researchers use 10-fold cross validation instead of testing on a validation set?
|
Why we should do cross-validation instead of using separate validation set?
Aurélien Géron talks about this in his book
To avoid “wasting” too much training data in validation sets, a common
technique is to use cross-validation.
Why might we prefer k=10 over other values of k in cross-validation?
To answer this, at first, I would like to thank Jason Brownlee, PhD for his great tutorial on k-fold Cross-Validation. I am citing one of his cited book.
Kuhn & Johnson talked about the choice of the k value in their book.
The choice of k is usually 5 or 10, but there is no formal rule. As k
gets larger, the difference in size between the training set and the
resampling subsets gets smaller. As this difference decreases, the
bias of the technique becomes smaller (i.e., the bias is smaller for
k=10 than k= 5). In this context, the bias is the difference between
the estimated and true values of performance
Then, one may ask why we do not use leave-one-out cross-validation (LOOCV), since k is maximal there and thus the bias will be smallest. In that book, they have also discussed why we may prefer 10-fold CV over LOOCV.
From a practical viewpoint, larger values of k are more
computationally burdensome. In the extreme, LOOCV is most
computationally taxing because it requires as many model fits as data
points and each model fit uses a subset that is nearly the same size
of the training set. Molinaro (2005) found that leave-one-out and
k=10-fold cross-validation yielded similar results, indicating that
k= 10 is more attractive from the perspective of computational
efficiency. Also, small values of k, say 2 or 3, have high bias
but are very computationally efficient.
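The computational trade-off in that quote is easy to quantify: k-fold CV requires k model fits, and LOOCV is the k = n extreme. A back-of-the-envelope count for n = 1000:

```python
n = 1000
for k in (2, 5, 10, n):
    fits = k                        # one model fit per fold
    train_size = n * (k - 1) // k   # each fit trains on (k-1)/k of the data
    label = "LOOCV" if k == n else f"k={k}"
    print(f"{label:>6}: {fits:>4} fits, each on {train_size} training cases")
```

So k = 10 costs 10 fits, while LOOCV costs 1000 fits on nearly identical training sets, which is why Molinaro's finding favors k = 10.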
I have read a lot of research papers about sentiment classification and related topics. Most of them use 10-fold cross validation to train and test classifiers. That means that no separate testing/validation is done. Why is that?
If we do not use cross-validation (CV) to select one of multiple models (and do not use CV to tune the hyper-parameters), we do not need a separate test, because the purpose of a separate test is already accomplished within CV (by the held-out fold in each iteration). Different SE threads have discussed this a lot; you may check them.
At the end, feel free to ask me if something I have written is not clear to you.
|
9,092
|
Is PCA always recommended?
|
Blindly using PCA is a recipe for disaster. (As an aside, automatically applying any method is not a good idea, because what works in one context is not guaranteed to work in another. We can formalize this intuitive idea with the No Free Lunch theorem.)
It's easy enough to construct an example where the eigenvectors corresponding to the smallest eigenvalues are the most informative. If you discard these components, you're discarding the most helpful information for your classification or regression problem, and your model would be improved if you had retained them.
More concretely, suppose $A$ is our $n \times p$ design matrix with $n$ observations of $p$ features, and each column is mean-centered. Then we can use SVD to compute the PCA of $A$. (see: Relationship between SVD and PCA. How to use SVD to perform PCA?)
For an example in the case of a linear model, this gives us a factorization
$$
AV = US
$$
and we wish to predict some outcome $y$ as a linear combination of the PCs: $AV\beta = y+\epsilon$ where $\epsilon$ is some noise. Further, let's assume that this linear model is the correct model.
In general, the estimated vector $\hat \beta$ can be anything. In the PCA setting where only the top $k$ components are kept, you are implicitly fixing the $\hat \beta$ coefficients of the $p-k$ discarded components to 0. In other words, even though we started out with the correct model, the truncated model is not correct because it omits the key variables.
In other words, PCA has a weakness in a supervised learning scenario because it is not "$y$-aware." Of course, in the cases where PCA is a helpful step, then $\beta$ will have nonzero entries corresponding to the larger singular values.
I think this example is instructive because it shows that even in the special case that the model is linear, truncating $AV$ risks discarding information.
You can even generate data where the discarded components are essential. Create 2 independent features, one that's completely random, and one that perfectly predicts the outcome, but has a smaller variance. Using PCA & keeping $k=1$ components will fail. Moreover, the smaller the variance of the informative feature, the more pronounced this effect will be.
This illustration comes from this answer https://stats.stackexchange.com/a/80450/22311 with my thanks to Flounderer.
This class implements a simple demonstration. It randomly generates data according to my scheme, and then applies PCA, retaining the desired number of features. Then it tunes an SVM classifier and reports the AUC.
import numpy as np
import scipy.stats as stats
import sklearn.pipeline
from scipy.special import expit
from sklearn.decomposition import PCA
from sklearn.model_selection import RandomizedSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

class PcaSvm(object):
    def __init__(self, seed):
        self.seed = seed
        self.rng = np.random.default_rng(seed)

    def __call__(self, a, k, sample_size=1000):
        # x1 is uninformative & has standard deviation = 1
        x1 = self.rng.standard_normal(sample_size).reshape((-1, 1))
        # x2 is very informative & has standard deviation = a
        x2 = a * self.rng.standard_normal(sample_size).reshape((-1, 1))
        # y strongly depends on x2; some samples will be perfectly separable or nearly so
        y = self.rng.binomial(n=1, p=expit(1e6 * np.sign(x2))).reshape(-1)
        svc_params = {
            "svc__C": stats.loguniform(1e0, 1e3),
            "svc__gamma": stats.loguniform(1e-4, 1e-2),
        }
        clf = sklearn.pipeline.make_pipeline(
            PCA(n_components=k), StandardScaler(), SVC()
        )
        random_search = RandomizedSearchCV(
            clf,
            param_distributions=svc_params,
            n_iter=60,
            scoring="roc_auc",
            random_state=self.seed,
        )
        random_search.fit(np.hstack([x1, x2]), y)
        best_test_auc = random_search.cv_results_["mean_test_score"].max()
        print(
            f"Using a={a}, the best model with k={k} PCA components has an average AUC (on the test set) of {best_test_auc:.4f}"
        )
When PCA only retains 1 feature, the model is somewhere between worthless and mediocre. When retaining 2 features, the model is literally perfect.
| standard deviation of informative feature | number of components retained | AUC |
|---|---|---|
| 0.001 | 1 | 0.5024 |
| 0.1 | 1 | 0.5075 |
| 0.9 | 1 | 0.5197 |
| 1.0 | 1 | 0.7277 |
| 0.001 | 2 | 1.0 |
| 0.1 | 2 | 1.0 |
| 0.9 | 2 | 1.0 |
| 1.0 | 2 | 1.0 |
Other common objections to "always" using PCA include:
PCA is a linear model, but the relationships among features may not have the form of a linear factorization. This implies that PCA will be a distortion.
PCA can be hard to interpret, because it tends to yield "dense" factorizations, where all features in $A$ have nonzero effect on each PC.
We also have a few related threads (thanks, @gung!):
Low variance components in PCA, are they really just noise? Is there any way to test for it?
The first principal component does not separate classes, but other PCs do; how is that possible?
Examples of PCA where PCs with low variance are "useful"
How can top principal components retain the predictive power on a dependent variable (or even lead to better predictions)?
|
9,093
|
Is PCA always recommended?
|
First of all, blindly throwing a model at some data cannot possibly be recommended (you may be able to relax that no-no if you have an infinite amount of independent cases at hand...).
There is a formulation of the no-free-lunch theorem that is related to the question: it states that, over all possible data sets, no model is better than any other. The usual conclusion from that is that models are superior iff they are better suited for the particular task at hand (including both what the purpose of the analysis is and the particular characteristics of the data).
So, the more sensible question you should ask yourself is whether your data has characteristics that make it suitable for PCA.
For example, I work mostly with spectroscopic data. This kind of data has properties that align very well with bilinear models such as PCA or PLS, and much less well with a feature selection picking particular measurement channels (wavelengths, features).
In particular, I know for physical and chemical reasons that the information I'm seeking is usually spread out quite "thin" over large regions of the spectrum.
Because of that, I routinely use PCA as exploratory tool, e.g. to check whether there is large variance that is not correlated with the outcome I want to predict/study. And possibly even to have a look whether I can find out what the source of such variance is and then decide how to deal with that. I then decide whether to use PCA as feature reduction - whereas I know from the beginning that feature selection picking particular wavelength is hardly ever appropriate.
Contrast that, say, with gene microarray data where I know beforehand that the information is probably concentrated in a few genes with all other genes carrying noise only. Here, feature selection is needed.
"we might be leaving out features that do not explain much of the variance of the dataset but do explain what characterizes one class against another."
Of course, and in my field (chemometrics) this observation is the textbook trigger for moving on from Principal Component Regression to Partial Least Squares Regression.
|
9,094
|
Is PCA always recommended?
|
Of course not, I don't recall reading/hearing any scientific method's name with the word always, let alone PCA. And, there are many other methods that can be used for dimensionality reduction, e.g. ICA, LDA, various feature selection methods, matrix/tensor factorization techniques, autoencoders ...
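As a small illustration (my sketch, assuming scikit-learn), several of these alternatives share the same fit_transform interface, which makes them easy to try side by side; the data and parameters here are arbitrary:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA, NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.random((100, 10))          # non-negative, so NMF also applies
y = rng.integers(0, 3, size=100)   # class labels, needed only for LDA

reducers = {
    "PCA": PCA(n_components=2),
    "ICA": FastICA(n_components=2, random_state=0),
    "NMF": NMF(n_components=2, random_state=0, max_iter=500),
}
embeddings = {name: red.fit_transform(X) for name, red in reducers.items()}
for name, Z in embeddings.items():
    print(name, Z.shape)           # each is (100, 2)

# LDA is supervised and yields at most (n_classes - 1) discriminant axes.
Z_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)
print("LDA", Z_lda.shape)
```

Note the asymmetry: the unsupervised reducers ignore y entirely, while LDA's output dimensionality is capped by the number of classes.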
|
9,095
|
Is PCA always recommended?
|
The two major limitations of PCA:
1) It assumes linear relationships between variables.
2) The components are much harder to interpret than the original data.
If the limitations outweigh the benefit, one should not use it; hence, PCA should not always be used. IMO, it is better not to use PCA unless there is a good reason to.
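Limitation (1) can be illustrated with a small scikit-learn sketch (my example, not from the answer): on two concentric circles, linear PCA is just a rotation and leaves the classes linearly inseparable, while Kernel PCA with an RBF kernel unfolds them (gamma=10 is an assumed, hand-picked value for this data).

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA
from sklearn.linear_model import LogisticRegression

# Two concentric circles: the classes are not linearly separable.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

Z_lin = PCA(n_components=2).fit_transform(X)  # a rotation, nothing more
Z_rbf = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

# Training accuracy of a linear classifier on each projection.
acc_lin = LogisticRegression().fit(Z_lin, y).score(Z_lin, y)
acc_rbf = LogisticRegression().fit(Z_rbf, y).score(Z_rbf, y)
print("linear PCA accuracy:", acc_lin)
print("kernel PCA accuracy:", acc_rbf)
```

Kernel PCA also sharpens limitation (2): its components are even harder to interpret than linear PCs, so the trade-off does not go away.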
|
9,096
|
Introduction to structural equation modeling
|
I would go for some papers by Muthén and Muthén, who authored the Mplus software, especially
Muthén, B.O. (1984). A general structural equation model with dichotomous, ordered categorical and continuous latent indicators. Psychometrika, 49, 115–132.
Muthén, B., du Toit, S.H.C. & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Unpublished technical report.
(Available as PDFs from here: Weighted Least Squares for Categorical Variables.)
There is a lot more to see on Mplus wiki, e.g. WLS vs. WLSMV results with ordinal data; the two authors are very responsive and always provide detailed answers with accompanying references when possible. Some comparisons of robust weighted least squares vs. ML-based methods of analyzing polychoric or polyserial correlation matrices can be found in:
Lei, P.W. (2009). Evaluating estimation methods for ordinal data in structural equation modeling. Quality & Quantity, 43, 495–507.
For other mathematical development, you can have a look at:
Jöreskog, K.G. (1994). On the estimation of polychoric correlations and their asymptotic covariance matrix. Psychometrika, 59(3), 381-389. (See also S-Y Lee's papers.)
Sophia Rabe-Hesketh and her colleagues also have good papers on SEM. Some relevant references include:
Rabe-Hesketh, S., Skrondal, A., and Pickles, A. (2004b). Generalized multilevel structural equation modeling. Psychometrika, 69, 167–190.
Skrondal, A. and Rabe-Hesketh, S. (2004). Generalized Latent Variable Modeling: Multilevel, Longitudinal, and Structural Equation Models. Chapman & Hall/CRC, Boca Raton, FL. (This is the reference textbook for understanding/working with Stata gllamm.)
Other good resources are probably listed on John Uebersax's excellent website, in particular Introduction to the Tetrachoric and Polychoric Correlation Coefficients. Given that you are also interested in applied work, I would suggest taking a look at OpenMx (yet another software package for modeling covariance structure) and lavaan (which aims at delivering output similar to those of EQS or Mplus), both available under R.
|
9,097
|
Introduction to structural equation modeling
|
While only tangential to your goals at this point, if you continue on projects using latent variables I would highly suggest you read Denny Borsboom's Measuring the Mind. Don't be fooled by the title, it is mainly a detailed essay on the logic of latent variables, and a large critique of classical test theory. I would say it is necessary reading if you are utilizing latent variables in a longitudinal framework. It is only about the logic of latent variables though, it has nothing about actually estimating models.
Do post back with your experiences, I have some of the references given here already, although I would like to expand my library as well. FWIW, Ken Bollen's Structural equations with latent variables was the next on my reading list (although that is only based on my opinion of his scholarly work).
Besides that I would say I enjoy the work of Bengt Muthén as well. The MPlus software is incredibly popular, and you can see all of the types of analysis that can be accomplished on the Mplus website (link to the user's guide). He also has a series of mp3 postings of his course on statistical analysis with latent variables at UCLA. I haven't listened to them all, but I suspect all are thorough introductions to whatever particular topic is covered for that week's lecture.
|
9,098
|
Introduction to structural equation modeling
|
This was the recommended text on the course I took:
R.B. Kline, Principles and Practice of Structural Equation Modeling, The Guilford Press.
It is an introductory text, and not heavily mathematical.
For a more mathematical, Bayesian, treatment, you could try:
S-Y. Lee, Structural Equation Modeling: A Bayesian Approach, Wiley.
|
9,099
|
Introduction to structural equation modeling
|
Kline's book is excellent. For a quick intro as a paper see
Gefen, D. 2000. Structural equation modeling and regression: Guidelines for research practice. CAIS. Volume 4. http://aisel.aisnet.org/cais/vol4/iss1/7/
Hox, J.J. and Bechger, T.M. An introduction to structural equation modeling. Family Science Review. 11:354-373. http://joophox.net/publist/semfamre.pdf
Lei, P.W. and Wu, Q. 2007. Introduction to Structural Equation Modeling: Issues and Practical Considerations. Educational Measurement: Issues and Practice. http://dx.doi.org/10.1111/j.1745-3992.2007.00099.x
Grace, J. 2010. Structural Equation Modeling for Observational Studies. The Journal of Wildlife Management. 72:14-22 http://dx.doi.org/10.2193/2007-307
See also http://lavaan.org
|
9,100
|
Introduction to structural equation modeling
|
I'm studying SEM at the moment, using LISREL. We're using these two books:
A Beginner's Guide to Structural Equation Modelling
New Developments and Techniques in Structural Equation Modelling
Dr Schumacker is the instructor on my course. The first book is really good at introducing SEM, as it takes you through the process of model specification, identification, and so forth. While it is based on the LISREL software, I would expect that the general methods and interpretation of results will be independent of the software.
|