What do/did you do to remember Bayes' rule?
Here's my little unorthodox (and dare I say unscientific) trick for remembering Bayes' rule.
I simply say ---
"A given B equals the reverse times A over B"
That is to say, the probability of A given B, P(A | B), equals the reverse, P(B | A), times A over B, P(A) / P(B).
Put in full,
$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$
And with that I never forget it.
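A quick numeric check of the mnemonic on a toy joint distribution (the probabilities below are made up purely for illustration):

```python
# Joint probabilities over the four outcomes (illustrative values only).
p_a_and_b = 0.12
p_a_and_not_b = 0.28
p_not_a_and_b = 0.18
p_not_a_and_not_b = 0.42

p_a = p_a_and_b + p_a_and_not_b          # marginal P(A) = 0.40
p_b = p_a_and_b + p_not_a_and_b          # marginal P(B) = 0.30
p_a_given_b = p_a_and_b / p_b            # direct definition of P(A | B)
p_b_given_a = p_a_and_b / p_a            # "the reverse"

# "A given B equals the reverse times A over B"
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
print(round(p_a_given_b, 3))  # 0.4
```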
What do/did you do to remember Bayes' rule?
If you are clear about which terms have to go into the equation ("it is a formula that shows a direct proportionality between $P(A|B)$ and $P(B|A)$ using $P(B)$ and $P(A)$"), there is really only one possible point of confusion:
$$
P(B|A)=\frac{P(A|B)P(B)}{P(A)} \quad \text{vs} \quad P(B|A)=\frac{P(A|B)P(A)}{P(B)}.
$$
To remember what goes into the numerator, think about what happens if the event $B$ is impossible ($P(B)=0$). You want $P(B|A)$ to be zero too, so $P(B)$ must be in the numerator.
What do/did you do to remember Bayes' rule?
A person --> disease --> test positive (red)
A person --> disease --> test negative (yellow)
A person --> no disease --> test positive (blue)
A person --> no disease --> test negative (green)
To better remember Bayes' rule, draw the above as a tree and color the edges. Say we want to know P(disease | test positive). Given a positive test result, the two possible paths are "red" and "blue", and the conditional probability of having the disease is the conditional probability of the path being "red", thus P(red) / (P(red) + P(blue)). Applying the chain rule, we have:
P(red) = P(disease) * P(test positive | disease)
P(blue) = P(no disease) * P(test positive | no disease)
P(disease | test positive) = P(disease) * P(test positive | disease) / (P(disease) * P(test positive | disease) + P(no disease) * P(test positive | no disease)) = P(disease, test positive) / P(test positive)
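The tree computation can be sketched directly (the prevalence and test accuracies below are made-up numbers for illustration):

```python
# Illustrative numbers only.
p_disease = 0.01
p_pos_given_disease = 0.95        # sensitivity
p_pos_given_no_disease = 0.05     # false positive rate

p_red = p_disease * p_pos_given_disease              # disease -> positive
p_blue = (1 - p_disease) * p_pos_given_no_disease    # no disease -> positive

# P(disease | test positive) = P(red) / (P(red) + P(blue))
p_disease_given_pos = p_red / (p_red + p_blue)
print(round(p_disease_given_pos, 3))
```

Even with a rare disease and a fairly accurate test, the posterior probability stays modest, which is exactly the kind of result the tree makes easy to see.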
Standard error of the median
Based on some of @mary's comments I think the following is appropriate. She seems to be selecting the median because the sample is small.
If you are selecting the median because the sample is small, that's not a good justification. You select the median because the median is an important value: it says something different from the mean. You might also select it for some statistical calculations because it's robust against certain problems, like outliers or skew. However, small sample size isn't one of the problems it's robust against. For example, as the sample size gets smaller, the median actually becomes much more sensitive to skew than the mean.
Standard error of the median
The magic number 1.253 comes from the asymptotic variance formula:
$$
{\rm As. Var.}[\hat m] = \frac1{4f(m)^2 n}
$$
where $m$ is the true median, and $f(m)$ is the true density at that point. The magic number 1.253 is $\sqrt{\pi/2} \approx 1.2533$ from the normal distribution, so you are still assuming normality with that.
For any distribution other than the normal (and mary admits that this is doubtful in her data), you would have a different factor. If you had a Laplace/double exponential distribution, the density at the median is $1/(2b)$ and the variance is $2b^2$, so the factor should be $1/\sqrt{2} \approx 0.707$ -- the median is the maximum likelihood estimate of the shift parameter, and is more efficient than the mean. So you can start picking your magic numbers in different ways...
Getting the median estimate $\hat m$ is not such a big deal, although you can start agonizing about the middle values for the even number of observations vs. inverting the cdf or something like that. More importantly, the relevant density value can be estimated by kernel density estimators, if needed. Overall, this of course is relatively dubious as three approximations are being taken:
That the asymptotic formula for variance works for the small sample;
That the estimated median is close enough to the true median;
That the kernel density estimator gives an accurate value.
The lower the sample size, the more dubious it gets.
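A quick Monte Carlo check of the $\sqrt{\pi/2}$ factor for normal data (the sample size and replication count below are arbitrary choices):

```python
import numpy as np

# Simulate many samples of standard normal data and compare the empirical
# standard error of the median to sigma / sqrt(n).
rng = np.random.default_rng(42)
n, reps = 1000, 20000
medians = np.median(rng.normal(size=(reps, n)), axis=1)

se_median = medians.std()
se_mean_theory = 1.0 / np.sqrt(n)   # sigma / sqrt(n) with sigma = 1
ratio = se_median / se_mean_theory
print(round(ratio, 3))  # close to sqrt(pi/2) ~ 1.2533
```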
Standard error of the median
Sokal and Rohlf give this formula in their book Biometry (page 139). Under "Comments on applicability" they write: Large samples from normal populations. Thus, I am afraid that the answer to your question is no. See also here.
One way to obtain the standard error and confidence intervals for the median in small samples with non-normal distributions would be bootstrapping. This post provides links to Python packages for bootstrapping.
Warning
@whuber pointed out that bootstrapping the median in small samples isn't very informative as the justifications of the bootstrap are asymptotic (see comments below).
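A minimal bootstrap of the median's standard error can be done in plain numpy, without any extra package (the data values and resample count here are arbitrary, and the small-sample caveat above still applies):

```python
import numpy as np

# Nonparametric bootstrap: resample the data with replacement, take the
# median of each resample, and use the spread of those medians as the SE.
rng = np.random.default_rng(0)
data = np.array([2.1, 3.4, 3.9, 4.2, 5.0, 5.6, 6.7, 7.8, 9.1])

boot_medians = np.median(
    rng.choice(data, size=(10_000, len(data)), replace=True), axis=1
)
se_median = boot_medians.std(ddof=1)
print(round(se_median, 3))
```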
Standard error of the median
Not a solution here, but perhaps helpful:
Suppose your data distribution is $p(x)$, and let $P(x) = \int_{-\infty}^x p$ be the cumulative distribution function. So the median of the distribution is the number $m$ such that $P(m) = 1/2$.
Following this helpful page, we can compute the distribution of a number $x$ being the median of $n$ samples of this distribution. For odd $n$ it is $q(x) = c_n\, p(x)\, (P(x)(1-P(x)))^{(n-1)/2}$, where the normalizing constant that makes this a probability distribution is $c_n = n\binom{n-1}{(n-1)/2}$ (this follows from the general order-statistic density with $k = (n+1)/2$).
Finally, you would like to know the variance of q(x), which you may be able to reason about with this formula.
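A numerical sanity check of this density, here with a standard normal $p(x)$ as an example choice and the odd-$n$ order-statistic constant $c_n = n\binom{n-1}{(n-1)/2}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import comb
from scipy.stats import norm

# Density of the sample median of n (odd) i.i.d. standard normal draws.
n = 9
c_n = n * comb(n - 1, (n - 1) // 2)

def q(x):
    P = norm.cdf(x)
    return c_n * norm.pdf(x) * (P * (1 - P)) ** ((n - 1) / 2)

total, _ = quad(q, -np.inf, np.inf)
print(round(total, 6))  # integrates to 1, confirming the constant
```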
Standard error of the median
There is an empirical procedure for obtaining a confidence interval for the sample median. The procedure is non-parametric and relies on the binomial distribution. It can be found in Ott and Longnecker, 2015 in the section named ‘Inferences about the median’. Stata implements the procedure as the ‘centiles’ command and the Stata doc provides a mathematical justification with references.
Here is the procedure for a 95% CI as a Python script. The standard error of the median is determined from the CI. The results are the same as the results from the Stata 'centiles' command.
import numpy as np
from scipy.stats import binom
# n = 25
data = [1.1, 1.2, 2.1, 2.6, 2.7, 2.9, 3.6, 3.9, 4.2, 4.3, 4.5, 4.7, 5.3,
5.6, 5.8, 6.5, 6.7, 6.7, 7.8, 7.8, 14.2, 25.9, 29.5, 34.8, 43.8]
median = np.median(data)
print(f'median: {median}')
# the distribution
n = 25
p = .5
rv = binom(n, p)
# 95% critical value
q = .05
binom_critical = rv.ppf(q=q)
print(f'binom 95% critical value: {binom_critical}')
# the 95% CI for the median
L_q = int(binom_critical)
U_q = int(n - binom_critical)
print(f'L_q: {L_q} U_q: {U_q}')
lower_ci = data[L_q - 1]
upper_ci = data[U_q - 1]
print(f'lower_ci: {lower_ci} upper_ci: {upper_ci}')
The output is:
median: 5.3
binom 95% critical value: 8.0
L_q: 8 U_q: 17
lower_ci: 3.9 upper_ci: 6.7
The standard error is half the CI width; in this case (6.7 − 3.9) / 2 = 1.4. This is analogous to the normal case, where, given a CI for the mean, the standard error is calculated as
$$SE = \frac{\text{upper} - \text{lower}}{2\,t_{\alpha/2,\,n-1}}.$$
For the non-parametric method for the median there is no t-statistic, because the confidence level is embodied in the critical values of the binomial distribution.
Why did statisticians define random matrices?
It depends which field you're in, but one of the big initial pushes for the study of random matrices came out of atomic physics, and was pioneered by Wigner. You can find a brief overview here. Specifically, it was the eigenvalues (which are energy levels in atomic physics) of random matrices that generated tons of interest, because the correlations between eigenvalues gave insight into the emission spectrum of nuclear decay processes.
More recently, there has been a large resurgence in this field, with the advent of the Tracy-Widom distributions for the largest eigenvalues of random matrices, along with stunning connections to seemingly unrelated fields, such as tiling theory, statistical physics, integrable systems, KPZ phenomena, random combinatorics and even the Riemann Hypothesis. You can find some more examples here.
For more down-to-earth examples, a natural question to ask about a matrix of row vectors is what its PCA components might look like. You can get heuristic estimates for this by assuming the data comes from some distribution, and then looking at covariance matrix eigenvalues, which will be predicted from random matrix universality: regardless (within reason) of the distribution of your vectors, the limiting distribution of the eigenvalues will always approach a set of known classes. You can think of this as a kind of CLT for random matrices. See this paper for examples.
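A minimal sketch of that universality (the dimensions and the uniform entry distribution are arbitrary choices): the covariance eigenvalues of i.i.d. data land on the Marchenko-Pastur support $[(1-\sqrt{\gamma})^2, (1+\sqrt{\gamma})^2]$ with $\gamma = p/n$, even though the entries are not Gaussian:

```python
import numpy as np

# i.i.d. entries, standardized to mean 0 and variance 1 (uniform, not normal).
rng = np.random.default_rng(1)
n, p = 4000, 1000                                       # gamma = p/n = 0.25
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, p))
eigs = np.linalg.eigvalsh(X.T @ X / n)                  # sample covariance eigenvalues

gamma = p / n
lo, hi = (1 - gamma**0.5) ** 2, (1 + gamma**0.5) ** 2   # Marchenko-Pastur edges
print(round(eigs.min(), 2), round(eigs.max(), 2))       # close to 0.25 and 2.25
```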
Why did statisticians define random matrices?
You seem to be comfortable with applications of random vectors. For instance, I deal with this kind of random vector every day: interest rates of different tenors. The Federal Reserve Bank has the H15 series; look at Treasury bills at the 4-week, 3-month, 6-month and 1-year tenors. You can think of these 4 rates as a vector with 4 elements. It's quite random too, as its historical values show.
As with any random numbers we might ask ourselves: what's the covariance between them? Now you get a 4x4 covariance matrix. If you estimate it on one month of daily data, you get 12 different covariance matrices each year, if you want them non-overlapping. The sample covariance matrix of random series is itself a random object; see Wishart's paper "The Generalised Product Moment Distribution in Samples from a Normal Multivariate Population" here. There's a distribution named after him.
This is one way to get to random matrices. It's no wonder that random matrix theory (RMT) is used in finance, as you can see now.
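A minimal sketch of that construction, with an identity scale matrix standing in for the true covariance of the four rates (all dimensions and values here are illustrative):

```python
import numpy as np
from scipy.stats import wishart

# 4 rates, roughly one month of daily observations.
rng = np.random.default_rng(2)
p, n = 4, 21
scale = np.eye(p)                 # stand-in for the true covariance

# Rows i.i.d. N(0, scale); S = X'X is then Wishart(df=n, scale) distributed.
X = rng.multivariate_normal(np.zeros(p), scale, size=n)
S = X.T @ X

# A direct draw from the same Wishart distribution for comparison.
W = wishart(df=n, scale=scale).rvs(random_state=123)
print(S.shape, W.shape)
```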
Why did statisticians define random matrices?
In theoretical physics, random matrices play an important role in understanding universal features of the energy spectra of systems with particular symmetries.
My background in theoretical physics may cause me to present a slightly biased point of view here, but I would even go so far to suggest that the popularity of random matrix theory (RMT) originated from its successful application in physics.
Without going too much into detail: for example, energy spectra in quantum mechanics can be obtained by calculating the eigenvalues of the system's Hamiltonian, which can be expressed as a Hermitian matrix.
Often physicists are not interested in particular systems but want to know the general properties of quantum systems with chaotic dynamics, which lead the values of the Hermitian Hamiltonian matrix to fill the matrix space ergodically upon variation of the energy or other parameters (e.g. boundary conditions). This motivates treating a class of physical systems as random matrices and looking at average properties of these systems. I recommend the literature on the Bohigas-Giannoni-Schmit conjecture if you want to dive into this deeper.
In short, one can for instance show that the energy levels of systems with time-reversal symmetry behave universally differently from the energy levels of systems without time-reversal symmetry (which happens, for instance, if you add a magnetic field). A short calculation using Gaussian random matrices in fact shows that the spacings between neighboring energy levels behave differently in the two cases.
These results can be extended, and they helped in understanding other symmetries as well, which had a major impact on different fields, such as particle physics, the theory of mesoscopic transport, and later even financial markets.
Why did statisticians define random matrices?
A linear map is a map between vector spaces. Suppose you have a linear map and have chosen bases for its domain and range spaces. Then you can write a matrix which encodes the linear map. If you want to consider random linear maps between those two spaces, you should come up with a theory of random matrices. Random projection is a simple example of such a thing.
Also, there are matrix/tensor-valued objects in physics. The viscous stress tensor is one such (among a veritable zoo). In a nearly homogeneous viscoelastic material, it can be useful to model the strains (elastic, viscous, et al.) and hence the stresses pointwise as a random tensor with small variance. Although there is a "linear map" sense to this stress/strain, it is more honest to describe this application of random matrices as randomizing something that was already a matrix.
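A minimal random-projection sketch (the dimensions are arbitrary): a Gaussian matrix scaled by $1/\sqrt{k}$ approximately preserves pairwise distances, in the Johnson-Lindenstrauss sense:

```python
import numpy as np

# Project points from dimension d down to k with a scaled Gaussian matrix.
rng = np.random.default_rng(3)
d, k, m = 1000, 200, 5            # original dim, projected dim, number of points
points = rng.normal(size=(m, d))

R = rng.normal(size=(d, k)) / np.sqrt(k)   # random linear map R^d -> R^k
projected = points @ R

orig = np.linalg.norm(points[0] - points[1])
proj = np.linalg.norm(projected[0] - projected[1])
ratio = proj / orig
print(round(ratio, 3))  # close to 1
```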
Why did statisticians define random matrices?
Compressive sensing as an application in image processing relies on random matrices as combined measurements of a 2D signal. Specific properties of these matrices, namely coherence, are defined for these matrices and play a role in the theory.
Grossly simplified, it turns out that minimizing the L1 norm of a signal consistent with measurements formed as the product of a Gaussian matrix and a sparse input signal allows you to recover much more information than you might expect.
The most notable early research in this area I know of is Rice University's work: http://dsp.rice.edu/research/compressive-sensing/random-matrices
The theory of matrix products as "measurements of a signal" goes at least as far back as WW2. As a former professor of mine recounted, individually testing every army enlistee for, say, syphilis was cost prohibitive. Mixing the samples together in a systematic way (by combining portions of each blood sample and testing the pools) would reduce the number of tests that needed to be performed. This could be modeled as a random binary matrix multiplied with a sparse vector.
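A toy sketch of that L1 recovery, posed as a linear program with scipy (the problem sizes and the Gaussian measurement matrix are arbitrary choices for illustration, not the Rice group's setup):

```python
import numpy as np
from scipy.optimize import linprog

# Recover a sparse x from m < n measurements y = A x by minimizing ||x||_1.
rng = np.random.default_rng(4)
n, m = 32, 24
x_true = np.zeros(n)
x_true[[3, 17]] = [2.0, -1.5]          # 2-sparse signal
A = rng.normal(size=(m, n))
y = A @ x_true

# LP variables [x, t] with -t <= x <= t; minimize sum(t) subject to A x = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])   # encodes x - t <= 0 and -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print(np.linalg.norm(x_hat - x_true))  # near 0 when recovery succeeds
```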
How do I interpret a probit model in Stata?
|
In general, you cannot interpret the coefficients from the output of a probit regression (not in any standard way, at least). You need to interpret the marginal effects of the regressors, that is, how much the (conditional) probability of the outcome variable changes when you change the value of a regressor, holding all other regressors constant at some values. This is different from the linear regression case where you are directly interpreting the estimated coefficients. This is so because in the linear regression case, the regression coefficients are the marginal effects.
In the probit regression, there is an additional step of computation required to get the marginal effects once you have computed the probit regression fit.
Linear and probit regression models
Probit regression: Recall that in the probit model, you are modelling the (conditional) probability of a "successful" outcome, that is, $Y_i=1$,
$$
\mathbb{P}\left[Y_i=1\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right] = \Phi(\beta_0 + \sum_{k=1}^K \beta_kX_{ki})
$$
where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. This basically says that, conditional on the regressors, the probability that the outcome variable, $Y_i$ is 1, is a certain function of a linear combination of the regressors.
Linear regression: Compare this to the linear regression model, where
$$
\mathbb{E}\left(Y_i\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right) = \beta_0 + \sum_{k=1}^K \beta_kX_{ki}$$
the (conditional) mean of the outcome is a linear combination of the regressors.
Marginal effects
Unlike in the linear regression model, the coefficients rarely have a direct interpretation. We are typically interested in the ceteris paribus effects of changes in the regressors on features of the outcome variable. This is the notion that marginal effects measure.
Linear regression: I would now like to know how much the mean of the outcome variable moves when I move one of the regressors
$$
\frac{\partial \mathbb{E}\left(Y_i\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right)}{\partial X_{ki}} = \beta_k
$$
But this is just the regression coefficient, which means that the marginal effect of a change in the $k$-th regressor is simply $\beta_k$.
Probit regression: However, it is easy to see that this is not the case for the probit regression
$$
\frac{\partial \mathbb{P}\left[Y_i=1\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right]}{\partial X_{ki}} = \beta_k\phi(\beta_0 + \sum_{k=1}^K \beta_kX_{ki})
$$
which is not the same as the regression coefficient. These are the marginal effects for the probit model, and the quantity we are after. In particular, this depends on the values of all the other regressors, and the regression coefficients. Here $\phi(\cdot)$ is the standard normal probability density function.
How are you to compute this quantity, and at what values of the other regressors should it be evaluated? Thankfully, Stata provides this computation after a probit regression, with some default choices for the values of the other regressors (there is no universal agreement on these defaults).
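To make the formula concrete, here is a hand-rolled Python sketch with a single regressor and hypothetical coefficient values (the numbers are invented, not estimates from any dataset). It evaluates $\beta_1\phi(\beta_0+\beta_1 x_i)$ at each observation and averages, which is what an "average marginal effect" reports:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def phi(z):   # standard normal pdf
    return exp(-z * z / 2) / sqrt(2 * pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical probit coefficients (intercept, slope) -- illustrative only.
b0, b1 = -1.0, 0.5
x = np.array([0.2, 1.1, 2.5, -0.4])   # one regressor, four observations

# Marginal effect at each observation: beta_1 * phi(b0 + b1 * x_i)
me = np.array([b1 * phi(b0 + b1 * xi) for xi in x])

# Average marginal effect: the mean of the observation-level effects.
ame = me.mean()

# Sanity check: the formula matches a numerical derivative of Phi.
h = 1e-6
num = (Phi(b0 + b1 * (x[0] + h)) - Phi(b0 + b1 * (x[0] - h))) / (2 * h)
assert abs(num - me[0]) < 1e-6
```

Note how the effect differs across observations: that is exactly the dependence on the other regressors (here, on $x_i$ itself) discussed above.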
Discrete regressors
Note that much of the above applies to the case of continuous regressors, since we have used calculus. In the case of discrete regressors, you need to use discrete changes. So, for example, the discrete change in a regressor $X_{ki}$ that takes the values $\{0,1\}$ is
$$
\small
\begin{align}
\Delta_{X_{ki}}\mathbb{P}\left[Y_i=1\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right]&=\Phi(\beta_0 + \sum_{l=1}^{k-1} \beta_lX_{li}+\beta_k + \sum_{l=k+1}^K\beta_l X_{li}) \\
&\quad- \Phi(\beta_0 + \sum_{l=1}^{k-1} \beta_lX_{li}+ \sum_{l=k+1}^K\beta_l X_{li})
\end{align}
$$
Computing marginal effects in Stata
Probit regression: Here is an example of computation of marginal effects after a probit regression in Stata.
webuse union
probit union age grade not_smsa south##c.year
margins, dydx(*)
Here is the output you will get from the margins command
. margins, dydx(*)
Average marginal effects Number of obs = 26200
Model VCE : OIM
Expression : Pr(union), predict()
dy/dx w.r.t. : age grade not_smsa 1.south year
------------------------------------------------------------------------------
| Delta-method
| dy/dx Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
age | .003442 .000844 4.08 0.000 .0017878 .0050963
grade | .0077673 .0010639 7.30 0.000 .0056822 .0098525
not_smsa | -.0375788 .0058753 -6.40 0.000 -.0490941 -.0260634
1.south | -.1054928 .0050851 -20.75 0.000 -.1154594 -.0955261
year | -.0017906 .0009195 -1.95 0.051 -.0035928 .0000115
------------------------------------------------------------------------------
Note: dy/dx for factor levels is the discrete change from the base level.
This can be interpreted, for example, as: a one-unit change in the age variable increases the probability of union status by 0.003442. Similarly, being from the South decreases the probability of union status by 0.1054928.
Linear regression: As a final check, we can confirm that the marginal effects in the linear regression model are the same as the regression coefficients (with one small twist). Running the following regression and computing the marginal effects after
sysuse auto, clear
regress mpg weight c.weight#c.weight foreign
margins, dydx(*)
just gives you back the regression coefficients. Note the interesting fact that Stata computes the net marginal effect of a regressor including the effect through the quadratic terms if included in the model.
. margins, dydx(*)
Average marginal effects Number of obs = 74
Model VCE : OLS
Expression : Linear prediction, predict()
dy/dx w.r.t. : weight foreign
------------------------------------------------------------------------------
| Delta-method
| dy/dx Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
weight | -.0069641 .0006314 -11.03 0.000 -.0082016 -.0057266
foreign | -2.2035 1.059246 -2.08 0.038 -4.279585 -.1274157
------------------------------------------------------------------------------
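That "net marginal effect through the quadratic term" is just the derivative of $\beta_1 w + \beta_2 w^2$ with respect to $w$, i.e. $\beta_1 + 2\beta_2 w$, averaged over the sample. A quick Python check with made-up coefficients and weights (not the auto-data estimates):

```python
# mpg = b0 + b1*weight + b2*weight^2 + ...  (hypothetical coefficients)
b1, b2 = -0.02, 1e-6
weights = [2000, 2500, 3000, 3500]

# Net marginal effect of weight at each observation is b1 + 2*b2*weight;
# averaging these is what the AME reports when a quadratic is in the model.
ame = sum(b1 + 2 * b2 * w for w in weights) / len(weights)
print(ame)  # -0.0145
```

Because the derivative is linear in $w$, the average effect here equals the effect evaluated at the mean weight.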
|
15,115
|
How do I interpret a probit model in Stata?
|
Also, and more simply, the coefficient in a probit regression can be interpreted as "a one-unit increase in age corresponds to a $\beta_{age}$ increase in the z-score for the probability of being in a union" (see link).
. webuse union
. keep union age grade
. probit union age grade
Iteration 0: log likelihood = -13864.23
Iteration 1: log likelihood = -13796.359
Iteration 2: log likelihood = -13796.336
Iteration 3: log likelihood = -13796.336
Probit regression Number of obs = 26,200
LR chi2(2) = 135.79
Prob > chi2 = 0.0000
Log likelihood = -13796.336 Pseudo R2 = 0.0049
------------------------------------------------------------------------------
union | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
age | .0051821 .0013471 3.85 0.000 .0025418 .0078224
grade | .0373899 .0035814 10.44 0.000 .0303706 .0444092
_cons | -1.404697 .0587797 -23.90 0.000 -1.519903 -1.289491
------------------------------------------------------------------------------
Then do
predict yhat
And you'll see that, for obs 1 (age = 20, grade = 12), the fitted value is the normal CDF of the linear index $\beta_{age} \cdot 20 + \beta_{grade} \cdot 12 + \beta_{cons}$. Plug that index into the normal() function to return the corresponding probability:
di normal(.0051821*20 + .0373899*12 + -1.404697)
.19700266
Therefore, a one-unit increase in age corresponds to a $\beta_{age}$ increase in the z-score of the probability of being in the union.
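The same arithmetic can be checked outside Stata. A short Python snippet using the standard-normal CDF (built from `math.erf`, which is what Stata's `normal()` computes) reproduces the `di normal(...)` result above:

```python
from math import erf, sqrt

def Phi(z):  # standard normal CDF, the same function as Stata's normal()
    return 0.5 * (1 + erf(z / sqrt(2)))

# Linear index for obs 1 (age = 20, grade = 12), using the coefficients
# from the probit output above.
p = Phi(0.0051821 * 20 + 0.0373899 * 12 - 1.404697)
print(round(p, 8))  # ~0.19700266, matching the Stata display
```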
|
15,116
|
Does Bayesian statistics make meta-analysis obsolete?
|
What you are describing is called Bayesian updating. If you can assume that subsequent trials are exchangeable, then it won't matter whether you update your prior sequentially, all at once, or in a different order (see e.g. here or here). Notice that if previous experiments influence your future experiments, then a classical meta-analysis (assuming exchangeability) would likewise ignore that dependence.
It makes perfect sense to update your knowledge using Bayesian updating, since it's simply another way of doing it than classical meta-analysis. Whether it makes traditional meta-analysis obsolete is opinion-based and depends on whether you are willing to adopt the Bayesian viewpoint. The most important difference between the two approaches is that in the Bayesian case you explicitly state your prior assumptions.
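A tiny worked example of this order-invariance, using a conjugate Beta-Binomial model in Python (the three "studies" are invented success/failure counts):

```python
# Beta(a, b) prior on a success probability; Binomial data.
# Conjugate update: after k successes and m failures, the
# posterior is Beta(a + k, b + m).

def update(a, b, successes, failures):
    return a + successes, b + failures

prior = (1, 1)  # uniform prior
trials = [(7, 3), (2, 8), (5, 5)]  # three "studies": (successes, failures)

# Sequential updating, study by study:
post_seq = prior
for s, f in trials:
    post_seq = update(*post_seq, s, f)

# All-at-once updating with the pooled data:
S = sum(s for s, f in trials)
F = sum(f for s, f in trials)
post_batch = update(*prior, S, F)

assert post_seq == post_batch  # batching doesn't matter

# Reversed order gives the same posterior too:
post_rev = prior
for s, f in reversed(trials):
    post_rev = update(*post_rev, s, f)
assert post_rev == post_seq
```

This is exactly the exchangeability point above: with exchangeable trials, the posterior depends on the data only through the pooled counts.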
|
15,117
|
Does Bayesian statistics make meta-analysis obsolete?
|
I'm sure many people would argue as to what the purpose of a meta-analysis is, but perhaps at a meta-meta level the point of such analysis is to study the studies rather than to obtain a pooled parameter estimate. We are interested in whether effects are consistent with each other, of the same direction, have CI widths roughly inversely proportional to the square root of the sample size, and so on. Only when all the studies seem to point to an effect of the same direction and similar magnitude for an association or treatment effect do we tend to report, with some confidence, that what has been observed may be a "truth".
Indeed, there are frequentist ways of conducting a pooled analysis, such as just aggregating evidence from multiple studies with random effects to account for heterogeneity. A Bayesian approach is a nice modification of this, because you can be explicit about how one study might inform another.
Just as well, there are Bayesian approaches to "studying the studies" as a typical (frequentist) meta analysis might do, but that's not what you're describing here.
|
15,118
|
Does Bayesian statistics make meta-analysis obsolete?
|
When one wants to do meta-analysis as opposed to fully prospective research, I view Bayesian methods as allowing one to get more accurate meta-analysis. For example, Bayesian biostatistician David Spiegelhalter showed years ago that the most commonly used method for meta-analysis, the DerSimonian and Laird method, is overconfident. See http://www.citeulike.org/user/harrelfe/article/13264878 for details.
Related to the earlier posts: when the number of studies is limited, I prefer to think of this as Bayesian updating, which allows the posterior distribution from previous studies to be any shape and does not require the assumption of exchangeability. It requires only the assumption of applicability.
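For reference, the DerSimonian and Laird procedure being criticized is only a few lines. Here is a Python sketch with toy inputs; roughly, the overconfidence arises because the final standard error treats the estimated between-study variance $\hat\tau^2$ as if it were known:

```python
import math

def dersimonian_laird(effects, variances):
    """Classic DL random-effects pooling of study estimates."""
    k = len(effects)
    w = [1 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    C = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - (k - 1)) / C)                  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))                     # ignores error in tau2
    return pooled, se, tau2

# Three toy studies: effect estimates and their within-study variances.
pooled, se, tau2 = dersimonian_laird([0.1, 0.5, 1.2], [0.04, 0.04, 0.04])
```

With equal within-study variances, the pooled estimate is just the mean of the study effects, but the standard error is widened by the positive $\hat\tau^2$.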
|
15,119
|
Does Bayesian statistics make meta-analysis obsolete?
|
One important clarification about this question.
You certainly can do a meta-analysis in the Bayesian setting. But simply using a Bayesian perspective does not allow you to forget about all the things you should be concerned about in a meta-analysis!
Most directly to the point is that good methods for meta-analyses acknowledge that the underlying effects are not necessarily uniform study to study. For example, if you want to combine the mean from two different studies, it is helpful to think about the means as
$\mu_1 = \mu + \alpha_1$
$\mu_2 = \mu + \alpha_2$
$\alpha_1 + \alpha_2 = 0$
where $\mu_1$ is the population mean from study 1, $\mu_2$ is the population mean from study 2, $\mu$ is the global mean of interest, and $\alpha_1$ and $\alpha_2$ are the deviations from the global mean in each study. Of course, you hope that $\alpha_1$ and $\alpha_2$ are small in magnitude, but assuming they are exactly 0 is a bit foolish.
This model can easily be fit in a Bayesian framework, just as it could be fit in a frequentist framework. My only point is that the OP's question could be read as saying that the naive model assuming $\alpha_1 = \alpha_2 = 0$ is okay if you are in the Bayesian setting; it is still naive, just with a prior attached.
So in conclusion, no, Bayesian methods do not make the field of meta-analysis obsolete. Rather, Bayesian methods work nicely hand-in-hand with meta-analyses.
|
15,120
|
Does Bayesian statistics make meta-analysis obsolete?
|
People have tried to analyse what happens when you perform meta-analysis cumulatively, although their main concern is to establish whether it is worth collecting more data or, conversely, whether the evidence already gathered is sufficient. For instance, Wetterslev and colleagues in J Clin Epid here. The same authors have a number of publications on this theme which are fairly easy to find. I think at least some of them are open access.
|
15,121
|
Are MCMC without memory?
|
The defining characteristic of a Markov chain is that the conditional distribution of its present value conditional on past values depends only on the previous value. So every Markov chain is "without memory" to the extent that only the previous value affects the present conditional probability, and all previous states are "forgotten". (You are right that it is not completely without memory - after all, the conditional distribution of the present value depends on the previous value.) That is true for MCMC and also for any other Markov chain.
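To see the property concretely, here is a minimal random-walk Metropolis sampler in Python. The update function receives only the current state (plus fresh randomness), so the chain literally has nowhere to keep any longer memory. The target and tuning are arbitrary choices for the sketch:

```python
import math
import random

def metropolis_step(x, log_target, scale=1.0):
    """One random-walk Metropolis update.  The next state is a function
    of the current state x alone -- the chain's history never enters."""
    proposal = x + random.gauss(0.0, scale)
    if math.log(random.random()) < log_target(proposal) - log_target(x):
        return proposal
    return x

# Target: standard normal, up to an additive constant in the log density.
def log_target(z):
    return -0.5 * z * z

random.seed(1)
x, chain = 0.0, []
for _ in range(10_000):
    x = metropolis_step(x, log_target)
    chain.append(x)

mean = sum(chain) / len(chain)  # should be near the target mean, 0
```

Every accept/reject decision uses only `x` and `proposal`; deleting the `chain` list would change nothing about the sampler's behaviour, which is the memorylessness in question.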
|
15,122
|
Are MCMC without memory?
|
While we have the correct answer, I would like to expand just a little bit on the intuitive semantics of the statement. Imagine that we redefine our indices such that you generate vector $x_{i+1}$ from vector $x_{i}$. Now, moment $i$ is metaphorically seen as "the present", and all vectors coming "earlier than" $x_{i}$ are irrelevant for calculating the next one in the future.
Through this simple renumbering, it becomes "completely without memory" in the intuitive sense - that is, it doesn't matter at all how the Markov system came to be in its present state. The present state alone determines future states, without using any information from past ($x_{i-n}$) states.
A maybe subtler point: the word "memory" is also being used because this also means that you can't infer past states from the present state. Once you are at $x_{i}$, you don't know what happened "before" during $x_{i-n}$. This is the opposite of systems which encode knowledge of past states in the present state.
|
15,123
|
Are MCMC without memory?
|
You wake up. You have no idea how you got where you are. You look around at your surroundings and make a decision on what to do next based solely on the information you have available at that point in time. That is essentially the same situation as what is happening in MCMC.
It is using the current information that it can currently see to make a decision about what to do next. Instead of thinking of it as figuring out $x_{i}$ from $x_{i-1}$ (which might be what is causing you trouble, because you're thinking "hey, we're looking into the past when we look at $x_{i-1}$"), think of it as figuring out what $x_{i+1}$ should be based on the current information $x_i$, for which you don't need any 'memory'. Those two formulations are equivalent, but it might help you think about the semantics a bit better.
|
15,124
|
Why f beta score define beta like that?
|
Letting $\beta$ be the weight in the first definition you provide and $\tilde\beta$ the weight in the second, the two definitions are equivalent when you set $\tilde\beta = \beta^2$, so these two definitions represent only notational differences in the definition of the $F_\beta$ score. I have seen it defined both the first way (e.g. on the wikipedia page) and the second (e.g. here).
The $F_1$ measure is obtained by taking the harmonic mean of precision and recall, namely the reciprocal of the average of the reciprocal of precision and the reciprocal of recall:
\begin{align*}
F_1 &= \frac{1}{\frac{1}{2}\frac{1}{\text{precision}}+\frac{1}{2}\frac{1}{\text{recall}}} \\
&= 2\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}
\end{align*}
Instead of using weights in the denominator that are equal and sum to 1 ($\frac{1}{2}$ for recall and $\frac{1}{2}$ for precision), we might instead assign weights that still sum to 1 but for which the weight on recall is $\beta$ times as large as the weight on precision ($\frac{\beta}{\beta+1}$ for recall and $\frac{1}{\beta+1}$ for precision). This yields your second definition of the $F_\beta$ score:
\begin{align*}
F_\beta &= \frac{1}{\frac{1}{\beta+1}\frac{1}{\text{precision}}+\frac{\beta}{\beta+1}\frac{1}{\text{recall}}} \\
&= (1+\beta)\frac{\text{precision}\cdot\text{recall}}{\beta\cdot\text{precision}+\text{recall}}
\end{align*}
Again, if we had used $\beta^2$ instead of $\beta$ here we would have arrived at your first definition, so the differences between the two definitions are just notational.
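As a quick numerical sanity check (my own sketch, not part of the original text), substituting $\tilde\beta = \beta^2$ into the second form reproduces the first for any precision/recall pair:

```python
def f_first(p, r, beta):
    """First definition: (1 + beta^2) * P * R / (beta^2 * P + R)."""
    return (1 + beta**2) * p * r / (beta**2 * p + r)

def f_second(p, r, beta_tilde):
    """Second definition: (1 + beta) * P * R / (beta * P + R)."""
    return (1 + beta_tilde) * p * r / (beta_tilde * p + r)

# The two definitions agree whenever beta_tilde = beta**2.
for p, r, beta in [(0.9, 0.5, 2.0), (0.4, 0.8, 0.5), (0.7, 0.7, 3.0)]:
    assert abs(f_first(p, r, beta) - f_second(p, r, beta**2)) < 1e-12
```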
|
15,125
|
Why f beta score define beta like that?
|
The reason for defining the F-beta score with $\beta^{2}$ is exactly the quote you provide (i.e. wanting to attach $\beta$ times as much importance to recall as precision), given a particular definition of what it means to attach $\beta$ times as much importance to recall as to precision.
The particular way of defining the relative importance of the two metrics that leads to the $\beta^{2}$ formulation can be found in Information Retrieval (Van Rijsbergen, 1979):
Definition: The relative importance a user attaches to precision and recall is the $P/R$ ratio at which $\partial{E}/ \partial{R} = \partial{E}/ \partial{P}$, where $E = E(P, R)$ is the measure of effectiveness based on precision and recall.
The motivation for this being:
The simplest way I know of quantifying this is to specify the $P/R$ ratio at which the user is willing to trade an increment in precision for an equal loss in recall.
To see that this leads to the $\beta^{2}$ formulation we can start with the general formula for the weighted harmonic mean of $P$ and $R$ and calculate their partial derivatives with respect to $P$ and $R$. The source cited uses $E$ (for "effectiveness measure"), which is just $1-F$ and the explanation is equivalent whether we consider $E$ or $F$.
\begin{equation}
F = \frac{1}{(\frac{\alpha}{P}+ \frac{1-\alpha}{R})}
\end{equation}
\begin{equation}
\partial{F}/\partial{P} = \frac{\alpha}{(\frac{\alpha}{P}+ \frac{1-\alpha}{R})^{2}P^{2}}
\end{equation}
\begin{equation}
\partial{F}/\partial{R} = \frac{1-\alpha}{(\frac{\alpha}{P}+ \frac{1-\alpha}{R})^{2}R^{2}}
\end{equation}
Now, setting the derivatives equal to one another places a restriction on the relationship between $\alpha$ and the ratio $P/R$. Given that we wish to attach $\beta$ times as much importance to recall as precision, we will consider the ratio $R/P$ [1]:
\begin{equation}
\partial{F}/\partial{P} = \partial{F}/\partial{R} \rightarrow \frac{\alpha}{P^{2}} = \frac{1-\alpha}{R^{2}} \rightarrow
\frac{R}{P} = \sqrt{\frac{1-\alpha}{\alpha}}
\end{equation}
Defining $\beta$ as this ratio and rearranging for $\alpha$ gives the weightings in terms of $\beta^{2}$:
\begin{equation}
\beta = \sqrt{\frac{1-\alpha}{\alpha}} \rightarrow \beta^{2} = \frac{1-\alpha}{\alpha} \rightarrow
\beta^{2} + 1 = \frac{1}{\alpha} \rightarrow
\alpha = \frac{1}{\beta^{2} + 1}
\end{equation}
\begin{equation}
1 - \alpha = 1 - \frac{1}{\beta^{2} + 1} = \frac{\beta^{2}}{\beta^{2} + 1}
\end{equation}
We obtain:
\begin{equation}
F = \frac{1}{(\frac{1}{\beta^{2} + 1}\frac{1}{P} + \frac{\beta^{2}}{\beta^{2} + 1}\frac{1}{R})}
\end{equation}
Which can be rearranged to give the form in your question.
Thus, given the quoted definition, if you wish to attach $\beta$ times as much importance to recall as precision then the $\beta^{2}$ formulation should be used. This interpretation does not hold if one uses $\beta$.
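Van Rijsbergen's defining property can also be checked numerically (a sketch of mine, not from the cited text): with $\alpha = 1/(\beta^2+1)$, finite-difference estimates of the two partial derivatives of $F$ agree exactly where $R/P = \beta$:

```python
def f_beta(p, r, beta):
    """Weighted harmonic mean of precision and recall with alpha = 1/(beta^2 + 1)."""
    alpha = 1.0 / (beta**2 + 1.0)
    return 1.0 / (alpha / p + (1.0 - alpha) / r)

def partials(p, r, beta, h=1e-7):
    """Central-difference estimates of dF/dP and dF/dR."""
    dP = (f_beta(p + h, r, beta) - f_beta(p - h, r, beta)) / (2 * h)
    dR = (f_beta(p, r + h, beta) - f_beta(p, r - h, beta)) / (2 * h)
    return dP, dR

beta = 2.0
p = 0.3
r = beta * p                     # the ratio R/P = beta
dP, dR = partials(p, r, beta)
assert abs(dP - dR) < 1e-5       # derivatives coincide at R/P = beta
```

Away from that ratio the derivatives differ; for example at $P = R$ the recall derivative dominates by a factor of $\beta^2$.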
You could define a score as you suggest. In this case, as Vic has shown, the definition for the relative importance you would be assuming is:
Definition: The relative importance a user attaches to precision and recall is the $(\partial{E}/ \partial{R}) / (\partial{E}/ \partial{P})$ ratio at which $R = P$.
Footnotes:
[1] $P/R$ is used in Information Retrieval, but this appears to be a typo; see The Truth of F-measure (Sasaki, 2007).
References:
C. J. Van Rijsbergen. 1979. Information Retrieval (2nd ed.), pp.133-134
Y. Sasaki. 2007. “The Truth of F-measure”, Teaching, Tutorial materials
|
15,126
|
Why f beta score define beta like that?
|
To point something out quickly.
It means that as the beta value increases, you value precision more.
I actually think it's the opposite: since higher is better in F-β scoring, you want the denominator to be small. Therefore, if you decrease β, the β²·precision term in the denominator shrinks and a good precision score is rewarded more; if you increase β, that term grows and the score leans on recall instead.
If you want to weight the F-β scoring so that it values precision, β should satisfy 0 < β < 1; in the limit β→0 the score values only precision (the β² terms vanish and the F-β score reduces to precision itself).
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html
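To make the direction of the weighting concrete, here is a quick check using the standard $F_\beta$ formula directly rather than scikit-learn (my own sketch): with high precision and poor recall, β > 1 drags the score down while β < 1 keeps it close to the precision value:

```python
def fbeta(p, r, beta):
    """Standard F-beta: (1 + beta^2) * P * R / (beta^2 * P + R)."""
    return (1 + beta**2) * p * r / (beta**2 * p + r)

precision, recall = 0.9, 0.5     # good precision, poor recall

# beta > 1 weights recall more, so poor recall drags the score down;
# beta < 1 weights precision more, so the score stays closer to 0.9.
assert fbeta(precision, recall, 2.0) < fbeta(precision, recall, 1.0) \
       < fbeta(precision, recall, 0.5)
```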
|
15,127
|
Why f beta score define beta like that?
|
TL;DR: Contrary to the literature, all of which traces back to one arbitrarily proposed definition, using a $\beta$ term as the OP suggests is actually more intuitive than the $\beta^2$ term.
A Person's answer does well to show why $\beta^{2}$ appears, given Van Rijsbergen's chosen way to define the relative importance of precision and recall. However, there is a consideration that's missing in the literature, which I'm arguing here: the chosen definition is unintuitive and unnatural, and if you actually used $F_\beta$ (in practice) the way it's defined, you would quickly be left thinking, "the effect of $\beta$ seems way more aggressive than the value I've chosen".
To be fair, it is mostly Wikipedia's summary that is misleading, as it neglects to mention the subjective measure of importance involved, whereas Van Rijsbergen merely presented a possible definition that was simple but not necessarily the best or most meaningful one.
Let's review Van Rijsbergen's choice of definition:
The simplest way I know of quantifying this is to specify the $P/R$ ratio at which the user is willing to trade an increment in precision for an equal loss in recall.
Generally speaking, if $R/P > \beta$ then an increase in $P$ is more influential than an increase in $R$, whereas $R$ is more influential than $P$ where $R/P < \beta$. But here's why I would argue that the weighting is unintuitive. When $P = R$, increases in $R$ are $\beta^2$ times as effective as $P$. (This can be calculated from the partial derivatives provided in A Person's answer.) When someone says "I want recall to be weighted 3x more important than precision", I would not jump to the definition that equates to "precision will be penalised until it's literally a third of the value of recall", and I certainly wouldn't expect that when precision and recall are equal, recall contributes 9x as much. That doesn't seem practical in most situations where you ideally want both precision and recall to be high, just one to be a little higher than the other.
Below is a visual representation of what $F_\beta$ looks like. The red lines highlight the ratio $R/P = \beta$ and that the partial derivatives of $F_\beta$ are equal at that ratio, shown by the solid red slopes.
I'll now present an alternative subjective definition, which equates to "when precision and recall are equal, improvements in recall are worth $\gamma$ times more than improvements in precision". I argue that this definition is more intuitive while being equally simple as Van Rijsbergen's definition:
When $P = R$, set $\frac{\partial{F}/\partial{R}}{\partial{F}/\partial{P}} = \gamma$, where $\gamma$ is the relative importance of improvements in recall over precision.
Substituting equations derived in A Person's answer:
$\frac{1-\alpha}{(\frac{\alpha}{P}+ \frac{1-\alpha}{R})^{2}R^{2}} = \gamma \frac{\alpha}{(\frac{\alpha}{P}+ \frac{1-\alpha}{R})^{2}P^{2}}$
Remembering that $P = R$, this simplifies to:
$\gamma = \frac{1-\alpha}{\alpha}$ and $\alpha = \frac{1}{\gamma + 1}$,
contrasted with:
$\beta^2 = \frac{1-\alpha}{\alpha}$ and $\alpha = \frac{1}{\beta^2+1}$ under Van Rijsbergen's formulation.
What does this mean? An informal summary:
Van Rijsbergen's definition $\Leftrightarrow$ recall is $\beta$ times as important as precision in terms of value.
My proposed definition $\Leftrightarrow$ recall is $\gamma$ times as important as precision in terms of improvements in value.
Both definitions are based on a weighted harmonic mean of precision and recall, and the weightings under these two definitions can be mapped. Specifically, placing $\beta = \sqrt{\gamma}$ times importance in terms of value is equivalent to placing $\gamma$ times importance in terms of improvements in value.
One can defensibly argue that using a $\beta$ term instead of $\beta^2$ is a more intuitive weighting.
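The claimed mapping between the two definitions can be verified numerically (my sketch, not part of the original answer): at $P = R$, the ratio of the partial derivatives of the usual $F_\beta$ equals $\beta^2$, i.e. $\gamma = \beta^2$:

```python
def f_beta(p, r, beta):
    """Weighted harmonic mean of P and R with alpha = 1/(beta^2 + 1)."""
    alpha = 1.0 / (beta**2 + 1.0)
    return 1.0 / (alpha / p + (1.0 - alpha) / r)

def grad_ratio(p, r, beta, h=1e-7):
    """(dF/dR) / (dF/dP) via central finite differences."""
    dP = (f_beta(p + h, r, beta) - f_beta(p - h, r, beta)) / (2 * h)
    dR = (f_beta(p, r + h, beta) - f_beta(p, r - h, beta)) / (2 * h)
    return dR / dP

# At P = R, improvements in recall are worth beta^2 times those in precision.
for beta in (0.5, 1.0, 2.0, 3.0):
    gamma = grad_ratio(0.6, 0.6, beta)
    assert abs(gamma - beta**2) < 1e-4
```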
|
15,128
|
Why f beta score define beta like that?
|
The reason that β^2 is multiplied with precision is just the way that F-Scores are defined. It means that as the beta value increases, you value precision more. If you wanted to multiply it with recall that would also work, it would just mean that as the beta value increases you value recall more.
|
15,129
|
Why f beta score define beta like that?
|
A beta value greater than 1 means we want our model to pay more attention to recall as compared to precision. On the other hand, a value of less than 1 puts more emphasis on precision.
|
15,130
|
Why do we care more about test error than expected test error in Machine Learning?
|
Why do we care more about $\operatorname{Err}_{\mathcal{T}}$ than Err?
I can only guess, but I think it is a reasonable guess.
The former concerns the error for the training set we have right now. It answers "If I were to use this dataset to train this model, what kind of error would I expect?". It is easy to think of the type of people who would want to know this quantity (e.g. data scientists, applied statisticians, basically anyone using a model as a means to an end). These people don't care about the properties of the model across new training sets per se, they only care about how the model they made will perform.
Contrast this to the latter error, which is the expectation of the former error across all training sets. It answers "Were I to collect an infinite sequence of new training sets, and were I to compute $\operatorname{Err}_{\mathcal{T}}$ for each of them, what would be the average value of that sequence of errors?". It is easy to think of the type of people who care about this quantity (e.g. researchers, theorists, etc.). These people are not concerned with any one instance of a model (in contrast to the people in the previous paragraph); they are interested in the general behavior of a model.
So why the former and not the latter? The book is largely concerned with how to fit and validate models when readers have a single dataset in hand and want to know how that model may perform on new data.
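The distinction can be illustrated with a small simulation (my own sketch, not from the book): fit the same simple model to many independent training sets, record each conditional error $\operatorname{Err}_{\mathcal{T}}$, and note that Err is just the average of those values, which individually can be far from it:

```python
import random

def simulate_err_T(n_train=20, n_test=5000, seed=None):
    """Fit y = b*x by least squares on one training set (true model:
    y = 2x + noise) and return that fit's test MSE, i.e. Err_T."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(n_train)]
    ys = [2.0 * x + rng.gauss(0, 1) for x in xs]
    b = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    # conditional test error for this particular training set
    xt = [rng.uniform(-1, 1) for _ in range(n_test)]
    yt = [2.0 * x + rng.gauss(0, 1) for x in xt]
    return sum((y - b * x) ** 2 for x, y in zip(xt, yt)) / n_test

errs = [simulate_err_T(seed=s) for s in range(200)]  # one Err_T per training set
expected_err = sum(errs) / len(errs)                 # Monte Carlo estimate of Err
print(min(errs), expected_err, max(errs))            # Err_T scatters around Err
```

A practitioner with one dataset in hand lives at a single point of `errs`; the average `expected_err` describes the procedure, not their model.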
|
15,131
|
Why do we care more about test error than expected test error in Machine Learning?
|
+1 to Demetri Pananos's answer.
It may well be that we apply the same model $f$ to two different training datasets $\mathcal{T}$ and $\mathcal{T}'$. And $\mathrm{Err}_{\mathcal{T}}$ may be quite different than $\mathrm{Err}_{\mathcal{T}'}$ - either much lower, or much higher. This may be of vastly larger importance when we actually apply $f$ than the expected error $\mathrm{Err}$ over all possible $\mathcal{T}$s.
As an example, I do forecasting for supermarket replenishment and apply my model to many, many training datasets (essentially, historical sales of one product at one store). The loss directly transforms into the necessary safety stock. It's much more important to know the necessary safety stock per product and store than the "overall" safety stock.
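The variability of $\mathrm{Err}_{\mathcal{T}}$ across training sets can be made concrete with a toy simulation (a hypothetical setup: the "model" is just the training-set mean under squared-error loss, for which $\operatorname{Err}_{\mathcal{T}} = 1 + \bar{y}_{\mathcal{T}}^2$ in closed form):

```python
import random
random.seed(0)

# Toy setup: data are y ~ N(0, 1); the fitted "model" predicts the training mean.
# For this model, Err_T = E[(Y - mean_T)^2] = 1 + mean_T^2 exactly,
# so we can watch how it varies from one training set to the next.
def err_T(train):
    m = sum(train) / len(train)
    return 1 + m * m

errors = [err_T([random.gauss(0, 1) for _ in range(20)]) for _ in range(2000)]
expected_err = sum(errors) / len(errors)     # estimates Err = 1 + 1/20 = 1.05

print(round(min(errors), 3), round(max(errors), 3))  # spread of Err_T across T's
print(round(expected_err, 2))                        # ≈ 1.05
```

Each training set yields its own $\operatorname{Err}_{\mathcal{T}}$, scattered around their average Err, which is the point about per-product safety stock versus overall safety stock.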
|
15,132
|
Why do we care more about test error than expected test error in Machine Learning?
|
Computational learning theory is often concerned with putting bounds on $\mathrm{Err}$, e.g. via the VC dimension (which doesn't depend on the training set). The Support Vector Machine is an approximate implementation of one such bound (although IMHO the thing that makes it work well is the regularisation, rather than the hinge loss part). Perhaps it could be said that $\mathrm{Err}$ is important in the design of learning algorithms, whereas $\mathrm{Err}_\mathcal{T}$ is more relevant when applying them to a particular problem/dataset.
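As an illustration of the flavour of bound meant here, a sketch of the classic Vapnik generalization bound (the numbers plugged in are purely illustrative assumptions, not from any real learner):

```python
from math import log, sqrt

def vc_bound(train_err, n, h, delta=0.05):
    """Classic VC bound: with probability >= 1 - delta over the draw of the
    training set, Err <= train_err + sqrt((h*(ln(2n/h) + 1) + ln(4/delta)) / n).
    Note it depends only on n, h and delta -- not on the particular dataset."""
    return train_err + sqrt((h * (log(2 * n / h) + 1) + log(4 / delta)) / n)

print(round(vc_bound(train_err=0.10, n=10_000, h=50), 3))   # ≈ 0.288
```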
|
15,133
|
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
|
Here the natural null-hypothesis $H_0$ is that the coin is unbiased, that is, that the probability $p$ of a head is equal to $1/2$. The most reasonable alternate hypothesis $H_1$ is that $p\ne 1/2$, though one could make a case for the one-sided alternate hypothesis $p>1/2$.
We need to choose the significance level of the test. That's up to you. Two traditional numbers are $5$% and $1$%.
Suppose that the null hypothesis holds. Then the number of heads has a binomial distribution with mean $(900)(1/2)=450$, and standard deviation $\sqrt{(900)(1/2)(1/2)}=15$.
The probability that in tossing a fair coin the number of heads differs from $450$ by $40$ or more (in either direction) is, by symmetry,
$$2\sum_{k=490}^{900} \binom{900}{k}\left(\frac{1}{2}\right)^{900}.$$
This is not practical to compute by hand, but Wolfram Alpha gives an answer of roughly $0.008419$.
Thus, if the coin were unbiased, a number of heads that differs from $450$ by $40$ or more would be pretty unlikely: it would have probability less than $1$%. So at the $1$% significance level, we reject the null hypothesis.
We can also use the normal approximation to the binomial to estimate the probability that the number of heads is $\ge 490$ or $\le 410$ under the null hypothesis $p=1/2$. Our normal has mean $450$ and standard deviation $15$, so it is $\ge 490$ with the same probability that a standard normal is $\ge 40/15$. From tables for the normal, this is about $0.0039$. Double to take the left tail into account. We get about $0.0078$, fairly close to the value given by Wolfram Alpha, and under $1$%. So if we use $1$% as our level of significance, again we reject the null hypothesis $H_0$.
Comments: $1$. In the normal approximation to the binomial, we get a better approximation to the probability that the binomial is $\ge 490$ by calculating the probability that the normal is $\ge 489.5$. If you want to look it up, this is the continuity correction. If we use the normal approximation with continuity correction, we find that the probability of $490$ or more or $410$ or fewer heads is about $0.008468$, quite close to the "exact" answer provided by Wolfram Alpha. Thus we can find a very accurate estimate by, as in the bad old days, using tables of the standard normal and doing the arithmetic "by hand."
$2$. Suppose that we use the somewhat less natural alternate hypothesis $p>1/2$.
If $p=1/2$, the probability of $490$ or more is about $0.00421$. Thus again at the $1$% significance level, we would reject the null hypothesis, indeed we would reject it even if we were using significance level $0.005$.
Setting a significance level is always necessary, for it is possible for a fair coin to yield, say, $550$ or more heads in $900$ tosses; it is just ridiculously unlikely.
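The computations above are easy to reproduce; a short sketch (Python here, rather than tables or Wolfram Alpha):

```python
from math import comb, erf, sqrt

n, k = 900, 490

# Exact two-sided p-value: 2 * P(X >= 490) for X ~ Binomial(900, 1/2)
p_exact = 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# Normal approximation with continuity correction: 2 * P(Z >= (489.5 - 450)/15)
def norm_sf(z):                      # upper-tail probability of a standard normal
    return 0.5 * (1 - erf(z / sqrt(2)))

p_approx = 2 * norm_sf((489.5 - 450) / 15)

print(round(p_exact, 6))    # ≈ 0.008419, the Wolfram Alpha value
print(round(p_approx, 6))   # ≈ 0.008468
```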
|
15,134
|
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
|
If the coin is unbiased then the probability of 'heads' is $\frac{1}{2}$. Therefore, the number of heads thrown in 900 tries, $X$, has a ${\rm Binomial}(900,\frac{1}{2})$ distribution under the null hypothesis of a fair coin. So, the $p$-value - the probability of seeing a result this extreme or more extreme given that the coin is fair - is
$$ P( X \geq 490 ) $$
If you seek the 2-sided $p$-value, that would be
$$ 1 - P(410 < X < 490 ) $$
I'll leave it to you to describe why that is the case.
We know that the mass function for $ Y \sim {\rm Binomial}(n,p)$, is
$$ P(Y = y) = \binom{n}{y} p^y (1-p)^{n-y} $$
I'll leave it to you to calculate the $p$-value you seek.
Note: The sample size here is sufficiently large that you could use the normal approximation to the binomial distribution. I've detailed above how to calculate the exact $p$-value.
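A sketch of the calculation left to the reader, using the mass function above (with Python's `math.comb` as the binomial coefficient):

```python
from math import comb

n, p = 900, 0.5

def pmf(y):                                   # P(Y = y) for Y ~ Binomial(n, p)
    return comb(n, y) * p**y * (1 - p)**(n - y)

p_one_sided = sum(pmf(y) for y in range(490, n + 1))    # P(X >= 490)
p_two_sided = 1 - sum(pmf(y) for y in range(411, 490))  # 1 - P(410 < X < 490)

print(round(p_one_sided, 5))   # ≈ 0.00421
print(round(p_two_sided, 5))   # ≈ 0.00842, i.e. twice the one-sided value
```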
|
15,135
|
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
|
The example from the Wikipedia page on Bayes Factor seems quite relevant to the question. If we have two models, M1 where the coin is exactly unbiased ($q=0.5$), and M2 where the probability of a head is unknown, we use a flat prior distribution for $q$ on $[0,1]$. We then compute the Bayes factor
$K = \frac{p(x=490|M_2)}{p(x=490|M_1)}$
where
$p(x=490|M_1) = \mathrm{nchoosek}(900,490)\left(\frac12\right)^{900} = 7.5896\times10^{-4}$
and
$p(x=490|M_2) = \int_0^1 \mathrm{nchoosek}(900,490)\,q^{490}(1-q)^{410}\,dq = \frac{1}{901}$
This gives a Bayes factor of $K \approx 1.4624$, which according to the usual scale of interpretation is "barely worth mentioning".
Note however that (i) the Bayes factor has a built-in Occam penalty that favours simple models, and M1 is a lot simpler as it has no nuisance parameters, whereas M2 does; (ii) a flat prior on $q$ is not physically reasonable; in practice a biased coin is going to be close to fair unless the coin is obviously asymmetrical; (iii) it has been a long day and I could easily have made a mistake some(any)where in the analysis, from assumptions to calculations.
Note that the coin is biased if it is a physical object, as its asymmetry means that it won't be exactly as likely to come down heads as tails.
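A quick numerical check of the two marginal likelihoods and the resulting factor (a sketch; `math.comb` plays the role of `nchoosek`):

```python
from math import comb

n, k = 900, 490

p_m1 = comb(n, k) * 0.5**n   # likelihood under M1: fair coin, q = 1/2
p_m2 = 1 / (n + 1)           # flat prior on q: the integral collapses to 1/901

K = p_m2 / p_m1              # Bayes factor in favour of M2
print(round(K, 4))           # ≈ 1.4624, "barely worth mentioning"
```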
|
15,136
|
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
|
Null hypothesis, $H_0: P = 0.5$ (so $P = Q = 0.5$)
$H_1: P > 0.5$
where $P$ is the probability of a head occurring.
We know $z = (p - P)/\sqrt{PQ/N}$,
where $p = 490/900 \approx 0.544$.
Now $z = (0.544 - 0.5)/\sqrt{(0.5 \times 0.5)/900} \approx 2.67$.
Hence at the 5% level of significance ($1.64 < 2.67$), $H_0$ is rejected;
hence the coin is biased.
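The same arithmetic as a sketch:

```python
from math import sqrt

N = 900
p_hat = 490 / N                          # observed proportion of heads
P = 0.5                                  # value under the null hypothesis

z = (p_hat - P) / sqrt(P * (1 - P) / N)  # one-sample z statistic for a proportion
print(round(z, 2))   # ≈ 2.67, beyond the one-sided 5% critical value of 1.64
```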
|
15,137
|
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
|
Your question could be addressed in a few different ways.
The traditional test of hypothesis is designed to rule out possibilities, not necessarily prove them. In this case we can use $H_0: p=0.5$ as the null hypothesis and see if the data (the 490 out of 900 heads) can be used to reject this null hypothesis by computing a p-value. If the p-value is less than $\alpha$ then we reject the null, but a p-value $>\alpha$ does not mean that we can say the data supports the null, just that it is consistent with the assumption that the null is true; in truth the null could be false, with the true value of $p$ merely very close to $0.5$.
The "equivalence" approach would be to define unbiased not as $p=0.5$ but rather choose a small region around 0.5 to consider as unbiased $ 0.5-\epsilon < p < 0.5+\epsilon$. Then if the confidence interval on the true proportion lies fully within the equivalence interval of "unbiased" then the data would support the hypothesis of "unbiasedness".
Another approach would be to use a Bayesian approach where we start with a prior distribution on the true proportion $p$, including a point mass at 0.5 with the rest of the probability spread across the other possible values. Then combine that with the data to get a posterior. If the posterior probability of $p=0.5$ is high enough, then that would support the claim of being unbiased.
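For the equivalence approach, a sketch with a hypothetical margin of $\epsilon = 0.02$ and a plain Wald interval (both choices are illustrative assumptions, not prescriptions):

```python
from math import sqrt

n, k = 900, 490
p_hat = k / n
eps = 0.02                                  # assumed equivalence margin

se = sqrt(p_hat * (1 - p_hat) / n)          # Wald standard error
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

unbiased_supported = (0.5 - eps < lo) and (hi < 0.5 + eps)
print(round(lo, 3), round(hi, 3))   # ≈ 0.512 0.577
print(unbiased_supported)           # False: the CI is not inside (0.48, 0.52)
```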
|
15,138
|
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
|
And an R illustration:
Not bothering to approximate by the normal, we can look at a random variable distributed binomial with n=900 and p=0.5 under the null hypothesis (i.e. if the coin were unbiased then p = probability of heads (or tails) = 0.5).
If we would like to test the alternative $H_a: p \neq 0.5$ at alpha 0.05, we can look at the tails of the distribution under the null as follows and see that 490 falls outside the interval [421, 479]; thus we reject $H_0$.
n <- 900
p <- 0.5
qbinom(c(0.025, 0.975), size = n, prob = p)
# 421 479
|
15,139
|
How to assess whether a coin tossed 900 times and comes up heads 490 times is biased?
|
To clarify the Bayesian approach:
You start by knowing nothing, except that P(Heads) is in [0,1].
So start with a maximum entropy prior -> uniform(0,1). This can be represented as a beta distribution -> beta(1,1).
Each time you flip the coin, do a Bayesian update of the coin's P(Heads) by multiplying each point in the distribution by its likelihood (multiply by x if you flip heads, multiply by (1-x) if you get tails), and re-normalize the total probability to 1. This is what the beta distribution does, so if the first flip is heads you'll have beta(2,1). In your case (490 heads, 410 tails) you have beta(491,411).
From there I would calculate the 95% probability interval, and if 0.5 isn't in that interval, I would start to get suspicious.
The first time I went through this exercise I was really surprised at how long it took to converge... I started because someone said "if you flip a coin 100 times, you know P(Heads) to +/- 1%". This turns out to be totally wrong; you need orders of magnitude more than 100 flips.
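A sketch of the final step, using a normal approximation to the Beta(491, 411) posterior (accurate at these counts) to avoid needing a Beta quantile function:

```python
from math import sqrt

heads, tails = 490, 410
a, b = 1 + heads, 1 + tails        # Beta(1,1) prior -> Beta(491, 411) posterior

mean = a / (a + b)                                 # posterior mean of P(Heads)
sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))    # posterior standard deviation

lo, hi = mean - 1.96 * sd, mean + 1.96 * sd        # approximate 95% interval
print(round(lo, 3), round(hi, 3))  # ≈ 0.512 0.577 -- 0.5 lies outside: suspicious
```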
|
15,140
|
Among Matlab and Python, which language is good for statistical analysis?
|
As a diehard Matlab user for the last 10+ years, I recommend you learn Python. Once you are sufficiently skilled in a language, when you work in a language you are learning, it will seem like you are not being productive enough, and you will fall back to using your default best language. At the very least, I would suggest you try to become equally proficient in a number of languages (I would suggest R as well).
What I like about Matlab:
I am proficient in it.
It is the lingua franca among numerical analysts.
The profiling tool is very good. This is the only reason I use Matlab instead of octave.
There is a freeware clone, octave, which has good compliance with the reference implementation.
What I do not like about Matlab:
There is not a good system to manage third party (free or otherwise) packages and scripts. Mathworks controls the 'central file exchange', and installation of add-on packages seems very clunky, nothing like the excellent system that R has. Furthermore, Mathworks has no incentive to improve this situation, because they make money on selling toolboxes, which compete with freeware packages;
Licenses for parallel computation in Matlab are insanely expensive;
Much of the m-code, including many of the toolbox functions, and some builtins, were designed to be obviously correct, at the expense of efficiency and/or usability. The most glaring example of this is Matlab's median function, which performs a sort of the data, then takes the middle value. This has been the wrong algorithm since the 70's.
Saving graphs to file is dodgy at best in Matlab.
I have not found my user experience to have improved over the last 5 years (when I started using Matlab instead of octave), even though Mathworks continues to add bells and whistles. This indicates that I am not their target customer, rather they are looking to expand market share by making things worse for power users.
There are now 2 ways to do object-oriented programming in Matlab, which is confusing at best. Legacy code using the old style will persist for some time.
The Matlab UI is written in Java, which has unpleasant ideas about memory management.
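On the median point above: the k-th order statistic can be found in expected linear time without a full sort, e.g. with quickselect. A minimal Python sketch of the idea:

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element (0-based) in expected O(n) time."""
    pivot = random.choice(xs)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    if k < len(lo):
        return quickselect(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return quickselect([x for x in xs if x > pivot], k - len(lo) - len(eq))

def median(xs):
    n = len(xs)
    if n % 2:
        return quickselect(xs, n // 2)
    return (quickselect(xs, n // 2 - 1) + quickselect(xs, n // 2)) / 2

print(median([5, 1, 4, 2, 3]))       # 3
print(median([5, 1, 4, 2, 3, 6]))    # 3.5
```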
|
15,141
|
Among Matlab and Python, which language is good for statistical analysis?
|
Let's break it down into three areas (off the top of my head) where programming meets statistics: data crunching, numerical routines (optimization and such), and statistical libraries (modeling, etc.).
On the first, the biggest difference is that Python is a general purpose programming language. Matlab is great as long as your world is roughly isomorphic to a fortran numeric array. Once you start dealing with data munging and related issues, Python outshines Matlab. For example, see Greg Wilson's book: Data Crunching: Solve Everyday Problems Using Java, Python, and more.
On the second, Matlab really does shine with numeric work. A lot of the research community uses it and if you're looking for say, some algorithm related to a paper in compressed sensing, you're far more likely to find an implementation in Matlab. On the other hand, Matlab is kind of the PHP of scientific computing -- it strives to have a function for everything under the sun. The resulting aesthetics and architecture are maddening if you're a programming language geek, but in utilitarian terms, it gets the job done. A lot of this has become less relevant with the rise of Numpy/Scipy; you're just as likely to find optimization and machine learning libraries available for Python. Interfacing with C is about as easy in either language.
On the availability of statistical libraries for modeling and such, both are somewhat lacking when compared to something like R. (Though I suspect both will meet the needs for 80% of people doing statistical work.) For the Python side of things see this question: Python as a statistics workbench. For the Matlab side, I know there's a statistics toolbox, but I'll let someone more knowledgeable fill in the blanks (my experience with Matlab is limited to numerical work unrelated to statistics).
|
Among Matlab and Python, which language is good for statistical analysis?
|
Lets break it down into three areas (off the top of my head) where programming meets statistics: data crunching, numerical routines (optimization and such) and statistical libraries (modeling, etc).
|
Among Matlab and Python, which language is good for statistical analysis?
Let's break it down into three areas (off the top of my head) where programming meets statistics: data crunching, numerical routines (optimization and such), and statistical libraries (modeling, etc.).
On the first, the biggest difference is that Python is a general purpose programming language. Matlab is great as long as your world is roughly isomorphic to a fortran numeric array. Once you start dealing with data munging and related issues, Python outshines Matlab. For example, see Greg Wilson's book: Data Crunching: Solve Everyday Problems Using Java, Python, and more.
On the second, Matlab really does shine with numeric work. A lot of the research community uses it, and if you're looking for, say, some algorithm related to a paper in compressed sensing, you're far more likely to find an implementation in Matlab. On the other hand, Matlab is kind of the PHP of scientific computing -- it strives to have a function for everything under the sun. The resulting aesthetics and architecture are maddening if you're a programming language geek, but in utilitarian terms, it gets the job done. A lot of this has become less relevant with the rise of Numpy/Scipy; you're now just as likely to find optimization and machine learning libraries available for Python. Interfacing with C is about as easy in either language.
On the availability of statistical libraries for modeling and such, both are somewhat lacking when compared to something like R. (Though I suspect both will meet the needs for 80% of people doing statistical work.) For the Python side of things see this question: Python as a statistics workbench. For the Matlab side, I know there's a statistics toolbox, but I'll let someone more knowledgeable fill in the blanks (my experience with Matlab is limited to numerical work unrelated to statistics).
|
Among Matlab and Python, which language is good for statistical analysis?
Let's break it down into three areas (off the top of my head) where programming meets statistics: data crunching, numerical routines (optimization and such), and statistical libraries (modeling, etc.).
|
15,142
|
Among Matlab and Python, which language is good for statistical analysis?
|
I also have been an avid Matlab user for 10+ years. For many of those years I had no reason to work beyond the toolbox I had created for my job. Although many functions were created for a toolbox, I often needed to create algorithms for quick turnaround analysis. Since these algorithms often utilize matrix math, Matlab was an ideal candidate for my job. In addition to my Matlab toolbox of code, others in my group worked extensively in Java since there was clear interoperability between the languages. For years I was completely happy with Matlab, but about 3 years ago I decided to start the slow transition away from Matlab, and I'm happy to say I haven't opened it in about a year now. Here are the reasons for my move:
I work with online and offline computing systems, the licensing system was always a headache. It always seemed to happen that when we most needed Matlab, the license would expire or suddenly have issues. This was always a headache. Also, if we ever needed to share code, and the other party did not have licenses for the same toolboxes, this created a headache. It's not free
I often need to create presentations. Matlab provides extensive tools for creating figures, which makes it very powerful for algorithm design, but saving a figure such that it can be inserted into a presentation and look nice is no simple task. I often had to insert an EPS file into Adobe Illustrator to remove all the garbage, fix the fonts, and clean up the lines. There are some tools to help with this on the file exchange, though (export_fig.m).
I often get Matlab code from others. When this happens, I almost always rewrite it because: their API is not compatible with my data, their code doesn't make sense, it's slow, it doesn't output what I need,... Basically people who develop in Matlab are not software engineers and Matlab does not encourage any type of design principle.
I'm a power user. I like terminals. I hate the GUI--hate it. And when they added the "windows" style ribbon, I hated it some more. Basically their tweaks to the GUI and terrible memory management pushed my last button and I decided to leave. Using the -nodesktop option is good most of the time, but has its issues.
There are many possibilities for designing functions (using OO or functional design), but none feel right; most feel ad hoc. I do not get satisfaction from designing good functions in Matlab.
The community is big, but it isn't easy to share and find good code. The file exchange isn't that great.
These are only a few of my many gripes with Matlab. Its one shining attribute: it's easy, really easy, to write code quickly (if not ugly). I did leave it though, and my quest led me through Clojure->JavaScript->Python<->Julia; yeah, I've been all over the place.
Clojure: beautiful functional language. My reason for using Clojure was its ability to script Java. A lot of our "big" code base is in Java, so this made a lot of sense. At the time a lot of scientific processing was not readily available, and not a lot with visualization either. But I think this is changing.
Javascript: after seeing the benchmarks at http://julialang.org/, and since I was definitely interested in the visualization capability of D3, I decided to try JavaScript. JavaScript is surprisingly very fast. But if you really want to hate yourself, learn JavaScript.
Python: Python has an amazing community and has lots of great projects going on. The IPython Notebook is amazing for many reasons (one of them being simple copy/paste of figures into PowerPoint). Projects like NumPy/SciPy/Scikit-Learn/Pandas have really made Python fun and easy to use. It's so easy to use on multiple cores or clusters. I've been really happy with the switch.
Julia: Julia is amazing, especially for Matlab users. It's in its infancy though, so lots of changes are going on. One of the major drawbacks to Python is that it does not have all the built-in functionality that Matlab has. Sure, NumPy/SciPy bring that functionality, but it's not built-in, and you have to make decisions on whether to use pure Python objects or NumPy objects. Julia basically has everything you wish Python had coming from Matlab. I'd wait, but this is the best option for Matlab users in the future.
|
Among Matlab and Python, which language is good for statistical analysis?
|
I also have been an avid Matlab user for 10+ years. For many of those years I had no reason to work beyond the toolbox I had created for my job. Although many functions were created for a toolbox, I
|
Among Matlab and Python, which language is good for statistical analysis?
I also have been an avid Matlab user for 10+ years. For many of those years I had no reason to work beyond the toolbox I had created for my job. Although many functions were created for a toolbox, I often needed to create algorithms for quick turnaround analysis. Since these algorithms often utilize matrix math, Matlab was an ideal candidate for my job. In addition to my Matlab toolbox of code, others in my group worked extensively in Java since there was clear interoperability between the languages. For years I was completely happy with Matlab, but about 3 years ago I decided to start the slow transition away from Matlab, and I'm happy to say I haven't opened it in about a year now. Here are the reasons for my move:
I work with online and offline computing systems, the licensing system was always a headache. It always seemed to happen that when we most needed Matlab, the license would expire or suddenly have issues. This was always a headache. Also, if we ever needed to share code, and the other party did not have licenses for the same toolboxes, this created a headache. It's not free
I often need to create presentations. Matlab provides extensive tools for creating figures, which makes it very powerful for algorithm design, but saving a figure such that it can be inserted into a presentation and look nice is no simple task. I often had to insert an EPS file into Adobe Illustrator to remove all the garbage, fix the fonts, and clean up the lines. There are some tools to help with this on the file exchange, though (export_fig.m).
I often get Matlab code from others. When this happens, I almost always rewrite it because: their API is not compatible with my data, their code doesn't make sense, it's slow, it doesn't output what I need,... Basically people who develop in Matlab are not software engineers and Matlab does not encourage any type of design principle.
I'm a power user. I like terminals. I hate the GUI--hate it. And when they added the "windows" style ribbon, I hated it some more. Basically their tweaks to the GUI and terrible memory management pushed my last button and I decided to leave. Using the -nodesktop option is good most of the time, but has its issues.
There are many possibilities for designing functions (using OO or functional design), but none feel right; most feel ad hoc. I do not get satisfaction from designing good functions in Matlab.
The community is big, but it isn't easy to share and find good code. The file exchange isn't that great.
These are only a few of my many gripes with Matlab. Its one shining attribute: it's easy, really easy, to write code quickly (if not ugly). I did leave it though, and my quest led me through Clojure->JavaScript->Python<->Julia; yeah, I've been all over the place.
Clojure: beautiful functional language. My reason for using Clojure was its ability to script Java. A lot of our "big" code base is in Java, so this made a lot of sense. At the time a lot of scientific processing was not readily available, and not a lot with visualization either. But I think this is changing.
Javascript: after seeing the benchmarks at http://julialang.org/, and since I was definitely interested in the visualization capability of D3, I decided to try JavaScript. JavaScript is surprisingly very fast. But if you really want to hate yourself, learn JavaScript.
Python: Python has an amazing community and has lots of great projects going on. The IPython Notebook is amazing for many reasons (one of them being simple copy/paste of figures into PowerPoint). Projects like NumPy/SciPy/Scikit-Learn/Pandas have really made Python fun and easy to use. It's so easy to use on multiple cores or clusters. I've been really happy with the switch.
Julia: Julia is amazing, especially for Matlab users. It's in its infancy though, so lots of changes are going on. One of the major drawbacks to Python is that it does not have all the built-in functionality that Matlab has. Sure, NumPy/SciPy bring that functionality, but it's not built-in, and you have to make decisions on whether to use pure Python objects or NumPy objects. Julia basically has everything you wish Python had coming from Matlab. I'd wait, but this is the best option for Matlab users in the future.
|
Among Matlab and Python, which language is good for statistical analysis?
I also have been an avid Matlab user for 10+ years. For many of those years I had no reason to work beyond the toolbox I had created for my job. Although many functions were created for a toolbox, I
|
15,143
|
Optimizing OLS with Newton's Method
|
If used for OLS regression, Newton's method converges in a single step, and is equivalent to using the standard, closed form solution for the coefficients.
On each iteration, Newton's method constructs a quadratic approximation of the loss function around the current parameters, based on the gradient and Hessian. The parameters are then updated by minimizing this approximation. For quadratic loss functions (as we have with OLS regression) the approximation is equivalent to the loss function itself, so convergence occurs in a single step.
This assumes we're using the 'vanilla' version of Newton's method. Some variants use a restricted step size, in which case multiple steps would be needed. It also assumes the design matrix has full rank. If this doesn't hold, the Hessian is non-invertible so Newton's method can't be used without modifying the problem and/or update rule (also, there's no unique OLS solution in this case).
Proof
Assume the design matrix $X \in \mathbb{R}^{n \times d}$ has full rank. Let $y \in \mathbb{R}^n$ be the responses, and $w \in \mathbb{R}^d$ be the coefficients. The loss function is:
$$L(w) = \frac{1}{2} \|y - X w\|_2^2$$
The gradient and Hessian are:
$$\nabla L(w) = X^T X w - X^T y \quad \quad
H_L(w) = X^T X$$
Newton's method sets the parameters to an initial guess $w_0$, then iteratively updates them. Let $w_t$ be the current parameters on iteration $t$. The updated parameters $w_{t+1}$ are obtained by subtracting the product of the inverse Hessian and the gradient:
$$w_{t+1} = w_t - H_L(w_t)^{-1} \nabla L(w_t)$$
Plug in the expressions for the gradient and Hessian:
$$w_{t+1} = w_t - (X^T X)^{-1} (X^T X w_t - X^T y)$$
$$= (X^T X)^{-1} X^T y$$
This is the standard, closed form expression for the OLS coefficients. Therefore, no matter what we choose for the initial guess $w_0$, we'll have the correct solution at $w_1$ after a single iteration.
Furthermore, this is a stationary point. Notice that the expression for $w_{t+1}$ doesn't depend on $w_t$, so the solution won't change if we continue beyond one iteration. This indicates that Newton's method converges in a single step.
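As a numerical sanity check, here is a minimal NumPy sketch of the argument above (the data is made up): a single Newton update from an arbitrary starting point lands exactly on the closed-form OLS solution, and a second update leaves it unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy full-rank design matrix and responses (hypothetical data).
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)

def newton_step(w, X, y):
    """One Newton update for L(w) = 0.5 * ||y - Xw||^2."""
    grad = X.T @ X @ w - X.T @ y        # gradient at w
    H = X.T @ X                         # Hessian (constant in w)
    return w - np.linalg.solve(H, grad) # w - H^{-1} grad

w0 = rng.normal(size=3)                 # arbitrary initial guess
w1 = newton_step(w0, X, y)              # one Newton iteration
w2 = newton_step(w1, X, y)              # a second iteration (no change)

w_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # closed-form OLS solution

print(np.allclose(w1, w_ols))  # True: converged in a single step
print(np.allclose(w2, w1))     # True: w1 is a fixed point
```

Solving the linear system with `np.linalg.solve` rather than forming the explicit inverse mirrors how the update $w_{t+1} = w_t - H_L(w_t)^{-1}\nabla L(w_t)$ is computed in practice.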
|
Optimizing OLS with Newton's Method
|
If used for OLS regression, Newton's method converges in a single step, and is equivalent to using the standard, closed form solution for the coefficients.
On each iteration, Newton's method construct
|
Optimizing OLS with Newton's Method
If used for OLS regression, Newton's method converges in a single step, and is equivalent to using the standard, closed form solution for the coefficients.
On each iteration, Newton's method constructs a quadratic approximation of the loss function around the current parameters, based on the gradient and Hessian. The parameters are then updated by minimizing this approximation. For quadratic loss functions (as we have with OLS regression) the approximation is equivalent to the loss function itself, so convergence occurs in a single step.
This assumes we're using the 'vanilla' version of Newton's method. Some variants use a restricted step size, in which case multiple steps would be needed. It also assumes the design matrix has full rank. If this doesn't hold, the Hessian is non-invertible so Newton's method can't be used without modifying the problem and/or update rule (also, there's no unique OLS solution in this case).
Proof
Assume the design matrix $X \in \mathbb{R}^{n \times d}$ has full rank. Let $y \in \mathbb{R}^n$ be the responses, and $w \in \mathbb{R}^d$ be the coefficients. The loss function is:
$$L(w) = \frac{1}{2} \|y - X w\|_2^2$$
The gradient and Hessian are:
$$\nabla L(w) = X^T X w - X^T y \quad \quad
H_L(w) = X^T X$$
Newton's method sets the parameters to an initial guess $w_0$, then iteratively updates them. Let $w_t$ be the current parameters on iteration $t$. The updated parameters $w_{t+1}$ are obtained by subtracting the product of the inverse Hessian and the gradient:
$$w_{t+1} = w_t - H_L(w_t)^{-1} \nabla L(w_t)$$
Plug in the expressions for the gradient and Hessian:
$$w_{t+1} = w_t - (X^T X)^{-1} (X^T X w_t - X^T y)$$
$$= (X^T X)^{-1} X^T y$$
This is the standard, closed form expression for the OLS coefficients. Therefore, no matter what we choose for the initial guess $w_0$, we'll have the correct solution at $w_1$ after a single iteration.
Furthermore, this is a stationary point. Notice that the expression for $w_{t+1}$ doesn't depend on $w_t$, so the solution won't change if we continue beyond one iteration. This indicates that Newton's method converges in a single step.
|
Optimizing OLS with Newton's Method
If used for OLS regression, Newton's method converges in a single step, and is equivalent to using the standard, closed form solution for the coefficients.
On each iteration, Newton's method construct
|
15,144
|
Optimizing OLS with Newton's Method
|
It takes one iteration, basically because Newton's method works by solving an approximating quadratic equation in one step. Since the squared error loss is quadratic, the approximation is exact.
Newton's method does
$$\beta \gets \beta-\frac{f'(\beta)}{f''(\beta)}$$
and we have
$$f(\beta)=\|y-x\beta\|^2$$
$$f'(\beta)=-2x^T (y-x\beta)$$
$$f''(\beta)=2x^Tx$$
First, for simplicity, do it starting at $\beta=0$. The first iterate is $-f'(0)/f''(0)$, which is
$$-(2x^Tx)^{-1}(-2x^T y)=(x^Tx)^{-1}x^Ty,$$
so we get the standard solution.
Starting somewhere else, the first iterate is
$$\beta-(2x^Tx)^{-1}(-2x^T (y-x\beta))= \beta+(x^Tx)^{-1}x^Ty-(x^Tx)^{-1}x^Tx\beta=(x^Tx)^{-1}x^Ty.$$
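A quick numeric illustration of the same point (toy single-predictor data; all names are hypothetical): the first Newton iterate is the same regardless of where $\beta$ starts, and it equals the least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(30, 1))  # single-column design matrix (toy data)
y = rng.normal(size=(30, 1))

def first_iterate(beta0):
    """One Newton step for f(beta) = ||y - x*beta||^2."""
    fprime = -2 * x.T @ (y - x * beta0)  # f'(beta0)
    fsecond = 2 * x.T @ x                # f''(beta0)
    return beta0 - fprime / fsecond

b_from_zero = first_iterate(0.0)
b_from_far = first_iterate(100.0)
b_ols = np.linalg.lstsq(x, y, rcond=None)[0]

print(np.allclose(b_from_zero, b_from_far))  # True: the start point is irrelevant
print(np.allclose(b_from_zero, b_ols))       # True: matches least squares
```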
|
Optimizing OLS with Newton's Method
|
It takes one iteration, basically because Newton's method works by solving an approximating quadratic equation in one step. Since the squared error loss is quadratic, the approximation is exact.
Newto
|
Optimizing OLS with Newton's Method
It takes one iteration, basically because Newton's method works by solving an approximating quadratic equation in one step. Since the squared error loss is quadratic, the approximation is exact.
Newton's method does
$$\beta \gets \beta-\frac{f'(\beta)}{f''(\beta)}$$
and we have
$$f(\beta)=\|y-x\beta\|^2$$
$$f'(\beta)=-2x^T (y-x\beta)$$
$$f''(\beta)=2x^Tx$$
First, for simplicity, do it starting at $\beta=0$. The first iterate is $-f'(0)/f''(0)$, which is
$$-(2x^Tx)^{-1}(-2x^T y)=(x^Tx)^{-1}x^Ty,$$
so we get the standard solution.
Starting somewhere else, the first iterate is
$$\beta-(2x^Tx)^{-1}(-2x^T (y-x\beta))= \beta+(x^Tx)^{-1}x^Ty-(x^Tx)^{-1}x^Tx\beta=(x^Tx)^{-1}x^Ty.$$
|
Optimizing OLS with Newton's Method
It takes one iteration, basically because Newton's method works by solving an approximating quadratic equation in one step. Since the squared error loss is quadratic, the approximation is exact.
Newto
|
15,145
|
What does interpolating the training set actually mean?
|
Your question already got two nice answers, but I feel that some more context is needed.
First, we are talking here about overparametrized models and the double descent phenomenon. By overparametrized models we mean models that have way more parameters than datapoints. For example, Neal (2019) and Neal et al. (2018) trained a network with hundreds of thousands of parameters for a sample of 100 MNIST images. The discussed models are so large that they would be unreasonable for any practical applications. Because they are so large, they are able to fully memorize the training data. Before the double descent phenomenon attracted more attention in the machine learning community, memorizing the training data was assumed to lead to overfitting and poor generalization in general.
As already mentioned by @jcken, if a model has a huge number of parameters, it can easily fit a function to the data such that it "connects all the dots" and at prediction time just interpolates between the points. I'll repeat myself, but until recently we would assume that this would lead to overfitting and poor performance. With the insanely huge models, this doesn't have to be the case. The models would still interpolate, but the function would be so flexible that it won't hurt the test set performance.
To understand it better, consider the lottery ticket hypothesis. Loosely speaking, it says that if you randomly initialize and train a big machine learning model (deep network), this network would contain a smaller sub-network, the "lottery ticket", such that you could prune the big network while keeping the performance guarantees. The image below (taken from the linked post) illustrates such pruning. Having a huge number of parameters is like buying piles of lottery tickets: the more you have, the higher your chance of winning. In such a case, you can find a lottery ticket model that interpolates between the datapoints but also generalizes.
Another way to think about it is to consider a neural network as a kind of ensemble model. Every neural network has a penultimate layer (image below, adapted from this) that you can think of as a collection of intermediate representations of your problem. The outputs of this layer are then aggregated (usually using a dense layer) to make the final prediction. This is like ensembling many smaller models. Again, even if each of the smaller models memorized the data and overfit, by aggregating them, the effects would hopefully cancel out.
All machine learning algorithms kind of interpolate between the datapoints, but if you have more parameters than datapoints, you can literally memorize the data and interpolate between them.
|
What does interpolating the training set actually mean?
|
Your question already got two nice answers, but I feel that some more context is needed.
First, we are talking here about overparametrized models and the double descent phenomenon. By overparametrized
|
What does interpolating the training set actually mean?
Your question already got two nice answers, but I feel that some more context is needed.
First, we are talking here about overparametrized models and the double descent phenomenon. By overparametrized models we mean models that have way more parameters than datapoints. For example, Neal (2019) and Neal et al. (2018) trained a network with hundreds of thousands of parameters for a sample of 100 MNIST images. The discussed models are so large that they would be unreasonable for any practical applications. Because they are so large, they are able to fully memorize the training data. Before the double descent phenomenon attracted more attention in the machine learning community, memorizing the training data was assumed to lead to overfitting and poor generalization in general.
As already mentioned by @jcken, if a model has a huge number of parameters, it can easily fit a function to the data such that it "connects all the dots" and at prediction time just interpolates between the points. I'll repeat myself, but until recently we would assume that this would lead to overfitting and poor performance. With the insanely huge models, this doesn't have to be the case. The models would still interpolate, but the function would be so flexible that it won't hurt the test set performance.
To understand it better, consider the lottery ticket hypothesis. Loosely speaking, it says that if you randomly initialize and train a big machine learning model (deep network), this network would contain a smaller sub-network, the "lottery ticket", such that you could prune the big network while keeping the performance guarantees. The image below (taken from the linked post) illustrates such pruning. Having a huge number of parameters is like buying piles of lottery tickets: the more you have, the higher your chance of winning. In such a case, you can find a lottery ticket model that interpolates between the datapoints but also generalizes.
Another way to think about it is to consider a neural network as a kind of ensemble model. Every neural network has a penultimate layer (image below, adapted from this) that you can think of as a collection of intermediate representations of your problem. The outputs of this layer are then aggregated (usually using a dense layer) to make the final prediction. This is like ensembling many smaller models. Again, even if each of the smaller models memorized the data and overfit, by aggregating them, the effects would hopefully cancel out.
All machine learning algorithms kind of interpolate between the datapoints, but if you have more parameters than datapoints, you can literally memorize the data and interpolate between them.
|
What does interpolating the training set actually mean?
Your question already got two nice answers, but I feel that some more context is needed.
First, we are talking here about overparametrized models and the double descent phenomenon. By overparametrized
|
15,146
|
What does interpolating the training set actually mean?
|
In layman's terms, an interpolator will literally 'join the dots'.
Here's a simple graphical summary of what interpolation can do and why it can be awful. I'd like to stress that interpolation does play a useful role in statistics/ml but should be used carefully. The black dots are training data and the red crosses are a similar dataset drawn from the same data generating process - they can be thought of as a test set.
We can see that the fit in the left-hand plot is okay for both the training and test data. On the left I just used linear regression to fit a line to the data (just $2$ parameters). The curve in the right-hand plot perfectly predicts the training set but looks nothing like the test set; I used an $11^{th}$ order polynomial (plus intercept) to fit the interpolator. Additionally, on the test set, the linear fit gives $MSE = 23.8$ while the interpolator gives $MSE = 10350842349$ -- not good!
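The figures themselves aren't reproduced here, but the experiment is easy to re-run as a sketch (synthetic data and a different seed, so the exact MSE values will differ from those quoted above): fit a line and a degree-$11$ interpolating polynomial to $12$ noisy training points, then score both on a fresh test set.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data generating process: a straight line plus Gaussian noise.
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(scale=0.3, size=12)
x_test = np.linspace(0.02, 0.98, 12)
y_test = 2 * x_test + rng.normal(scale=0.3, size=12)

linear = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)   # 2 parameters
interp = np.polynomial.Polynomial.fit(x_train, y_train, deg=11)  # 12 parameters: exact fit

def mse(model, x, y):
    return np.mean((model(x) - y) ** 2)

print(mse(interp, x_train, y_train))  # essentially 0: it joins all the dots
print(mse(linear, x_test, y_test))    # modest test error
print(mse(interp, x_test, y_test))    # typically far larger test error
```

With $12$ points and $12$ polynomial coefficients, the least-squares fit has a unique exact solution, so the "interpolator" passes through every training point while the linear fit averages out the noise.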
|
What does interpolating the training set actually mean?
|
In layman's terms, an interpolator will literally 'join the dots'.
Here's a simple graphical summary of what interpolation can do and why it can be awful. I'd like to stress that interpolation does pl
|
What does interpolating the training set actually mean?
In layman's terms, an interpolator will literally 'join the dots'.
Here's a simple graphical summary of what interpolation can do and why it can be awful. I'd like to stress that interpolation does play a useful role in statistics/ml but should be used carefully. The black dots are training data and the red crosses are a similar dataset drawn from the same data generating process - they can be thought of as a test set.
We can see that the fit in the left-hand plot is okay for both the training and test data. On the left I just used linear regression to fit a line to the data (just $2$ parameters). The curve in the right-hand plot perfectly predicts the training set but looks nothing like the test set; I used an $11^{th}$ order polynomial (plus intercept) to fit the interpolator. Additionally, on the test set, the linear fit gives $MSE = 23.8$ while the interpolator gives $MSE = 10350842349$ -- not good!
|
What does interpolating the training set actually mean?
In layman's terms, an interpolator will literally 'join the dots'.
Here's a simple graphical summary of what interpolation can do and why it can be awful. I'd like to stress that interpolation does pl
|
15,147
|
What does interpolating the training set actually mean?
|
Apart from the literal meaning of interpolation, this is related to the phenomenon in which deep learning models totally memorize the training data. Hence, both interpolation and memorisation in this paper/context mean zero training loss while still not overfitting on the test set. Hence the curious, still unresolved phenomenon: what we would normally call overfitting (overtraining, actually) does not hurt test performance.
|
What does interpolating the training set actually mean?
|
Apart from literal meaning of interpolation, this is related to something called deep learning models totally memorize the training data. Hence, both interpolating and memorisation in this paper/cont
|
What does interpolating the training set actually mean?
Apart from the literal meaning of interpolation, this is related to the phenomenon in which deep learning models totally memorize the training data. Hence, both interpolation and memorisation in this paper/context mean zero training loss while still not overfitting on the test set. Hence the curious, still unresolved phenomenon: what we would normally call overfitting (overtraining, actually) does not hurt test performance.
|
What does interpolating the training set actually mean?
Apart from literal meaning of interpolation, this is related to something called deep learning models totally memorize the training data. Hence, both interpolating and memorisation in this paper/cont
|
15,148
|
What does interpolating the training set actually mean?
|
I would add, the quote actually contains the definition, "fitting all the training examples... including noisy ones". So the training loss is zero. The comment "including noisy ones" implies that the data is generated by a process, say
$y= f(x) + \epsilon$
where $\epsilon$ represents noise. By fitting a model $\hat{f}$ such that $\hat{f}(x) = y$ for every training example, even when $\epsilon$ is non-zero, you are interpolating.
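A tiny sketch of this (synthetic data; `np.sin` is just a stand-in for the unknown $f$): a piecewise-linear interpolator passes through every noisy observation, so the training loss is zero even though the noise has been fitted along with the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.sort(rng.uniform(0, 10, size=20))
f = np.sin                         # hypothetical true function f
eps = rng.normal(scale=0.5, size=20)
y = f(x) + eps                     # noisy observations y = f(x) + eps

# A piecewise-linear interpolator fits every (x, y) pair exactly:
y_hat = np.interp(x, x, y)
train_loss = np.mean((y_hat - y) ** 2)
print(train_loss)  # essentially zero: the noise is fitted too
```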
|
What does interpolating the training set actually mean?
|
I would add, the quote actually contains the definition, "fitting all the training examples... including noisy ones". So the training loss is zero. The comment "including noisy ones" implies the the d
|
What does interpolating the training set actually mean?
I would add, the quote actually contains the definition, "fitting all the training examples... including noisy ones". So the training loss is zero. The comment "including noisy ones" implies that the data is generated by a process, say
$y= f(x) + \epsilon$
where $\epsilon$ represents noise. By fitting a model $\hat{f}$ such that $\hat{f}(x) = y$ for every training example, even when $\epsilon$ is non-zero, you are interpolating.
|
What does interpolating the training set actually mean?
I would add, the quote actually contains the definition, "fitting all the training examples... including noisy ones". So the training loss is zero. The comment "including noisy ones" implies the the d
|
15,149
|
Two envelope problem revisited
|
1. UNNECESSARY PROBABILITIES.
The next two sections of this note analyze the "guess which is larger" and "two envelope" problems using standard tools of decision theory (2). This approach, although straightforward, appears to be new. In particular, it identifies a set of decision procedures for the two envelope problem that are demonstrably superior to the “always switch” or “never switch” procedures.
Section 2 introduces (standard) terminology, concepts, and notation. It analyzes all possible decision procedures for the "guess which is larger problem." Readers familiar with this material might like to skip this section. Section 3 applies a similar analysis to the two envelope problem. Section 4, the conclusions, summarizes the key points.
All published analyses of these puzzles assume there is a probability distribution governing the possible states of nature. This assumption, however, is not part of the puzzle statements. The key idea behind these analyses is that dropping this (unwarranted) assumption leads to a simple resolution of the apparent paradoxes in these puzzles.
2. THE “GUESS WHICH IS LARGER” PROBLEM.
An experimenter is told that different real numbers $x_1$ and $x_2$ are written on two slips of paper. She looks at the number on a randomly chosen slip. Based only on this one observation, she must decide whether it is the smaller or larger of the two numbers.
Simple but open-ended problems like this about probability are notorious for being confusing and counter-intuitive. In particular, there are at least three distinct ways in which probability enters the picture. To clarify this, let's adopt a formal experimental point of view (2).
Begin by specifying a loss function. Our goal will be to minimize its expectation, in a sense to be defined below. A good choice is to make the loss equal to $1$ when the experimenter guesses correctly and $0$ otherwise. The expectation of this loss function is the probability of guessing incorrectly. In general, by assigning various penalties to wrong guesses, a loss function captures the objective of guessing correctly. To be sure, adopting a loss function is as arbitrary as assuming a prior probability distribution on $x_1$ and $x_2$, but it is more natural and fundamental. When we are faced with making a decision, we naturally consider the consequences of being right or wrong. If there are no consequences either way, then why care? We implicitly undertake considerations of potential loss whenever we make a (rational) decision, and so we benefit from an explicit consideration of loss, whereas the use of probability to describe the possible values on the slips of paper is unnecessary, artificial, and (as we shall see) can prevent us from obtaining useful solutions.
Decision theory models observational results and our analysis of them. It uses three additional mathematical objects: a sample space, a set of “states of nature,” and a decision procedure.
The sample space $S$ consists of all possible observations; here it can be identified with $\mathbb{R}$ (the set of real numbers).
The states of nature $\Omega$ are the possible probability distributions governing the experimental outcome. (This is the first sense in which we may talk about the “probability” of an event.) In the “guess which is larger” problem, these are the discrete distributions taking values at distinct real numbers $x_1$ and $x_2$ with equal probabilities of $\frac{1}{2}$ at each value. $\Omega$ can be parameterized by $\{\omega = (x_1, x_2) \in \mathbb{R}\times\mathbb{R}\ |\ x_1 \gt x_2\}.$
The decision space is the binary set $\Delta = \{\text{smaller}, \text{larger}\}$ of possible decisions.
In these terms, the loss function is a real-valued function defined on $\Omega \times \Delta$. It tells us how “bad” a decision is (the second argument) compared to reality (the first argument).
The most general decision procedure $\delta$ available to the experimenter is a randomized one: its value for any experimental outcome is a probability distribution on $\Delta$. That is, the decision to make upon observing outcome $x$ is not necessarily definite, but rather is to be chosen randomly according to a distribution $\delta(x)$. (This is the second way in which probability may be involved.)
When $\Delta$ has just two elements, any randomized procedure can be identified by the probability it assigns to a prespecified decision, which to be concrete we take to be “larger.”
A physical spinner implements such a binary randomized procedure: the freely-spinning pointer will come to stop in the upper area, corresponding to one decision in $\Delta$, with probability $\delta$, and otherwise will stop in the lower left area with probability $1-\delta(x)$. The spinner is completely determined by specifying the value of $\delta(x)\in[0,1]$.
Thus a decision procedure can be thought of as a function
$$\delta^\prime:S\to[0,1],$$
where
$${\Pr}_{\delta(x)}(\text{larger}) = \delta^\prime(x)\ \text{ and }\ {\Pr}_{\delta(x)}(\text{smaller})=1-\delta^\prime(x).$$
Conversely, any such function $\delta^\prime$ determines a randomized decision procedure. The randomized decisions include deterministic decisions in the special case where the range of $\delta^\prime$ lies in $\{0,1\}$.
Let us say that the cost of a decision procedure $\delta$ for an outcome $x$ is the expected loss of $\delta(x)$. The expectation is with respect to the probability distribution $\delta(x)$ on the decision space $\Delta$. Each state of nature $\omega$ (which, recall, is a Binomial probability distribution on the sample space $S$) determines the expected cost of any procedure $\delta$; this is the risk of $\delta$ for $\omega$, $\text{Risk}_\delta(\omega)$. Here, the expectation is taken with respect to the state of nature $\omega$.
Decision procedures are compared in terms of their risk functions. When the state of nature is truly unknown, $\varepsilon$ and $\delta$ are two procedures, and $\text{Risk}_\varepsilon(\omega)\ge \text{Risk}_\delta(\omega)$ for all $\omega$, then there is no sense in using procedure $\varepsilon$, because procedure $\delta$ is never any worse (and might be better in some cases). Such a procedure $\varepsilon$ is inadmissible; otherwise, it is admissible. Often many admissible procedures exist. We shall consider any of them “good” because none of them can be consistently out-performed by some other procedure.
Note that no prior distribution is introduced on $\Omega$ (a “mixed strategy for $C$” in the terminology of (1)). This is the third way in which probability may be part of the problem setting. Using it makes the present analysis more general than that of (1) and its references, while yet being simpler.
Table 1 evaluates the risk when the true state of nature is given by $\omega=(x_1, x_2).$ Recall that $x_1 \gt x_2.$
Table 1.
$$\matrix {
\text{Decision:}& & \text{Larger} & \text{Larger} & \text{Smaller} & \text{Smaller}\\
\text{Outcome} & \text{Probability} & \text{Probability} & \text{Loss} & \text{Probability} & \text{Loss} & \text{Cost} \\
x_1 & 1/2 & \delta^\prime(x_1) & 0 & 1 - \delta^\prime(x_1) & 1 & 1 - \delta^\prime(x_1) \\
x_2 & 1/2 & \delta^\prime(x_2) & 1 & 1 - \delta^\prime(x_2) & 0 & 1 - \delta^\prime(x_2)
}$$
$$\text{Risk}(x_1,x_2):\ (1 - \delta^\prime(x_1) + \delta^\prime(x_2))/2.$$
In these terms the “guess which is larger” problem becomes
Given you know nothing about $x_1$ and $x_2$, except that they are distinct, can you find a decision procedure $\delta$ for which the risk $[1 – \delta^\prime(\max(x_1, x_2)) + \delta^\prime(\min(x_1, x_2))]/2$ is surely less than $\frac{1}{2}$?
This statement is equivalent to requiring $\delta^\prime(x)\gt \delta^\prime(y)$ whenever $x \gt y.$ Whence, it is necessary and sufficient for the experimenter's decision procedure to be specified by some strictly increasing function $\delta^\prime: S\to [0, 1].$ This set of procedures includes, but is larger than, all the “mixed strategies $Q$” of 1. There are lots of randomized decision procedures that are better than any unrandomized procedure!
3. THE “TWO ENVELOPE” PROBLEM.
It is encouraging that this straightforward analysis disclosed a large set of solutions to the “guess which is larger” problem, including good ones that have not been identified before. Let us see what the same approach can reveal about the other problem before us, the “two envelope” problem (or “box problem,” as it is sometimes called). This concerns a game played by randomly selecting one of two envelopes, one of which is known to have twice as much money in it as the other. After opening the envelope and observing the amount $x$ of money in it, the player decides whether to keep the money in the unopened envelope (to “switch”) or to keep the money in the opened envelope. One would think that switching and not switching would be equally acceptable strategies, because the player is equally uncertain as to which envelope contains the larger amount. The paradox is that switching seems to be the superior option, because it offers “equally probable” alternatives between payoffs of $2x$ and $x/2,$ whose expected value of $5x/4$ exceeds the value in the opened envelope. Note that both these strategies are deterministic and constant.
In this situation, we may formally write
$$\eqalign{
S &= \{ x\in \mathbb{R}\ |\ x \gt 0\}, \\
\Omega &= \{\text{Discrete distributions supported on }\{\omega, 2\omega\}\ |\ \omega \gt 0 \text{ and }\Pr(\omega) = \frac{1}{2}\}, \text{and}\\
\Delta &= \{\text{Switch}, \text{Do not switch}\}.
}$$
As before, any decision procedure $\delta$ can be considered a function from $S$ to $[0, 1],$ this time by associating it with the probability of not switching, which again can be written $\delta^\prime(x)$. The probability of switching must of course be the complementary value $1–\delta^\prime(x).$
The loss, shown in Table 2, is the negative of the game's payoff. It is a function of the true state of nature $\omega$, the outcome $x$ (which can be either $\omega$ or $2\omega$), and the decision, which depends on the outcome.
Table 2.
$$\matrix{
& \text{Loss}&\text{Loss} &\\
\text{Outcome}(x) & \text{Switch} & \text{Do not switch} & \text{Cost}\\
\omega & -2\omega & -\omega & -\omega[2(1-\delta^\prime(\omega)) + \delta^\prime(\omega)]\\
2\omega & -\omega & -2\omega & -\omega[1 - \delta^\prime(2\omega) + 2\delta^\prime(2\omega)]
}$$
In addition to displaying the loss function, Table 2 also computes the cost of an arbitrary decision procedure $\delta$. Because the game produces the two outcomes with equal probabilities of $\frac{1}{2}$, the risk when $\omega$ is the true state of nature is
$$\eqalign{
\text{Risk}_\delta(\omega) &=-\omega[2(1-\delta^\prime(\omega)) + \delta^\prime(\omega)]/2 + -\omega[1 - \delta^\prime(2\omega) + 2\delta^\prime(2\omega)]/2 \\
&= (-\omega/2)[3 + \delta^\prime(2\omega) - \delta^\prime(\omega)].
}$$
A constant procedure, which means always switching ($\delta^\prime(x)=0$) or always standing pat ($\delta^\prime(x)=1$), will have risk $-3\omega/2$. Any strictly increasing function, or more generally, any function $\delta^\prime$ with range in $[0, 1]$ for which $\delta^\prime(2x) \gt \delta^\prime(x)$ for all positive real $x,$ determines a procedure $\delta$ having a risk function that is always strictly less than $-3\omega/2$ and thus is superior to either constant procedure, regardless of the true state of nature $\omega$! The constant procedures therefore are inadmissible because there exist procedures with risks that are sometimes lower, and never higher, regardless of the state of nature.
Comparing this to the preceding solution of the “guess which is larger” problem shows the close connection between the two. In both cases, an appropriately chosen randomized procedure is demonstrably superior to the “obvious” constant strategies.
These randomized strategies have some notable properties:
There are no bad situations for the randomized strategies: no matter how the amount of money in the envelope is chosen, in the long run these strategies will be no worse than a constant strategy.
No randomized strategy with limiting values of $0$ and $1$ dominates any of the others: if the expectation for $\delta$ when $(\omega, 2\omega)$ is in the envelopes exceeds the expectation for $\varepsilon$, then there exists some other possible state with $(\eta, 2\eta)$ in the envelopes and the expectation of $\varepsilon$ exceeds that of $\delta$ .
The $\delta$ strategies include, as special cases, strategies equivalent to many of the Bayesian strategies. Any strategy that says “switch if $x$ is less than some threshold $T$ and stay otherwise” corresponds to $\delta(x)=1$ when $x \ge T, \delta(x) = 0$ otherwise.
What, then, is the fallacy in the argument that favors always switching? It lies in the implicit assumption that there is any probability distribution at all for the alternatives. Specifically, having observed $x$ in the opened envelope, the intuitive argument for switching is based on the conditional probabilities Prob(Amount in unopened envelope | $x$ was observed), which are probabilities defined on the set of underlying states of nature. But these are not computable from the data. The decision-theoretic framework does not require a probability distribution on $\Omega$ in order to solve the problem, nor does the problem specify one.
This result differs from the ones obtained by (1) and its references in a subtle but important way. The other solutions all assume (even though it is irrelevant) there is a prior probability distribution on $\Omega$ and then show, essentially, that it must be uniform over $S.$ That, in turn, is impossible. However, the solutions to the two-envelope problem given here do not arise as the best decision procedures for some given prior distribution and thereby are overlooked by such an analysis. In the present treatment, it simply does not matter whether a prior probability distribution can exist or not. We might characterize this as a contrast between being uncertain what the envelopes contain (as described by a prior distribution) and being completely ignorant of their contents (so that no prior distribution is relevant).
4. CONCLUSIONS.
In the “guess which is larger” problem, a good procedure is to decide randomly that the observed value is the larger of the two, with a probability that increases as the observed value increases. There is no single best procedure. In the “two envelope” problem, a good procedure is again to decide randomly that the observed amount of money is worth keeping (that is, that it is the larger of the two), with a probability that increases as the observed value increases. Again there is no single best procedure. In both cases, if many players used such a procedure and independently played games for a given $\omega$, then (regardless of the value of $\omega$) on the whole they would win more than they lose, because their decision procedures favor selecting the larger amounts.
In both problems, making an additional assumption-—a prior distribution on the states of nature—-that is not part of the problem gives rise to an apparent paradox. By focusing on what is specified in each problem, this assumption is altogether avoided (tempting as it may be to make), allowing the paradoxes to disappear and straightforward solutions to emerge.
REFERENCES
(1) D. Samet, I. Samet, and D. Schmeidler, One Observation behind Two-Envelope Puzzles. American Mathematical Monthly 111 (April 2004) 347-351.
(2) J. Kiefer, Introduction to Statistical Inference. Springer-Verlag, New York, 1987.
|
Two envelope problem revisited
|
1. UNNECESSARY PROBABILITIES.
The next two sections of this note analyze the "guess which is larger" and "two envelope" problems using standard tools of decision theory (2). This approach, although
|
Two envelope problem revisited
1. UNNECESSARY PROBABILITIES.
The next two sections of this note analyze the "guess which is larger" and "two envelope" problems using standard tools of decision theory (2). This approach, although straightforward, appears to be new. In particular, it identifies a set of decision procedures for the two envelope problem that are demonstrably superior to the “always switch” or “never switch” procedures.
Section 2 introduces (standard) terminology, concepts, and notation. It analyzes all possible decision procedures for the "guess which is larger" problem. Readers familiar with this material might like to skip this section. Section 3 applies a similar analysis to the two envelope problem. Section 4, the conclusions, summarizes the key points.
All published analyses of these puzzles assume there is a probability distribution governing the possible states of nature. This assumption, however, is not part of the puzzle statements. The key idea in the analyses below is that dropping this (unwarranted) assumption leads to a simple resolution of the apparent paradoxes in these puzzles.
2. THE “GUESS WHICH IS LARGER” PROBLEM.
An experimenter is told that different real numbers $x_1$ and $x_2$ are written on two slips of paper. She looks at the number on a randomly chosen slip. Based only on this one observation, she must decide whether it is the smaller or larger of the two numbers.
Simple but open-ended problems like this about probability are notorious for being confusing and counter-intuitive. In particular, there are at least three distinct ways in which probability enters the picture. To clarify this, let's adopt a formal experimental point of view (2).
Begin by specifying a loss function. Our goal will be to minimize its expectation, in a sense to be defined below. A good choice is to make the loss equal to $0$ when the experimenter guesses correctly and $1$ otherwise. The expectation of this loss function is then the probability of guessing incorrectly. In general, by assigning various penalties to wrong guesses, a loss function captures the objective of guessing correctly. To be sure, adopting a loss function is as arbitrary as assuming a prior probability distribution on $x_1$ and $x_2$, but it is more natural and fundamental. When we are faced with making a decision, we naturally consider the consequences of being right or wrong. If there are no consequences either way, then why care? We implicitly undertake considerations of potential loss whenever we make a (rational) decision, and so we benefit from an explicit consideration of loss, whereas the use of probability to describe the possible values on the slips of paper is unnecessary, artificial, and (as we shall see) can prevent us from obtaining useful solutions.
Decision theory models observational results and our analysis of them. It uses three additional mathematical objects: a sample space, a set of “states of nature,” and a decision procedure.
The sample space $S$ consists of all possible observations; here it can be identified with $\mathbb{R}$ (the set of real numbers).
The states of nature $\Omega$ are the possible probability distributions governing the experimental outcome. (This is the first sense in which we may talk about the “probability” of an event.) In the “guess which is larger” problem, these are the discrete distributions taking values at distinct real numbers $x_1$ and $x_2$ with equal probabilities of $\frac{1}{2}$ at each value. $\Omega$ can be parameterized by $\{\omega = (x_1, x_2) \in \mathbb{R}\times\mathbb{R}\ |\ x_1 \gt x_2\}.$
The decision space is the binary set $\Delta = \{\text{smaller}, \text{larger}\}$ of possible decisions.
In these terms, the loss function is a real-valued function defined on $\Omega \times \Delta$. It tells us how “bad” a decision is (the second argument) compared to reality (the first argument).
The most general decision procedure $\delta$ available to the experimenter is a randomized one: its value for any experimental outcome is a probability distribution on $\Delta$. That is, the decision to make upon observing outcome $x$ is not necessarily definite, but rather is to be chosen randomly according to a distribution $\delta(x)$. (This is the second way in which probability may be involved.)
When $\Delta$ has just two elements, any randomized procedure can be identified by the probability it assigns to a prespecified decision, which to be concrete we take to be “larger.”
A physical spinner implements such a binary randomized procedure: a freely-spinning pointer comes to rest in one region, corresponding to one decision in $\Delta$, with probability $\delta(x)$, and otherwise stops in the complementary region with probability $1-\delta(x)$. The spinner is completely determined by specifying the value of $\delta(x)\in[0,1]$.
Thus a decision procedure can be thought of as a function
$$\delta^\prime:S\to[0,1],$$
where
$${\Pr}_{\delta(x)}(\text{larger}) = \delta^\prime(x)\ \text{ and }\ {\Pr}_{\delta(x)}(\text{smaller})=1-\delta^\prime(x).$$
Conversely, any such function $\delta^\prime$ determines a randomized decision procedure. The randomized decisions include deterministic decisions in the special case where the range of $\delta^\prime$ lies in $\{0,1\}$.
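To make this concrete, a randomized decision procedure can be sketched in a few lines of Python. This is a hypothetical illustration only; the function names and the logistic choice of $\delta^\prime$ are assumptions, not part of the original analysis.

```python
import random
from math import exp

def logistic(x):
    """A strictly increasing function with range in (0, 1): one possible delta-prime."""
    return 1.0 / (1.0 + exp(-x))

def decide(x, delta_prime, rng=random):
    """Software 'spinner': answer 'larger' with probability delta_prime(x),
    and 'smaller' otherwise."""
    return "larger" if rng.random() < delta_prime(x) else "smaller"

random.seed(0)
print(decide(3.2, logistic))  # usually 'larger', since logistic(3.2) is about 0.96
```

Any function into $[0,1]$ can play the role of `delta_prime`; the deterministic procedures are recovered by functions taking only the values $0$ and $1$.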
Let us say that the cost of a decision procedure $\delta$ for an outcome $x$ is the expected loss of $\delta(x)$. The expectation is with respect to the probability distribution $\delta(x)$ on the decision space $\Delta$. Each state of nature $\omega$ (which, recall, is a discrete probability distribution on the sample space $S$) determines the expected cost of any procedure $\delta$; this is the risk of $\delta$ for $\omega$, $\text{Risk}_\delta(\omega)$. Here, the expectation is taken with respect to the state of nature $\omega$.
Decision procedures are compared in terms of their risk functions. When the state of nature is truly unknown, if $\varepsilon$ and $\delta$ are two procedures and $\text{Risk}_\varepsilon(\omega)\ge \text{Risk}_\delta(\omega)$ for all $\omega$, then there is no sense in using procedure $\varepsilon$, because procedure $\delta$ is never any worse (and might be better in some cases). Such a procedure $\varepsilon$ is inadmissible; otherwise, it is admissible. Often many admissible procedures exist. We shall consider any of them “good” because none of them can be consistently out-performed by some other procedure.
Note that no prior distribution is introduced on $\Omega$ (a “mixed strategy for $C$” in the terminology of (1)). This is the third way in which probability may be part of the problem setting. Using it makes the present analysis more general than that of (1) and its references, while yet being simpler.
Table 1 evaluates the risk when the true state of nature is given by $\omega=(x_1, x_2).$ Recall that $x_1 \gt x_2.$
Table 1.
$$\matrix {
\text{Decision:}& & \text{Larger} & \text{Larger} & \text{Smaller} & \text{Smaller}\\
\text{Outcome} & \text{Probability} & \text{Probability} & \text{Loss} & \text{Probability} & \text{Loss} & \text{Cost} \\
x_1 & 1/2 & \delta^\prime(x_1) & 0 & 1 - \delta^\prime(x_1) & 1 & 1 - \delta^\prime(x_1) \\
x_2 & 1/2 & \delta^\prime(x_2) & 1 & 1 - \delta^\prime(x_2) & 0 & \delta^\prime(x_2)
}$$
$$\text{Risk}(x_1,x_2):\ (1 - \delta^\prime(x_1) + \delta^\prime(x_2))/2.$$
In these terms the “guess which is larger” problem becomes
Given you know nothing about $x_1$ and $x_2$, except that they are distinct, can you find a decision procedure $\delta$ for which the risk $[1 - \delta^\prime(\max(x_1, x_2)) + \delta^\prime(\min(x_1, x_2))]/2$ is surely less than $\frac{1}{2}$?
This statement is equivalent to requiring $\delta^\prime(x)\gt \delta^\prime(y)$ whenever $x \gt y.$ Whence, it is necessary and sufficient for the experimenter's decision procedure to be specified by some strictly increasing function $\delta^\prime: S\to [0, 1].$ This set of procedures includes, but is larger than, all the “mixed strategies $Q$” of (1). There are lots of randomized decision procedures that are better than any unrandomized procedure!
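This conclusion can be checked empirically. The Monte Carlo sketch below is illustrative only (the logistic $\delta^\prime$ and the particular pair of numbers are assumptions); it estimates the probability of a correct guess and finds it strictly above $\frac{1}{2}$.

```python
import random
from math import exp

def delta_prime(x):
    # Any strictly increasing function into (0, 1) works; logistic is one choice.
    return 1.0 / (1.0 + exp(-x))

def correct_rate(x1, x2, trials=100_000, seed=42):
    """Observe one slip at random; declare it 'larger' with probability
    delta_prime(observed). Return the fraction of correct declarations."""
    rng = random.Random(seed)
    hi, lo = max(x1, x2), min(x1, x2)
    correct = 0
    for _ in range(trials):
        observed = hi if rng.random() < 0.5 else lo
        said_larger = rng.random() < delta_prime(observed)
        correct += (observed == hi) == said_larger
    return correct / trials

print(correct_rate(0.1, -0.1))  # greater than 1/2 (theoretical value ~0.525 here)
```

The theoretical success probability is $\frac{1}{2}[1 + \delta^\prime(\max) - \delta^\prime(\min)]$, which exceeds $\frac{1}{2}$ exactly because $\delta^\prime$ is strictly increasing; the farther apart the two numbers, the larger the edge.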
3. THE “TWO ENVELOPE” PROBLEM.
It is encouraging that this straightforward analysis disclosed a large set of solutions to the “guess which is larger” problem, including good ones that have not been identified before. Let us see what the same approach can reveal about the other problem before us, the “two envelope” problem (or “box problem,” as it is sometimes called). This concerns a game played by randomly selecting one of two envelopes, one of which is known to have twice as much money in it as the other. After opening the envelope and observing the amount $x$ of money in it, the player decides whether to keep the money in the unopened envelope (to “switch”) or to keep the money in the opened envelope. One would think that switching and not switching would be equally acceptable strategies, because the player is equally uncertain as to which envelope contains the larger amount. The paradox is that switching seems to be the superior option, because it offers “equally probable” alternatives between payoffs of $2x$ and $x/2,$ whose expected value of $5x/4$ exceeds the value in the opened envelope. Note that both these strategies are deterministic and constant.
In this situation, we may formally write
$$\eqalign{
S &= \{ x\in \mathbb{R}\ |\ x \gt 0\}, \\
\Omega &= \{\text{Discrete distributions supported on }\{\omega, 2\omega\}\ |\ \omega \gt 0 \text{ and }\Pr(\omega) = \frac{1}{2}\}, \text{and}\\
\Delta &= \{\text{Switch}, \text{Do not switch}\}.
}$$
As before, any decision procedure $\delta$ can be considered a function from $S$ to $[0, 1],$ this time by associating it with the probability of not switching, which again can be written $\delta^\prime(x)$. The probability of switching must of course be the complementary value $1-\delta^\prime(x).$
The loss, shown in Table 2, is the negative of the game's payoff. It is a function of the true state of nature $\omega$, the outcome $x$ (which can be either $\omega$ or $2\omega$), and the decision, which depends on the outcome.
Table 2.
$$\matrix{
& \text{Loss}&\text{Loss} &\\
\text{Outcome}(x) & \text{Switch} & \text{Do not switch} & \text{Cost}\\
\omega & -2\omega & -\omega & -\omega[2(1-\delta^\prime(\omega)) + \delta^\prime(\omega)]\\
2\omega & -\omega & -2\omega & -\omega[1 - \delta^\prime(2\omega) + 2\delta^\prime(2\omega)]
}$$
In addition to displaying the loss function, Table 2 also computes the cost of an arbitrary decision procedure $\delta$. Because the game produces the two outcomes with equal probabilities of $\frac{1}{2}$, the risk when $\omega$ is the true state of nature is
$$\eqalign{
\text{Risk}_\delta(\omega) &=-\omega[2(1-\delta^\prime(\omega)) + \delta^\prime(\omega)]/2 + -\omega[1 - \delta^\prime(2\omega) + 2\delta^\prime(2\omega)]/2 \\
&= (-\omega/2)[3 + \delta^\prime(2\omega) - \delta^\prime(\omega)].
}$$
A constant procedure, which means always switching ($\delta^\prime(x)=0$) or always standing pat ($\delta^\prime(x)=1$), will have risk $-3\omega/2$. Any strictly increasing function, or more generally, any function $\delta^\prime$ with range in $[0, 1]$ for which $\delta^\prime(2x) \gt \delta^\prime(x)$ for all positive real $x,$ determines a procedure $\delta$ having a risk function that is always strictly less than $-3\omega/2$ and thus is superior to either constant procedure, regardless of the true state of nature $\omega$! The constant procedures therefore are inadmissible because there exist procedures with risks that are sometimes lower, and never higher, regardless of the state of nature.
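A quick simulation confirms the dominance. In this sketch, the choice $\delta^\prime(x)=x/(1+x)$ is an illustrative assumption; it satisfies $\delta^\prime(2x)\gt\delta^\prime(x)$ for all $x\gt 0$, so the randomized procedure's average payoff exceeds the constant procedures' expected payoff of $3\omega/2$.

```python
import random

def delta_prime(x):
    # Probability of keeping the opened envelope; note delta_prime(2x) > delta_prime(x).
    return x / (1.0 + x)

def average_payoff(omega, trials=200_000, seed=1):
    """Simulate the game with amounts (omega, 2*omega) and the randomized
    keep/switch rule; return the mean amount won."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < 0.5:
            opened, other = omega, 2 * omega
        else:
            opened, other = 2 * omega, omega
        total += opened if rng.random() < delta_prime(opened) else other
    return total / trials

for omega in (1.0, 10.0):
    print(omega, average_payoff(omega))  # each exceeds 1.5 * omega
```

The expected payoff is $(\omega/2)[3 - \delta^\prime(\omega) + \delta^\prime(2\omega)]$, the negative of the risk derived above, so for $\omega = 1$ the simulation should hover near $1.583$.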
Comparing this to the preceding solution of the “guess which is larger” problem shows the close connection between the two. In both cases, an appropriately chosen randomized procedure is demonstrably superior to the “obvious” constant strategies.
These randomized strategies have some notable properties:
There are no bad situations for the randomized strategies: no matter how the amount of money in the envelope is chosen, in the long run these strategies will be no worse than a constant strategy.
No randomized strategy with limiting values of $0$ and $1$ dominates any of the others: if the expectation for $\delta$ when $(\omega, 2\omega)$ is in the envelopes exceeds the expectation for $\varepsilon$, then there exists some other possible state with $(\eta, 2\eta)$ in the envelopes for which the expectation of $\varepsilon$ exceeds that of $\delta$.
The $\delta$ strategies include, as special cases, strategies equivalent to many of the Bayesian strategies. Any strategy that says “switch if $x$ is less than some threshold $T$ and stay otherwise” corresponds to $\delta^\prime(x)=1$ when $x \ge T$ and $\delta^\prime(x) = 0$ otherwise.
What, then, is the fallacy in the argument that favors always switching? It lies in the implicit assumption that there is any probability distribution at all for the alternatives. Specifically, having observed $x$ in the opened envelope, the intuitive argument for switching is based on the conditional probabilities Prob(Amount in unopened envelope | $x$ was observed), which are probabilities defined on the set of underlying states of nature. But these are not computable from the data. The decision-theoretic framework does not require a probability distribution on $\Omega$ in order to solve the problem, nor does the problem specify one.
This result differs from the ones obtained by (1) and its references in a subtle but important way. The other solutions all assume (even though it is irrelevant) there is a prior probability distribution on $\Omega$ and then show, essentially, that it must be uniform over $S.$ That, in turn, is impossible. However, the solutions to the two-envelope problem given here do not arise as the best decision procedures for some given prior distribution and thereby are overlooked by such an analysis. In the present treatment, it simply does not matter whether a prior probability distribution can exist or not. We might characterize this as a contrast between being uncertain what the envelopes contain (as described by a prior distribution) and being completely ignorant of their contents (so that no prior distribution is relevant).
4. CONCLUSIONS.
In the “guess which is larger” problem, a good procedure is to decide randomly that the observed value is the larger of the two, with a probability that increases as the observed value increases. There is no single best procedure. In the “two envelope” problem, a good procedure is again to decide randomly that the observed amount of money is worth keeping (that is, that it is the larger of the two), with a probability that increases as the observed value increases. Again there is no single best procedure. In both cases, if many players used such a procedure and independently played games for a given $\omega$, then (regardless of the value of $\omega$) on the whole they would win more than they lose, because their decision procedures favor selecting the larger amounts.
In both problems, making an additional assumption (a prior distribution on the states of nature) that is not part of the problem gives rise to an apparent paradox. By focusing on what is specified in each problem, this assumption is altogether avoided (tempting as it may be to make), allowing the paradoxes to disappear and straightforward solutions to emerge.
REFERENCES
(1) D. Samet, I. Samet, and D. Schmeidler, One Observation behind Two-Envelope Puzzles. American Mathematical Monthly 111 (April 2004) 347-351.
(2) J. Kiefer, Introduction to Statistical Inference. Springer-Verlag, New York, 1987.
The issue in general with the two envelope problem is that the problem as presented on Wikipedia allows the size of the values in the envelopes to change after the first choice has been made. The problem has been formalized incorrectly.
However, a real-world formulation of the problem is this: you have two identical envelopes, $A$ and $B$, where $B=2A$. You can pick either envelope and are then offered the chance to swap.
Case 1: You've picked $A$. If you switch you gain $A$ dollars.
Case 2: You've picked $B$. If you switch you lose $A$ dollars.
This is where the flaw in the two-envelope paradox enters in. While you are looking at losing half the value or doubling your money, you still don't know the original value of $A$, and the value of $A$ has been fixed. What you are looking at is either $+A$ or $-A$, not $2A$ or $\frac{1}{2}A$.
If we assume that the probability of selecting $A$ or $B$ at each step is equal, then after the first offered swap the results can be any of:
Case 1: Picked $A$, No swap: Reward $A$
Case 2: Picked $A$, Swapped for $B$: Reward $2A$
Case 3: Picked $B$, No swap: Reward $2A$
Case 4: Picked $B$, Swapped for $A$: Reward $A$
The end result is that half the time you get $A$ and half the time you get $2A$. This will not change no matter how many times you are offered a swap, nor will it change based upon knowing what is in one envelope.
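The four cases can be enumerated directly. A minimal sketch, with an arbitrary amount $A=1$ (the variable names are illustrative):

```python
from itertools import product

A = 1  # arbitrary positive amount; envelope B holds 2 * A
rewards = []
for picked, swapped in product(("A", "B"), (False, True)):
    held = ("B" if picked == "A" else "A") if swapped else picked
    rewards.append(A if held == "A" else 2 * A)

# Four equally likely cases: half yield A, half yield 2A.
print(sorted(rewards), sum(rewards) / len(rewards))  # [1, 1, 2, 2] 1.5
```

The expected reward is $\frac{3}{2}A$ whether you swap or not, which is the point of this formulation.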
My interpretation of the question
I am assuming that the setting in problem 3 is as follows: the organizer first selects amount $X$ and puts $X$ in the first envelope. Then, the organizer flips a fair coin and based on that puts either $0.5X$ or $2X$ to the second envelope. The player knows all this, but not $X$ nor the result of the coin-flip. The organizer gives the player the first envelope (closed) and asks if the player wants to switch. The questioner argues 1. that the player wants to switch because the switching increases expectation (correct) and 2. that after switching, the same reasoning symmetrically holds and the player wants to switch back (incorrect). I also assume the player is a rational risk-neutral Bayesian agent that puts a probability distribution over $X$ and maximizes expected amount of money earned.
Note that if the player did not know about the coin-flip procedure, there might be no reason in the first place to argue that the probabilities are 0.5 for the second envelope to be higher/lower.
Why there is no paradox
Your problem 3 (as interpreted in my answer) is not the envelope paradox. Let $Z$ be a Bernoulli random variable with $P(Z=1)=0.5$. Define the amount $Y$ in the 2nd envelope so that $Z=1$ implies $Y=2X$ and $Z=0$ implies $Y=0.5X$. In the scenario here, $X$ is selected without knowledge of the result of the coin-flip and thus $Z$ and $X$ are independent, which implies $E(Y\mid X) = 1.25X$.
\begin{equation}
E(Y) = E(E(Y\mid X)) = E(1.25X) = 1.25E(X)
\end{equation}
Thus, if $X>0$ (or at least $E(X)>0$), the player will prefer to switch to envelope 2. However, there is nothing paradoxical about the fact that if you offer me a good deal (envelope 1) and an opportunity to switch to a better deal (envelope 2), I will want to switch to the better deal.
To invoke the paradox, you would have to make the situation symmetric, so that you could argue that I also want to switch from envelope 2 to envelope 1. Only this would be the paradox: that I would want to keep switching forever. In the question, you argue that the situation indeed is symmetric, however, there is no justification provided. The situation is not symmetric: the second envelope contains the amount that was picked as a function of a coin-flip and the amount in the first envelope, while the amount in the first envelope was not picked as a function of a coin-flip and the amount in the second envelope. Hence, the argument for switching back from the second envelope is not valid.
Example with small number of possibilities
Let us assume that (the player's belief is that) $X=10$ or $X=40$ with equal probabilities, and work out the computations case by case. In this case, the possibilities for $(X,Y)$ are $\{(10,5),(10,20),(40,20),(40,80)\}$, each of which has probability $1/4$. First, we look at the player's reasoning when holding the first envelope.
If my envelope contains $10$, the second envelope contains either $5$ or $20$ with equal probabilities, thus by switching I gain on average $0.5\times(-5) + 0.5\times10 = 2.5$.
If my envelope contains $40$, the second envelope contains either $20$ or $80$ with equal probabilities, thus by switching I gain on average $0.5\times(-20) + 0.5\times(40) = 10$.
Taking the average over these, the expected gain of switching is $0.5\times2.5 + 0.5\times10 = 6.25$, so the player switches. Now, let us make similar case-by-case analysis of switching back:
If my envelope contains $5$, the old envelope with probability 1 contains $10$, and I gain $5$ by switching.
If my envelope contains $20$, the old envelope contains $10$ or $40$ with equal probabilities, and by switching I gain $0.5\times(-10) + 0.5\times20 = 5$.
If my envelope contains $80$, the old envelope with probability 1 contains $40$ and I lose $40$ by switching.
Now, the expected value, i.e. probability-weighted average, of gain by switching back is $0.25\times5+0.5\times5+0.25\times(-40) = -6.25$. So, switching back exactly cancels the expected utility gain.
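The two expectations in this example can be reproduced by averaging over the four equally likely $(X, Y)$ outcomes (an illustrative Python sketch, not part of the original answer):

```python
# The four equally likely (X, Y) outcomes listed above.
outcomes = [(10, 5), (10, 20), (40, 20), (40, 80)]

# Expected gain of the first switch, E(Y - X): should be +6.25.
gain_switch = sum(y - x for x, y in outcomes) / len(outcomes)

# Expected gain of switching back, E(X - Y): the same average with the
# sign flipped, so it should be -6.25.
gain_back = sum(x - y for x, y in outcomes) / len(outcomes)
```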
Another example with a continuum of possibilities
You might object to my previous example by claiming that I may have cleverly selected the distribution over $X$ so that in the $Y=80$ case the player knows that he is losing. Let us now consider a case where $X$ has a continuous unbounded distribution: $X \sim \textrm{Exp}(1)$, $Z$ independent of $X$ as previously, and $Y$ as a function of $X$ and $Z$ as previously. The expected gain of switching from $X$ to $Y$ is again $E(0.25X) = 0.25E(X) = 0.25$. For the back-switch, we first compute the conditional probability $P(X=0.5Y \mid Y=y)$ using Bayes' theorem:
\begin{equation}
P(X=0.5Y \mid Y=y) = P(Z=1 \mid Y=y) = \frac{p(Y=y\mid Z=1)P(Z=1)}{p(Y=y)} = \frac{p(2X=y)P(Z=1)}{p(Y=y)} = \frac{0.25e^{-0.5y}}{p(Y=y)}
\end{equation}
and similarly $P(X=2Y \mid Y=y) = \frac{e^{-2y}}{p(Y=y)}$, wherefore the conditional expected gain of switching back to the first envelope is
\begin{equation}
E(X-Y \mid Y=y) = \frac{-0.125y e^{-0.5y} + ye^{-2y}}{p(Y=y)},
\end{equation}
and taking the expectation over $Y$, this becomes
\begin{equation}
E(X-Y) = \int_0^\infty \frac{-0.125y e^{-0.5y} + ye^{-2y}}{p(Y=y)}p(Y=y) dy = -0.25,
\end{equation}
which cancels out the expected gain of the first switch.
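Both expectations in the continuous example can also be checked by Monte Carlo (an illustrative Python sketch, not part of the original answer; the exact values are $+0.25$ and $-0.25$, so the estimates carry sampling noise):

```python
import random

random.seed(0)
n = 200_000
total = 0.0
for _ in range(n):
    x = random.expovariate(1.0)   # X ~ Exp(1)
    z = random.random() < 0.5     # fair coin flip, independent of X
    y = 2 * x if z else 0.5 * x   # amount in the second envelope
    total += y - x

gain_switch = total / n       # estimate of E(Y - X), exactly +0.25
gain_back = -gain_switch      # E(X - Y) is its negative, exactly -0.25
```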
General solution
The situation seen in the two examples must always occur: you cannot construct a probability distribution for $X,Z,Y$ with these conditions: $X$ is not a.s. 0, $Z$ is Bernoulli with $P(Z=1)=0.5$, $Z$ is independent of $X$, $Y=2X$ when $Z=1$ and $0.5X$ otherwise and also $Y,Z$ are independent. This is explained in the Wikipedia article under heading 'Proposed resolutions to the alternative interpretation': such a condition would imply that the probability that the smaller envelope has amount between $2^n,2^{n+1}$ ($P(2^n \le \min(X,Y) < 2^{n+1})$ with my notation) would be a constant over all natural numbers $n$, which is impossible for a proper probability distribution.
Note that there is another version of the paradox where the probabilities need not be 0.5, but the expectation of the other envelope conditional on the amount in this envelope is still always higher. Probability distributions satisfying this type of condition exist (e.g., let the amounts in the envelopes be independent half-Cauchy), but as the Wikipedia article explains, they require infinite mean. I think this part is rather unrelated to your question, but for completeness wanted to mention this.
|
15,152
|
Two envelope problem revisited
|
Problem 1: Agreed, play the game. The key here is that you know the actual probabilities of winning 5 vs 20 since the outcome is dependent upon the flip of a fair coin.
Problem 2: The problem is the same as problem 1 because you are told that there is an equal probability that either 5 or 20 is in the other envelope.
Problem 3: The difference in problem 3 is that telling me the other envelope has either $X/2$ or $2X$ in it does not mean that I should assume that the two possibilities are equally likely for all possible values of $X$. Doing so implies an improper prior on the possible values of $X$. See the Bayesian resolution to the paradox.
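To make the improper-prior point explicit, here is a brief sketch (notation mine, not the original answer's). Suppose the smaller amount $S$ has prior mass function $p$, the envelopes contain $S$ and $2S$, and you hold the amount $x$. Since each envelope is equally likely to be the one you hold,
\begin{equation}
P(\text{other} = 2x \mid \text{hold } x) = \frac{\tfrac{1}{2}p(x)}{\tfrac{1}{2}p(x) + \tfrac{1}{2}p(x/2)} = \frac{p(x)}{p(x) + p(x/2)}.
\end{equation}
For this to equal $1/2$ at every possible $x$, we would need $p(x) = p(x/2)$ along the entire chain $x, 2x, 4x, \ldots$, i.e. a prior that is uniform over infinitely many values and so cannot sum to one: an improper prior.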
|
15,153
|
Two envelope problem revisited
|
This is a potential explanation that I have. I think it is wrong but I'm not sure. I will post it to be voted on and commented on. Hopefully someone will offer a better explanation.
So the only thing that changed between problem 2 and problem 3 is that the amount in the envelope you hold became random. If you allow that amount to be negative, so there might be a bill there instead of money, then it makes perfect sense. The extra information you get when you open the envelope is whether it's a bill or money, hence you care to switch in one case while in the other you don't.
If however you are told the bill is not a possibility then the problem remains. (of course do you assign a probability that they lie?)
|
15,154
|
Two envelope problem revisited
|
Overview
I believe that the way you have broken out the problem is completely correct. You need to distinguish the "Coin Flip" scenario from the situation where the money is added to the envelope before the envelope is chosen.
Not distinguishing those scenarios lies at the root of many people's confusion.
Problem 1
If you are flipping a coin to decide if either double your money or lose half, always play the game. Instead of double or nothing, it is double or lose some.
Problem 2
This is exactly the same as the coin flip scenario. The only difference is that the person picking the envelope flipped before giving you the first envelope. Note: You Did Not Choose an Envelope! You were given one envelope, and then given the choice to switch. This is a subtle but important difference from problem 3, which affects the distribution of the priors.
Problem 3
This is the classical setup to the two envelope problem. Here you are given the choice between the two envelopes. The most important points to realize are
There is a maximum amount of money that can be in any envelope, because the person running the game has finite resources, or a finite amount they are willing to invest.
If you call the maximum money that could be in an envelope M, you are not equally likely to get any number between 0 and M. Assume a random amount of money between 0 and M was put in the first envelope, and half of that in the second (or double; the math still works). If you open an envelope, you are 3 times as likely to see something below M/2 as above M/2. (This is because half the time both envelopes will have less than M/2, and the other half the time one envelope will.)
Since the distribution is not even, the "50% of the time you double, 50% of the time you cut in half" reasoning doesn't apply.
When you work out the actual probabilities, you find the expected value of the first envelope is M/2, and the EV of the second envelope, switching or not, is also M/2.
Interestingly, if you can make some guess as to what the maximum money in the envelope can be, or if you can play the game multiple times, then you can benefit by switching whenever you open an envelope with less than M/2. I have simulated this two envelope problem here and find that if you have this outside information, on average you can do 1.25 times as well as just always switching or never switching.
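The linked simulation is not reproduced here, but the 1.25 factor can be checked with a sketch under one concrete reading of the setup (my assumptions, not the original poster's exact code): the first envelope holds a uniform amount on $(0, M)$, the second holds half of it, you pick an envelope at random, and you switch exactly when the opened amount is below $M/2$.

```python
import random

random.seed(1)
M = 100.0
n = 200_000
total_keep = 0.0       # never switch
total_threshold = 0.0  # switch iff the opened amount is below M/2
for _ in range(n):
    a = random.uniform(0, M)  # first envelope
    b = a / 2                 # second envelope holds half
    held, other = (a, b) if random.random() < 0.5 else (b, a)
    total_keep += held
    total_threshold += other if held < M / 2 else held

# The threshold strategy averages 15M/32 versus 3M/8 for never (or always)
# switching: a ratio of exactly 1.25.
ratio = total_threshold / total_keep
```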
|
15,155
|
Two envelope problem revisited
|
Problem 2A: 100 note cards are in an opaque jar. "\$10" is written on one side of each card; the opposite side has either "\$5" or "\$20" written on it. You get to pick a card and look at one side only. You then get to choose one side (the revealed, or the hidden), and you win the amount on that side.
If you see "\$5," you know you should choose the hidden side and will win \$10. If you see "\$20," you know you should choose the revealed side and will win \$20. But if you see "\$10," I have not given you enough information to calculate an expectation for the hidden side. Had I said there were an equal number of {\$5,\$10} cards as {\$10,\$20} cards, the expectation would be \$12.50. But you can't find the expectation from $only$ the fact - which was still true - that you had equal chances to reveal the higher, or lower, value on the card. You need to know how many of each kind of card there were.
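The \$12.50 figure for the equal-count case can be verified by enumeration (an illustrative Python sketch, not part of the original answer):

```python
from fractions import Fraction

# Equal numbers of {5, 10} and {10, 20} cards; pick a card and look at a
# uniformly random side, then condition on having seen "$10".
hidden_when_saw_10 = []
for card in [(5, 10), (10, 20)]:
    for i in (0, 1):
        if card[i] == 10:
            hidden_when_saw_10.append(card[1 - i])

# The surviving outcomes are equally likely, so the expectation of the
# hidden side is their plain average: (5 + 20) / 2 = 12.50.
expectation = Fraction(sum(hidden_when_saw_10), len(hidden_when_saw_10))
```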
Problem 3A: The same jar is used, but this time the cards all have different, and unknown, values written on them. The only thing that is the same, is that on each card one side is twice the value of the other.
Pick a card, and a side, but don't look at it. There is a 50% chance that it is the higher side, or the lower side. One possible solution is that the card is either {X/2,X} or {X,2X} with 50% probability, where X is your side. But we saw above that the probability of choosing high or low is not the same thing as these two $different$ cards being equally likely to be in the jar.
What changed between your Problem 2 and Problem 3 is that you made these two probabilities the same in Problem 2 by saying "This envelope either has \$5 or \$20 in it with equal probability." With unknown values, that can't be true in Problem 3.
|
15,156
|
Test whether variables follow the same distribution
|
Let's find out whether this is a good test or not. There's a lot more to it than just claiming it's bad or showing in one instance that it doesn't work well. Most tests work poorly in some circumstances, so often we are faced with identifying the circumstances in which any proposed test might possibly be a good choice.
Description of the test
Like any hypothesis test, this one consists of (a) a null and alternate hypothesis and (b) a test statistic (the correlation coefficient) intended to discriminate between the hypotheses.
The null hypothesis is that the two variables come from the same distribution. To be precise, let us name the variables $X$ and $Y$ and assume we have observed $n_x$ instances of $X$, called $x_i = (x_1, x_2, \ldots, x_{n_x})$, and $n_y$ instances of $Y$, called $y_i$. The null hypothesis is that all instances of $X$ and $Y$ are independent and identically distributed (iid).
Let us take as the alternate hypothesis that (a) all instances of $X$ are iid according to some underlying distribution $F_X$ and (b) all instances of $Y$ are iid according to some underlying distribution $F_Y$ but (c) $F_X$ differs from $F_Y$. (Thus, we will not be looking for correlations among the $x_i$, correlations among the $y_i$, correlations between the $x_i$ and $y_j$, or differences of distribution among the $x$'s or $y$'s separately: that's assumed not to be plausible.)
The proposed test statistic assumes that $n_x = n_y$ (call this common value $n$) and computes the correlation coefficient of the $(x_{[i]}, y_{[i]})$ (where, as usual, $[i]$ designates the $i^\text{th}$ smallest of the data). Call this $t(x,y)$.
Permutation tests
In this situation--no matter what statistic $t$ is proposed--we can always conduct a permutation test. Under the null hypothesis, the likelihood of the data $\left((x_1, x_2, \ldots, x_n), (y_1, y_2, \ldots, y_n)\right)$ is the same as the likelihood of any permutation of the $2n$ data values. In other words, the assignment of half the data to $X$ and the other half to $Y$ is a pure random coincidence. This is a simple, direct consequence of the iid assumptions and the null hypothesis that $F_X=F_Y$.
Therefore, the sampling distribution of $t(x,y)$, conditional on the observations $x_i$ and $y_i$, is the distribution of all the values of $t$ attained for all $(2n)!$ permutations of the data. We are interested in this because for any given intended test size $\alpha$, such as $\alpha = .05$ (corresponding to $95$% confidence), we will construct a one-sided critical region from the sampling distribution of $t$: it consists of the most extreme $100\alpha$% of the possible values of $t$ (on the high side, because high correlation is consistent with similar distributions and low correlation is not). This is how we go about determining how large the correlation coefficient must be in order to decide the data come from different distributions.
Simulating the null sampling distribution
Because $(2n)!$ (or, if you like, $\binom{2n}{n}/2$, which counts the number of ways of splitting the $2n$ data into two pieces of size $n$) gets big even for small $n$, it is not practicable to compute the sampling distribution exactly, so we sample it using a simulation. (For instance, when $n=16$, $\binom{2n}{n}/2 = 300\ 540\ 195$ and $(2n)! \approx 2.63\times 10^{35}$.) About a thousand samples often suffices (and certainly will for the explorations we are about to undertake).
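The counts quoted here are easy to confirm; the answer's own code is in R, but the arithmetic is the same in any language (a quick Python check):

```python
import math

n = 16
# Number of ways to split the 2n pooled values into two halves of size n,
# divided by 2 because swapping the two halves gives the same split.
splits = math.comb(2 * n, n) // 2
# Total number of permutations of the 2n values.
perms = math.factorial(2 * n)
# splits == 300_540_195 and perms is about 2.63e35: far too many to enumerate.
```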
There are two things we need to find out: first, what does the sampling distribution look like under the null hypothesis. Second, how well does this test discriminate between different distributions?
There is a complication: the sampling distribution depends on the nature of the data. All we can do is to look at realistic data, created to emulate whatever it is we are interested in studying, and hope that what we learn from the simulations will apply to our own situation.
Implementation
To illustrate, I have carried out this work in R. It falls naturally into three pieces.
A function to compute the test statistic $t(x,y)$. Because I want to be a little more general, my version handles different size datasets ($n_x \ne n_y$) by linearly interpolating among the values in the (sorted) larger dataset to create matches with the (sorted) smaller dataset. Because this is already done by the R function qqplot, I just take its results:
test.statistic <- function(x, y) {
  transform <- function(z) -log(1-z^2)/2
  fit <- qqplot(x,y, plot.it=FALSE)
  transform(cor(fit$x, fit$y))
}
A little twist--unnecessary but helpful for visualization--re-expresses the correlation coefficient in a way that will make the distribution of the null statistic approximately symmetric. That's what transform is doing.
The simulation of the sampling distribution. For input this function accepts the number of iterations n.iter along with the two sets of data in arrays x and y. It outputs an array of n.iter values of the test statistic. Its inner workings should be transparent, even to a non R user:
permutation.test <- function(n.iter, x, y) {
  z <- c(x,y)
  n.x <- length(x)
  n.y <- length(y)
  n <- length(z)
  k <- min(n.x, n.y)
  divide <- function() {
    i <- sample.int(n, size=k)
    test.statistic(z[i], z[-i])
  }
  replicate(n.iter, divide())
}
Although that's all we need to conduct the test, in order to study it we will want to repeat the test many times. So, we conduct the test once and wrap that code within a third functional layer, just generally named f here, which we can call repeatedly. To make it sufficiently general for a broad study, for input it accepts the sizes of the datasets to simulate (n.x and n.y), the number of iterations for each permutation test (n.iter), a reference to the function test to compute the test statistic (you will see momentarily why we might not want to hard-code this), and two functions to generate iid random values, one for $X$ (dist.x) and one for $Y$ (dist.y). An option plot.it is useful to help see what's going on.
f <- function(n.x, n.y, n.iter, test=test.statistic, dist.x=runif, dist.y=runif,
plot.it=FALSE) {
x <- dist.x(n.x)
y <- dist.y(n.y)
if(plot.it) qqplot(x,y)
t0 <- test(x,y)
sim <- permutation.test(n.iter, x, y)
p <- mean(sim > t0) + mean(sim==t0)/2
if(plot.it) {
hist(sim, xlim=c(min(t0, min(sim)), max(t0, max(sim))),
main="Permutation distribution")
abline(v=t0, col="Red", lwd=2)
}
return(p)
}
The output is a simulated "p-value": the proportion of simulations yielding a statistic that looks more extreme than the one actually computed for the data.
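The three layers above can be condensed into a single routine; here is a hypothetical Python sketch of the permutation p-value computation (the names and NumPy idioms are mine, not the answer's):

```python
import numpy as np

def permutation_p_value(x, y, statistic, n_iter=1000, rng=None):
    """Mid-p-value of `statistic` under random relabelings of the pooled data."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.concatenate([x, y])
    k = min(len(x), len(y))
    t0 = statistic(x, y)
    sims = np.empty(n_iter)
    for j in range(n_iter):
        i = rng.permutation(len(z))            # random split of the pooled data
        sims[j] = statistic(z[i[:k]], z[i[k:]])
    # Ties count half, matching the R code's mean(sim == t0)/2 term.
    return np.mean(sims > t0) + np.mean(sims == t0) / 2

# The QQ-correlation statistic for equal-size samples: correlate sorted values.
stat = lambda a, b: np.corrcoef(np.sort(a), np.sort(b))[0, 1]

rng = np.random.default_rng(17)
p = permutation_p_value(rng.uniform(size=16), rng.uniform(size=16), stat, rng=rng)
print(0.0 <= p <= 1.0)  # True
```

Passing a different statistic function swaps in a different test without touching the harness, which is exactly the generality noted next.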
Parts (2) and (3) are extremely general: you can conduct a study like this one for a different test simply by replacing test.statistic with some other calculation. We do that below.
First results
By default, our code compares data drawn from two uniform distributions. I let it do that (for $n.x = n.y = 16$, which are fairly small datasets and therefore present a moderately difficult test case) and then repeat it for a uniform-normal comparison and a uniform-exponential comparison. (Uniform distributions are not easy to distinguish from normal distributions unless you have a bit more than $16$ values, but exponential distributions--having high skewness and a long right tail--are usually easily distinguished from uniform distributions.)
set.seed(17) # Makes the results reproducible
n.per.rep <- 1000 # Number of iterations to compute each p-value
n.reps <- 1000 # Number of times to call `f`
n.x <- 16; n.y <- 16 # Dataset sizes
par(mfcol=c(2,3)) # Lay results out in three columns
null <- replicate(n.reps, f(n.x, n.y, n.per.rep))
hist(null, breaks=20)
plot(null)
normal <- replicate(n.reps, f(n.x, n.y, n.per.rep, dist.y=rnorm))
hist(normal, breaks=20)
plot(normal)
exponential <- replicate(n.reps, f(n.x, n.y, n.per.rep, dist.y=function(n) rgamma(n, 1)))
hist(exponential, breaks=20)
plot(exponential)
On the left is the null distribution of the p-values when both $X$ and $Y$ are uniform. We would hope that the histogram is close to uniform (paying especial attention to the extreme left end, which is in the range of "significant" results)--and it actually is--and that the sequence of values obtained during the simulation, shown below it, looks random--and it does. That's good. It means we can move on to the next step to study how this changes when $X$ and $Y$ come from different distributions.
The middle plots test $16$ uniform variates $x_i$ against $16$ normal variates $y_i$. More often than not, the p-values were lower than expected. That indicates a tendency for this test actually to detect a difference. But it's not a large one. For instance, the leftmost bar in the histogram shows that out of the 1000 runs of f (comprising 1000 separately simulated datasets), the p-value was less than $0.05$ only about 110 times. If we consider that "significant," then this test has only about an $11$% chance of detecting the difference between a uniform and normal distribution based on $16$ independent values from each. That's pretty low power. But maybe it's unavoidable, so let's proceed.
The right-hand plots similarly test a uniform distribution against an exponential one. This result is bizarre. This test tends, more often than not, to conclude that uniform data and exponential data look the same. It seems to "think" that uniform and exponential variates are more similar than two uniform variates are! What's going on here?
The problem is that data from an exponential distribution will tend to have a few extremely high values. When you make a scatterplot of those against uniformly-distributed values, there will then be a few points far to the upper right of all the rest. That corresponds to a very high correlation coefficient. Thus, whenever either of the distributions generates a few extreme values, the correlation coefficient is a terrible choice for measuring how different the distributions are. This leads to another, even worse, problem: as the dataset sizes grow, the chances of obtaining a few extreme observations increase. Thus, we can expect this test to perform worse and worse as the amount of data increases. How very awful.
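This failure mode is easy to reproduce directly; a small Python illustration (the seed and sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.sort(rng.uniform(size=1000))      # no extreme values
e = np.sort(rng.exponential(size=1000))  # long right tail with a few huge values
r = np.corrcoef(u, e)[0, 1]
print(r)  # high despite the very different shapes
```

The sorted pairing forces both sequences to increase together, so a handful of extreme exponential values pulls the correlation up rather than down.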
A better test
The original question has been answered in the negative. However, there is a well-known, powerful test for discriminating among distributions: the Kolmogorov-Smirnov test. Instead of the correlation coefficient, it computes the largest vertical deviation from the line $y=x$ in their QQ plot. (When data come from the same distribution, the QQ plot tends to follow this line. Otherwise, it will deviate somewhere; the K-S statistic picks up the largest such deviation.)
Here is an R implementation:
test.statistic <- function(x, y) {
ks.test(x,y)$statistic
}
That's right: it's built into the software, so we only have to call it. But wait! If you read the manual carefully, you will learn that (a) the test supplies a p-value but (b) that p-value is (grossly) incorrect when both x and y are datasets. It is intended for use when you believe you know exactly what distribution the data x came from and you want to see whether that's true. Thus the test does not properly accommodate the uncertainty about the distribution the data in y came from.
No problem! The permutation test framework is still just as valid. By making the preceding change to test.statistic, all we have to do is re-run the previous study, unchanged. Here are the results.
Although the null distribution is not uniform (upper left), it's pretty uniform below $p=0.20$ or so, which is where we really care about its values. A glance at the plot below it (bottom left) shows the problem: the K-S statistic tends to cluster around a few discrete values. (This problem practically goes away for larger datasets.)
The middle (uniform vs normal) and right (uniform vs exponential) histograms are doing exactly the right thing: in the vast majority of cases where the two distributions differ, this test is producing small p-values. For instance, it has a $70$% chance of yielding a p-value less than $0.05$ when comparing a uniform to a normal based on $16$ values from each. Compare this to the piddling $11$% achieved by the correlation coefficient test.
The right histogram is not quite as good, but at least it's in the correct direction now! We estimate that it has a $30$% chance of detecting the difference between a uniform and exponential distribution at the $\alpha=5$% level and a $50$% chance of making that detection at the $\alpha=10$% level (because the two bars for the p-value less than $0.10$ total over 500 of the 1000 iterations).
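For completeness, the two-sample K-S statistic itself takes only a few lines to compute from first principles; a hypothetical Python sketch (the R code above simply calls ks.test):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample K-S statistic: the largest vertical gap between the two ECDFs."""
    grid = np.sort(np.concatenate([x, y]))
    fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(fx - fy))

a = np.array([1.0, 2.0, 3.0])
print(ks_statistic(a, a))         # 0.0  (identical samples)
print(ks_statistic(a, a + 10.0))  # 1.0  (completely separated samples)
```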
Conclusions
Thus, the problems with the correlation test are not due to some inherent difficulty in this setting. Not only does the correlation test perform very badly, it is bad compared to a widely known and available test. (I would guess that it is inadmissible, meaning that it will always perform worse, on the average, than the permutation version of the K-S test, implying there is no reason ever to use it.)
|
Test whether variables follow the same distribution
|
Let's find out whether this is a good test or not. There's a lot more to it than just claiming it's bad or showing in one instance that it doesn't work well. Most tests work poorly in some circumsta
|
Test whether variables follow the same distribution
Let's find out whether this is a good test or not. There's a lot more to it than just claiming it's bad or showing in one instance that it doesn't work well. Most tests work poorly in some circumstances, so often we are faced with identifying the circumstances in which any proposed test might possibly be a good choice.
Description of the test
Like any hypothesis test, this one consists of (a) a null and alternate hypothesis and (b) a test statistic (the correlation coefficient) intended to discriminate between the hypotheses.
The null hypothesis is that the two variables come from the same distribution. To be precise, let us name the variables $X$ and $Y$ and assume we have observed $n_x$ instances of $X$, called $x_i = (x_1, x_2, \ldots, x_{n_x})$, and $n_y$ instances of $Y$, called $y_i$. The null hypothesis is that all instances of $X$ and $Y$ are independent and identically distributed (iid).
Let us take as the alternate hypothesis that (a) all instances of $X$ are iid according to some underlying distribution $F_X$ and (b) all instances of $Y$ are iid according to some underlying distribution $F_Y$ but (c) $F_X$ differs from $F_Y$. (Thus, we will not be looking for correlations among the $x_i$, correlations among the $y_i$, correlations between the $x_i$ and $y_j$, or differences of distribution among the $x$'s or $y$'s separately: that's assumed not to be plausible.)
The proposed test statistic assumes that $n_x = n_y$ (call this common value $n$) and computes the correlation coefficient of the $(x_{[i]}, y_{[i]})$ (where, as usual, $[i]$ designates the $i^\text{th}$ smallest of the data). Call this $t(x,y)$.
Permutation tests
In this situation--no matter what statistic $t$ is proposed--we can always conduct a permutation test. Under the null hypothesis, the likelihood of the data $\left((x_1, x_2, \ldots, x_n), (y_1, y_2, \ldots, y_n)\right)$ is the same as the likelihood of any permutation of the $2n$ data values. In other words, the assignment of half the data to $X$ and the other half to $Y$ is a pure random coincidence. This is a simple, direct consequence of the iid assumptions and the null hypothesis that $F_X=F_Y$.
Therefore, the sampling distribution of $t(x,y)$, conditional on the observations $x_i$ and $y_i$, is the distribution of all the values of $t$ attained for all $(2n)!$ permutations of the data. We are interested in this because for any given intended test size $\alpha$, such as $\alpha = .05$ (corresponding to $95$% confidence), we will construct a two-sided critical region from the sampling distribution of $t$: it consists of the most extreme $100\alpha$% of the possible values of $t$ (on the high side, because high correlation is consistent with similar distributions and low correlation is not). This is how we go about determining how large the correlation coefficient must be in order to decide the data come from different distributions.
Simulating the null sampling distribution
Because $(2n)!$ (or, if you like, $\binom{2n}{n}/2$, which counts the number of ways of splitting the $2n$ data into two pieces of size $n$) gets big even for small $n$, it is not practicable to compute the sampling distribution exactly, so we sample it using a simulation. (For instance, when $n=16$, $\binom{2n}{n}/2 = 300\ 540\ 195$ and $(2n)! \approx 2.63\times 10^{35}$.) About a thousand samples often suffices (and certainly will for the explorations we are about to undertake).
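These counts are easy to verify; a quick Python check (not part of the original analysis):

```python
from math import comb, factorial

# Verify the counts quoted above for n = 16 (so 2n = 32 data values).
n = 16
splits = comb(2 * n, n) // 2  # ways to divide 2n values into two unordered halves
perms = factorial(2 * n)      # raw permutations of all 2n values

print(splits)          # 300540195
print(f"{perms:.2e}")  # 2.63e+35
```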
There are two things we need to find out: first, what does the sampling distribution look like under the null hypothesis. Second, how well does this test discriminate between different distributions?
There is a complication: the sampling distribution depends on the nature of the data. All we can do is to look at realistic data, created to emulate whatever it is we are interested in studying, and hope that what we learn from the simulations will apply to our own situation.
Implementation
To illustrate, I have carried out this work in R. It falls naturally into three pieces.
A function to compute the test statistic $t(x,y)$. Because I want to be a little more general, my version handles different size datasets ($n_x \ne n_y$) by linearly interpolating among the values in the (sorted) larger dataset to create matches with the (sorted) smaller dataset. Because this is already done by the R function qqplot, I just take its results:
test.statistic <- function(x, y) {
transform <- function(z) -log(1-z^2)/2
fit <- qqplot(x,y, plot.it=FALSE)
transform(cor(fit$x, fit$y))
}
A little twist--unnecessary but helpful for visualization--re-expresses the correlation coefficient in a way that will make the distribution of the null statistic approximately symmetric. That's what transform is doing.
The simulation of the sampling distribution. For input this function accepts the number of iterations n.iter along with the two sets of data in arrays x and y. It outputs an array of n.iter values of the test statistic. Its inner workings should be transparent, even to a non R user:
permutation.test <- function(n.iter, x, y) {
z <- c(x,y)
n.x <- length(x)
n.y <- length(y)
n <- length(z)
k <- min(n.x, n.y)
divide <- function() {
i <- sample.int(n, size=k)
test.statistic(z[i], z[-i])
}
replicate(n.iter, divide())
}
Although that's all we need to conduct the test, in order to study it we will want to repeat the test many times. So, we conduct the test once and wrap that code within a third functional layer, just generally named f here, which we can call repeatedly. To make it sufficiently general for a broad study, for input it accepts the sizes of the datasets to simulate (n.x and n.y), the number of iterations for each permutation test (n.iter), a reference to the function test to compute the test statistic (you will see momentarily why we might not want to hard-code this), and two functions to generate iid random values, one for $X$ (dist.x) and one for $Y$ (dist.y). An option plot.it is useful to help see what's going on.
f <- function(n.x, n.y, n.iter, test=test.statistic, dist.x=runif, dist.y=runif,
plot.it=FALSE) {
x <- dist.x(n.x)
y <- dist.y(n.y)
if(plot.it) qqplot(x,y)
t0 <- test(x,y)
sim <- permutation.test(n.iter, x, y)
p <- mean(sim > t0) + mean(sim==t0)/2
if(plot.it) {
hist(sim, xlim=c(min(t0, min(sim)), max(t0, max(sim))),
main="Permutation distribution")
abline(v=t0, col="Red", lwd=2)
}
return(p)
}
The output is a simulated "p-value": the proportion of simulations yielding a statistic that looks more extreme than the one actually computed for the data.
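The three layers above can be condensed into a single routine; here is a hypothetical Python sketch of the permutation p-value computation (the names and NumPy idioms are mine, not the answer's):

```python
import numpy as np

def permutation_p_value(x, y, statistic, n_iter=1000, rng=None):
    """Mid-p-value of `statistic` under random relabelings of the pooled data."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.concatenate([x, y])
    k = min(len(x), len(y))
    t0 = statistic(x, y)
    sims = np.empty(n_iter)
    for j in range(n_iter):
        i = rng.permutation(len(z))            # random split of the pooled data
        sims[j] = statistic(z[i[:k]], z[i[k:]])
    # Ties count half, matching the R code's mean(sim == t0)/2 term.
    return np.mean(sims > t0) + np.mean(sims == t0) / 2

# The QQ-correlation statistic for equal-size samples: correlate sorted values.
stat = lambda a, b: np.corrcoef(np.sort(a), np.sort(b))[0, 1]

rng = np.random.default_rng(17)
p = permutation_p_value(rng.uniform(size=16), rng.uniform(size=16), stat, rng=rng)
print(0.0 <= p <= 1.0)  # True
```

Passing a different statistic function swaps in a different test without touching the harness, which is exactly the generality noted next.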
Parts (2) and (3) are extremely general: you can conduct a study like this one for a different test simply by replacing test.statistic with some other calculation. We do that below.
First results
By default, our code compares data drawn from two uniform distributions. I let it do that (for $n.x = n.y = 16$, which are fairly small datasets and therefore present a moderately difficult test case) and then repeat it for a uniform-normal comparison and a uniform-exponential comparison. (Uniform distributions are not easy to distinguish from normal distributions unless you have a bit more than $16$ values, but exponential distributions--having high skewness and a long right tail--are usually easily distinguished from uniform distributions.)
set.seed(17) # Makes the results reproducible
n.per.rep <- 1000 # Number of iterations to compute each p-value
n.reps <- 1000 # Number of times to call `f`
n.x <- 16; n.y <- 16 # Dataset sizes
par(mfcol=c(2,3)) # Lay results out in three columns
null <- replicate(n.reps, f(n.x, n.y, n.per.rep))
hist(null, breaks=20)
plot(null)
normal <- replicate(n.reps, f(n.x, n.y, n.per.rep, dist.y=rnorm))
hist(normal, breaks=20)
plot(normal)
exponential <- replicate(n.reps, f(n.x, n.y, n.per.rep, dist.y=function(n) rgamma(n, 1)))
hist(exponential, breaks=20)
plot(exponential)
On the left is the null distribution of the p-values when both $X$ and $Y$ are uniform. We would hope that the histogram is close to uniform (paying especial attention to the extreme left end, which is in the range of "significant" results)--and it actually is--and that the sequence of values obtained during the simulation, shown below it, looks random--and it does. That's good. It means we can move on to the next step to study how this changes when $X$ and $Y$ come from different distributions.
The middle plots test $16$ uniform variates $x_i$ against $16$ normal variates $y_i$. More often than not, the p-values were lower than expected. That indicates a tendency for this test actually to detect a difference. But it's not a large one. For instance, the leftmost bar in the histogram shows that out of the 1000 runs of f (comprising 1000 separately simulated datasets), the p-value was less than $0.05$ only about 110 times. If we consider that "significant," then this test has only about an $11$% chance of detecting the difference between a uniform and normal distribution based on $16$ independent values from each. That's pretty low power. But maybe it's unavoidable, so let's proceed.
The right-hand plots similarly test a uniform distribution against an exponential one. This result is bizarre. This test tends, more often than not, to conclude that uniform data and exponential data look the same. It seems to "think" that uniform and exponential variates are more similar than two uniform variates are! What's going on here?
The problem is that data from an exponential distribution will tend to have a few extremely high values. When you make a scatterplot of those against uniformly-distributed values, there will then be a few points far to the upper right of all the rest. That corresponds to a very high correlation coefficient. Thus, whenever either of the distributions generates a few extreme values, the correlation coefficient is a terrible choice for measuring how different the distributions are. This leads to another, even worse, problem: as the dataset sizes grow, the chances of obtaining a few extreme observations increase. Thus, we can expect this test to perform worse and worse as the amount of data increases. How very awful.
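This failure mode is easy to reproduce directly; a small Python illustration (the seed and sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.sort(rng.uniform(size=1000))      # no extreme values
e = np.sort(rng.exponential(size=1000))  # long right tail with a few huge values
r = np.corrcoef(u, e)[0, 1]
print(r)  # high despite the very different shapes
```

The sorted pairing forces both sequences to increase together, so a handful of extreme exponential values pulls the correlation up rather than down.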
A better test
The original question has been answered in the negative. However, there is a well-known, powerful test for discriminating among distributions: the Kolmogorov-Smirnov test. Instead of the correlation coefficient, it computes the largest vertical deviation from the line $y=x$ in their QQ plot. (When data come from the same distribution, the QQ plot tends to follow this line. Otherwise, it will deviate somewhere; the K-S statistic picks up the largest such deviation.)
Here is an R implementation:
test.statistic <- function(x, y) {
ks.test(x,y)$statistic
}
That's right: it's built into the software, so we only have to call it. But wait! If you read the manual carefully, you will learn that (a) the test supplies a p-value but (b) that p-value is (grossly) incorrect when both x and y are datasets. It is intended for use when you believe you know exactly what distribution the data x came from and you want to see whether that's true. Thus the test does not properly accommodate the uncertainty about the distribution the data in y came from.
No problem! The permutation test framework is still just as valid. By making the preceding change to test.statistic, all we have to do is re-run the previous study, unchanged. Here are the results.
Although the null distribution is not uniform (upper left), it's pretty uniform below $p=0.20$ or so, which is where we really care about its values. A glance at the plot below it (bottom left) shows the problem: the K-S statistic tends to cluster around a few discrete values. (This problem practically goes away for larger datasets.)
The middle (uniform vs normal) and right (uniform vs exponential) histograms are doing exactly the right thing: in the vast majority of cases where the two distributions differ, this test is producing small p-values. For instance, it has a $70$% chance of yielding a p-value less than $0.05$ when comparing a uniform to a normal based on $16$ values from each. Compare this to the piddling $11$% achieved by the correlation coefficient test.
The right histogram is not quite as good, but at least it's in the correct direction now! We estimate that it has a $30$% chance of detecting the difference between a uniform and exponential distribution at the $\alpha=5$% level and a $50$% chance of making that detection at the $\alpha=10$% level (because the two bars for the p-value less than $0.10$ total over 500 of the 1000 iterations).
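For completeness, the two-sample K-S statistic itself takes only a few lines to compute from first principles; a hypothetical Python sketch (the R code above simply calls ks.test):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample K-S statistic: the largest vertical gap between the two ECDFs."""
    grid = np.sort(np.concatenate([x, y]))
    fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(fx - fy))

a = np.array([1.0, 2.0, 3.0])
print(ks_statistic(a, a))         # 0.0  (identical samples)
print(ks_statistic(a, a + 10.0))  # 1.0  (completely separated samples)
```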
Conclusions
Thus, the problems with the correlation test are not due to some inherent difficulty in this setting. Not only does the correlation test perform very badly, it is bad compared to a widely known and available test. (I would guess that it is inadmissible, meaning that it will always perform worse, on the average, than the permutation version of the K-S test, implying there is no reason ever to use it.)
|
Test whether variables follow the same distribution
Let's find out whether this is a good test or not. There's a lot more to it than just claiming it's bad or showing in one instance that it doesn't work well. Most tests work poorly in some circumsta
|
15,157
|
Test whether variables follow the same distribution
|
No, correlation is not a good test of this.
x <- 1:100 #Uniform
y <- sort(rnorm(100)) #Normal
cor(x,y) #.98
I don't know of a good test of whether, e.g., two distributions are both normal but possibly with different mean and s.d. Indirectly, you could test the normality of each separately and, if both seemed normal, guess that they both were.
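The same demonstration in Python, for readers outside R (seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(1, 101)              # evenly spaced, standing in for a uniform sample
y = np.sort(rng.normal(size=100))  # sorted normal sample
r = np.corrcoef(x, y)[0, 1]
print(round(r, 2))  # close to 1 despite the very different distributions
```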
|
Test whether variables follow the same distribution
|
No, correlation is not a good test of this.
x <- 1:100 #Uniform
y <- sort(rnorm(100)) #Normal
cor(x,y) #.98
I don't know of a good test that compares whether, e.g. two distributions are both normal,
|
Test whether variables follow the same distribution
No, correlation is not a good test of this.
x <- 1:100 #Uniform
y <- sort(rnorm(100)) #Normal
cor(x,y) #.98
I don't know of a good test of whether, e.g., two distributions are both normal but possibly with different mean and s.d. Indirectly, you could test the normality of each separately and, if both seemed normal, guess that they both were.
|
Test whether variables follow the same distribution
No, correlation is not a good test of this.
x <- 1:100 #Uniform
y <- sort(rnorm(100)) #Normal
cor(x,y) #.98
I don't know of a good test that compares whether, e.g. two distributions are both normal,
|
15,158
|
Test whether variables follow the same distribution
|
With a sufficiently large number of observations, size-ordered values may indeed show high correlation. However, it doesn't seem to be a particularly useful method, not least because it provides little means of estimating confidence that the variables follow the same model.
A problem that you are liable to experience is when you have models with similar mean and skewness, but a difference in kurtosis, as a moderate number of measurements may fit sufficiently well to look fairly well correlated.
It seems more reasonable to model both variables against different distributions to see which is most likely for each, and compare the results.
There may be some merit to normalising both values, sorting and plotting each - this will allow you to see how the fits compare - and you can plot a possible model for both as well, which would be related to what you suggested, but rather than expecting a concrete answer, just a visual idea on the closeness of the distributions.
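That last suggestion can be sketched concretely; a hypothetical Python version (the choice of z-score standardisation is mine):

```python
import numpy as np

def standardized_sorted(z):
    """Normalise to zero mean and unit variance, then sort, ready for overlaying."""
    z = np.asarray(z, dtype=float)
    return np.sort((z - z.mean()) / z.std())

rng = np.random.default_rng(2)
a = standardized_sorted(rng.normal(5.0, 2.0, size=200))   # N(5, 2)
b = standardized_sorted(rng.normal(-1.0, 0.5, size=200))  # N(-1, 0.5)

# After normalising, two normal samples with different mean and s.d. nearly
# coincide; plotting a and b against a common index gives the visual comparison.
print(float(np.mean(np.abs(a - b))))
```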
|
Test whether variables follow the same distribution
|
If there are a sufficiently large number of variables, then this may show more correlation with size-ordered values. However, it doesn't seem to be a particularly useful method, not least because it p
|
Test whether variables follow the same distribution
With a sufficiently large number of observations, size-ordered values may indeed show high correlation. However, it doesn't seem to be a particularly useful method, not least because it provides little means of estimating confidence that the variables follow the same model.
A problem that you are liable to experience is when you have models with similar mean and skewness, but a difference in kurtosis, as a moderate number of measurements may fit sufficiently well to look fairly well correlated.
It seems more reasonable to model both variables against different distributions to see which is most likely for each, and compare the results.
There may be some merit to normalising both values, sorting and plotting each - this will allow you to see how the fits compare - and you can plot a possible model for both as well, which would be related to what you suggested, but rather than expecting a concrete answer, just a visual idea on the closeness of the distributions.
|
Test whether variables follow the same distribution
If there are a sufficiently large number of variables, then this may show more correlation with size-ordered values. However, it doesn't seem to be a particularly useful method, not least because it p
|
15,159
|
Auto.arima vs autobox do they differ?
|
michael/wayne
AUTOBOX would definitely deliver/identify a different model if one or more of the following conditions is met
1) there are pulses in the data
2) there is 1 or more level/step shift in the data
3) if there are seasonal pulses in the data
4) there are 1 or more local time trends in the data that are not simply remedied
5) if the parameters of the model change over time
6) if the variance of the errors change over time and no power transformation is adequate.
In terms of a specific example, I would suggest that both of you select/make a time series and post both of them to the web. I will use AUTOBOX to analyse the data in an unattended mode and I will post the models to the list. You then run the R program and then each of you make a separate objective analysis of both results, pointing out similarities and differences. Send those two models complete with all available supporting material including the final error terms to me for my comments. Summarize and present these results to the list and then ask readers of the list to VOTE for which procedure seems best to them.
|
Auto.arima vs autobox do they differ?
|
michael/wayne
AUTOBOX would definitely deliver/identify a different model if one or more of the following conditions is met
1) there are pulses in the data
2) there is 1 or more level/step shift in
|
Auto.arima vs autobox do they differ?
michael/wayne
AUTOBOX would definitely deliver/identify a different model if one or more of the following conditions is met
1) there are pulses in the data
2) there is 1 or more level/step shift in the data
3) if there are seasonal pulses in the data
4) there are 1 or more local time trends in the data that are not simply remedied
5) if the parameters of the model change over time
6) if the variance of the errors change over time and no power transformation is adequate.
In terms of a specific example, I would suggest that both of you select/make a time series and post both of them to the web. I will use AUTOBOX to analyse the data in an unattended mode and I will post the models to the list. You then run the R program and then each of you make a separate objective analysis of both results, pointing out similarities and differences. Send those two models complete with all available supporting material including the final error terms to me for my comments. Summarize and present these results to the list and then ask readers of the list to VOTE for which procedure seems best to them.
|
Auto.arima vs autobox do they differ?
michael/wayne
AUTOBOX would definitely deliver/identify a different model if one or more of the following conditions is met
1) there are pulses in the data
2) there is 1 or more level/step shift in
|
15,160
|
Auto.arima vs autobox do they differ?
|
They represent two different approaches to two similar but different problems. I wrote auto.arima and @IrishStat is the author of Autobox.
auto.arima() fits (seasonal) ARIMA models including drift terms. Autobox fits transfer function models to handle level shifts and outliers. An ARIMA model is a special case of a transfer function model.
Even if you turned off the level shifts and outlier detection in Autobox, you would get a different ARIMA model from auto.arima() due to different choices in how to identify the ARIMA parameters.
In my testing on the M3 and M-competition data, auto.arima() produces more accurate forecasts than Autobox for these data. However, Autobox will do better with data containing major outliers and level shifts.
|
Auto.arima vs autobox do they differ?
|
They represent two different approaches to two similar but different problems. I wrote auto.arima and @IrishStat is the author of Autobox.
auto.arima() fits (seasonal) ARIMA models including drift te
|
Auto.arima vs autobox do they differ?
They represent two different approaches to two similar but different problems. I wrote auto.arima and @IrishStat is the author of Autobox.
auto.arima() fits (seasonal) ARIMA models including drift terms. Autobox fits transfer function models to handle level shifts and outliers. An ARIMA model is a special case of a transfer function model.
Even if you turned off the level shifts and outlier detection in Autobox, you would get a different ARIMA model from auto.arima() due to different choices in how to identify the ARIMA parameters.
In my testing on the M3 and M-competition data, auto.arima() produces more accurate forecasts than Autobox for these data. However, Autobox will do better with data containing major outliers and level shifts.
|
Auto.arima vs autobox do they differ?
They represent two different approaches to two similar but different problems. I wrote auto.arima and @IrishStat is the author of Autobox.
auto.arima() fits (seasonal) ARIMA models including drift te
|
15,161
|
Auto.arima vs autobox do they differ?
|
EDIT: Per your comment, I believe that if you turn off many of autobox's options, you'd probably get a similar answer to auto.arima. But if you do not, then in the presence of outliers there will definitely be a difference: auto.arima doesn't care about outliers, while autobox will detect them and handle them appropriately, which would give a better model. There may be other differences as well, and I'm sure IrishStat can describe those.
I believe autobox detects outliers and other things beyond just searching for the best AR, I, and MA coefficients. If that's correct, it would require more analysis and a couple of other R functions to replicate similar functionality. And IrishStat is a valuable member of this community, and quite friendly.
Of course, R is free and can do a bazillion things beyond ARIMA.
Another choice that's free for economics-style ARIMA is X13-ARIMA SEATS, from the US Census Bureau, which is open source. There are binaries for Windows and Linux, but it compiled straightforwardly on my Mac, given that I'd already loaded gnu's gfortran compiler. It's the successor to X12-ARIMA, and was just released in the last few days, after years of development and testing. (It updates X12 and also adds in SEATS/TRAMO features. X12 is the official US tool, while SEATS/TRAMO is from the Bank of Spain and is the "European tool".)
I really like X12 (and now X13) a lot. If you output a fair amount of diagnostics and read through them and learn what they mean, they are actually a fairly good education in ARIMA and time series. I've developed my own workflow, but there's an R package x12 for doing most work from within R (you still have to create the input model (".spc") file for X12).
I say X12 is good at "economics style" ARIMA to mean monthly data with more than 3 years of data. (You need 5+ years of data to use some diagnostic features.) It has an outlier identification feature, can handle all kinds of outlier specifications, and can handle holidays, floating holidays, trading day effects, and a host of economic things. It's the tool that the US government uses to create seasonally-adjusted data.
How do I remove all but one specific duplicate record in an R data frame? [closed]
|
One way is to reverse-sort the data and use duplicated to drop all the duplicates.
For me, this method is conceptually simpler than those that use apply. I think it should be very fast as well.
# Some data to start with:
z <- data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,5,2))
# id var
# 1 2
# 1 4
# 2 1
# 2 3
# 3 5
# 4 2
# Reverse sort
z <- z[order(z$id, z$var, decreasing=TRUE),]
# id var
# 4 2
# 3 5
# 2 3
# 2 1
# 1 4
# 1 2
# Keep only the first row for each duplicate of z$id; this row will have the
# largest value for z$var
z <- z[!duplicated(z$id),]
# Sort so it looks nice
z <- z[order(z$id, z$var),]
# id var
# 1 4
# 2 3
# 3 5
# 4 2
Edit: I just realized that the reverse sort above doesn't even need to sort on id at all. You could just use z[order(z$var, decreasing=TRUE),] instead and it will work just as well.
One more thought... If the var column is numeric, then there's a simple way to sort so that id is ascending, but var is descending. This eliminates the need for the sort at the end (assuming you even wanted it to be sorted).
z <- data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,5,2))
# Sort: id ascending, var descending
z <- z[order(z$id, -z$var),]
# Remove duplicates
z <- z[!duplicated(z$id),]
# id var
# 1 4
# 2 3
# 3 5
# 4 2
How do I remove all but one specific duplicate record in an R data frame? [closed]
You actually want to select the maximum element from the elements with the same id. For that you can use ddply from package plyr:
> dt<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,4,2))
> ddply(dt,.(id),summarise,var_1=max(var))
id var_1
1 1 4
2 2 3
3 3 4
4 4 2
unique and duplicated are for removing duplicate records; in your case you only have duplicate ids, not records.
Update: Here is the code when there are additional variables:
> dt<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,4,2),bu=rnorm(6))
> ddply(dt,~id,function(d)d[which.max(d$var),])
How do I remove all but one specific duplicate record in an R data frame? [closed]
The base-R solution would involve split, like this:
z<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,4,2))
do.call(rbind,lapply(split(z,z$id),function(chunk) chunk[which.max(chunk$var),]))
split splits the data frame into a list of chunks; we then cut each chunk down to the single row with the maximum value of var, and do.call(rbind,...) reduces the list of single rows back into a data frame.
How do I remove all but one specific duplicate record in an R data frame? [closed]
I prefer using ave
dt<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,3,3,4,2))
## use unique if you want to exclude duplicate maxima
unique(subset(dt, var==ave(var, id, FUN=max)))
How do I remove all but one specific duplicate record in an R data frame? [closed]
Yet another way to do this with base:
dt<-data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,4,2))
data.frame(id=sort(unique(dt$id)),max=tapply(dt$var,dt$id,max))
id max
1 1 4
2 2 3
3 3 4
4 4 2
I prefer mpiktas' plyr solution though.
How do I remove all but one specific duplicate record in an R data frame? [closed]
If, as in the example, the column var is already in ascending order we do not need to sort the data frame. We just use the function duplicated passing the argument fromLast = TRUE, so duplication is considered from the reverse side, keeping the last elements:
z <- data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,5,2))
z[!duplicated(z$id, fromLast = TRUE), ]
id var
2 1 4
4 2 3
5 3 5
6 4 2
Otherwise we sort the data frame in ascending order first:
z <- z[order(z$id, z$var), ]
z[!duplicated(z$id, fromLast = TRUE), ]
Using the dplyr package:
library(dplyr)
z %>%
group_by(id) %>%
summarise(var = max(var))
Source: local data frame [4 x 2]
id var
1 1 4
2 2 3
3 3 5
4 4 2
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
The first formula is the population standard deviation and the second formula is the sample standard deviation. The second formula is also related to the unbiased estimator of the variance - see wikipedia for further details.
I suppose (here) in the UK they don't make the distinction between sample and population at high school. They certainly don't touch concepts such as biased estimators.
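A quick base-R illustration of the two formulas (R's built-in sd uses the $N-1$ denominator, so the $N$-denominator version is written out by hand here):

```r
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
n <- length(x)

sd_sample <- sd(x)                          # divides by n - 1
sd_pop    <- sqrt(sum((x - mean(x))^2) / n) # divides by n

sd_sample  # 2.13809
sd_pop     # 2
```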
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
Because nobody has yet answered the final question--namely, to quantify the differences between the two formulas--let's take care of that.
For many reasons, it is appropriate to compare standard deviations in terms of their ratios rather than their differences. The ratio is
$$s_n / s = \sqrt{\frac{N-1}{N}} = \sqrt{1 - \frac{1}{N}} \approx 1 - \frac{1}{2N}.$$
The approximation can be viewed as truncating the (alternating) Taylor series for the square root, indicating the error cannot exceed $\left|\binom{1/2}{2}\right| N^{-2} = 1/(8N^2)$. This establishes that the approximation is more than good enough (for our purposes) once $N$ is $2$ or larger.
It is immediate that the two SD estimates are within (about) 10% of each other once $N$ exceeds $5$, within 5% once $N$ exceeds $10$, and so on. Clearly, for many purposes these discrepancies are so small that it does not matter which formula is used, especially when the SD is intended for describing the spread of data or for making semi-quantitative assessments or predictions (such as in employing the 68-95-99.7 rule of thumb). The discrepancies are even less important when comparing SDs, such as when comparing the spreads of two datasets. (When the datasets are equinumerous, the discrepancies effectively vanish altogether and both formulas lead to identical conclusions.) Arguably, these are the forms of reasoning we are trying to teach beginning students, so if the students are becoming concerned about which formula to use, that could be taken as a sign that the text or the class is failing to emphasize what is really important.
We might want to pay some attention to the case of very small $N$. Here, people may be using $t$ tests instead of $z$ tests, for instance. In that case, it is essential to employ whichever formula for the standard deviation is used by one's table or software. (This is not a matter of one formula being wrong or right; it's just a consistency requirement.) Most tables use $s$, not $s_n$: this is the one place in the elementary syllabus where the text and teacher need to be clear about which formula to use.
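The ratio and its approximation can be tabulated directly in R:

```r
N <- c(2, 5, 10, 30, 100)
ratio  <- sqrt((N - 1) / N)  # s_n / s
approx <- 1 - 1 / (2 * N)    # first-order approximation
round(data.frame(N, ratio, approx, error = approx - ratio), 5)
```

For $N = 10$ the two estimates already agree to within about half a percent, matching the discussion above.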
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
This is Bessel's correction. The US version is showing the formula for the sample standard deviation, while the UK version above is the standard deviation of the sample.
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
I am not sure this is purely a US vs. British issue. The rest of this page is excerpted from a FAQ I wrote (http://www.graphpad.com/faq/viewfaq.cfm?faq=1383).
How to compute the SD with n-1 in the denominator
Compute the square of the difference between each value and the sample mean.
Add those values up.
Divide the sum by n-1. The result is called the variance.
Take the square root to obtain the Standard Deviation.
Why n-1?
Why divide by n-1 rather than n when computing a standard deviation? In step 1, you compute the difference between each value and the mean of those values. You don't know the true mean of the population; all you know is the mean of your sample. Except for the rare cases where the sample mean happens to equal the population mean, the data will be closer to the sample mean than it will be to the true population mean. So the value you compute in step 2 will probably be a bit smaller (and can't be larger) than what it would be if you used the true population mean in step 1. To make up for this, divide by n-1 rather than n. This is called Bessel's correction.
But why n-1? If you knew the sample mean, and all but one of the values, you could calculate what that last value must be. Statisticians say there are n-1 degrees of freedom.
When should the SD be computed with a denominator of n instead of n-1?
Statistics books often show two equations to compute the SD, one using n, and the other using n-1, in the denominator. Some calculators have two buttons.
The n-1 equation is used in the common situation where you are analyzing a sample of data and wish to make more general conclusions. The SD computed this way (with n-1 in the denominator) is your best guess for the value of the SD in the overall population.
If you simply want to quantify the variation in a particular set of data, and don't plan to extrapolate to make wider conclusions, then you can compute the SD using n in the denominator. The resulting SD is the SD of those particular values. It makes no sense to compute the SD this way if you want to estimate the SD of the population from which those points were drawn. It only makes sense to use n in the denominator when there is no sampling from a population, there is no desire to make general conclusions.
The goal of science is almost always to generalize, so the equation with n in the denominator should not be used. The only example I can think of where it might make sense is in quantifying the variation among exam scores. But much better would be to show a scatterplot of every score, or a frequency distribution histogram.
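The four computation steps above can be written out in base R; they reproduce R's built-in sd:

```r
x <- c(4, 7, 9, 10, 15)
n <- length(x)

sq_diff  <- (x - mean(x))^2   # step 1: squared differences from the sample mean
total    <- sum(sq_diff)      # step 2: add them up
variance <- total / (n - 1)   # step 3: divide by n - 1
sd_n1    <- sqrt(variance)    # step 4: take the square root

all.equal(sd_n1, sd(x))  # TRUE
```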
Why do US and UK Schools Teach Different methods of Calculating the Standard Deviation?
Since N is the number of points in the data set, one could argue that by calculating the mean one has reduced the degrees of freedom in the data set by one (since one introduced a dependency into the data set), so one should use N-1 when estimating the standard deviation from a data set for which one had to estimate the mean first.
Neural networks vs everything else
Each machine learning algorithm has a different inductive bias, so it's not always appropriate to use neural networks. A linear trend will always be learned best by simple linear regression rather than an ensemble of nonlinear networks.
If you take a look at the winners of past Kaggle competitions, excepting any challenges with image/video data, you will quickly find that neural networks are not the solution to everything. Some past solutions here.
apply regularization till you see no over-fitting and then train them to the end
There is no guarantee that you can apply enough regularization to prevent overfitting without completely destroying the capacity of the network to learn anything. In real life, it is rarely feasible to eliminate the train-test gap, and that's why papers still report train and test performance.
they are universal estimators
This is only true in the limit of having an unbounded number of units, which isn't realistic.
you can give me the link to the problem and i would train the best neural network that i can and we can see if 2 layered or 3 layered neural networks falls short of any other benchmark machine learning algorithm
An example problem which I expect a neural network would never be able to solve: Given an integer, classify as prime or not-prime.
I believe this could be solved perfectly with a simple algorithm that iterates over all valid programs in ascending length and finds the shortest program which correctly identifies the prime numbers. Indeed, this 13 character regex string can match prime numbers, which wouldn't be computationally intractable to search.
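For the curious, one well-known regex of this kind (not necessarily the 13-character variant the answer refers to) tests primality on a unary representation of the number; a sketch in R:

```r
# A number n is composite iff a string of n identical characters matches:
#   ^.?$         -- handles 0 and 1 (not prime)
#   ^(..+?)\1+$  -- a block of >= 2 characters, repeated >= 2 times in total
is_prime <- function(n) {
  !grepl("^.?$|^(..+?)\\1+$", strrep("x", n), perl = TRUE)
}

which(sapply(1:30, is_prime))  # 2 3 5 7 11 13 17 19 23 29
```

The match succeeds exactly when n can be written as a product of two factors both at least 2, which is the definition of a composite number.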
Can regularization take a model from one that overfits to the one that has its representational power severely hamstrung by regularization? Won't there always be that sweet spot in between?
Yes, there is a sweet spot, but it is usually way before you stop overfitting. See this figure:
If you flip the horizontal axis and relabel it as "amount of regularization", it's pretty accurate -- if you regularize until there is no overfitting at all, your error will be huge. The "sweet spot" occurs when there is a bit of overfitting, but not too much.
How is a 'simple algorithm that iterates over all valid programs in ascending length and finds the shortest program which correctly identifies the prime numbers.' an algorithm that learns?
It finds the parameters $\theta$ such that we have a hypothesis $H(\theta)$ which explains the data, just like backpropagation finds the parameters $\theta$ which minimize the loss (and by proxy, explains the data). Only in this case, the parameter is a string instead of many floating point values.
so if i get you correctly you are making the argument that if the data is not substantial the deep network will never hit the validation accuracy of the best shallow network given the best hyperparameters for both?
Yes. Here is an ugly but hopefully effective figure to illustrate my point.
but that doesnt make sense. a deep network can just learn a 1-1 mapping above the shallow
The question is not "can it", but "will it", and if you are training backpropagation, the answer is probably not.
We discussed the fact that larger networks will always work better than smaller networks
Without further qualification, that claim is just wrong.
|
15,174
|
Neural networks vs everything else
|
I would add that there is no such thing as a machine learning panacea:
By the no free lunch theorem:
If an algorithm performs well on a certain class of problems then it
necessarily pays for that with degraded performance on the set of all
remaining problems
|
15,175
|
Area under the ROC curve or area under the PR curve for imbalanced data?
|
The question is quite vague so I am going to assume you want to choose an appropriate performance measure to compare different models. For a good overview of the key differences between ROC and PR curves, you can refer to the following paper: The Relationship Between Precision-Recall and ROC Curves by Davis and Goadrich.
To quote Davis and Goadrich:
However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance.
ROC curves plot FPR vs TPR. To be more explicit:
$$FPR = \frac{FP}{FP+TN}, \quad TPR=\frac{TP}{TP+FN}.$$
PR curves plot precision versus recall (TPR), or more explicitly:
$$recall = \frac{TP}{TP+FN} = TPR,\quad precision = \frac{TP}{TP+FP}$$
Precision is directly influenced by class (im)balance since $FP$ is affected, whereas TPR only depends on positives. This is why ROC curves do not capture such effects.
Precision-recall curves are better to highlight differences between models for highly imbalanced data sets. If you want to compare different models in imbalanced settings, area under the PR curve will likely exhibit larger differences than area under the ROC curve.
That said, ROC curves are much more common (even if they are less suited). Depending on your audience, ROC curves may be the lingua franca, so using those is probably the safer choice. If one model completely dominates another in PR space (i.e., it always has higher precision over the entire recall range), it will also dominate in ROC space. If the curves cross in either space they will also cross in the other. In other words, the main conclusions will be similar no matter which curve you use.
Shameless advertisement. As an additional example, you could have a look at one of my papers in which I report both ROC and PR curves in an imbalanced setting. Figure 3 contains ROC and PR curves for identical models, clearly showing the difference between the two. To compare area under the PR versus area under ROC you can compare tables 1-2 (AUPR) and tables 3-4 (AUROC) where you can see that AUPR shows much larger differences between individual models than AUROC. This emphasizes the suitability of PR curves once more.
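To make the contrast concrete, here is a small self-contained sketch (the scores and labels are invented for illustration, not taken from the paper above): it computes ROC AUC and area under the PR curve (via the usual average-precision estimator) in pure Python, then makes the data tenfold more imbalanced by replicating the negatives. ROC AUC is unchanged by the extra negatives, while the PR area drops.

```python
def roc_auc(labels, scores):
    """ROC AUC = P(a random positive scores above a random negative), ties count 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """Area under the PR curve: precision averaged at the rank of each true positive."""
    ranked = sorted(zip(scores, labels), reverse=True)
    tp, ap = 0, 0.0
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / tp

labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.5, 0.3, 0.2, 0.1]

# Make the problem 10x more imbalanced: replicate every negative 9 extra times.
neg_scores = [s for y, s in zip(labels, scores) if y == 0]
labels_imb = labels + [0] * (9 * len(neg_scores))
scores_imb = scores + neg_scores * 9

print(roc_auc(labels, scores), roc_auc(labels_imb, scores_imb))            # identical
print(average_precision(labels, scores), average_precision(labels_imb, scores_imb))
```

On this toy data, ROC AUC stays at 13/15 in both cases, while average precision falls from about 0.87 to about 0.71 — exactly the effect described above: precision is hit by the extra false positives, while TPR/FPR are not.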
|
15,176
|
Area under the ROC curve or area under the PR curve for imbalanced data?
|
ROC curves plot TPR on the y-axis and FPR on the x-axis, but it depends on what you want to portray. Unless there is some reason to plot it differently in your area of study, TPR/FPR ROC curves are the standard for showing operating tradeoffs and I believe they would be most well received.
Precision and Recall alone can be misleading because they do not account for true negatives.
|
15,177
|
Area under the ROC curve or area under the PR curve for imbalanced data?
|
I consider the largest difference between ROC AUC and PR AUC to be that ROC AUC measures how well your model separates the positive class AND the negative class, whereas PR AUC really only looks at your positive class. So in a balanced-class situation where you care about both the negative and positive classes, the ROC AUC metric works great. When you have an imbalanced situation, it is preferred to use the PR AUC, but keep in mind it only measures how well your model handles the positive class!
|
15,178
|
Sidak or Bonferroni?
|
If you run $k$ independent statistical tests using $\alpha$ as your significance level, and the null obtains in every case, whether or not you will find 'significance' is simply a draw from a random variable. Specifically, it is taken from a binomial distribution with $p=\alpha$ and $n=k$. For example, if you plan to run 3 tests using $\alpha=.05$, and (unbeknownst to you) there is actually no difference in each case, then there is a 5% chance of finding a significant result in each test. In this way, the type I error rate is held to $\alpha$ for the tests individually, but across the set of 3 tests the long-run type I error rate will be higher. If you believe that it is meaningful to group / think of these 3 tests together, then you may want to hold the type I error rate at $\alpha$ for the set as a whole, rather than just individually. How should you go about this? There are two approaches that center on shifting from the original $\alpha$ (i.e., $\alpha_o$) to a new value (i.e., $\alpha_{\rm new}$):
Bonferroni: adjust the $\alpha$ used to assess 'significance' such that
$$\alpha_{\rm new}=\frac{\alpha_{o}}{k}\qquad\qquad\quad$$
Dunn-Sidak: adjust $\alpha$ using
$$\alpha_{\rm new}=1-(1-\alpha_{o})^{1/k}$$
(Note that the Dunn-Sidak assumes all the tests within the set are independent of each other and could yield familywise type I error inflation if that assumption does not hold.)
It is important to note that when conducting tests, there are two kinds of errors that you want to avoid, type I (i.e., saying there is a difference when there isn't one) and type II (i.e., saying there isn't a difference when there actually is). Typically, when people discuss this topic, they only discuss—and seem to only be aware of / concerned with—type I errors. In addition, people often neglect to mention that the calculated error rate will only hold if all nulls are true. It is trivially obvious that you cannot make a type I error if the null hypothesis is false, but it is important to hold that fact explicitly in mind when discussing this issue.
I bring this up because there are implications of these facts that appear to often go unconsidered. First, if $k>1$, the Dunn-Sidak approach will offer higher power (although the difference can be quite tiny with small $k$) and so should always be preferred (when applicable). Second, a 'step-down' approach should be used. That is, test the biggest effect first; if you are convinced that the null does not obtain in that case, then the maximum possible number of type I errors is $k-1$, so the next test should be adjusted accordingly, and so on. (This often makes people uncomfortable and looks like fishing, but it is not fishing, as the tests are independent, and you intended to conduct them before you ever saw the data. This is just a way of adjusting $\alpha$ optimally.)
The above holds no matter how you value type I relative to type II errors. However, a priori there is no reason to believe that type I errors are worse than type II (despite the fact that everyone seems to assume so). Instead, this is a decision that must be made by the researcher, and must be specific to that situation. Personally, if I am running theoretically-suggested, a priori, orthogonal contrasts, I don't usually adjust $\alpha$.
(And to state this again, because it's important, all of the above assumes that the tests are independent. If the contrasts are not independent, such as when several treatments are each being compared to the same control, a different approach than $\alpha$ adjustment, such as Dunnett's test, should be used.)
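As a sanity check on the error rates above, here is a small Monte-Carlo sketch (my own illustration, not part of the original answer) for $k=3$ independent, truly null tests: the Dunn-Sidak level holds the familywise type I error rate at essentially the nominal $\alpha$, while Bonferroni is slightly conservative.

```python
import random

def fwer(alpha_new, k, trials=20000, seed=0):
    """Monte-Carlo familywise type I error: k independent null tests,
    each rejected when its uniform p-value falls below alpha_new."""
    rng = random.Random(seed)
    rejections = sum(
        any(rng.random() < alpha_new for _ in range(k)) for _ in range(trials)
    )
    return rejections / trials

alpha, k = 0.05, 3
bonferroni = alpha / k                 # per-test level ~0.01667
sidak = 1 - (1 - alpha) ** (1 / k)     # per-test level ~0.01695, slightly larger

# Both hold FWER near 0.05; Sidak is exact under independence, Bonferroni a bit below.
print(fwer(bonferroni, k), fwer(sidak, k))
```

Because Sidak's per-test level is a little larger than Bonferroni's, it rejects slightly more often at the same familywise level, which is the power advantage mentioned above.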
|
15,179
|
Sidak or Bonferroni?
|
Denote with $\alpha^*$ the corrected significance level, then Bonferroni works like this: Divide the significance level $\alpha$ by the number $n$ of tests, i.e. $\alpha^*=\alpha/n$. Sidak works like this (if the tests are independent): $\alpha^*=1 − (1 − \alpha)^{1/n}$.
Because $\alpha/n < 1 − (1 − \alpha)^{1/n}$, the Sidak correction is a bit more powerful (i.e. you get significant results more easily) but Bonferroni is a bit simpler to handle.
If you need an even more powerful procedure you might want to use the Bonferroni-Holm procedure.
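A quick numerical check of the inequality $\alpha/n < 1 − (1 − \alpha)^{1/n}$ quoted above (my own snippet, not part of the original answer):

```python
alpha = 0.05
for n in (2, 3, 5, 10, 20):
    bonferroni = alpha / n
    sidak = 1 - (1 - alpha) ** (1 / n)
    assert bonferroni < sidak  # Sidak's per-test level is always a bit larger
    print(f"n={n:2d}  Bonferroni={bonferroni:.5f}  Sidak={sidak:.5f}")
```

The gap is tiny for small $n$, which is why the two corrections almost always lead to the same decisions in practice.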
|
15,180
|
Sidak or Bonferroni?
|
Sidak and Bonferroni are so similar that you will probably get the same result regardless of which procedure you use. Bonferroni is only marginally more conservative than Sidak. For instance, for 2 comparisons and a familywise alpha of .05, Sidak would conduct each test at .0253 and Bonferroni would conduct each test at .0250.
Many commenters on this site have said that Sidak is only valid when the test statistics of your comparisons are independent. That's not true. Sidak allows slight inflation of the familywise error rate when the test statistics are NEGATIVELY dependent, but if you're doing two-sided tests, negative dependence isn't generally a concern. Under non-negative dependence, Sidak does in fact provide an upper bound on the familywise error rate. That said, there are other procedures that provide such a bound and tend to retain more statistical power than Sidak. So Sidak probably isn't the best choice.
One thing the Bonferroni procedure provides (that Sidak doesn't) is strict control of the expected number of Type I errors--the so-called "per-family error rate," which is more conservative than the familywise error rate. For more info, see: Frane, AV (2015) "Are per-family Type I error rates relevant in social and behavioral science?" Journal of Modern Applied Statistical Methods 14(1), 12-23.
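The two-comparison numbers quoted above are easy to reproduce (a throwaway check, not part of the original answer):

```python
alpha, k = 0.05, 2
bonferroni = alpha / k                  # per-test level 0.0250
sidak = 1 - (1 - alpha) ** (1 / k)      # per-test level ~0.0253
print(round(bonferroni, 4), round(sidak, 4))  # 0.025 0.0253
```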
|
15,181
|
Sidak or Bonferroni?
|
The Sidak correction assumes the individual tests are statistically independent. The Bonferroni correction doesn't assume this.
|
15,182
|
"It was the correct play even though I lost"
|
I do not believe that this is a question of Bayesian vs. frequentist frameworks. It is a question of having the correct (predictive) distribution and minimizing the expected loss with respect to this distribution and a specified loss function. Whether the predictive distribution is delivered by a Bayesian or by a frequentist is irrelevant - all that matters is how far it diverges from reality. (Of course, getting only a single realization makes it hard to assess this, but again, that is orthogonal.)
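As a toy illustration of that framing (the probabilities and losses below are invented): given any predictive distribution over outcomes and a loss function, the "correct play" is simply the action with the smallest expected loss, however the distribution was produced.

```python
predictive = {"win": 0.7, "lose": 0.3}       # predictive distribution over outcomes
loss = {                                      # loss[action][outcome]; negative = gain
    "aggressive": {"win": -10, "lose": 8},
    "safe":       {"win": -3,  "lose": 1},
}

def expected_loss(action):
    """Expected loss of an action under the predictive distribution."""
    return sum(p * loss[action][outcome] for outcome, p in predictive.items())

best = min(loss, key=expected_loss)
# "aggressive" has expected loss 0.7*(-10) + 0.3*8 = -4.6 versus -1.8 for "safe",
# so it is the correct play even on the 30% of hands where it loses.
print(best, expected_loss(best))
```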
|
"It was the correct play even though I lost"
|
I do not believe that this is a question of Bayesian vs. frequentist frameworks. It is a question of having the correct (predictive) distribution and minimizing the expected loss with respect to this
|
"It was the correct play even though I lost"
I do not believe that this is a question of Bayesian vs. frequentist frameworks. It is a question of having the correct (predictive) distribution and minimizing the expected loss with respect to this distribution and a specified loss function. Whether the predictive distribution is delivered by a Bayesian or by a frequentist is irrelevant - all that matters is how far it diverges from reality. (Of course, getting only a single realization makes it hard to assess this, but again, that is orthogonal.)
|
"It was the correct play even though I lost"
I do not believe that this is a question of Bayesian vs. frequentist frameworks. It is a question of having the correct (predictive) distribution and minimizing the expected loss with respect to this
|
15,183
|
"It was the correct play even though I lost"
|
"The Correct Play is the one that should have won" is a mantra in professional poker. Hearthstone players are probably borrowing it. From the top result of "Poker correct play" I found it expressed as: If you’ve won money, it doesn’t mean you played the hand well. If you’ve lost money, it doesn’t mean that you played the hand badly.
A few results down I found a poker forum dedicated to this. The multiple deeply jargony replies to a "was this correct" hits it home how a whole culture thinks that can be determined regardless of actual results or number of hands played later. It's also interesting since they talk about the known probabilities of cards, but also guesses as to what other players were to likely do.
Annie Duke's Thinking in Bets is all over this idea. A person flips one or two houses for a profit and assumes they're good at it, then goes on to lose their shirt. For one thing, too small a sample. For another, if they reviewed things they'd have noticed how much luck they required both times and realized that was proof they were terrible at flipping houses.
Poker players actually mock considering a series of hands. If some duffer won with a lucky inside straight two hands ago, you know they're going to go for it now (inside straights are "hot") and you can raise a little higher to take more money from them. But I couldn't say if that's more Bayesian or Frequentist.
|
"It was the correct play even though I lost"
|
"The Correct Play is the one that should have won" is a mantra in professional poker. Hearthstone players are probably borrowing it. From the top result of "Poker correct play" I found it expressed as
|
"It was the correct play even though I lost"
"The Correct Play is the one that should have won" is a mantra in professional poker. Hearthstone players are probably borrowing it. From the top result of "Poker correct play" I found it expressed as: If you’ve won money, it doesn’t mean you played the hand well. If you’ve lost money, it doesn’t mean that you played the hand badly.
A few results down I found a poker forum dedicated to this. The multiple deeply jargony replies to a "was this correct" hits it home how a whole culture thinks that can be determined regardless of actual results or number of hands played later. It's also interesting since they talk about the known probabilities of cards, but also guesses as to what other players were to likely do.
Annie Duke's Thinking in Bets is all over this idea. A person flipped one or two houses for a profit and assumes they're good at it, then goes on to lose their shirt. For one thing, too small a sample. For another, if they reviewed things they'd have noticed how much luck they required both times and realized that was proof they were terrible at flipping houses.
Poker players actually mock considering a series of hands. If some duffer won with a lucky inside straight two hands ago, you know they're going to go for it now (inside straights are "hot") and you can raise a little higher to take more money from them. But I couldn't say if that's more Bayesian or Frequentist.
|
"It was the correct play even though I lost"
"The Correct Play is the one that should have won" is a mantra in professional poker. Hearthstone players are probably borrowing it. From the top result of "Poker correct play" I found it expressed as
|
15,184
|
"It was the correct play even though I lost"
|
I don't think this is a question about frequentist vs. Bayesian either.
Some, in fact, argue that the frequentist approach is not solid enough for once-only experiments: what interest do I have in what happens to an experiment if I repeat it indefinitely, when I actually don't have the possibility or the intention to repeat it even once more?
And of course a lot of people think the Bayesian view of probability is more natural to most people. In the frequentist point of view, whether you are going to win or lose a hand is a fixed fact: there are your cards, there are the other players' cards, there is no misty randomness about them, and whoever is going to win is written in clear letters in the book of nature, which you can only try to read with a bit of uncertainty. Instead, more Bayesian-minded statisticians will tell you that, since you don't know the other players' hands, what you have, once you combine your prior knowledge of them and of the game in general with a look at your own cards, is an informed and revised distribution of your odds of winning. In fact, whether you win or not is indeed random in the Bayesian philosophy.
After you play and lose one game, I think that there is similarly little consolation in knowing that if you had made the same bets in some infinite loop you would have won most of the time, or in recalling that according to your Bayesian posterior information you had better chances of winning. What's the point anyway?
The point is that if you have a method that maximizes your chances of winning (if we talk about future experiments, it's probability in both frameworks, and it works the same), you stick to it. Because it maximizes your chances, no further reason needed. The play was correct because it was what the method suggested.
|
"It was the correct play even though I lost"
|
I don't think either that this is a question about frequentist vs bayesian.
There is someone, in fact, that argue that frequentist approach to the case of once-only experiments is not solid enough: wh
|
"It was the correct play even though I lost"
I don't think, either, that this is a question about frequentist vs Bayesian.
Some, in fact, argue that the frequentist approach to once-only experiments is not solid enough: what interest do I have in what happens to an experiment if I repeat it indefinitely, when I actually don't have the possibility or the intention of ever repeating it?
And of course a lot of people think the Bayesian view of probability is more natural to most of us. In the frequentist point of view, whether you are going to win or lose a hand is a fixed fact: there are your cards, there are the other players' cards, there is no misty randomness about them, and whoever is going to win is written in clear letters in the book of nature, which you can only try to read with a bit of uncertainty. Instead, more Bayesian-minded statisticians will tell you that, since you don't know the other players' hands, what you have, once you consider your prior knowledge of them and of the game in general, and after looking at your cards, is an informed and revised distribution of your odds of winning. In fact, whether you win or not is indeed random in the Bayesian philosophy.
After you play and lose one game, I think there is similarly little consolation in knowing that if you had made the same bets in some infinite loop you would have won most of the time, or in recalling that according to your Bayesian posterior information you had better chances of winning. What's the point anyway?
The point is that if you have a method that maximizes your chances of winning (if we talk about future experiments, it's probability in both frameworks, and it works the same), you stick to it. Because it maximizes your chances, no further reason needed. The play was correct because it was what the method suggested.
|
"It was the correct play even though I lost"
I don't think either that this is a question about frequentist vs bayesian.
There is someone, in fact, that argue that frequentist approach to the case of once-only experiments is not solid enough: wh
|
15,185
|
"It was the correct play even though I lost"
|
As others are saying, the problem has nothing to do with frequentist VS bayesian. The problem is that at the point of making the decision you don't have any information about whether it will be a win or a loss.
If you introduce that information into your framework, then you are leaving yourself open to hindsight bias (which IMO it is the elephant in the room here and hasn't really been acknowledged in the other answers).
Hence, if you don't consider the end result, you need to rely on your model/computed odds/whatever information you had available at decision time. And sadly, that means that sometimes you'll lose, even when making the correct play.
|
"It was the correct play even though I lost"
|
As others are saying, the problem has nothing to do with frequentist VS bayesian. The problem is that at the point of making the decision you don't have any information about whether it will be a win
|
"It was the correct play even though I lost"
As others are saying, the problem has nothing to do with frequentist VS bayesian. The problem is that at the point of making the decision you don't have any information about whether it will be a win or a loss.
If you introduce that information into your framework, then you are leaving yourself open to hindsight bias (which IMO it is the elephant in the room here and hasn't really been acknowledged in the other answers).
Hence, if you don't consider the end result, you need to rely on your model/computed odds/whatever information you had available at decision time. And sadly, that means that sometimes you'll lose, even when making the correct play.
|
"It was the correct play even though I lost"
As others are saying, the problem has nothing to do with frequentist VS bayesian. The problem is that at the point of making the decision you don't have any information about whether it will be a win
|
15,186
|
"It was the correct play even though I lost"
|
"The correct play" is the play from the strategy that you believe works out the best for you, calculated through some kind of loss function. If you have that strategy and stick with it, the math says you will do well.
If you get into "but but but..." then you no longer follow the winning strategy you've developed.
Your winning strategy will result in you getting burned sometimes, perhaps often enough that an unlucky streak will bankrupt you before you get back to making money. However, if you allow for "but but but" you no longer follow your winning strategy and no longer use the strategy optimized for the least loss.
|
"It was the correct play even though I lost"
|
"The correct play" is the play from the strategy that you believe works out the best for you, calculated through some kind of loss function. If you have that strategy and stick with it, the math says
|
"It was the correct play even though I lost"
"The correct play" is the play from the strategy that you believe works out the best for you, calculated through some kind of loss function. If you have that strategy and stick with it, the math says you will do well.
If you get into "but but but..." then you no longer follow the winning strategy you've developed.
Your winning strategy will result in you getting burned sometimes, perhaps often enough that an unlucky streak will bankrupt you before you get back to making money. However, if you allow for "but but but" you no longer follow your winning strategy and no longer use the strategy optimized for the least loss.
|
"It was the correct play even though I lost"
"The correct play" is the play from the strategy that you believe works out the best for you, calculated through some kind of loss function. If you have that strategy and stick with it, the math says
|
15,187
|
How to equalize the chance of throwing the highest dice? (Riddle)
|
Multiply by $\left(\frac{2(7)}{3+7}\right)^{1/3} = 1.1187$
More generally, suppose that player $A$ rolls $n$ times and player $B$ rolls $m$ times (without loss of generality, we assume $m \geq n$). As others have already noted, the (unscaled) score of player $A$ is
$$X \sim Beta(n, 1)$$
and the score of player $B$ is
$$Y \sim Beta(m, 1)$$
with $X$ and $Y$ independent. Thus, the joint distribution of $X$ and $Y$ is
$$f_{XY}(x, y) = nmx^{n-1}y^{m-1}, \ 0 < x, y < 1.$$
The goal is to find a constant $c$ such that
$$P(Y \geq cX) = \frac{1}{2}.$$
This probability can be found in terms of $c$, $n$ and $m$ as follows.
\begin{align*}
P(Y \geq cX) &= \int_0^{1/c}\int_{cx}^1 nmx^{n-1}y^{m-1}dydx \\[1.5ex] &= \cdots \\[1.5ex]
&= c^{-n}\left\{\frac{m}{n+m} \right\}
\end{align*}
Setting this equal to $1/2$ and solving for $c$ yields
$$c = \left(\frac{2m}{n+m}\right)^{1/n}.$$
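This value of $c$ is easy to sanity-check with a short Monte Carlo simulation. The Python sketch below (an illustration, not part of the original answer) takes the "dice" to be standard uniforms, matching the Beta derivation above, and estimates $P(Y \geq cX)$:

```python
import random

def win_rate(n, m, c, trials=200_000, seed=1):
    """Estimate P(B's best roll >= c * A's best roll) when
    A takes the max of n Uniform(0,1) draws and B the max of m."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = max(rng.random() for _ in range(n))  # A's score ~ Beta(n, 1)
        y = max(rng.random() for _ in range(m))  # B's score ~ Beta(m, 1)
        wins += (y >= c * x)
    return wins / trials

n, m = 3, 7
c = (2 * m / (n + m)) ** (1 / n)  # = 1.1187... for n=3, m=7
print(round(c, 4), round(win_rate(n, m, c), 3))
```

With $n = 3$ and $m = 7$ the estimated win rate lands within simulation error of $1/2$.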
|
How to equalize the chance of throwing the highest dice? (Riddle)
|
Multiply by $\left(\frac{2(7)}{3+7}\right)^{1/3} = 1.1187$
More generally, suppose that player $A$ rolls $n$ times and player $B$ rolls $m$ times (without loss of generality, we assume $m \geq n$).
|
How to equalize the chance of throwing the highest dice? (Riddle)
Multiply by $\left(\frac{2(7)}{3+7}\right)^{1/3} = 1.1187$
More generally, suppose that player $A$ rolls $n$ times and player $B$ rolls $m$ times (without loss of generality, we assume $m \geq n$). As others have already noted, the (unscaled) score of player $A$ is
$$X \sim Beta(n, 1)$$
and the score of player $B$ is
$$Y \sim Beta(m, 1)$$
with $X$ and $Y$ independent. Thus, the joint distribution of $X$ and $Y$ is
$$f_{XY}(x, y) = nmx^{n-1}y^{m-1}, \ 0 < x, y < 1.$$
The goal is to find a constant $c$ such that
$$P(Y \geq cX) = \frac{1}{2}.$$
This probability can be found in terms of $c$, $n$ and $m$ as follows.
\begin{align*}
P(Y \geq cX) &= \int_0^{1/c}\int_{cx}^1 nmx^{n-1}y^{m-1}dydx \\[1.5ex] &= \cdots \\[1.5ex]
&= c^{-n}\left\{\frac{m}{n+m} \right\}
\end{align*}
Setting this equal to $1/2$ and solving for $c$ yields
$$c = \left(\frac{2m}{n+m}\right)^{1/n}.$$
|
How to equalize the chance of throwing the highest dice? (Riddle)
Multiply by $\left(\frac{2(7)}{3+7}\right)^{1/3} = 1.1187$
More generally, suppose that player $A$ rolls $n$ times and player $B$ rolls $m$ times (without loss of generality, we assume $m \geq n$).
|
15,188
|
How to equalize the chance of throwing the highest dice? (Riddle)
|
I don't believe that a linear scaling factor will equalize the odds, or at least I cannot determine one. However, there is a power factor that can.
If you raise player A's score to the power $\frac{3}{7}$, you should have a fair game. Obviously, since scores are between 0 and 1, raising it to a power between 0 and 1 (not inclusive) will actually increase it.
Why?
The way I figure it, the probability of a score exceeding $S$ is equal to $1 - S^n$.
If we set $1 - S_1^{n_1} = 1 - S_2^{n_2}$
$$1-S_1^{n_1}=1-S_2^{n_2}$$
$$S_1^{n_1}=S_2^{n_2}$$
$$n_1 \log(S_1) = n_2 \log(S_2)$$
$$\log(S_1) = \frac{n_2}{n_1} \log(S_2)$$
$$S_1 = S_2^{\frac{n_2}{n_1}}$$
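The claim can be checked numerically. A minimal Python sketch (assuming standard-uniform scores, as elsewhere in this thread): after raising player A's max-of-3 score to the power $3/7$, both players' scores follow the same $\mathsf{Beta}(7,1)$ distribution, so each should win about half the time.

```python
import random

rng = random.Random(7)
trials = 200_000
wins = 0
for _ in range(trials):
    a = max(rng.random() for _ in range(3)) ** (3 / 7)  # A's transformed score
    b = max(rng.random() for _ in range(7))             # B's raw score
    wins += (a > b)
fair = wins / trials
print(round(fair, 3))  # should be close to 0.5
```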
|
How to equalize the chance of throwing the highest dice? (Riddle)
|
I don't believe that a linear scaling factor will equalize the odds, or at least I cannot determine one. However, there is a power factor that can.
If you raise player-A's score to the $\frac{3}{7}$
|
How to equalize the chance of throwing the highest dice? (Riddle)
I don't believe that a linear scaling factor will equalize the odds, or at least I cannot determine one. However, there is a power factor that can.
If you raise player A's score to the power $\frac{3}{7}$, you should have a fair game. Obviously, since scores are between 0 and 1, raising it to a power between 0 and 1 (not inclusive) will actually increase it.
Why?
The way I figure it, the probability of a score exceeding $S$ is equal to $1 - S^n$.
If we set $1 - S_1^{n_1} = 1 - S_2^{n_2}$
$$1-S_1^{n_1}=1-S_2^{n_2}$$
$$S_1^{n_1}=S_2^{n_2}$$
$$n_1 \log(S_1) = n_2 \log(S_2)$$
$$\log(S_1) = \frac{n_2}{n_1} \log(S_2)$$
$$S_1 = S_2^{\frac{n_2}{n_1}}$$
|
How to equalize the chance of throwing the highest dice? (Riddle)
I don't believe that a linear scaling factor will equalize the odds, or at least I cannot determine one. However, there is a power factor that can.
If you raise player-A's score to the $\frac{3}{7}$
|
15,189
|
How to equalize the chance of throwing the highest dice? (Riddle)
|
I'd like to try to put pieces of comments and answers together into a simulation, and into a plan for an analytic solution.
As @whuber says in his Comment, the maximum $X_1$ of three independent
standard uniform random variables has $X_1 \sim \mathsf{Beta}(3,1)$ and the maximum $X_2$ of seven independent
standard uniform random variables has $X_2 \sim \mathsf{Beta}(7,1).$ This is easy to prove analytically.
Then, as implied by @MikeP's Answer, $X_1^{3/7} \sim \mathsf{Beta}(7,1).$ This is also easy to prove analytically. Thus $X_2$ and $X_1^{3/7}$ have the same distribution.
Below are simulations in R of the distributions of $X_1, X_2,$ and $X_1^{3/7},$ each based on samples of size $100\,000.$
Histograms show the simulation results along with the
density functions of $\mathsf{Beta}(3,1)$ [red curve] and
$\mathsf{Beta}(7,1)$ [blue], as appropriate.
set.seed(1120)
x1 = replicate(10^5, max(runif(3)))
mean(x1)
[1] 0.7488232 # aprx E(X1) = 3/4
par(mfrow=c(1,3))
hist(x1, prob=T, col="skyblue2")
curve(dbeta(x,3,1), add=T, col="red", n=10001)
x2 = replicate(10^5, max(runif(7)))
mean(x2)
[1] 0.8746943 # aprx E(X2) = 7/8
hist(x2, prob=T, col="skyblue2")
curve(dbeta(x,7,1), add=T, col="blue", n=10001)
mean(x1^(3/7))
[1] 0.8743326 # aprx 7/8
hist(x1^(3/7), prob=T, col="skyblue2")
curve(dbeta(x,7,1), add=T, col="blue", n = 10001)
par(mfrow=c(1,1))
|
How to equalize the chance of throwing the highest dice? (Riddle)
|
I'd like to try to put pieces of comments and answers together into a simulation, and into a plan for an analytic solution.
As @whuber says in his Comment, the maximum $X_1$ of three independent
stand
|
How to equalize the chance of throwing the highest dice? (Riddle)
I'd like to try to put pieces of comments and answers together into a simulation, and into a plan for an analytic solution.
As @whuber says in his Comment, the maximum $X_1$ of three independent
standard uniform random variables has $X_1 \sim \mathsf{Beta}(3,1)$ and the maximum $X_2$ of seven independent
standard uniform random variables has $X_2 \sim \mathsf{Beta}(7,1).$ This is easy to prove analytically.
Then, as implied by @MikeP's Answer, $X_1^{3/7} \sim \mathsf{Beta}(7,1).$ This is also easy to prove analytically. Thus $X_2$ and $X_1^{3/7}$ have the same distribution.
Below are simulations in R of the distributions of $X_1, X_2,$ and $X_1^{3/7},$ each based on samples of size $100\,000.$
Histograms show the simulation results along with the
density functions of $\mathsf{Beta}(3,1)$ [red curve] and
$\mathsf{Beta}(7,1)$ [blue], as appropriate.
set.seed(1120)
x1 = replicate(10^5, max(runif(3)))
mean(x1)
[1] 0.7488232 # aprx E(X1) = 3/4
par(mfrow=c(1,3))
hist(x1, prob=T, col="skyblue2")
curve(dbeta(x,3,1), add=T, col="red", n=10001)
x2 = replicate(10^5, max(runif(7)))
mean(x2)
[1] 0.8746943 # aprx E(X2) = 7/8
hist(x2, prob=T, col="skyblue2")
curve(dbeta(x,7,1), add=T, col="blue", n=10001)
mean(x1^(3/7))
[1] 0.8743326 # aprx 7/8
hist(x1^(3/7), prob=T, col="skyblue2")
curve(dbeta(x,7,1), add=T, col="blue", n = 10001)
par(mfrow=c(1,1))
|
How to equalize the chance of throwing the highest dice? (Riddle)
I'd like to try to put pieces of comments and answers together into a simulation, and into a plan for an analytic solution.
As @whuber says in his Comment, the maximum $X_1$ of three independent
stand
|
15,190
|
How to equalize the chance of throwing the highest dice? (Riddle)
|
I did not solve the problem analytically, but I performed a simulation with 100 different $a/b$ ratios varying from 0.01 to 1, where $a$ is the number of dice of player A and $b$ is the number of dice of player B. For each ratio I simulated 1000 games and computed the multiplicative constant.
This is what I got:
For the dice I assumed a uniform distribution between 0 and 1.
If we take the same ratio, the expected value for the multiplicative constant is the same. I tested this with a ratio of $0.5$, multiplying $a$ and $b$ by factors up to 2000. Here are the results as a scatter plot and density distribution:
|
How to equalize the chance of throwing the highest dice? (Riddle)
|
I did not solve the problem analytically but I performed a simulation with 100 different $a/b$ ratios varying from 0.01 to 1. $a$ is the number of dice of player A and $b$ is the number of dice of pla
|
How to equalize the chance of throwing the highest dice? (Riddle)
I did not solve the problem analytically, but I performed a simulation with 100 different $a/b$ ratios varying from 0.01 to 1, where $a$ is the number of dice of player A and $b$ is the number of dice of player B. For each ratio I simulated 1000 games and computed the multiplicative constant.
This is what I got:
For the dice I assumed a uniform distribution between 0 and 1.
If we take the same ratio, the expected value for the multiplicative constant is the same. I tested this with a ratio of $0.5$, multiplying $a$ and $b$ by factors up to 2000. Here are the results as a scatter plot and density distribution:
|
How to equalize the chance of throwing the highest dice? (Riddle)
I did not solve the problem analytically but I performed a simulation with 100 different $a/b$ ratios varying from 0.01 to 1. $a$ is the number of dice of player A and $b$ is the number of dice of pla
|
15,191
|
Why Normality assumption in linear regression
|
We do choose other error distributions. You can in many cases do so fairly easily; if you are using maximum likelihood estimation, this will change the loss function. This is certainly done in practice.
Laplace (double exponential errors) correspond to least absolute deviations regression/$L_1$ regression (which numerous posts on site discuss). Regressions with t-errors are occasionally used (in some cases because they're more robust to gross errors), though they can have a disadvantage -- the likelihood (and therefore the negative of the loss) can have multiple modes.
Uniform errors correspond to an $L_\infty$ loss (minimize the maximum deviation); such regression is sometimes called Chebyshev approximation (though beware, since there's another thing with essentially the same name). Again, this is sometimes done (indeed for simple regression and smallish data sets with bounded errors with constant spread the fit is often easy enough to find by hand, directly on a plot, though in practice you can use linear programming methods, or other algorithms; indeed, $L_\infty$ and $L_1$ regression problems are duals of each other, which can lead to sometimes convenient shortcuts for some problems).
In fact, here's an example of a "uniform error" model fitted to data by hand:
It's easy to identify (by sliding a straightedge toward the data) that the four marked points are the only candidates for being in the active set; three of them will actually form the active set (and a little checking soon identifies which three lead to the narrowest band that encompasses all the data). The line at the center of that band (marked in red) is then the maximum likelihood estimate of the line.
Many other choices of model are possible and quite a few have been used in practice.
Note that if you have additive, independent, constant-spread errors with a density of the form $k\,\exp(-c.g(\varepsilon))$, maximizing the likelihood will correspond to minimizing $\sum_i g(e_i)$, where $e_i$ is the $i$th residual.
However, there are a variety of reasons that least squares is a popular choice, many of which don't require any assumption of normality.
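To make the error-distribution/loss correspondence concrete, here is a minimal Python sketch (an illustration, not from the original answer) for the simplest possible "regression", an intercept-only model: under squared loss (Gaussian errors) the best constant is the mean, while under absolute loss (Laplace errors) it is the median, which is exactly why the $L_1$ fit shrugs off a gross outlier.

```python
from statistics import mean, median

data = [1.0, 1.2, 1.1, 0.9, 8.0]  # one gross outlier

def best_constant(loss, lo=0.0, hi=10.0, steps=100_001):
    """Grid-search the constant c minimizing sum(loss(y - c))."""
    best_c, best_val = lo, float("inf")
    for i in range(steps):
        c = lo + (hi - lo) * i / (steps - 1)
        val = sum(loss(y - c) for y in data)
        if val < best_val:
            best_c, best_val = c, val
    return best_c

l2_fit = best_constant(lambda e: e * e)   # Gaussian errors -> mean
l1_fit = best_constant(abs)               # Laplace errors  -> median
print(round(l2_fit, 2), round(l1_fit, 2))
```

The grid search stands in for the maximum-likelihood optimizer; for a full regression problem you would minimize the same losses over slope and intercept instead of a single constant.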
|
Why Normality assumption in linear regression
|
We do choose other error distributions. You can in many cases do so fairly easily; if you are using maximum likelihood estimation, this will change the loss function. This is certainly done in practic
|
Why Normality assumption in linear regression
We do choose other error distributions. You can in many cases do so fairly easily; if you are using maximum likelihood estimation, this will change the loss function. This is certainly done in practice.
Laplace (double exponential errors) correspond to least absolute deviations regression/$L_1$ regression (which numerous posts on site discuss). Regressions with t-errors are occasionally used (in some cases because they're more robust to gross errors), though they can have a disadvantage -- the likelihood (and therefore the negative of the loss) can have multiple modes.
Uniform errors correspond to an $L_\infty$ loss (minimize the maximum deviation); such regression is sometimes called Chebyshev approximation (though beware, since there's another thing with essentially the same name). Again, this is sometimes done (indeed for simple regression and smallish data sets with bounded errors with constant spread the fit is often easy enough to find by hand, directly on a plot, though in practice you can use linear programming methods, or other algorithms; indeed, $L_\infty$ and $L_1$ regression problems are duals of each other, which can lead to sometimes convenient shortcuts for some problems).
In fact, here's an example of a "uniform error" model fitted to data by hand:
It's easy to identify (by sliding a straightedge toward the data) that the four marked points are the only candidates for being in the active set; three of them will actually form the active set (and a little checking soon identifies which three lead to the narrowest band that encompasses all the data). The line at the center of that band (marked in red) is then the maximum likelihood estimate of the line.
Many other choices of model are possible and quite a few have been used in practice.
Note that if you have additive, independent, constant-spread errors with a density of the form $k\,\exp(-c.g(\varepsilon))$, maximizing the likelihood will correspond to minimizing $\sum_i g(e_i)$, where $e_i$ is the $i$th residual.
However, there are a variety of reasons that least squares is a popular choice, many of which don't require any assumption of normality.
|
Why Normality assumption in linear regression
We do choose other error distributions. You can in many cases do so fairly easily; if you are using maximum likelihood estimation, this will change the loss function. This is certainly done in practic
|
15,192
|
Why Normality assumption in linear regression
|
The normal/Gaussian assumption is often used because it is the most computationally convenient choice. Computing the maximum likelihood estimate of the regression coefficients is a quadratic minimization problem, which can be solved using pure linear algebra. Other choices of noise distributions yield more complicated optimization problems which typically have to be solved numerically. In particular, the problem may be non-convex, yielding additional complications.
Normality is not necessarily a good assumption in general. The normal distribution has very light tails, and this makes the regression estimate quite sensitive to outliers. Alternatives such as the Laplace or Student's t distributions are often superior if measurement data contain outliers.
See Peter Huber's seminal book Robust Statistics for more information.
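For instance, with one predictor the Gaussian-ML/least-squares objective is quadratic in the two coefficients, so setting its gradient to zero gives the normal equations in closed form. A stdlib-Python sketch (illustrative made-up data):

```python
# Closed-form least-squares fit for simple regression: because the
# Gaussian-ML objective is quadratic in (slope, intercept), the minimizer
# solves the linear normal equations directly -- no numerical optimizer.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # exactly y = 1 + 2x

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(slope, intercept)  # -> 2.0 1.0
```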
|
Why Normality assumption in linear regression
|
The normal/Gaussian assumption is often used because it is the most computationally convenient choice. Computing the maximum likelihood estimate of the regression coefficients is a quadratic minimizat
|
Why Normality assumption in linear regression
The normal/Gaussian assumption is often used because it is the most computationally convenient choice. Computing the maximum likelihood estimate of the regression coefficients is a quadratic minimization problem, which can be solved using pure linear algebra. Other choices of noise distributions yield more complicated optimization problems which typically have to be solved numerically. In particular, the problem may be non-convex, yielding additional complications.
Normality is not necessarily a good assumption in general. The normal distribution has very light tails, and this makes the regression estimate quite sensitive to outliers. Alternatives such as the Laplace or Student's t distributions are often superior if measurement data contain outliers.
See Peter Huber's seminal book Robust Statistics for more information.
|
Why Normality assumption in linear regression
The normal/Gaussian assumption is often used because it is the most computationally convenient choice. Computing the maximum likelihood estimate of the regression coefficients is a quadratic minimizat
|
15,193
|
Why Normality assumption in linear regression
|
When working with these hypotheses, squared-error-based regression and maximum likelihood provide you with the same solution. You are also able to get simple F-tests for coefficient significance, as well as confidence intervals for your predictions.
In conclusion, the reason why we often choose the normal distribution is its properties, which often make things easy. It is also not a very restrictive assumption, as many other types of data will behave "kind-of-normally".
Anyway, as mentioned in a previous answer, there are possibilities to define regression models for other distributions. The normal just happens to be the most recurrent one.
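A small numerical illustration of the first claim (a sketch with made-up data, not from the original answer): minimizing the sum of squared errors and maximizing the Gaussian likelihood pick out the same slope, because the negative log-likelihood is just an affine transformation of the SSE.

```python
import math

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 2.2, 3.9, 6.1]

def sse(b):
    # Sum of squared errors for a no-intercept line y = b*x
    return sum((y - b * x) ** 2 for x, y in zip(xs, ys))

def nll(b, sigma=1.0):
    # Gaussian negative log-likelihood with fixed variance:
    # a constant plus SSE/(2*sigma^2), hence the same argmin as sse
    return sum(0.5 * math.log(2 * math.pi * sigma ** 2)
               + (y - b * x) ** 2 / (2 * sigma ** 2)
               for x, y in zip(xs, ys))

grid = [i / 1000 for i in range(4001)]  # candidate slopes 0.000 .. 4.000
b_sse = min(grid, key=sse)
b_mle = min(grid, key=nll)
print(b_sse, b_mle, b_sse == b_mle)
```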
|
Why Normality assumption in linear regression
|
When working with those hypothesis, squared-erros based regression and maximum likelihood provide you the same solution. You are also capable of getting simple F-tests for coefficient significance, as
|
Why Normality assumption in linear regression
When working with these hypotheses, squared-error-based regression and maximum likelihood provide you with the same solution. You are also able to get simple F-tests for coefficient significance, as well as confidence intervals for your predictions.
In conclusion, the reason why we often choose the normal distribution is its properties, which often make things easy. It is also not a very restrictive assumption, as many other types of data will behave "kind-of-normally".
Anyway, as mentioned in a previous answer, there are possibilities to define regression models for other distributions. The normal just happens to be the most recurrent one.
|
Why Normality assumption in linear regression
When working with those hypothesis, squared-erros based regression and maximum likelihood provide you the same solution. You are also capable of getting simple F-tests for coefficient significance, as
|
15,194
|
Why Normality assumption in linear regression
|
Glen_b has explained nicely that OLS regression can be generalized (maximizing likelihood instead of minimizing sum of squares) and we do choose other distributions.
However, why is the normal distribution chosen so often?
The reason is that the normal distribution occurs in many places naturally. It is a bit the same like we often see the golden ratio or the Fibonacci numbers occurring "spontaneously" at various places in nature.
The normal distribution is the limiting distribution for a sum of variables with finite variance (or less strict restrictions are possible as well). And, without taking the limit, it is also a good approximation for a sum of a finite number of variables. So, because many observed errors occur as a sum of many little unobserved errors, the normal distribution is a good approximation.
See also here Importance of normal distribution
where Galton's bean machines show the principle intuitively
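The sum-of-small-errors picture is easy to simulate. A quick seeded Python sketch (an illustration, not from the original answer): each observed "error" is a sum of centered uniforms, and the sample moments match what the central limit theorem predicts for the approximating normal.

```python
import random
from statistics import mean, stdev

rng = random.Random(42)
k = 12          # number of small unobserved error components
trials = 50_000

# Each centered uniform on (-0.5, 0.5) has variance 1/12, so the sum of
# k = 12 of them should have mean 0 and standard deviation sqrt(12/12) = 1.
sums = [sum(rng.random() - 0.5 for _ in range(k)) for _ in range(trials)]
print(round(mean(sums), 2), round(stdev(sums), 2))
```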
|
Why Normality assumption in linear regression
|
Glen_b has explained nicely that OLS regression can be generalized (maximizing likelihood instead of minimizing sum of squares) and we do choose other distributions.
However, why is the normal distri
|
Why Normality assumption in linear regression
Glen_b has explained nicely that OLS regression can be generalized (maximizing likelihood instead of minimizing sum of squares) and we do choose other distributions.
However, why is the normal distribution chosen so often?
The reason is that the normal distribution occurs in many places naturally. It is a bit the same like we often see the golden ratio or the Fibonacci numbers occurring "spontaneously" at various places in nature.
The normal distribution is the limiting distribution for a sum of variables with finite variance (or less strict restrictions are possible as well). And, without taking the limit, it is also a good approximation for a sum of a finite number of variables. So, because many observed errors occur as a sum of many little unobserved errors, the normal distribution is a good approximation.
See also here Importance of normal distribution
where Galton's bean machines show the principle intuitively
|
Why Normality assumption in linear regression
Glen_b has explained nicely that OLS regression can be generalized (maximizing likelihood instead of minimizing sum of squares) and we do choose other distributions.
However, why is the normal distri
|
15,195
|
Why Normality assumption in linear regression
|
Why we don't choose other distributions?—we do.
Regression means modeling a continuous value given a set of inputs. Consider training examples consisting of a target scalar $y_i \in \mathbb R$ and an input vector $x_i \in \mathbb R^n$. Let the prediction of the target given $x_i$ be
$$\hat y_i = w^\intercal x_i.$$
The surprisal loss is usually the most sensible loss:
$$L = -\log P(y_i \mid x_i).$$
You can think of linear regression as using a normal density with fixed variance in the above equation:
$$L = -\log P(y_i \mid x_i) \propto (y_i - \hat y_i)^2.$$
This leads to the weight update:
$$\nabla_w L = (\hat y_i - y_i)x_i $$
In general, if you use another exponential family distribution, this model is called a generalized linear model. The different distribution corresponds to a different density, but it can be more easily formalized by changing the prediction, the weight, and the target.
The weight is changed to a matrix $W \in \mathbb R^{n\times k}$. The prediction is changed to
$$\hat u_i \triangleq \nabla g(W x_i)$$
where $\nabla g: \mathbb R^k \to \mathbb R^k$ is called the link function or gradient log-normalizer. And, the target $y_i$ is changed to a vector called sufficient statistics $u_i = T(y_i) \in \mathbb R^k$.
Each link function and sufficient statistics corresponds to a different distributional assumption, which is what your question is about. To see why, let's look at a continuous-valued exponential family's density function with natural parameters $\eta$:
$$f(z) = h(z)\exp(\eta^\intercal T(z) - g(\eta)).$$
Let the natural parameters $\eta$ be $w^\intercal x_i$, and evaluate the density at the observed target $z = y_i$. Then, the loss gradient is
$$\begin{align}
\nabla_W L &= \nabla_W\left[-\log f(y_i)\right] \\
&= (\nabla g(W x_i)) x_i^\intercal - T(y_i) x_i^\intercal \\
&= (\hat u_i - u_i) x_i^\intercal
\end{align},$$
which has the same nice form as linear regression.
As far as I know, the gradient log-normalizer can be any monotonic, analytic function, and any monotonic, analytic function is the gradient log-normalizer of some exponential family.
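As a concrete instance (an illustrative sketch, not part of the original answer): for a Bernoulli target the gradient log-normalizer $\nabla g$ is the sigmoid, and $(\hat u_i - u_i)x_i^\intercal$ becomes the familiar logistic-regression gradient. In stdlib Python, on a tiny made-up dataset:

```python
import math

# Toy GLM: Bernoulli target with sigmoid link, so the per-example gradient
# is (sigmoid(w.x) - y) * x -- the same (u_hat - u) x form as above.
xs = [[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]]  # bias + feature
ys = [0.0, 0.0, 1.0, 1.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [0.0, 0.0]
lr = 0.5
for _ in range(2000):
    for x, y in zip(xs, ys):
        u_hat = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for j in range(len(w)):
            w[j] -= lr * (u_hat - y) * x[j]   # (u_hat - u) * x update

preds = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for x in xs]
print([round(p, 2) for p in preds])  # close to the targets 0, 0, 1, 1
```

Swapping the link and sufficient statistics (identity link, $T(y) = y$, Gaussian density) recovers exactly the linear-regression update shown earlier.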
|
Why Normality assumption in linear regression
|
Why we don't choose other distributions?—we do.
Regression means modeling a continuous values given a set of inputs. Consider training examples consisting of a target scalar $y_i \in \mathbb R$ and a
|
Why Normality assumption in linear regression
Why we don't choose other distributions?—we do.
Regression means modeling a continuous value given a set of inputs. Consider training examples consisting of a target scalar $y_i \in \mathbb R$ and an input vector $x_i \in \mathbb R^n$. Let the prediction of the target given $x_i$ be
$$\hat y_i = w^\intercal x_i.$$
The surprisal loss is usually the most sensible loss:
$$L = -\log P(y_i \mid x_i).$$
You can think of linear regression as using a normal density with fixed variance in the above equation:
$$L = -\log P(y_i \mid x_i) \propto (y_i - \hat y_i)^2.$$
This leads to the weight update:
$$\nabla_w L = (\hat y_i - y_i)x_i $$
In general, if you use another exponential family distribution, this model is called a generalized linear model. The different distribution corresponds to a different density, but it can be more easily formalized by changing the prediction, the weight, and the target.
The weight is changed to a matrix $W \in \mathbb R^{n\times k}$. The prediction is changed to
$$\hat u_i \triangleq \nabla g(W x_i)$$
where $\nabla g: \mathbb R^k \to \mathbb R^k$ is called the link function or gradient log-normalizer. And, the target $y_i$ is changed to a vector called sufficient statistics $u_i = T(y_i) \in \mathbb R^k$.
Each link function and sufficient statistics corresponds to a different distributional assumption, which is what your question is about. To see why, let's look at a continuous-valued exponential family's density function with natural parameters $\eta$:
$$f(z) = h(z)\exp(\eta^\intercal T(z) - g(\eta)).$$
Let the natural parameters $\eta$ be $w^\intercal x_i$, and evaluate the density at the observed target $z = y_i$. Then, the loss gradient is
$$\begin{align}
\nabla_W L &= -\nabla_W \log f(y_i) \\
&= (\nabla g(W x_i)) x_i^\intercal - T(y_i) x_i^\intercal \\
&= (\hat u_i - u_i) x_i^\intercal
\end{align},$$
which has the same nice form as linear regression.
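As a concrete instance of this general form, here is a sketch of Poisson regression (one member of the family; the data and step size are invented for illustration). With $T(y) = y$ and gradient log-normalizer $\exp(\cdot)$, the update keeps the form $(\hat u_i - u_i)x_i^\intercal$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Poisson regression as a GLM sketch: T(y) = y and the gradient
# log-normalizer is exp, so the prediction is u_hat = exp(w . x) and the
# gradient keeps the (u_hat - u) x form from linear regression.
n, d = 500, 2
X = rng.normal(scale=0.5, size=(n, d))
w_true = np.array([0.8, -0.4])              # illustrative parameters
y = rng.poisson(np.exp(X @ w_true))

w = np.zeros(d)
lr = 0.1
for _ in range(500):
    u_hat = np.exp(X @ w)                   # predicted sufficient statistic
    w -= lr * X.T @ (u_hat - y) / n         # averaged (u_hat - u) x

print(np.round(w, 2))                       # roughly w_true
```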
As far as I know, the gradient log-normalizer can be any monotonic, analytic function, and any monotonic, analytic function is the gradient log-normalizer of some exponential family.
|
Why Normality assumption in linear regression
Why don't we choose other distributions? We do.
Regression means modeling a continuous value given a set of inputs. Consider training examples consisting of a target scalar $y_i \in \mathbb R$ and a
|
15,196
|
"-iles" terminology for the top half a percent
|
Historically, and to the present, the upper or third quartile (for example) is the value exceeded by just 25% of values. (I only ever see informal use of "top" for this meaning.)
By extension, the interval or bin between the upper or third quartile and the maximum is often also called the upper quartile, and sometimes the fourth quartile. More generally, $k$ breakpoints define $k + 1$ groups. The word "quarter" is also available and perhaps preferable.
Some might quibble at this laxness of terminology and prefer (or even insist on) terminology such as bin or interval whenever bins or intervals are in question. More positively, disambiguation of two related senses is usually not too difficult. If there is talk of people in the top quartile of course performance or BMI or whatever, it is clear what is intended.
Similar comments apply here to deciles and percentiles. Other terms in varying use are tertiles (rare?), quintiles (common), sextiles (rare?) and octiles (uncommon but not rare). The qualifications here are based on my haphazard reading and memory.
Latin is no longer as familiar as its most enthusiastic proponents would like and these terms are challenging to many. More positively, there seems to be a growing convergence on quantile as a standard term, with the numerical definitions simply stated explicitly. Thus I'd expect to see references to the $5, 1, 0.5$% points or quantiles and similarly the upper $95, 99, 99.5$% points or quantiles. In practice I see no use of, and in principle see no need for, terms in Latin (or Greek or any other language) for most such values or the bins they define. Concretely, anyone knowing how to interpret "the top half-percentile" is likely to find "above the 99.5% point" simpler to use.
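"The top half a percent" translates directly into "values above the 0.995 quantile". A small numpy sketch (the sample here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=100_000)      # illustrative sample

# "Top half a percent" = values above the 99.5% point (0.995 quantile).
cutoff = np.quantile(data, 0.995)
top = data[data > cutoff]

print(cutoff)                        # near 2.58 for a standard normal
print(len(top))                      # 0.5% of the sample
```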
EDIT 5 October 2016
Aronson (2001) documented first uses of various terms for quantiles. The list here includes some earlier dates from searches of the Oxford English Dictionary and www.jstor.org on 5 October 2016. The dates refer to earliest citations of the terms with their statistical meaning and not to other meanings. The general term quantile itself is often attributed to Kendall (1940) but can be found in Fisher and Yates (1938).
English ordinal    Statistical term     Earliest citation    2016+ additions
                                        (Aronson)            (Cox)
Third              Tertile              1931                 1911
                   Tercile              1942
Fourth             Quartile             1879
Fifth              Quintile             1951                 1910
Sixth              Sextile                                   1920
Seventh            Septile              1993                 1981
Eighth             Octile               1879
Ninth              Nonile                                    1968
Tenth              Decile               1881
                   Decentile (***)                          1988
Sixteenth          Suboctile            1880
                   Hexadecile (*)                           2001
Twentieth          Vigintile            1936
                   Ventile (**)
Thirtieth          Trentile                                  1958
Fortieth           Quadragintile        1976
Hundredth          Percentile           1885
                   Centile              1902                 1894
Thousandth         Permille             1904
Aronson, J. K. 2001. Francis Galton and the invention of terms for quantiles.
Journal of Clinical Epidemiology 54: 1191-1194.
Fisher, R. A. and Yates, F. 1938. Statistical Tables for Biological, Agricultural and Medical Research. Edinburgh: Oliver and Boyd.
Kendall, M. G. 1940. Note on the distribution of quantiles for large samples.
Supplement to the Journal of the Royal Statistical Society 7: 83-85.
EDIT 22 Dec 2016 The historical information above is now written up within Cox, N.J. 2016. Letter values as selected quantiles. Stata Journal 16: 1058-1071 http://www.stata-journal.com/article.html?article=st0465
EDIT 20 June 2017 Added "trentile" reference. Slonim, M.J. 1958. The trentile deviation method of weather forecast evaluation. Journal of the American Statistical Association 53: 398–407. http://www.jstor.org/stable/2281863
EDIT 7 Aug 2019 Another reference for trentile is Panofsky, H.A. and Brier, G.W. 1958. Some Applications of Statistics to Meteorology. University Park, PA: College of Mineral Industries, Pennsylvania State University. They refer to use in World War II.
EDIT 9 Jan 2021 Quartile, sextile and octile are in the first edition of Samuel Johnson's Dictionary (1755), but with astronomical meanings. None of the other terms is.
EDIT 29 Jan 2021 (*) Hexadecile is recorded from 2001 (link courtesy @whuber).
EDIT 5 Feb 2021 (**) Earliest use of ventile in this sense is hard to spot among quite different meanings.
EDIT 25 Sept 2021 (***) Added decentile (not in OED, some JSTOR hits).
|
"-iles" terminology for the top half a percent
|
Historically, and to the present, the upper or third quartile (for example) is the value exceeded by just 25% of values. (I only ever see informal use of "top" for this meaning.)
By extension, the in
|
"-iles" terminology for the top half a percent
Historically, and to the present, the upper or third quartile (for example) is the value exceeded by just 25% of values. (I only ever see informal use of "top" for this meaning.)
By extension, the interval or bin between the upper or third quartile and the maximum is often also called the upper quartile, and sometimes the fourth quartile. More generally, $k$ breakpoints define $k + 1$ groups. The word "quarter" is also available and perhaps preferable.
Some might quibble at this laxness of terminology and prefer (or even insist on) terminology such as bin or interval whenever bins or intervals are in question. More positively, disambiguation of two related senses is usually not too difficult. If there is talk of people in the top quartile of course performance or BMI or whatever, it is clear what is intended.
Similar comments apply here to deciles and percentiles. Other terms in varying use are tertiles (rare?), quintiles (common), sextiles (rare?) and octiles (uncommon but not rare). The qualifications here are based on my haphazard reading and memory.
Latin is no longer as familiar as its most enthusiastic proponents would like and these terms are challenging to many. More positively, there seems to be a growing convergence on quantile as a standard term, with the numerical definitions simply stated explicitly. Thus I'd expect to see references to the $5, 1, 0.5$% points or quantiles and similarly the upper $95, 99, 99.5$% points or quantiles. In practice I see no use of, and in principle see no need for, terms in Latin (or Greek or any other language) for most such values or the bins they define. Concretely, anyone knowing how to interpret "the top half-percentile" is likely to find "above the 99.5% point" simpler to use.
EDIT 5 October 2016
Aronson (2001) documented first uses of various terms for quantiles. The list here includes some earlier dates from searches of the Oxford English Dictionary and www.jstor.org on 5 October 2016. The dates refer to earliest citations of the terms with their statistical meaning and not to other meanings. The general term quantile itself is often attributed to Kendall (1940) but can be found in Fisher and Yates (1938).
English ordinal    Statistical term     Earliest citation    2016+ additions
                                        (Aronson)            (Cox)
Third              Tertile              1931                 1911
                   Tercile              1942
Fourth             Quartile             1879
Fifth              Quintile             1951                 1910
Sixth              Sextile                                   1920
Seventh            Septile              1993                 1981
Eighth             Octile               1879
Ninth              Nonile                                    1968
Tenth              Decile               1881
                   Decentile (***)                          1988
Sixteenth          Suboctile            1880
                   Hexadecile (*)                           2001
Twentieth          Vigintile            1936
                   Ventile (**)
Thirtieth          Trentile                                  1958
Fortieth           Quadragintile        1976
Hundredth          Percentile           1885
                   Centile              1902                 1894
Thousandth         Permille             1904
Aronson, J. K. 2001. Francis Galton and the invention of terms for quantiles.
Journal of Clinical Epidemiology 54: 1191-1194.
Fisher, R. A. and Yates, F. 1938. Statistical Tables for Biological, Agricultural and Medical Research. Edinburgh: Oliver and Boyd.
Kendall, M. G. 1940. Note on the distribution of quantiles for large samples.
Supplement to the Journal of the Royal Statistical Society 7: 83-85.
EDIT 22 Dec 2016 The historical information above is now written up within Cox, N.J. 2016. Letter values as selected quantiles. Stata Journal 16: 1058-1071 http://www.stata-journal.com/article.html?article=st0465
EDIT 20 June 2017 Added "trentile" reference. Slonim, M.J. 1958. The trentile deviation method of weather forecast evaluation. Journal of the American Statistical Association 53: 398–407. http://www.jstor.org/stable/2281863
EDIT 7 Aug 2019 Another reference for trentile is Panofsky, H.A. and Brier, G.W. 1958. Some Applications of Statistics to Meteorology. University Park, PA: College of Mineral Industries, Pennsylvania State University. They refer to use in World War II.
EDIT 9 Jan 2021 Quartile, sextile and octile are in the first edition of Samuel Johnson's Dictionary (1755), but with astronomical meanings. None of the other terms is.
EDIT 29 Jan 2021 (*) Hexadecile is recorded from 2001 (link courtesy @whuber).
EDIT 5 Feb 2021 (**) Earliest use of ventile in this sense is hard to spot among quite different meanings.
EDIT 25 Sept 2021 (***) Added decentile (not in OED, some JSTOR hits).
|
"-iles" terminology for the top half a percent
Historically, and to the present, the upper or third quartile (for example) is the value exceeded by just 25% of values. (I only ever see informal use of "top" for this meaning.)
By extension, the in
|
15,197
|
"-iles" terminology for the top half a percent
|
The general term for these segments is 'quantile', i.e. the top 0.005 quantile is the data segment you are looking for. Quantiles are in a range of [0, 1]. We have separate names for the notable/frequently used quantiles (terciles, quartiles, percentiles, etc.), but we don't have one for the rest. Technically I guess you can come up with a name for them if you know Latin, like 'bicentile' but no one would understand it and you would end up explaining it anyways.
|
"-iles" terminology for the top half a percent
|
The general term for these segments is 'quantile', i.e. the top 0.005 quantile is the data segment you are looking for. Quantiles are in a range of [0, 1]. We have separate names for the notable/frequ
|
"-iles" terminology for the top half a percent
The general term for these segments is 'quantile', i.e. the top 0.005 quantile is the data segment you are looking for. Quantiles are in a range of [0, 1]. We have separate names for the notable/frequently used quantiles (terciles, quartiles, percentiles, etc.), but we don't have one for the rest. Technically I guess you can come up with a name for them if you know Latin, like 'bicentile' but no one would understand it and you would end up explaining it anyways.
|
"-iles" terminology for the top half a percent
The general term for these segments is 'quantile', i.e. the top 0.005 quantile is the data segment you are looking for. Quantiles are in a range of [0, 1]. We have separate names for the notable/frequ
|
15,198
|
"-iles" terminology for the top half a percent
|
It's called the top half-percentile or upper half-percentile. Google
"top half-percentile"
or
"upper half-percentile"
to find these terms used in practice, most often in economics.
|
"-iles" terminology for the top half a percent
|
It's called the top half-percentile or upper half-percentile. Google
"top half-percentile"
or
"upper half-percentile"
to find these terms used in practice, most often in economics.
|
"-iles" terminology for the top half a percent
It's called the top half-percentile or upper half-percentile. Google
"top half-percentile"
or
"upper half-percentile"
to find these terms used in practice, most often in economics.
|
"-iles" terminology for the top half a percent
It's called the top half-percentile or upper half-percentile. Google
"top half-percentile"
or
"upper half-percentile"
to find these terms used in practice, most often in economics.
|
15,199
|
"-iles" terminology for the top half a percent
|
There are percent (%) and permille (‰), so you could say the top five permille.
However, the only occurrences of the latter's use I can find are by one set of authors in two articles at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4228404/ and https://openi.nlm.nih.gov/detailedresult.php?img=PMC4228404_1476-069X-12-92-1&req=4.
They may be the same occurrences found in 1st percentile, 2nd percentile… But how to say “2.5th” percentile?
|
"-iles" terminology for the top half a percent
|
There are percent (%) and permille (‰), so you could say the top five permille.
However, the only occurrences of the latter's use I can find are by one set of authors in two articles at http://www.ncbi.
|
"-iles" terminology for the top half a percent
There are percent (%) and permille (‰), so you could say the top five permille.
However, the only occurrences of the latter's use I can find are by one set of authors in two articles at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4228404/ and https://openi.nlm.nih.gov/detailedresult.php?img=PMC4228404_1476-069X-12-92-1&req=4.
They may be the same occurrences found in 1st percentile, 2nd percentile… But how to say “2.5th” percentile?
|
"-iles" terminology for the top half a percent
There are percent (%) and permille (‰), so you could say the top five permille.
However, the only occurrences of the latter's use I can find are by one set of authors in two articles at http://www.ncbi.
|
15,200
|
A 6th response option ("I don't know") was added to a 5-point Likert scale. Is the data lost?
|
Why try to force a calibration onto something that is not true? As Maarten said, this is not a loss of data but a gain of information. If the magical pill you are looking for existed, it would mean making assumptions about your population, for example a bias in favor of one particular label even though users say "I don't know".
I totally understand your frustration, but the proper way to approach the problem is to modify the model to suit your needs based on the true existing data, not the other way around (modifying the data).
|
A 6th response option ("I don't know") was added to a 5-point Likert scale. Is the data lost?
|
Why try to force a calibration on something which is not true? As Maarten said, this is not a loss of data but a gain of information. If the magical pill you are looking for exists, it would mean that
|
A 6th response option ("I don't know") was added to a 5-point Likert scale. Is the data lost?
Why try to force a calibration onto something that is not true? As Maarten said, this is not a loss of data but a gain of information. If the magical pill you are looking for existed, it would mean making assumptions about your population, for example a bias in favor of one particular label even though users say "I don't know".
I totally understand your frustration, but the proper way to approach the problem is to modify the model to suit your needs based on the true existing data, not the other way around (modifying the data).
|
A 6th response option ("I don't know") was added to a 5-point Likert scale. Is the data lost?
Why try to force a calibration on something which is not true? As Maarten said, this is not a loss of data but a gain of information. If the magical pill you are looking for exists, it would mean that
|