How to plot trends properly

One more version: ratios of (mean death rate from 1927 to the current year) to (death rate in 1927), done with the following Mathematica code:
data = {
    {year, de, fr, be, nl, den, ch, aut, cz, pl},
    {1927, 10.9, 16.5, 13.0, 10.2, 11.6, 12.4, 15.0, 16.0, 17.3},
    {1928, 11.2, 16.4, 12.8, 9.6, 11.0, 12.0, 14.5, 15.1, 16.4},
    {1929, 11.4, 17.9, 14.4, 10.7, 11.2, 12.5, 14.6, 15.5, 16.7},
    {1930, 10.4, 15.6, 12.8, 9.1, 10.8, 11.6, 13.5, 14.2, 15.6},
    {1931, 10.4, 16.2, 12.7, 9.6, 11.4, 12.1, 14.0, 14.4, 15.5},
    {1932, 10.2, 15.8, 12.7, 9.0, 11.0, 12.2, 13.9, 14.1, 15.0},
    {1933, 10.8, 15.8, 12.7, 8.8, 10.6, 11.4, 13.2, 13.7, 14.2},
    {1934, 10.6, 15.1, 11.7, 8.4, 10.4, 11.3, 12.7, 13.2, 14.4},
    {1935, 11.4, 15.7, 12.3, 8.7, 11.1, 12.1, 13.7, 13.5, 14.0},
    {1936, 11.7, 15.3, 12.2, 8.7, 11.0, 11.4, 13.2, 13.3, 14.2},
    {1937, 11.5, 15.0, 12.5, 8.8, 10.8, 11.3, 13.3, 13.3, 14.0}
  };
ListPlot[
  Map[
    (* for each country series: running mean over the first k years,
       divided by the 1927 value, paired with the year *)
    Table[{First[data[[k + 1]]], Mean[Take[#, k]]/First[#]}, {k, Length[#]}] &,
    Map[Rest, Rest[Transpose[data]]]  (* drop the year column, then the header symbols *)
  ],
  Joined -> True,
  PlotRange -> All,
  Frame -> True,
  FrameTicks -> {Map[First, Rest[data]], Automatic},
  PlotLabels -> Rest[First[data]],
  AxesOrigin -> {First[First[Rest[data]]], 1}
]
(Peaks in 1929 seem to be related to a flu pandemic that occurred around that time)
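For readers without Mathematica, the same transformation can be sketched in a few lines of Python. This is only an illustration of what the Table expression computes, applied to a single series (the German column):

```python
# Ratio of (mean death rate from 1927 to the current year)
# to (death rate in 1927), for the German (de) column above.
years = [1927, 1928, 1929, 1930, 1931, 1932, 1933, 1934, 1935, 1936, 1937]
de = [10.9, 11.2, 11.4, 10.4, 10.4, 10.2, 10.8, 10.6, 11.4, 11.7, 11.5]

ratios = []
running_sum = 0.0
for k, rate in enumerate(de, start=1):
    running_sum += rate
    # running mean of the first k values, divided by the 1927 value
    ratios.append((years[k - 1], (running_sum / k) / de[0]))

print(ratios[0])  # (1927, 1.0) -- the first ratio is always 1 by construction
```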
What exactly is the alpha in the Dirichlet distribution?

The Dirichlet distribution is a multivariate probability distribution that describes $k\ge2$ variables $X_1,\dots,X_k$, such that each $x_i \in (0,1)$ and $\sum_{i=1}^k x_i = 1$, and that is parametrized by a vector of positive-valued parameters $\boldsymbol{\alpha} = (\alpha_1,\dots,\alpha_k)$. The parameters do not have to be integers; they only need to be positive real numbers. They are not "normalized" in any way; they are simply the parameters of this distribution.
The Dirichlet distribution is a generalization of the beta distribution to multiple dimensions, so you can start by learning about the beta distribution. Beta is a univariate distribution of a random variable $X \in (0,1)$, parameterized by $\alpha$ and $\beta$. A nice intuition comes from recalling that it is a conjugate prior for the binomial distribution: if we assume a beta prior parameterized by $\alpha$ and $\beta$ for the binomial distribution's probability parameter $p$, then the posterior distribution of $p$ is also a beta distribution, parameterized by $\alpha' = \alpha + \text{number of successes}$ and $\beta' = \beta + \text{number of failures}$. So you can think of $\alpha$ and $\beta$ as pseudocounts (they do not need to be integers) of successes and failures (check also this thread).
The Dirichlet distribution, in turn, is a conjugate prior for the multinomial distribution. If the binomial distribution can be thought of in terms of drawing white and black balls with replacement from an urn, then in the multinomial case we draw, with replacement, $N$ balls appearing in $k$ colors, where each color can be drawn with probability $p_1,\dots,p_k$. The Dirichlet distribution is a conjugate prior for the probabilities $p_1,\dots,p_k$, and the parameters $\alpha_1,\dots,\alpha_k$ can be thought of as pseudocounts of balls of each color assumed a priori (but you should also read about the pitfalls of such reasoning). In the Dirichlet-multinomial model, $\alpha_1,\dots,\alpha_k$ get updated by adding the observed counts in each category, $\alpha_1+n_1,\dots,\alpha_k+n_k$, in the same fashion as in the beta-binomial model.
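The conjugate update described above is nothing more than adding the observed counts to the prior pseudocounts. A minimal Python sketch (the prior pseudocounts and observed counts here are hypothetical values chosen for illustration):

```python
# Dirichlet-multinomial conjugate update: posterior alphas are
# prior pseudocounts plus observed counts in each category.
prior_alpha = [2.0, 2.0, 2.0]  # assumed prior pseudocounts for k = 3 colors
counts = [10, 3, 7]            # hypothetical observed draws of each color

posterior_alpha = [a + n for a, n in zip(prior_alpha, counts)]

# Posterior mean of each p_i is alpha_i divided by the sum of all alphas
total = sum(posterior_alpha)
posterior_mean = [a / total for a in posterior_alpha]

print(posterior_alpha)  # [12.0, 5.0, 9.0]
print(posterior_mean)   # sums to 1, as probabilities must
```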
The higher the value of $\alpha_i$, the greater the "weight" of $X_i$ and the greater the amount of the total "mass" assigned to it (recall that in total it must be that $x_1+\dots+x_k=1$). If all the $\alpha_i$ are equal, the distribution is symmetric. If $\alpha_i < 1$, it can be thought of as an anti-weight that pushes $x_i$ toward the extremes, while a high $\alpha_i$ attracts $x_i$ toward some central value (central in the sense that all points are concentrated around it, not in the sense that it is symmetrically central). If $\alpha_1 = \dots = \alpha_k = 1$, the points are uniformly distributed.
This can be seen in the plots below, which show trivariate Dirichlet distributions (unfortunately we can produce reasonable plots only up to three dimensions) parameterized by (a) $\alpha_1 = \alpha_2 = \alpha_3 = 1$, (b) $\alpha_1 = \alpha_2 = \alpha_3 = 10$, (c) $\alpha_1 = 1, \alpha_2 = 10, \alpha_3 = 5$, (d) $\alpha_1 = \alpha_2 = \alpha_3 = 0.2$.
The Dirichlet distribution is sometimes called a "distribution over distributions", since it can be thought of as a distribution of probabilities themselves. Notice that since each $x_i \in (0,1)$ and $\sum_{i=1}^k x_i = 1$, the $x_i$'s are consistent with the first and second axioms of probability. So you can use the Dirichlet distribution as a distribution over the probabilities of discrete events described by distributions such as the categorical or multinomial. It is not a distribution over arbitrary distributions: for example, it is unrelated to the probabilities of continuous random variables, or even of some discrete ones (e.g. a Poisson-distributed random variable can take any natural number as its value, so to use a Dirichlet distribution over those probabilities you would need an infinite number of random variables $k$).
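The concentration effect of the $\alpha$'s is easy to check by sampling. A small sketch using NumPy (the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 draws from two symmetric trivariate Dirichlet distributions
flat = rng.dirichlet([1, 1, 1], size=10_000)     # uniform over the simplex
tight = rng.dirichlet([10, 10, 10], size=10_000) # concentrated near (1/3, 1/3, 1/3)

# Both have mean ~1/3 per coordinate, but the spreads differ sharply
print(flat[:, 0].mean(), tight[:, 0].mean())  # both close to 0.333
print(flat[:, 0].std(), tight[:, 0].std())    # the second is much smaller
```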
What exactly is the alpha in the Dirichlet distribution?

Disclaimer: I have never worked with this distribution before. This answer is based on the Wikipedia article and my interpretation of it.
The Dirichlet distribution is a multivariate probability distribution with similar properties to the Beta distribution.
The PDF is defined as follows:
$$\{x_1, \dots, x_K\} \sim\frac{1}{B(\boldsymbol{\alpha})}\prod_{i=1}^Kx_i^{\alpha_i - 1}$$
with $K \geq 2$, $x_i \in (0,1)$ and $\sum_{i=1}^Kx_i = 1$.
If we look at the closely related Beta distribution:
$$\{x_1, x_2 (=1-x_1)\} \sim \frac{1}{B(\alpha,\beta)}x_1^{\alpha-1}x_2^{\beta-1}$$
we can see that these two distributions are the same if $K=2$. So let's base our interpretation on that first and then generalise to $K>2$.
In Bayesian statistics, the beta distribution is used as a conjugate prior for the binomial parameter (see Beta distribution). The prior encodes some prior knowledge through $\alpha$ and $\beta$ (or, in line with the Dirichlet notation, $\alpha_1$ and $\alpha_2$). If a binomial trial then has $A$ successes and $B$ failures, the posterior distribution has parameters $\alpha_{1,post} = \alpha_1 + A$ and $\alpha_{2,post}=\alpha_2 + B$. (I won't work this out, as it is probably one of the first things you learn in Bayesian statistics.)
So the Beta distribution then represents some posterior distribution on $x_1$ and $x_2 (=1-x_1)$, which can be interpreted as the probability of successes and failures respectively in a Binomial distribution. And the more data ($A$ and $B$) you have, the narrower this posterior distribution will be.
Now that we know how the distribution works for $K=2$, we can generalise it to a multinomial distribution instead of a binomial. This means that instead of two possible outcomes (success or failure), we allow for $K$ outcomes (see why it reduces to Beta/Binomial when $K=2$?). Each of these $K$ outcomes has a probability $x_i$, and these probabilities sum to 1, as probabilities do.
$\alpha_i$ then takes a role similar to that of $\alpha_1$ and $\alpha_2$ in the beta distribution, as a prior for $x_i$, and gets updated in a similar fashion.
So now to get to your questions:
How do the alphas affect the distribution?
The distribution is bounded by the restrictions $x_i \in (0,1)$ and $\sum_{i=1}^Kx_i = 1$. The $\alpha_i$ determine which parts of the $K$-dimensional space get the most mass. You can see this in this image (not embedded here because I don't own the picture). The more data there is in the posterior (using that interpretation), the higher $\sum_{i=1}^K\alpha_i$, and the more certain you are of the values of the $x_i$, i.e. of the probabilities of each of the outcomes. This means that the density will be more concentrated.
How are the alphas being normalized?
The normalisation of the distribution (making sure the integral equals 1) goes through the term $B(\boldsymbol{\alpha})$:
$$B(\boldsymbol{\alpha}) = \frac{\prod_{i=1}^K\Gamma(\alpha_i)}{\Gamma(\sum_{i=1}^K\alpha_i)}$$
Again if we look at the case $K=2$ we can see that the normalising factor is the same as in the Beta distribution, which used the following:
$$B(\alpha_1, \alpha_2) = \frac{\Gamma(\alpha_1)\Gamma(\alpha_2)}{\Gamma(\alpha_1+\alpha_2)}$$
This extends to
$$B(\boldsymbol{\alpha}) = \frac{\Gamma(\alpha_1)\Gamma(\alpha_2)\dots\Gamma(\alpha_K)}{\Gamma(\alpha_1+\alpha_2+\dots+\alpha_K)}$$
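The normalising constant is easy to compute numerically. A small Python sketch, using log-gamma for numerical stability (the test values are arbitrary):

```python
from math import exp, lgamma


def dirichlet_B(alpha):
    """Multivariate beta function: B(alpha) = prod Gamma(a_i) / Gamma(sum a_i)."""
    return exp(sum(lgamma(a) for a in alpha) - lgamma(sum(alpha)))


# For K = 2 this reduces to the ordinary beta function:
# B(2, 3) = Gamma(2)Gamma(3)/Gamma(5) = 1*2/24 = 1/12
print(dirichlet_B([2, 3]))     # 0.0833...
print(dirichlet_B([1, 1, 1]))  # Gamma(1)^3 / Gamma(3) = 1/2
```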
What happens when the alphas are not integers?
The interpretation doesn't change for $\alpha_i>1$, but as you can see in the image linked above, if $\alpha_i < 1$ the mass of the distribution accumulates at the edges of the range of $x_i$. $K$, on the other hand, has to be an integer, with $K\geq2$.
When to use simulations?

A quantitative model emulates some behavior of the world by (a) representing objects by some of their numerical properties and (b) combining those numbers in a definite way to produce numerical outputs that also represent properties of interest.
In this schematic, three numerical inputs on the left are combined to produce one numerical output on the right. The number lines indicate possible values of the inputs and output; the dots show specific values in use. Nowadays digital computers usually perform the calculations, but they are not essential: models have been calculated with pencil-and-paper or by building "analog" devices in wood, metal, and electronic circuits.
As an example, perhaps the preceding model sums its three inputs. R code for this model might look like
inputs <- c(-1.3, 1.2, 0) # Specify inputs (three numbers)
output <- sum(inputs) # Run the model
print(output) # Display the output (a number)
Its output simply is a number,
-0.1
We cannot know the world perfectly: even if the model happens to work exactly the way the world does, our information is imperfect and things in the world vary. (Stochastic) simulations help us understand how such uncertainty and variation in the model inputs ought to translate into uncertainty and variation in the outputs. They do so by varying the inputs randomly, running the model for each variation, and summarizing the collective output.
"Randomly" does not mean arbitrarily. The modeler must specify (whether knowingly or not, whether explicitly or implicitly) the intended frequencies of all the inputs. The frequencies of the outputs provide the most detailed summary of the results.
The same model, shown with random inputs and the resulting (computed) random output.
The figure displays frequencies with histograms to represent distributions of numbers. The intended input frequencies are shown for the inputs at left, while the computed output frequency, obtained by running the model many times, is shown at right.
Each set of inputs to a deterministic model produces a predictable numeric output. When the model is used in a stochastic simulation, however, the output is a distribution (such as the long gray one shown at right). The spread of the output distribution tells us how the model outputs can be expected to vary when its inputs vary.
The preceding code example might be modified like this to turn it into a simulation:
n <- 1e5                              # Number of iterations
inputs <- rbind(rgamma(n, 3, 3) - 2,  # First input: shifted gamma
                runif(n, -2, 2),      # Second input: uniform
                rnorm(n, 0, 1/2))     # Third input: normal
output <- apply(inputs, 2, sum)       # Run the model on each column of inputs
hist(output, freq=FALSE, col="Gray")  # Summarize the output distribution
Its output has been summarized with a histogram of all the numbers generated by iterating the model with these random inputs:
Peering behind the scenes, we may inspect some of the many random inputs that were passed to this model:
rownames(inputs) <- c("First", "Second", "Third")
print(inputs[, 1:5], digits=2)
The output shows the first five out of $100,000$ iterations, with one column per iteration:
         [,1]  [,2]  [,3]  [,4]  [,5]
First   -1.62 -0.72 -1.11 -1.57 -1.25
Second   0.52  0.67  0.92  1.54  0.24
Third   -0.39  1.45  0.74 -0.48  0.33
Arguably, the answer to the second question is that simulations can be used everywhere. As a practical matter, the expected cost of running the simulation should be less than the likely benefit. What are the benefits of understanding and quantifying variability? There are two primary areas where this is important:
Seeking the truth, as in science and the law. A number by itself is useful, but it is far more useful to know how accurate or certain that number is.
Making decisions, as in business and daily life. Decisions balance risks and benefits. Risks depend on the possibility of bad outcomes. Stochastic simulations help assess that possibility.
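As a sketch of the decision-making use, one can estimate the probability of a "bad outcome" directly from the simulated output distribution. The following Python analogue of the R simulation above treats "output below -2" as the bad outcome; the threshold is an arbitrary choice for illustration:

```python
import random

random.seed(1)
n = 100_000

# Sum three random inputs, mirroring the R example:
# a gamma (shape 3, rate 3, i.e. scale 1/3) shifted by -2,
# a uniform on (-2, 2), and a normal with sd 1/2.
outputs = [
    random.gammavariate(3, 1 / 3) - 2
    + random.uniform(-2, 2)
    + random.gauss(0, 0.5)
    for _ in range(n)
]

# Decision-relevant summary: how often does the output fall below -2?
risk = sum(o < -2 for o in outputs) / n
print(round(risk, 3))
```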
Computing systems have become powerful enough to execute realistic, complex models repeatedly. Software has evolved to support generating and summarizing random values quickly and easily (as the second R example shows). These two factors have combined over the last 20 years (and more) to the point where simulation is routine. What remains is to help people (1) specify appropriate distributions of inputs and (2) understand the distribution of outputs. That is the domain of human thought, where computers so far have been little help.
When to use simulations?

First, let me say that there is no single answer to your question. There are multiple examples of when you can (or have to) use simulation; I will try to give you a few below. Second, notice that there are multiple ways you can define a "simulation", so the answer depends at least partly on the definition you choose.
Examples:
1. You are a Bayesian statistician, so simulation is your method of choice for doing statistics. There are non-simulation-based approaches to Bayesian statistics; however, in the vast majority of cases you use simulation. To learn more, check the book "Bayesian Data Analysis" by Gelman et al. (or other resources).
2. You want to assess the performance of a statistical method. Say you have designed some statistical method $T$ for estimating a parameter $\theta$ from empirical data. Now you want to check whether it really does what you want it to do. You could take some data sample and apply your method to it, but if you need the method to learn $\theta$, how would you know from the data alone whether it works correctly? Of course you can compare its results with the estimates of another method, but what if the other method does not estimate $\theta$ correctly? In this case you can use simulation: you generate fake data given a chosen value of $\theta$ and then check whether the estimated value matches the true value of $\theta$ (which you know in advance, since you chose it). Simulation also lets you check different scenarios (sample sizes, different distributions of the data, different amounts of noise, etc.).
3. You don't have the data, or the data are very limited. Say you want to know the possible outcomes of a nuclear war. Fortunately for us (but unfortunately for the analysis), there has been no nuclear war, so you do not have any data. In this case you can use computer simulation: you make some assumptions about reality and let the computer create parallel virtual realities in which the nuclear war happens, giving you samples of possible outcomes.
4. Your statistical model does not fit into existing software, or is complicated. This approach is advocated, for example, by Gelman and Hill in "Data Analysis Using Regression and Multilevel/Hierarchical Models", where they describe simulation-based Bayesian estimation as a "next step" in regression modeling.
5. You want to learn about the possible outcomes of a complicated process. Imagine that you want to forecast the future outcome of some process whose behavior is chaotic: given different inputs you get different outputs, and the number of possible inputs is very large. This was the case when Monte Carlo simulation methods were invented by the physicists and mathematicians working on the nuclear bomb during World War II. With simulation you try different inputs and gather samples so as to get a general idea of the possible outcomes.
6. Your data does not meet the criteria for some statistical method, e.g. it has skewed distribution while it ought to be normal. In some cases this is not really a problem, however sometimes it is, so simulation-based methods like bootstrap were invented.
7. To test a theoretical model against reality. You have a theoretical model that describes some process, e.g. the spread of an epidemic through a social network. You can use the model to generate some data so that you can check whether the simulated data are similar to the real data. Lada Adamic gives multiple examples of such usage for Social Network Analysis in her Coursera class (see some demos here).
8. To generate "hypothesis 0" data. You generate fake (random) data to compare the real data to. If there are any significant effects or trends in your data, then it should differ from data generated at random. This approach is advocated by Buja et al. (2009) in their paper "Statistical inference for exploratory data analysis and model diagnostics", where they propose how plots could facilitate exploratory data analysis and hypothesis testing (see also the documentation of the nullabor R package that implements those ideas).
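A minimal sketch of point 2 - checking whether an estimator recovers a known parameter. All numbers here (the true $\theta$, the noise level, and the sample sizes) are invented for illustration, and the "procedure" under test is simply the sample mean of Gaussian data:

```python
import random
import statistics

random.seed(42)

theta = 5.0             # "true" parameter value, chosen by us in advance
n_sims, n_obs = 2000, 50

estimates = []
for _ in range(n_sims):
    # generate fake data under the known theta, then apply the estimator
    sample = [random.gauss(theta, 2.0) for _ in range(n_obs)]
    estimates.append(statistics.mean(sample))

bias = statistics.mean(estimates) - theta
print(f"mean estimate over {n_sims} simulations: {statistics.mean(estimates):.3f}")
print(f"estimated bias: {bias:+.4f}")
```

The same loop extends naturally to the other scenarios mentioned above: change the sample size, the noise distribution, or the estimator and re-run.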
|
5,606
|
When to use simulations?
|
I think that the discussion of TrynnaDoStat's answer illustrates the point well: we use simulations whenever the problem is impossible to solve analytically (e.g. the posterior distributions of parameters in a hierarchical model), or when we're simply too annoyed to put the time into working out the solution analytically.
Based on what I've observed on this website, the threshold of "annoying enough to simulate" varies widely between statisticians. People like @whuber can, apparently, glance at a problem and immediately see the solution, while mere mortals like myself have to carefully consider the problem and maybe do some reading before writing a simulation routine to do the hard work.
Keep in mind that simulations aren't necessarily a panacea: with large data sets, complicated models, or both, you'll spend enormous amounts of (computer) time estimating and checking your simulation. It's certainly not worth the effort if you could accomplish the same goal with an hour of careful consideration.
|
5,607
|
When to use simulations?
|
Simulations are often done when you can't get a closed form for something (such as a distribution) or you want a quick, hands-on way to get it.
For example, say I'm running a logistic regression using one variable $X$ to explain $Y$. I know that the distribution of the coefficient $\beta$ for $X$ is asymptotically normal from MLE theory. But let's say I'm interested in the difference of two estimated probabilities $f(\beta) = P(Y=1|X=1) - P(Y=1|X=0)$. It may be very difficult (or impossible) to derive the exact distribution of this function, but since I know the distribution of $\beta$, I can simulate values of $\beta$ and plug them into $f(\beta)$ to get an empirical distribution.
|
5,608
|
When to use simulations?
|
Simulations are an excellent way to check whether you can obtain useful estimates from a model.
You would do this by generating/simulating fake data that follows the distribution implied by your model. Then go ahead and fit your model to that data. This is an ideal case: your model is, in fact, true. So if the fit is noisy or inaccurate, then you know there is a problem either with the estimation procedure or the model itself.
Similarly, you can simulate data using the "wrong" data generating process and use that fake data to assess how your estimates are affected by violating model assumptions. This is often called sensitivity analysis.
These points are similar to items 2 and 8 in Tim's answer, and also a somewhat more ad-hoc version of the procedure in whuber's answer.
Simulations are also used to perform predictive model checking as advocated by Andrew Gelman and others. This amounts to plugging your predictor data back into the model and then simulating fake response data from the implied distribution, to see if your simulated data is close enough (by whatever criterion you're using) to the real thing.
Note that this is not the same thing as just computing fitted values. In a regression model, for instance, the fitted values are conditional averages; to run a predictive check on a regression model, you would have to draw once from the Gaussian distribution centered at each fitted value.
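A rough sketch of such a check on a hand-rolled linear regression. The data are invented, and this plug-in version (drawing once from a Gaussian at each fitted value) is a simplified stand-in for the full posterior predictive check Gelman describes:

```python
import random
import statistics

random.seed(1)

# invented "real" data from a linear model
x = [i / 10 for i in range(50)]
y = [2.0 + 1.5 * xi + random.gauss(0, 1.0) for xi in x]

# ordinary least squares by hand
mx, my = statistics.mean(x), statistics.mean(y)
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx
fitted = [intercept + slope * xi for xi in x]
sigma = statistics.stdev([yi - fi for yi, fi in zip(y, fitted)])

# predictive check: draw once from the Gaussian centred at each fitted
# value, then compare a test statistic on real vs. replicated data
real_stat = max(y) - min(y)
sim_stats = []
for _ in range(500):
    y_rep = [random.gauss(fi, sigma) for fi in fitted]
    sim_stats.append(max(y_rep) - min(y_rep))

p_value = sum(s >= real_stat for s in sim_stats) / len(sim_stats)
print(f"predictive p-value for the range statistic: {p_value:.2f}")
```

A p-value near 0 or 1 would suggest the model cannot reproduce that feature of the data; here the model is true by construction, so no alarm is expected.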
|
5,609
|
When to use simulations?
|
The simplest case for simulation. Let's say you have a forecasting model for the number of loan defaults, and you also have a model for losses on defaulted loans. Now you need to forecast the total loss, which is the product of the number of defaults and the loss given default. You can't simply multiply the defaults and the losses on defaults to get the confidence intervals of the total loss.
The reason is that if you have random variables $x_i$ whose densities you know, it doesn't mean that you can easily get the density of the product $x_1\cdot x_2$. On the other hand, if you know the correlation between the quantities, it's easy to simulate correlated numbers and obtain the simulated distribution of losses.
This paper has an MBA-level description of this use case for operational risk estimation, where you have the distributions of loss frequency and amounts and combine them to get the total loss distribution.
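A minimal sketch of that frequency/severity simulation. The portfolio size, default probability, and lognormal loss parameters are all invented, and for brevity defaults and losses are drawn independently rather than with the correlation structure mentioned above:

```python
import random
import statistics

random.seed(7)

n_sims = 2000
n_loans, p_default = 1000, 0.02      # hypothetical portfolio
loss_mu, loss_sigma = 10.0, 0.5      # lognormal loss-given-default

totals = []
for _ in range(n_sims):
    # frequency: how many loans default in this simulated year
    n_def = sum(random.random() < p_default for _ in range(n_loans))
    # severity: sum a lognormal loss for each defaulted loan
    totals.append(sum(random.lognormvariate(loss_mu, loss_sigma)
                      for _ in range(n_def)))

totals.sort()
mean_loss = statistics.mean(totals)
loss_99 = totals[int(0.99 * n_sims)]
print(f"mean total loss: {mean_loss:,.0f}")
print(f"99th-percentile loss: {loss_99:,.0f}")
```

The sorted simulated totals give the whole loss distribution, so any quantile (e.g. a VaR-style 99th percentile) falls out directly without deriving the product density.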
|
5,610
|
Are there any good movies involving mathematics or probability?
|
Pi
|
5,611
|
Are there any good movies involving mathematics or probability?
|
'A Beautiful Mind' naturally has a bit of game theory in it.
|
5,612
|
Are there any good movies involving mathematics or probability?
|
MONEYBALL!
It's a movie where the statisticians win!
This is probably the most inspiring major motion picture about the power of quantitative methods (even if the plot is a little formulaic). And it shows quantitative methods (sabermetrics) eventually coming to dominate the backward and untested techniques of the dinosaurs of baseball.
|
5,613
|
Are there any good movies involving mathematics or probability?
|
Not a movie, but a TV series:
Numb3rs
|
5,614
|
Are there any good movies involving mathematics or probability?
|
The mathematical movie database has some great suggestions, with over 800 movies (though most are tenuously linked to maths) already listed. In the Navy, from 1941, is probably my favourite.
|
5,615
|
Are there any good movies involving mathematics or probability?
|
N Is a Number: A Portrait of Paul Erdős
|
5,616
|
Are there any good movies involving mathematics or probability?
|
Proof was pretty good.
|
5,617
|
Are there any good movies involving mathematics or probability?
|
The Cube
|
5,618
|
Are there any good movies involving mathematics or probability?
|
21 - based on the book Bringing Down the House (MIT Blackjack team)
Near the beginning they discuss the Monty Hall Problem. However, after that there isn't much actual math/probability.
|
5,619
|
Are there any good movies involving mathematics or probability?
|
I have not seen this yet, but it seems somewhat geeky:
Fermat's Room
|
5,620
|
Are there any good movies involving mathematics or probability?
|
The Social Network begins with a one-night hackathon where Mark Zuckerberg uses the Elo rating system algorithm to
... create a website that rates the attractiveness of female students when compared to each other. ... in a few hours, using an algorithm for ranking chess players supplied by his best friend, Eduardo Saverin, he creates a website called "FaceMash," where students can choose which of two girls presented at a time is more attractive.
However, much of the rest of the movie is devoted to episodes of hacking, corporate politics, lawsuits, escapades, Zuckerberg's interpersonal problems, etc. But, I found it quite fascinating, overall. A great geek movie.
|
5,621
|
Are there any good movies involving mathematics or probability?
|
There are several movie versions of Flatland. And there's The Great $\pi$/e Debate.
|
5,622
|
Are there any good movies involving mathematics or probability?
|
Good Will Hunting is also a classic. Discrete mathematics at MIT.
|
5,623
|
Are there any good movies involving mathematics or probability?
|
The documentary about Andrew Wiles's proof of Fermat's Last Theorem is fantastic:
http://www.pbs.org/wgbh/nova/proof/
Available on youtube:
http://www.youtube.com/watch?v=7FnXgprKgSE
|
5,624
|
Are there any good movies involving mathematics or probability?
|
BBC Horizon - The Bible Code. It shows that, whatever codes people have found in the Bible, so far none have proved to be statistically significant.
|
5,625
|
Are there any good movies involving mathematics or probability?
|
The Man Who Knew Infinity is based on the life of Srinivasa Ramanujan.
It's a beautiful movie directed by Matthew Brown. Mathematicians Manjul Bhargava and Ken Ono collaborated on the film.
|
5,626
|
Are there any good movies involving mathematics or probability?
|
Rounders. A very watchable drama about poker players.
http://www.imdb.com/title/tt0128442/
|
5,627
|
Are there any good movies involving mathematics or probability?
|
There is a documentary about Srinivasa Ramanujan, whose life, as we know, is tremendously interesting. However, the film is Indian and I haven't actually seen it. I recall an Indian math historian speaking about this film at our university colloquium several years ago. He boasted, "Ben Kingsley was interested in depicting Ramanujan but was turned down for the role because he was only half Indian". As a mixed-race individual, I felt a mixture of anger and pity. The latter because they basically turned down the opportunity to make a movie that would attract anyone's attention.
|
5,628
|
Are there any good movies involving mathematics or probability?
|
Sofia Kovalevskaya - a biopic about the Russian female mathematician. You don't see many movies about these folks. One of the recent ones is The Imitation Game about Alan Turing, a British mathematician and computer scientist (allegedly) murdered by his government.
|
5,629
|
Are there any good movies involving mathematics or probability?
|
Stand and Deliver is a good film about the Bolivian-born math teacher Jaime Escalante. Inspiring! See this commentary
|
5,630
|
Are there any good movies involving mathematics or probability?
|
12 Angry Men (1957, with Henry Fonda): a great film about a decision procedure and the strength of evidence
|
5,631
|
Are there any good movies involving mathematics or probability?
|
Travelling Salesman is a good one.
|
5,632
|
What is your favorite statistical graph?
|
I think that Anscombe's quartet deserves a place here as an example and reminder to always plot your data because datasets with the same numeric summaries can have very different relationships:
Anscombe, Francis J. (1973) Graphs in statistical analysis.
American Statistician, 27, 17-21.
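The point is easy to verify numerically. As a minimal sketch (using the published values for the first two of Anscombe's four datasets; the `pearson` helper below is my own, not from the paper):

```python
# Anscombe's first two datasets: same x, very different y-vs-x shapes,
# yet their summary statistics agree almost exactly.
from statistics import mean

x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    ma, mb = mean(a), mean(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = sum((u - ma) ** 2 for u in a) ** 0.5
    sb = sum((v - mb) ** 2 for v in b) ** 0.5
    return cov / (sa * sb)

print(round(mean(y1), 2), round(mean(y2), 2))              # means agree: 7.5 7.5
print(round(pearson(x, y1), 3), round(pearson(x, y2), 3))  # both 0.816
```

Only a scatter plot reveals that the first relationship is linear with noise while the second is a smooth curve.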
|
5,633
|
What is your favorite statistical graph?
|
I always enjoy reading this Sankey diagram (a type of flow map) on the French invasion of Russia by Charles Joseph Minard in 1812:
Charles Joseph Minard's famous graph showing the decreasing size of
the Grande Armée as it marches to Moscow (brown line, from left to
right) and back (black line, from right to left) with the size of the
army equal to the width of the line. Temperature is plotted on the
lower graph for the return journey (multiply Réaumur temperatures by
1¼ to get Celsius, e.g. −30 °R = −37.5 °C).
In 2nd position, this 3D pie makes me laugh each time I see it:
It is the perfect example of how misleading a 3D visualization can be: Steve Jobs clearly used a 3D pie chart to make Apple's market share look much larger than it was:
The 19.5% market share slice for Apple's iPhone somehow looks bigger
than the 21.2% market share for the mish-mash of "Other" brands.
Same Steve Jobs 3D trick on another slide:
|
5,634
|
What is your favorite statistical graph?
|
I hope not to push things here too far toward the humorous side with an early response that's in that vein (+1 for @GregSnow's theoretical answer!), but since I already have an entry in the favorite cartoons thread, I'll add a graph here.
By Jorge Cham of Piled Higher and Deeper infamy, as per the © on the bottom right margin that I hope I'm respecting! I particularly like the existential crisis bump, because I'm an existential psychologist with interests in motivation and emotion. As such, it's my (un)professional opinion that this is pretty accurate! $\mathbf{\large ☺}$
|
5,635
|
What is your favorite statistical graph?
|
Another famous visualization of data (we can have a semantic argument about whether it should be called a graph) is John Snow's 1854 map of cholera cases in London:
|
5,636
|
What is your favorite statistical graph?
|
I like your examples very much! But one shocking and simple graph, from my point of view, is this one:
Nazi propaganda
|
5,637
|
What is your favorite statistical graph?
|
Thinking in terms of a figure that packs a lot of information, I like this one:
It comes from the main page of the R Project for Statistical Computing. It won the R homepage graphics competition to be so displayed. The R code to produce it can be found by clicking on the figure on the R homepage.
|
5,638
|
Negative values for AICc (corrected Akaike Information Criterion)
|
All that matters is the difference between two AIC (or, better, AICc) values, representing the fit of two alternative models to the data. The actual value of the AIC (or AICc), and whether it is positive or negative, means nothing. If you simply changed the units the data are expressed in, the AIC (and AICc) would change dramatically. But the difference between the AIC of the two alternative models would not change at all.
Bottom line: Ignore the actual value of AIC (or AICc) and whether it is positive or negative. Ignore also the ratio of two AIC (or AICc) values. Pay attention only to the difference.
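A quick numerical sketch of the unit-change point (the data and the two nested models below are made up for illustration; the Gaussian-likelihood AIC formula is standard):

```python
# Rescaling the response (a pure change of units) shifts every model's
# AIC by the same amount, so differences in AIC are unchanged.
import math

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.7, 12.2, 13.8, 16.1]

def gaussian_aic(resid, k):
    """AIC for a least-squares fit with Gaussian errors; k counts the
    regression coefficients plus the error variance."""
    n = len(resid)
    rss = sum(r * r for r in resid)
    loglik = -0.5 * n * (math.log(2 * math.pi) + math.log(rss / n) + 1)
    return -2 * loglik + 2 * k

def both_aics(y):
    n = len(y)
    ybar, xbar = sum(y) / n, sum(x) / n
    # Model 1: intercept only (parameters: intercept, sigma^2)
    aic1 = gaussian_aic([v - ybar for v in y], 2)
    # Model 2: intercept + slope (parameters: intercept, slope, sigma^2)
    b = sum((u - xbar) * (v - ybar) for u, v in zip(x, y)) / \
        sum((u - xbar) ** 2 for u in x)
    a = ybar - b * xbar
    aic2 = gaussian_aic([v - (a + b * u) for u, v in zip(x, y)], 3)
    return aic1, aic2

a1, a2 = both_aics(y)                      # response in, say, metres
b1, b2 = both_aics([1000 * v for v in y])  # same response in millimetres
print(a1 - a2, b1 - b2)                    # the two differences agree
```

Both AIC values move by the same constant under the rescaling, so the preferred model is the same either way.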
|
5,639
|
Negative values for AICc (corrected Akaike Information Criterion)
|
AIC = -2·ln(L) + 2k,
where L is the maximised value of the likelihood function for that model and k is the number of parameters in the model.
In your example, -2·ln(L) + 2k < 0 means that the log-likelihood at the maximum was > 0,
which means that the likelihood at the maximum was > 1.
There is no problem with a positive log-likelihood. It is a common misconception that the log-likelihood must be negative. If the likelihood is derived from a probability density, it can quite reasonably exceed 1, which means that the log-likelihood is positive and hence the deviance and the AIC are negative. This is what occurred in your model.
If you believe that comparing AICs is a good way to choose a model, then it is still the case that the (algebraically) lower AIC is preferred, not the one with the lowest absolute value. To reiterate: you want the most negative number in your example.
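To see how this happens, here is a small sketch (made-up data): a Gaussian fitted by maximum likelihood to tightly clustered values has density values above 1, so the log-likelihood is positive and the AIC is negative.

```python
# A Gaussian MLE fit to tightly clustered data: the fitted density exceeds 1
# over the data, so ln(L) > 0 and AIC = -2 ln(L) + 2k < 0.
import math

data = [0.10, 0.11, 0.09, 0.10, 0.12, 0.08, 0.10, 0.11]
n = len(data)
mu = sum(data) / n
sigma2 = sum((v - mu) ** 2 for v in data) / n  # MLE of the variance

loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
aic = -2 * loglik + 2 * 2  # k = 2: the mean and the variance
print(loglik, aic)         # positive log-likelihood, negative AIC
```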
|
5,640
|
Negative values for AICc (corrected Akaike Information Criterion)
|
Generally, it is assumed that AIC (and so AICc) is defined only up to an additive constant, so whether it is negative or positive is not meaningful at all. So the answer is yes, it is valid.
|
5,641
|
Negative values for AICc (corrected Akaike Information Criterion)
|
Yes, it's valid to compare negative AICc values, in the same way as you would negative AIC values. The correction term in the AICc can become large with a small sample size and a relatively large number of parameters, and it penalizes more heavily than the AIC. So negative AIC values can correspond to positive AICc values.
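For concreteness, a sketch of the correction, AICc = AIC + 2k(k+1)/(n - k - 1) (the example numbers are made up):

```python
def aicc(aic, k, n):
    """Small-sample corrected AIC; requires n > k + 1."""
    return aic + 2 * k * (k + 1) / (n - k - 1)

print(aicc(-3.0, k=5, n=12))   # correction = 10, so AICc = 7.0
print(aicc(-3.0, k=5, n=500))  # large n: AICc is close to the AIC
```

With k = 5 and n = 12 the correction alone is 10, which dwarfs the AIC value itself; with n = 500 it is negligible.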
|
5,642
|
Negative values for AICc (corrected Akaike Information Criterion)
|
Yes. It's valid to compare AIC values regardless of whether they are positive or negative. That's because AIC is defined as a linear function (with slope -2) of the log-likelihood. If the likelihood is large, your AIC will likely be negative, but that says nothing about the model itself.
AICc is similar; the fact that the values are adjusted changes nothing.
|
5,643
|
How seriously should I think about the different philosophies of statistics?
|
I think that the main takeaway here is this: the mere fact that there are these different philosophies of statistics and disagreement over them implies that translating the "hard numbers" that one gets from applying statistical formulae into "real world" decisions is a non-trivial problem and is fraught with interpretive peril.
Frequently, people use statistics to influence their decision-making in the real world. For example, scientists aren't running randomized trials on COVID vaccines right now for funsies: it is because they want to make real world decisions about whether or not to administer a particular vaccine candidate to the populace. Although it may be a logistical challenge to gather up 1000 test subjects and observe them over the course of the vaccine, the math behind all of this is well-defined whether you are a Frequentist or a Bayesian: You take the data you gathered, cram it through the formulae and numbers pop out the other end.
However, those numbers can sometimes be difficult to interpret: Their relationship to the real world depends on many non-mathematical things – and this is where the philosophy bit comes in. The real world interpretation depends on how we went about gathering those test subjects. It depends on how likely we anticipated this vaccine to be effective a priori (did we pull a molecule out of a hat, or did we start with a known-effective vaccine-production method?). It depends on (perhaps unintuitively) how many other vaccine candidates we happen to be testing. It depends on etc., etc., etc.
Bayesians have attempted to introduce additional mathematical frameworks to help alleviate some of these interpretation problems. I think the fact that the Frequentist methods continue to proliferate shows that these additional frameworks have not been super successful in helping people translate their statistical computations into real world actions (although, to be sure, Bayesian techniques have led to many other advances in the field, not directly related to this specific problem).
To answer your specific questions: you don't need to align yourself with one philosophy. It may help to be specific about your approach, but it will generally be totally obvious that you are doing a Bayesian analysis the moment you start talking about priors. Lastly, though, you should consider all of this very seriously, because as a statistician it will be your ethical duty to ensure that the numbers that you provide people are used responsibly – because correctly interpreting those numbers is a hard problem. Whether you interpret your numbers through the lens of Frequentist or Bayesian philosophy isn't a huge deal, but interpretation of your numbers requires familiarity with the relevant philosophy.
|
5,644
|
How seriously should I think about the different philosophies of statistics?
|
A preliminary note on my nomenclature: As a preliminary matter, I note that I have never liked the terms "frequentist school" for the philosophy and set of methods it designates, and so I instead refer to this school of thought as "classical". Both Bayesians and classical statisticians agree entirely on the relevant theorems pertaining to the laws of large numbers, so both groups agree that the "frequentist" interpretation of probability holds under valid assumptions (i.e., an exchangeable sequence of values representing "repetition" of an experiment). All Bayesians are also "frequentists", in the sense that we accept the laws of large numbers and agree that probability corresponds to limiting frequency in appropriate circumstances. Since there is no real disagreement on the underlying laws of large numbers, I view it as silly to say that one group is a "frequentist" school and the other isn't.
This has got me thinking – how seriously do I need to consider this?
Others may disagree here, but my view is that if you want to be a good statistician, it is important to take foundational questions in the field seriously, and devote serious thinking to them during your training. Philosophical and methodological issues can seem far-removed from data analysis, but they are foundational issues that inform your choice of modelling methods and your interpretation and communication of results.
Learning something always involves a trade-off (though not always against other learning!), so you will need to decide the appropriate trade-off between learning the philosophical and foundational issues in statistics, versus using your time for something else. This trade-off will depend on your specific aspirations, in terms of how detailed you want your knowledge of the subject to be. When training to be an academic in the field (i.e., when doing my PhD) I spent quite a lot of time reading philosophical papers on this subject, mulling over their implications, and having late-night drunken conversations on the topic with reluctant young ladies at university parties. My view now ---as a practicing academic--- is that this was time well spent.
If I want to be a statistician, do I need to align myself with one philosophy?
If you find one philosophy/methodology to be exclusively correct then you should align yourself entirely with that one philosophy/methodology. However, there are many statisticians who find some merit in each approach under different circumstances, or view one paradigm as philosophically correct, but difficult to apply in certain cases. In any case, it is not necessary to align yourself exclusively with one approach.
To be a good statistician, you should certainly understand the difference between the two paradigms and be capable of applying models in either paradigm. You should also have some sense of when a particular approach might be easier to apply to solving a particular problem. (For example, some "paradoxes" arise under classical methods that are easily resolved in Bayesian analysis. Contrarily, some modelling situations are difficult to deal with in Bayesian analysis, such as when we want to test a specific null hypothesis against a broad but vague alternative hypothesis.) In general, if you can enlarge your "toolkit" to be familiar with more methods and models, you will have a greater capacity to deploy effective methods in statistical problems.
Before I approach a problem, do I need to specifically mention which school of thought I will be applying?
This depends on context, but for general modelling purposes, no --- this will be obvious from the type of model and analysis you apply. If you apply a prior distribution to the unknown parameters and derive a posterior distribution, we will know you are doing a Bayesian analysis. If you treat the unknown parameters as "unknown constants" and use classical methods, we will know you are using classical analysis. In good statistical writing you should explicitly state the model you are using (and maybe give references if you are writing an academic paper), and you might take this occasion to explicitly note if you are doing a Bayesian analysis, but even if you don't, it will be obvious.
Of course, if the problem you are approaching is a theoretical or philosophical problem (as opposed to a data analysis problem) then it may hinge upon the relevant interpretation of probability, and the consequent methodological paradigm. In such cases you should explicitly state your philosophical/methodological approach.
And crucially, do I need to be careful that I don't mix frequentist and Bayesian approaches and cause contradictions/paradoxes?
Unless you regard one of these methods to be totally invalid, such that it should never be used, it would stand to reason that it is okay to mix methods under appropriate circumstances. Again, understanding the strong and weak points of each paradigm will assist you in understanding when it is easier to apply one paradigm or the other.
In practical statistical work, it is quite common to see Bayesian analysis that has some classical methods applied for diagnostic purposes to test underlying assumptions. Usually this occurs when we want to test some assumption of a Bayesian model against a broad and vague alternative (i.e., where the alternative is not specified as a parametric model which is itself amenable to Bayesian analysis). For example, we might conduct a Bayesian analysis using a linear regression model, but then apply the Grubb's test (a classical hypothesis test) to test whether the assumption of normally distributed error terms is reasonable. Alternatively, we might conduct alternative Bayesian analyses using a set of different models, but then conduct cross-validation using classical methods. Perhaps there are some Bayesian "purists" who completely eschew classical methods, but they are rare. (This partly depends on the state of knowledge in the field of Bayesian analysis; as the field develops further and expands its boundaries, it has less and less need for supplementation by classical methods. Consequently, you should see this as contextual, based on the present state of development of Bayesian theory and related computational tools, etc.)
If you mix the two methods then you certainly need to be mindful of creating contradictions or "paradoxes" in your analysis, but obviously that is going to require you to have a good understanding of the two paradigms, which further behoves you to devote time to learning them.
|
5,645
|
How seriously should I think about the different philosophies of statistics?
|
I will try to add something to the already existing answers, which are worthwhile to read.
I do think that the foundations discussion touches basic questions that are important to think about as statisticians, particularly "what do we mean by probability?" Understanding the "inferential logic" behind running, say, tests or confidence intervals, or computing posteriors, is also crucial.
I also think it is important to know that the issues go beyond the Bayesian/frequentist distinction. Particularly, there are different varieties of Bayesians, which have, at least to some extent, different understandings of probability: mainly the radical subjectivists, so-called objective Bayesians, and people who prefer Bayesian reasoning about models and parameters but give models and parameters a frequentist meaning, called "falsificationist Bayes" in Sec. 5 of Gelman and Hennig (2017), where we try to give a reasonably "neutral" overview. Furthermore, there are concepts of "aleatory probabilities" (as opposed to epistemic ones, i.e., those formalising subjective uncertainty) that are not directly connected to long run frequencies (often referred to as "propensities").
From my point of view, a key for understanding concepts in the foundations of statistics and probabilities is that we are generally dealing with mathematical models, and reality is different, i.e., there are no "true" frequentist probabilities to be found, and neither is there any "truly rational and correct reasoning" that is identical to the Bayesian model of it. Regardless of whether we work in a Bayesian or frequentist way (and which specific variety of these), we use models in order to make mathematical reasoning available for understanding phenomena in reality, which involves abstraction, simplification, and also, in one way or another, manipulation. We are using them as tools for thinking; they are adapted to our thinking, not in the first place to any reality outside our thoughts. For this reason, all kinds of practical issues (like issues with sampling schemes, measurements, missing values, unobserved confounders etc.) are important, to some extent for improving our models, and to some extent in order to understand the limits of what we can do with whatever approach.
This means that I personally believe that the different foundational approaches should not be seen as a "right" or "wrong" philosophy of statistics, and this also means that nobody needs to commit themselves to one of them only. Particularly, "epistemic" probabilities (as often but not always employed in subjectivist/"objective" Bayesian reasoning) model the uncertainty of either an individual or of science/humankind as a whole, whereas "aleatory" probabilities (as usually employed in frequentist reasoning) model the behaviour of data generating processes out there in the world. These are different, and one can well be interested in one thing regarding one research question and the other thing regarding another.
I do think though that "mixing them up" in the same study is problematic. When doing probability modelling, results come as probabilities (be it p-values, confidence levels, or posterior probabilities), so a consistent meaning should be used for all probabilities that occur in the same model. I think that statisticians should be clear about what they mean when employing probabilities in given circumstances, and mixing is often done in such a way that this is unclear (particularly, I see a lot of Bayesian work in which the likelihood is apparently interpreted in a frequentist manner, referring to really existing data generating processes, where no explanation is given of what the prior probabilities are meant to express, even if they are sometimes related in a very rough fashion to some available knowledge).
However, I also think that there can be "legitimate mixing", for example in situations in which the prior distribution can be interpreted as itself being generated from a "real process" (e.g., of studies/problems of a similar kind), or when Bayesians, when applying consistently epistemic probabilities, are still interested in the frequentist properties of what they are doing, because they may find the logic of frequentist modelling ("what would happen if reality behaved according to frequentist model X") useful for learning about the implications of Bayesian methods. Sometimes priors can be introduced arguing that their introduction improves the frequentist characteristics of a method, rather than arguing that they appropriately express subjective or objective epistemic probabilities. So I think "mixing" requires a careful distinction between what the different probabilities mean in the different circumstances, clear understanding why one is used in one place and another in another place, and why they were brought together.
|
5,646
|
How seriously should I think about the different philosophies of statistics?
|
What you shouldn't do is align yourself with one in the sense of declaring one of them "right" and the other one "wrong". They are just two different viewpoints on the same thing, giving you alternative "tools of the trade". As an expert, you should be conversant in both. You may choose, for practical reasons, to specialize more in one than in the other. As an analogy, think of a chef who specializes in French cuisine, but can still whip up a nice Thai curry when the occasion calls for it.
Your question sounds like you might be afraid of the presence of a war of dogmas in statistics. These wars of dogmas happen in science every now and then, but I would argue that they are more an artifact of the way humans do science than something based on real-world facts. The usual progress of a dogma war is: first there are two theories about a "hot" unsolved problem, then there is a popularity contest which fizzles out over a few decades, then the next generation of scientists discovers that both theories have merit and are not as mutually exclusive as presented at the height of the debate. You can find many good historical examples of this, e.g. Keynesian vs. neoclassical economists from the early 20th century, or the question of whether environmental influences are heritable (often still oversimplified as Darwinists vs. Lamarckians).
Luckily, there is currently no such war of dogmas in the statistics community (and there wasn't one even when Bayes was alive), so you don't have to choose one camp and try to lead it to victory or go down fighting. A statistician declaring themselves a frequentist is, at best, trying to say "I prefer working with frequentist tools", and at worst is a snob who finds all other philosophies "not proper"; imagine here again the French chef, maybe a pre-globalization one, who considers Thai food to be "inedible".
You should understand the "frequentist" (or "Bayesian") declaration as an important signal about the flavor of information you will get from lessons or discussions with that person, and that is about all there is to it.
|
5,647
|
How seriously should I think about the different philosophies of statistics?
|
Many people have given way better answers than I possibly could, but there are two things I wanted to add.
The field, hypothesis, and type of data you are working with can heavily influence which philosophy you use. The hypothesis "The mass of a neutron is 1.001 times the mass of a proton" definitely has a true or false answer. A frequentist approach would be very well suited to testing this hypothesis. Compare that to "Competition drives populations into different areas." This is not always true, but it is true many times. It is completely valid to interpret a Bayesian test of this hypothesis as how often it is true or how significant this effect is.
I believe that you should write out how you are going to analyze the data before ever looking at it. Whenever you decide to deviate from this plan, add an explanation for why before you do the new tests. This is a way to help you identify biases before they influence your work. Plus, if you store this document with an independent review board, you are almost immune to accusations of p-hacking.
|
5,648
|
How seriously should I think about the different philosophies of statistics?
|
The data itself is often the same for both approaches. In practice, the Bayesian or frequentist philosophies determine different estimators to analyze that data. Conversely, some estimators can be rationalized by either philosophy. Within each approach, modeling choices are needed to take the model to data, that can sometimes be tested for out-of-sample predictive accuracy. This is particularly true of "empirical Bayes" estimators that try to fit the hyperparameters using data.
For this reason, it is useful to think of the broad statistical properties of the estimators, regardless of their original rationale. I will mention two that are particularly salient:
(1) Admissibility: According to Wikipedia "an admissible decision rule is a rule for making a decision such that there is no other rule that is always "better" than it (or at least sometimes better and never worse)." Admissibility is a very basic criterion that rules out estimators that are clearly bad (e.g. calculating a mean with a single observation, discarding data for no reason, scaling the data in a weird way, etc.). It is well known that Bayes estimators are admissible, and hence have minimal guarantees.
What's interesting is that the set of admissible estimators can be very large. From a Bayesian point of view, different priors can induce different admissible estimators. This is analogous to the concept of "efficiency" in economics. Two allocations are efficient if they don't "waste" resources, but can use different inputs depending on the planner's preferences: there is an efficiency frontier. An agnostic frequentist might view the use of priors as a way to describe a class of admissible estimators, that impose different preferences over the weight given to new information.
(2) Regularization: A prior can also be viewed as a form of regularization (reducing the complexity) in predictive models. This facilitates the estimation of complicated models with small sample sizes relative to the number of parameters. For instance, this article shows that Ridge (a form of penalized linear regression) can be motivated as a Bayesian estimator with a normal prior, with the tuning parameter as a hyperparameter. Hence these can be viewed as different routes to regulating the bias/variance trade-off. Similar analogies have been found for Lasso and other recently proposed high-dimensional methods.
There are other theoretical connections. For example, the Bernstein-von Mises theorem shows that the credible sets of Bayesian parametric models can be close to frequentist confidence intervals in large samples.
As an agnostic practitioner, you want to design tests of validity (even as a thought experiment) that contain tangible, replicable metrics (e.g. out-of-sample MSE) and can help you decide between alternative estimators.
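To make the Ridge-as-Bayes connection concrete, here is a small numpy sketch (the data, penalty value, and variable names are made up for illustration): the penalized least-squares solution and the posterior mean under a zero-mean Gaussian prior coincide exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 3, 2.0                      # sample size, predictors, penalty
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Ridge as penalized least squares, solved via data augmentation:
# append sqrt(lam) * I rows to X and zeros to y, then run ordinary least squares.
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])
beta_ridge = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

# Posterior mean of beta under y ~ N(X beta, sigma^2 I), beta ~ N(0, (sigma^2/lam) I):
# (X'X + lam I)^{-1} X'y, the same point Ridge minimizes to.
beta_bayes = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print(np.allclose(beta_ridge, beta_bayes))  # True
```

Note that only the rationale differs: the frequentist reads lam as a complexity penalty, the Bayesian as a prior precision ratio.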
|
5,649
|
Approximate $e$ using Monte Carlo Simulation
|
The simple and elegant way to estimate $e$ by Monte Carlo is described in this paper. The paper is actually about teaching $e$. Hence, the approach seems perfectly fitting for your goal. The idea's based on an exercise from a popular Ukrainian textbook on probability theory by Gnedenko.
See ex.22 on p.183
It so happens that $E[\xi]=e$, where $\xi$ is a random variable defined as follows: it is the minimum $n$ such that $\sum_{i=1}^n r_i>1$, where the $r_i$ are random numbers drawn from the uniform distribution on $[0,1]$. Beautiful, isn't it?!
Since it's an exercise, I'm not sure if it's cool for me to post the solution (proof) here :) If you'd like to prove it yourself, here's a tip: the chapter is called "Moments", which should point you in the right direction.
If you want to implement it yourself, then don't read further!
This is a simple algorithm for the Monte Carlo simulation: draw a uniform random number, then another one, and so on, until the sum exceeds 1. The number of draws is your first trial. Let's say you got:
0.0180
0.4596
0.7920
Then your first trial yielded 3. Keep doing these trials, and you'll notice that on average you get $e$.
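If you'd like to try this without MATLAB or Excel, here is a minimal Python sketch of the same trials (standard library only; the function name, seed, and trial count are my own choices):

```python
import random

def trial(rng=random):
    """Draw uniforms until the running sum exceeds 1; return the number of draws."""
    s, n = 0.0, 0
    while s <= 1.0:
        s += rng.random()
        n += 1
    return n

random.seed(42)
N = 200_000
estimate = sum(trial() for _ in range(N)) / N
print(estimate)  # close to e = 2.71828...
```

Averaging the trial lengths gives the Monte Carlo estimate of $E[\xi]=e$; accuracy improves like $1/\sqrt{N}$.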
MATLAB code, simulation result and the histogram follow.
N = 10000000;
n = N;
s = 0;
i = 0;
maxl = 0;
f = 0;
while n > 0
    s = s + rand;
    i = i + 1;
    if s > 1
        if i > maxl
            f(i) = 1;
            maxl = i;
        else
            f(i) = f(i) + 1;
        end
        i = 0;
        s = 0;
        n = n - 1;
    end
end
disp((1:maxl)*f'/sum(f))
bar(f/sum(f))
grid on
f/sum(f)
The result and the histogram:
2.7183
ans =
Columns 1 through 8
0 0.5000 0.3332 0.1250 0.0334 0.0070 0.0012 0.0002
Columns 9 through 11
0.0000 0.0000 0.0000
UPDATE:
I updated my code to get rid of the array of trial results so that it doesn't take RAM. I also printed the PMF estimation.
Update 2:
Here's my Excel solution. Put a button in Excel and link it to the following VBA macro:
Private Sub CommandButton1_Click()
n = Cells(1, 4).Value
Range("A:B").Value = ""
s = 0
i = 0
maxl = 0
Cells(1, 2).Value = "Frequency"
Cells(1, 1).Value = "n"
Cells(1, 3).Value = "# of trials"
Cells(2, 3).Value = "simulated e"
While n > 0
s = s + Rnd()
i = i + 1
If s > 1 Then
If i > maxl Then
Cells(i, 1).Value = i
Cells(i, 2).Value = 1
maxl = i
Else
Cells(i, 1).Value = i
Cells(i, 2).Value = Cells(i, 2).Value + 1
End If
i = 0
s = 0
n = n - 1
End If
Wend
s = 0
For i = 2 To maxl
s = s + Cells(i, 1) * Cells(i, 2)
Next
Cells(2, 4).Value = s / Cells(1, 4).Value
Rem bar (f / Sum(f))
Rem grid on
Rem f/sum(f)
End Sub
Enter the number of trials, such as 1000, in cell D1, and click the button.
Here is how the screen should look after the first run:
UPDATE 3:
Silverfish inspired me to try another way, not as elegant as the first one but still cool: it calculates the volumes of $n$-simplexes using Sobol sequences.
s = 2;                          % exact terms 1/0! + 1/1!
for i=2:10
    p = sobolset(i);            % i-dimensional Sobol sequence
    N = 10000;
    X = net(p,N)';              % N quasi-random points in [0,1]^i
    s = s + (sum(sum(X)<1)/N);  % fraction inside the simplex, approx. 1/i!
end
disp(s)
2.712800000000001
Coincidentally, Sobol wrote the first book on the Monte Carlo method that I read back in high school. It's the best introduction to the method, in my opinion.
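The same computation sketched in Python, with plain pseudo-random points standing in for the Sobol sequence (so this is an ordinary Monte Carlo estimate of each simplex volume, not quasi-Monte Carlo):

```python
import random

def simplex_estimate_e(max_dim=10, n=50_000):
    """e = sum_{i>=0} 1/i!, and the volume of the i-simplex
    {x in [0,1]^i : sum(x) < 1} is exactly 1/i!.

    The i = 0 and i = 1 terms are both exactly 1; each remaining term is
    estimated by the fraction of n uniform points landing inside the simplex."""
    total = 2.0
    for i in range(2, max_dim + 1):
        inside = sum(
            1 for _ in range(n)
            if sum(random.random() for _ in range(i)) < 1.0
        )
        total += inside / n
    return total

print(simplex_estimate_e())  # ≈ 2.718
```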
UPDATE 4:
Silverfish suggested in the comments a simple Excel formula implementation. This is the kind of result you get with his approach after a total of about 1 million random numbers and 185K trials:
Obviously, this is much slower than the Excel VBA implementation, especially if you modify my VBA code to not update the cell values inside the loop and only do it once all stats are collected.
UPDATE 5
Xi'an's solution #3 is closely related (or even the same in some sense, as per jwg's comment in the thread). It's hard to say who came up with the idea first, Forsythe or Gnedenko. Gnedenko's original 1950 Russian edition doesn't have Problems sections in its chapters, so at first glance I couldn't find where this problem from later editions appears. Maybe it was added later, or buried in the text.
As I commented in Xi'an's answer, Forsythe's approach is linked to another interesting area: the distribution of distances between peaks (extrema) in random (IID) sequences. The mean distance happens to be 3. The down sequence in Forsythe's approach ends with a bottom, so if you continue sampling you'll get another bottom at some point, then another, etc. You could track the distances between them and build their distribution.
|
5,650
|
Approximate $e$ using Monte Carlo Simulation
|
I suggest upvoting Aksakal's answer. It is unbiased and relies only on a method of generating unit uniform deviates.
My answer can be made arbitrarily precise, but still is biased away from the true value of $e$.
Xi'an's answer is correct, but I think its dependence on either the $\log$ function or a way of generating Poisson random deviates is a bit circular when the purpose is to approximate $e$.
Estimating $e$ by Bootstrapping
Instead, consider the bootstrapping procedure. One has a large number $n$ of objects, which are drawn with replacement to form a sample of size $n$. At each draw, the probability of not drawing a particular object $i$ is $1-n^{-1}$, and there are $n$ such draws. The probability that a particular object is omitted from all draws is $p=(1-\frac{1}{n})^n.$
Since we know that
$$\exp(-1)=\lim_{n\to\infty}\left(1-\frac{1}{n}\right)^n,$$
we can also write
$$\exp(-1)\approx \hat{p}=\frac{1}{m}\sum_{j=1}^m\mathbb{I}(i\notin B_j).$$
That is, our estimate of $p$ is found by estimating the probability that a specific observation $i$ is omitted from $m$ bootstrap replicates $B_j$ across many such replicates -- i.e. the fraction of bootstrap samples from which object $i$ is absent.
There are two sources of error in this approximation. Finite $n$ will always mean that the results are approximate, i.e. the estimate is biased. Additionally, $\hat{p}$ will fluctuate around the true value because this is a simulation.
I find this approach somewhat charming because an undergraduate or another person with sufficiently little to do could approximate $e$ using a deck of cards, a pile of small stones, or any other items at hand, in the same vein as a person could estimate $\pi$ using a compass, a straight-edge and some grains of sand. I think it's neat when mathematics can be divorced from modern conveniences like computers.
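A minimal Python sketch of the same idea (parameter choices `n` and `m` here are mine, chosen small for speed):

```python
import random

def bootstrap_estimate_e(n=300, m=30_000):
    """Estimate e as 1 / p_hat, where p_hat is the fraction of bootstrap
    samples (size n, drawn with replacement from n objects) that omit a
    fixed object.  Since p = (1 - 1/n)^n, the estimate carries a small
    bias of order 1/(2n) even as m grows."""
    omitted = 0
    for _ in range(m):
        # object 0 is omitted iff none of the n draws picks index 0
        if all(random.randrange(n) != 0 for _ in range(n)):
            omitted += 1
    return m / omitted
```

Calling `bootstrap_estimate_e()` typically returns a value a little above $e$, reflecting the finite-$n$ bias discussed below.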
Results
I conducted several simulations for various numbers of bootstrap replications. Standard errors are estimated using normal intervals.
Note that the choice of $n$, the number of objects being bootstrapped, sets an absolute upper limit on the accuracy of the results, because the Monte Carlo procedure is estimating $p$, and $p$ depends only on $n$. Setting $n$ to be unnecessarily large will just encumber your computer, either because you only need a "rough" approximation to $e$ or because the bias will be swamped by variance due to the Monte Carlo. These results are for $n=10^3$, and $p^{-1}\approx e$ is accurate to the third decimal.
This plot shows that the choice of $m$ has direct and profound consequences for the stability in $\hat{p}$. The blue dashed line shows $p$ and the red line shows $e$. As expected, increasing the sample size produces ever-more accurate estimates $\hat{p}$.
I wrote an embarrassingly long R script for this. Suggestions for improvement can be submitted on the back of a $20 bill.
library(boot)
library(plotrix)
n <- 1e3
## if p_hat is estimated with 0 variance (in the limit of infinite bootstraps), then the best estimate we can come up with is biased by exactly this much:
approx <- 1/((1-1/n)^n)
dat <- c("A", rep("B", n-1))
indicator <- function(x, ndx) xor("A"%in%x[ndx], TRUE) ## Because we want to count when "A" is *not* in the bootstrap sample
p_hat <- function(dat, m=1e3){
  foo <- boot(data=dat, statistic=indicator, R=m)
  1/mean(foo$t)  ## returns 1/p_hat, i.e. the estimate of e
}
reps <- replicate(100, p_hat(dat))
boxplot(reps)
abline(h=exp(1),col="red")
p_mean <- NULL
p_var <- NULL
for(i in 1:10){
  reps <- replicate(2^i, p_hat(dat))
  p_mean[i] <- mean(reps)
  p_var[i] <- sd(reps)
}
plotCI(2^(1:10), p_mean, uiw=qnorm(0.975)*p_var/sqrt(2^(1:10)),xlab="m", log="x", ylab=expression(hat(p)), main=expression(paste("Monte Carlo Estimates of ", tilde(e))))
abline(h=approx, col='red')
|
5,651
|
Approximate $e$ using Monte Carlo Simulation
|
Solution 1:
For a Poisson $\mathcal{P}(\lambda)$ distribution, $$\mathbb{P}(X=k)=\frac{\lambda^k}{k!}\,e^{-\lambda}$$Therefore, if $X\sim\mathcal{P}(1)$,
$$\mathbb{P}(X=0)=\mathbb{P}(X=1)=e^{-1}$$which means you can estimate $e^{-1}$ by a Poisson simulation. And Poisson simulations can be derived from an exponential distribution generator (if not in the most efficient manner).
Remark 1: As discussed in the comments, this is a rather convoluted argument since
simulating from a Poisson distribution or equivalently an Exponential
distribution may be hard to imagine without involving a log or an exp
function... But then W. Huber came to the rescue of this answer with a most elegant solution based on ordered uniforms. It is, however, an approximation, since the distribution of a uniform spacing $U_{(i:n)}-U_{(i-1:n)}$ is a Beta $\mathfrak{B}(1,n)$, implying that $$\mathbb{P}(n\{U_{(i:n)}-U_{(i-1:n)}\}\ge 1)=\left(1-\frac{1}{n}\right)^n$$which converges to $e^{-1}$ as $n$ grows to infinity. As another aside that answers the comments, von Neumann's 1951 exponential generator only uses uniform generations.
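The ordered-uniforms idea can be sketched in Python; note that the frequency estimates $(1-1/n)^n$ rather than $e^{-1}$ exactly, so there is a (vanishing) bias of order $1/n$:

```python
import random

def spacing_estimate_inv_e(n=100_000):
    """With n+1 uniform points on [0, n], the fraction of consecutive
    spacings of length >= 1 estimates (1 - 1/n)^n, which tends to 1/e."""
    pts = sorted(random.uniform(0, n) for _ in range(n + 1))
    long_gaps = sum(1 for a, b in zip(pts, pts[1:]) if b - a >= 1.0)
    return long_gaps / n

print(1.0 / spacing_estimate_inv_e())  # ≈ 2.718
```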
Solution 2:
Another way to achieve a representation of the constant $e$ as an integral is to recall that, when $$X_1,X_2\stackrel{\text{iid}}{\sim}\mathfrak{N}(0,1)$$ then $$(X_1^2+X_2^2)\sim\chi^2_2,$$ which is also an $\mathcal{E}(1/2)$ distribution. Therefore,
$$\mathbb{P}(X_1^2+X_2^2\ge 2)=1-\{1-\exp(-2/2)\}=e^{-1}$$
A second approach to approximating $e$ by Monte Carlo is thus to simulate normal pairs $(X_1,X_2)$ and monitor the frequency of times $X_1^2+X_2^2\ge 2$. In a sense it is the opposite of the Monte Carlo approximation of $\pi$ related to the frequency of times $X_1^2+X_2^2<1$...
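A quick Python sketch of this (using `random.gauss` for the normal pairs; note the normal generator itself typically involves a log internally, which is part of why this solution is a bit circular):

```python
import random

def normal_pair_estimate_e(n=200_000):
    """Monitor how often X1^2 + X2^2 >= 2 for iid standard normal pairs;
    that probability is exp(-1), so its reciprocal frequency estimates e."""
    hits = sum(
        1 for _ in range(n)
        if random.gauss(0.0, 1.0) ** 2 + random.gauss(0.0, 1.0) ** 2 >= 2.0
    )
    return n / hits

print(normal_pair_estimate_e())  # ≈ 2.718
```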
Solution 3:
My Warwick University colleague M. Pollock pointed out another Monte Carlo approximation, called Forsythe's method: the idea is to run a sequence of uniform generations $u_1,u_2,...$ until $u_{n+1}>u_{n}$. The expectation of the corresponding stopping rule $N$, the number of times the uniform sequence went down, is then $e$, while the probability that $N$ is odd is $e^{-1}$! (Forsythe's method actually aims at simulating from any density of the form $\exp G(x)$, hence is more general than approximating $e$ and $e^{-1}$.)
This is quite parallel to Gnedenko's approach used in Aksakal's
answer, so I wonder if one can be derived from the other. At the very least, both stopping rules have the same distribution, with survival probability $\mathbb{P}(N>n)=1/n!$.
A quick R implementation of Forsythe's method is to forgo following precisely the sequence of uniforms in favour of larger blocks, which allows for parallel processing:
use=runif(n)
band=max(diff((1:(n-1))[diff(use)>0]))+1
bends=apply(apply((apply(matrix(use[1:((n%/%band)*band)],nrow=band),
2,diff)<0),2,cumprod),2,sum)
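A plain sequential Python sketch of Forsythe's stopping rule (without the block trick above), tracking both $E[N]$ and $\mathbb{P}(N \text{ odd})$, where $N$ counts the total number of uniforms drawn until one exceeds its predecessor:

```python
import random

def forsythe_run():
    """Draw uniforms until one exceeds its predecessor; return the total
    number drawn (the stopping time N, with P(N >= n) = 1/(n-1)!)."""
    prev = random.random()
    n = 1
    while True:
        u = random.random()
        n += 1
        if u > prev:
            return n
        prev = u

def forsythe_estimates(trials=200_000):
    """Return two estimates of e: the mean of N (E[N] = e) and the
    reciprocal of the odd-N frequency (P(N odd) = 1/e)."""
    runs = [forsythe_run() for _ in range(trials)]
    return sum(runs) / trials, trials / sum(r % 2 for r in runs)
```

Calling `forsythe_estimates()` returns a pair of values, both close to $e$.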
|
5,652
|
Approximate $e$ using Monte Carlo Simulation
|
Not a solution ... just a quick comment that is too long for the comment box.
Aksakal
Aksakal posted a solution where we calculate the expected number of standard uniform drawings needed for their sum to exceed 1. In Mathematica, my first formulation was:
mrM := NestWhileList[(Random[] + #) &, Random[], #<1 &]
Mean[Table[Length[mrM], {10^6}]]
EDIT: Just had a quick play with this, and the following code (same method - also in Mma - just different code) is about 10 times faster:
Mean[Table[Module[{u=Random[], t=1}, While[u<1, u=Random[]+u; t++]; t] , {10^6}]]
Xian / Whuber
Whuber has suggested fast cool code to simulate Xian's solution 1:
R version: n <- 1e5; 1/mean(n*diff(sort(runif(n+1))) > 1)
Mma version: n=10^6; 1. / Mean[UnitStep[Differences[Sort[RandomReal[{0, n}, n + 1]]] - 1]]
which he notes is 20 times faster than the first code (or about twice as fast as the new code above).
Just for fun, I thought it would be interesting to see whether both approaches are equally efficient (in a statistical sense). To do so, I generated 2000 estimates of $e$ using:
Aksakal's method: dataA
Xian's method 1 using whuber code: dataB
... both in Mathematica. The following diagram contrasts nonparametric kernel density estimates of the resulting dataA and dataB data sets.
So, while whuber's code (red curve) is about twice as fast, the method does not appear to be as 'reliable'.
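The same comparison can be sketched in Python (a hypothetical re-implementation with smaller samples than the original experiment): generate repeated estimates by both methods and compare their spreads.

```python
import random
import statistics

def aksakal_trial():
    """Number of uniforms drawn until their sum exceeds 1."""
    s, n = 0.0, 0
    while s <= 1.0:
        s += random.random()
        n += 1
    return n

def est_aksakal(n=20_000):
    """Aksakal's estimator: mean trial length over n trials."""
    return sum(aksakal_trial() for _ in range(n)) / n

def est_spacings(n=20_000):
    """Xi'an/whuber estimator: reciprocal frequency of spacings >= 1
    among n+1 uniform points on [0, n]."""
    pts = sorted(random.uniform(0, n) for _ in range(n + 1))
    return n / sum(1 for a, b in zip(pts, pts[1:]) if b - a >= 1.0)

a = [est_aksakal() for _ in range(50)]
b = [est_spacings() for _ in range(50)]
print(statistics.stdev(a), statistics.stdev(b))  # the spacings estimates spread more
```

With equal sample sizes, the spacings-based estimates show a visibly larger standard deviation, consistent with the kernel density plot above.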
|
5,653
|
Approximate $e$ using Monte Carlo Simulation
|
Here is another way it can be done, though it is quite slow. I make no claim to efficiency, but offer this alternative in the spirit of completeness.
Contra Xi'an's answer, I will assume for the purposes of this question that you are able to generate and use a sequence of $n$ uniform pseudo-random variables $U_1, \cdots , U_n \sim \text{IID U}(0,1)$ and you then need to estimate $e$ by some method using basic arithmetic operations (i.e., you cannot use logarithmic or exponential functions or any distributions that use these functions).$^\dagger$ The present method is motivated by a simple result involving uniform random variables:
$$\mathbb{E} \Bigg( \frac{\mathbb{I}(U_i \geqslant 1 / e) }{U_i} \Bigg) = \int \limits_{1/e}^1 \frac{du}{u} = 1.$$
Estimating $e$ using this result: We first sort the sample values into descending order to obtain the order statistics $u_{(1)} \geqslant \cdots \geqslant u_{(n)}$ and then define the partial sums:
$$S_n(k) \equiv \frac{1}{n} \sum_{i=1}^k \frac{1}{u_{(i)}} \quad \text{for all } k = 1, .., n.$$
Now, let $m \equiv \min \{ k \,|\, S_n(k) \geqslant 1 \}$ and then estimate $1/e$ by interpolating between the two ordered uniform values that straddle the cut-off. This gives an estimator for $e$
given by:
$$\hat{e} \equiv \frac{2}{u_{(m-1)} + u_{(m)}}.$$
This method has some slight bias (owing to the linear interpolation of the cut-off point for $1/e$) but it is a consistent estimator for $e$. The method is fairly easy to implement, but it requires sorting the values, which makes it slow and more computationally intensive than deterministic calculation of $e$.
Implementation in R: The method can be implemented in R using runif to generate uniform values. The code is as follows:
EST_EULER <- function(n) {
  U <- sort(runif(n), decreasing = TRUE)
  S <- cumsum(1/U)/n
  m <- min(which(S >= 1))
  2/(U[m-1] + U[m])
}
Implementing this code gives convergence to the true value of $e$, but it is very slow compared to deterministic methods.
set.seed(1234);
EST_EULER(10^3);
[1] 2.715426
EST_EULER(10^4);
[1] 2.678373
EST_EULER(10^5);
[1] 2.722868
EST_EULER(10^6);
[1] 2.722207
EST_EULER(10^7);
[1] 2.718775
EST_EULER(10^8);
[1] 2.718434
> exp(1)
[1] 2.718282
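A direct Python port of the estimator (same logic as the R function, with 0-based indexing):

```python
import random

def est_euler_py(n=200_000):
    """Sort uniforms in descending order, accumulate
    S_n(k) = (1/n) * sum of 1/u_(i), stop at the first k with S_n(k) >= 1,
    and interpolate across the cut-off."""
    u = sorted((random.random() for _ in range(n)), reverse=True)
    s = 0.0
    for m in range(n):
        s += 1.0 / (u[m] * n)
        if s >= 1.0:
            return 2.0 / (u[m - 1] + u[m])

print(est_euler_py())  # ≈ 2.72
```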
$^\dagger$ I take the view that we want to avoid any method that makes use of any transformation that involves an exponential or logarithm. If we can use densities that use exponentials in their definition then it is possible to derive $e$ from these algebraically using a density call.
|
5,654
|
Approximate $e$ using Monte Carlo Simulation
|
Method requiring an ungodly amount of samples
First you need to be able to sample from a normal distribution. Assuming you are going to exclude the use of the function $f(x) = e^x$, or look-up tables derived from that function, you can produce approximate samples from the normal distribution via the CLT. For example, if you can sample from a uniform(0,1) distribution, then $\sqrt{12n}\,(\bar x - \tfrac{1}{2}) \;\dot\sim\; N(0,1)$. As pointed out by whuber, to have the final estimate approach $e$ as the sample size approaches $\infty$, it would be required that the number of uniform samples used approaches $\infty$ as the sample size approaches infinity.
Now, if you can sample from a normal distribution, with large enough samples you can get consistent estimates of the density of $N(0,1)$. This can be done with histograms or kernel smoothers (but be careful not to use a Gaussian kernel, to follow your no-$e^{x}$ rule!). To get your density estimates to be consistent, you will need to let the degrees of freedom (the number of bins in a histogram, or the inverse of the bandwidth for a smoother) approach infinity, but slower than the sample size.
So now, with lots of computational power, you can approximate the density of a $N(0,1)$, i.e. $\hat \phi(x)$. Since $\phi(\sqrt{2}) = (2 \pi)^{-1/2} e^{-1}$, your estimate is $\hat e = \big(\hat \phi(\sqrt{2}) \sqrt{2 \pi}\big)^{-1}$.
If you want to go totally nuts, you can even estimate $\sqrt{2}$ and $\sqrt{2\pi}$ using the methods you discussed earlier.
Method requiring very few samples, but causing an ungodly amount of numerical error
A completely silly, but very efficient, answer based on a comment I made:
Let $X \sim \text{uniform}(-1, 1)$. Define $Y_n = | (\bar x)^n|$. Define $\hat e = (1 - Y_n)^{-1/Y_n}$.
This will converge very fast, but also run into extreme numerical error.
whuber pointed out that this uses the power function, which typically calls the exp function. This could be sidestepped by discretizing $Y_n$, such that $1/Y_n$ is an integer and the power could be replaced with repeated multiplication. It would be required that as $n \rightarrow \infty$, the discretizing of $Y_n$ would get finer and finer, and the discretization would have to exclude $Y_n = 0$. With this, the estimator theoretically (i.e. in a world in which numeric error does not exist) would converge to $e$, and quite fast!
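A minimal Python sketch of the histogram recipe above (the function name and tuning constants are my own choices; it helps itself to `sqrt` and `pi`, which the answer allows you to estimate separately if you want to be strict):

```python
import math
import random

def estimate_e_via_density(n_samples=200_000, n_unif=24, width=0.1, seed=7):
    """Estimate e from a histogram estimate of the N(0,1) density at sqrt(2)."""
    rng = random.Random(seed)
    target = math.sqrt(2)
    hits = 0
    for _ in range(n_samples):
        s = sum(rng.random() for _ in range(n_unif))
        z = (s - n_unif / 2) * math.sqrt(12 / n_unif)  # CLT-standardized sum of uniforms
        if abs(z - target) < width / 2:
            hits += 1
    phi_hat = hits / (n_samples * width)           # histogram density estimate at sqrt(2)
    return 1 / (phi_hat * math.sqrt(2 * math.pi))  # since phi(sqrt 2) * sqrt(2 pi) = 1/e
```

With these settings only a few thousand samples land in the bin, so expect a few percent of error; as the answer notes, consistency requires letting both the sample count and the bin resolution grow.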
|
5,655
|
Approximate $e$ using Monte Carlo Simulation
|
If you do not have a calculator (i.e. you cannot compute the exponential $e$ indirectly by using some related functions, like sampling from a normal or exponential distribution) and you have only coin flips or dice rolls* available to you, then you could use the following puzzle to estimate the number $e$:
The number $e$ appears in the expression for the expected value in the frog problem with negative steps. We have $E[J_1] = 2e-2$, so we can approximate $e$ from an estimate $\hat{\mu}_{J_1}$ of $\mu_{J_1} = E[J_1]$ via $\hat{e} = 0.5\hat{\mu}_{J_1}+1$.
*and dice rolls could be constructed from coin flips if you want to be more restrictive
|
5,656
|
Approximate $e$ using Monte Carlo Simulation
|
$$\int_1^2 \frac{1}{x}dx = \ln{2}$$
So if you draw $x$ uniformly from $[1,2]$ and $y$ uniformly from $[0,1]$, the fraction of points whose product $xy$ is less than $1$ would converge to $\ln{2}$ by the LLN.
You can then get to $e$ by
$$2^{\frac{1}{\ln{2}}} = e$$
One might raise the issue that exponentiation might require knowledge of $e$ itself. I don’t have a good answer for that.
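A minimal Python sketch of this scheme (names are illustrative; note that the final exponentiation is exactly the step flagged above):

```python
import math
import random

def estimate_e_via_ln2(n=200_000, seed=1):
    """Estimate ln 2 as P(xy < 1) for x ~ U(1,2), y ~ U(0,1), then recover e."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.uniform(1, 2) * rng.random() < 1)
    ln2_hat = hits / n        # LLN: converges to ln 2
    return 2 ** (1 / ln2_hat)
```

With $n = 200{,}000$ the standard error of the $\ln 2$ estimate is about $0.001$, so the resulting estimate of $e$ is typically within a few thousandths of the true value.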
|
5,657
|
Approximate $e$ using Monte Carlo Simulation
|
The Python version of this is the following if anyone is curious:
import random

def estimate_e(n):
    # average number of Uniform(0,1) draws needed for their running sum to exceed 1
    total = 0
    for _ in range(n):
        s, count = 0.0, 0
        while s < 1:
            s += random.random()
            count += 1
        total += count
    return total / n

n = int(input("Number of iterations: "))
print(estimate_e(n))
|
5,658
|
McFadden's Pseudo-$R^2$ Interpretation
|
So I figured I'd sum up what I've learned about McFadden's pseudo $R^2$ as a proper answer.
The seminal reference that I can see for McFadden's pseudo $R^2$ is: McFadden, D. (1974) “Conditional logit analysis of qualitative choice behavior.” Pp. 105-142 in P. Zarembka (ed.), Frontiers in Econometrics. Academic Press. http://eml.berkeley.edu/~mcfadden/travel.html
Figure 5.5 shows the relationship between $\rho^2$ and traditional $R^2$ measures from OLS. My interpretation is that larger values of $\rho^2$ (McFadden's pseudo $R^2$) are better than smaller ones.
The interpretation of McFadden's pseudo $R^2$ between 0.2-0.4 comes from a book chapter he contributed to: Behavioural Travel Modelling. Edited by David Hensher and Peter Stopher. 1979. McFadden contributed Ch. 15 "Quantitative Methods for Analyzing Travel Behaviour on Individuals: Some Recent Developments". Discussion of model evaluation (in the context of multinomial logit models) begins on page 306 where he introduces $\rho^2$ (McFadden's pseudo $R^2$). McFadden states "while the $R^2$ index is a more familiar concept to planner who are experienced in OLS, it is not as well behaved as the $\rho^2$ measure, for ML estimation. Those unfamiliar with $\rho^2$ should be forewarned that its values tend to be considerably lower than those of the $R^2$ index...For example, values of 0.2 to 0.4 for $\rho^2$ represent EXCELLENT fit."
So basically, $\rho^2$ can be interpreted like $R^2$, but don't expect it to be as big. And values from 0.2-0.4 indicate (in McFadden's words) excellent model fit.
|
5,659
|
McFadden's Pseudo-$R^2$ Interpretation
|
McFadden's $R^2$ is defined as $1 - LL_{mod} / LL_0$, where $LL_{mod}$ is the log likelihood value for the fitted model and $LL_0$ is the log likelihood for the null model which includes only an intercept as predictor (so that every individual is predicted the same probability of 'success').
For a logistic regression model the log likelihood value is always negative (because the likelihood contribution from each observation is a probability between 0 and 1). If your model doesn't really predict the outcome better than the null model, $LL_{mod}$ will not be much larger than $LL_0$, and so $LL_{mod} / LL_0 \approx 1$, and McFadden's pseudo-$R^2$ is close to 0 (your model has no predictive value).
Conversely, if your model was really good, those individuals with a success (1) outcome would have a fitted probability close to 1, and vice versa for those with a failure (0) outcome. In this case, if you go through the likelihood calculation, the log likelihood contribution from each individual will be close to zero, such that $LL_{mod}$ is close to zero, and McFadden's pseudo-$R^2$ is close to 1, indicating very good predictive ability.
As to what can be considered a good value, my personal view is that, as with similar questions in statistics (e.g. what constitutes a large correlation?), there can never be a definitive answer. Last year I wrote a blog post about McFadden's $R^2$ in logistic regression, which has some further simulation illustrations.
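A hedged, self-contained illustration of the $1 - LL_{mod}/LL_0$ formula (simulated data and a hand-rolled Newton-Raphson fit rather than any particular package; all names are my own):

```python
import numpy as np

def logit_loglik(beta, X, y):
    """Log likelihood of a logistic regression at coefficients beta."""
    eta = X @ beta
    return float(y @ eta - np.logaddexp(0, eta).sum())

def fit_logit(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X @ beta)))
        W = p * (1 - p)
        beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

# simulated data: one informative predictor
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.5 * x)))).astype(float)

X = np.column_stack([np.ones_like(x), x])   # fitted model: intercept + x
X0 = np.ones((len(y), 1))                   # null model: intercept only
ll_mod = logit_loglik(fit_logit(X, y), X, y)
ll_0 = logit_loglik(fit_logit(X0, y), X0, y)
mcfadden = 1 - ll_mod / ll_0
```

Both log likelihoods are negative, with $LL_{mod}$ closer to zero than $LL_0$, so the ratio is below 1 and McFadden's measure lands strictly between 0 and 1.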
|
5,660
|
McFadden's Pseudo-$R^2$ Interpretation
|
I did some more focused research on this topic, and I found that interpretations of McFadden's pseudo $R^2$ (also known as likelihood-ratio index) are not clear; however, it can range from 0 to 1, but will never reach or exceed 1 as a result of its calculation.
A rule of thumb that I found to be quite helpful is that a McFadden's pseudo $R^2$ ranging from 0.2 to 0.4 indicates very good model fit. As such, the model mentioned above with a McFadden's pseudo $R^2$ of 0.192 is likely not a terrible model, at least by this metric, but it isn't particularly strong either.
It is also important to note that McFadden's pseudo $R^2$ is best used to compare different specifications of the same model (i.e. nested models). In reference to the aforementioned example, the 6 variable model (McFadden’s pseudo $R^2$ = 0.192) fits the data better than the 5 variable model (McFadden’s pseudo $R^2$ = 0.131), which I formally tested using a log-likelihood ratio test, which indicates there is a significant difference (p < 0.001) between the two models, and thus the 6 variable model is preferred for the given data set.
|
5,661
|
McFadden's Pseudo-$R^2$ Interpretation
|
In case anyone is still interested in finding McFadden's own words, here is the link. In a footnote, McFadden (1977, p. 35) wrote that "values of .2 to .4 for [$\rho^2$] represent an excellent fit." The paper is available online.
http://cowles.yale.edu/sites/default/files/files/pub/d04/d0474.pdf
|
5,662
|
R - Confused on Residual Terminology
|
As requested, I illustrate using a simple regression using the mtcars data:
fit <- lm(mpg~hp, data=mtcars)
summary(fit)
Call:
lm(formula = mpg ~ hp, data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-5.7121 -2.1122 -0.8854 1.5819 8.2360
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 30.09886 1.63392 18.421 < 2e-16 ***
hp -0.06823 0.01012 -6.742 1.79e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.863 on 30 degrees of freedom
Multiple R-squared: 0.6024, Adjusted R-squared: 0.5892
F-statistic: 45.46 on 1 and 30 DF, p-value: 1.788e-07
The mean squared error (MSE) is the mean of the square of the residuals:
# Mean squared error
mse <- mean(residuals(fit)^2)
mse
[1] 13.98982
Root mean squared error (RMSE) is then the square root of MSE:
# Root mean squared error
rmse <- sqrt(mse)
rmse
[1] 3.740297
Residual sum of squares (RSS) is the sum of the squared residuals:
# Residual sum of squares
rss <- sum(residuals(fit)^2)
rss
[1] 447.6743
Residual standard error (RSE) is the square root of (RSS / degrees of freedom):
# Residual standard error
rse <- sqrt( sum(residuals(fit)^2) / fit$df.residual )
rse
[1] 3.862962
The same calculation, simplified because we have previously calculated rss:
sqrt(rss / fit$df.residual)
[1] 3.862962
The term test error in the context of regression (and other predictive analytics techniques) usually refers to calculating a test statistic on test data, distinct from your training data.
In other words, you estimate a model using a portion of your data (often an 80% sample) and then calculate the error using the hold-out sample. Again, I illustrate using mtcars, this time with an 80% sample:
set.seed(42)
train <- sample.int(nrow(mtcars), 26)
train
[1] 30 32 9 25 18 15 20 4 16 17 11 24 19 5 31 21 23 2 7 8 22 27 10 28 1 29
Estimate the model, then predict with the hold-out data:
fit <- lm(mpg~hp, data=mtcars[train, ])
pred <- predict(fit, newdata=mtcars[-train, ])
pred
Datsun 710 Valiant Merc 450SE Merc 450SL Merc 450SLC Fiat X1-9
24.08103 23.26331 18.15257 18.15257 18.15257 25.92090
Combine the original data and prediction in a data frame
test <- data.frame(actual=mtcars$mpg[-train], pred)
test$error <- with(test, pred-actual)
test
actual pred error
Datsun 710 22.8 24.08103 1.2810309
Valiant 18.1 23.26331 5.1633124
Merc 450SE 16.4 18.15257 1.7525717
Merc 450SL 17.3 18.15257 0.8525717
Merc 450SLC 15.2 18.15257 2.9525717
Fiat X1-9 27.3 25.92090 -1.3791024
Now compute your test statistics in the normal way. I illustrate MSE and RMSE:
test.mse <- with(test, mean(error^2))
test.mse
[1] 7.119804
test.rmse <- sqrt(test.mse)
test.rmse
[1] 2.668296
Note that this answer ignores weighting of the observations.
|
5,663
|
R - Confused on Residual Terminology
|
The original poster asked for an "explain like I'm 5" answer. Let's say your school teacher invites you and your schoolmates to help guess the teacher's table width. Each of the 20 students in class can choose a device (ruler, scale, tape, or yardstick) and is allowed to measure the table 10 times. You all are asked to use different starting locations on the device to avoid reading the same number over and over again; the starting reading then has to be subtracted from the ending reading to finally get one width measurement (you recently learned how to do that type of math).
There were in total 200 width measurements taken by the class (20 students, 10 measurements each). The observations are handed over to the teacher who will crunch the numbers. Subtracting each student's observations from a reference value will result in another 200 numbers, called deviations. The teacher averages each student's sample separately, obtaining 20 means. Subtracting each student's observations from their individual mean will result in 200 deviations from the mean, called residuals. If the mean residual were to be calculated for each sample, you'd notice it's always zero. If instead we square each residual, average them, and finally undo the square, we obtain the standard deviation. (By the way, we call that last calculation bit the square root (think of finding the base or side of a given square), so the whole operation is often called root-mean-square, for short; the standard deviation of observations equals the root-mean-square of residuals.)
But the teacher already knew the true table width, based on how it was designed, built, and checked in the factory. So another 200 numbers, called errors, can be calculated as the deviation of observations with respect to the true width. A mean error can be calculated for each student sample. Likewise, 20 standard deviations of the error, or standard errors, can be calculated for the observations. Twenty root-mean-square error values can be calculated as well. The three sets of 20 values are related as sqrt(me^2 + se^2) = rmse, in order of appearance. Based on rmse, the teacher can judge which student provided the best estimate for the table width. Furthermore, by looking separately at the 20 mean errors and 20 standard error values, the teacher can instruct each student how to improve their readings.
As a check, the teacher subtracted the respective mean error from each error, resulting in yet another 200 numbers, which we'll call residual errors (that's not often done). As above, the mean residual error is zero, so the standard deviation of the residual errors, or standard residual error, is the same as the standard error, and in fact, so is the root-mean-square residual error, too. (See below for details.)
Now here is something of interest to the teacher. We can compare each student mean with the rest of the class (20 means total). Just like we defined before these point values:
m: mean (of the observations),
s: standard deviation (of the observations)
me: mean error (of the observations)
se: standard error (of the observations)
rmse: root-mean-square error (of the observations)
we can also define now:
mm: mean of the means
sm: standard deviation of the mean
mem: mean error of the mean
sem: standard error of the mean
rmsem: root-mean-square error of the mean
Only if the class of students is said to be unbiased, i.e., if mem = 0, then sem = sm = rmsem; i.e., the standard error of the mean, standard deviation of the mean, and root-mean-square error of the mean may be the same, provided the mean error of the means is zero.
If we had taken only one sample, i.e., if there were only one student in class, the standard deviation of the observations (s) could be used to estimate the standard deviation of the mean (sm), as sm^2~s^2/n, where n=10 is the sample size (the number of readings per student). The two will agree better as the sample size grows (n=10,11,...; more readings per student) and the number of samples grows (n'=20,21,...; more students in class).
(A caveat: an unqualified "standard error" more often refers to the standard error of the mean, not the standard error of the observations.)
Here are some details of the calculations involved. The true value is denoted t.
Set-to-point operations:
mean: MEAN(X)
root-mean-square: RMS(X)
standard deviation: SD(X) = RMS(X-MEAN(X))
INTRA-SAMPLE SETS:
observations (given), X = {x_i}, i = 1, 2, ..., n=10.
deviations: difference of a set with respect to a fixed point.
residuals: deviation of observations from their mean, R=X-m.
errors: deviation of observations from the true value, E=X-t.
residual errors: deviation of errors from their mean, RE=E-MEAN(E)
INTRA-SAMPLE POINTS (see table 1):
m: mean (of the observations),
s: standard deviation (of the observations)
me: mean error (of the observations)
se: standard error of the observations
rmse: root-mean-square error (of the observations)
INTER-SAMPLE (ENSEMBLE) SETS:
means, M = {m_j}, j = 1, 2, ..., n'=20.
residuals of the mean: deviation of the means from their mean, RM=M-mm.
errors of the mean: deviation of the means from the "truth", EM=M-t.
residual errors of the mean: deviation of errors of the mean from their mean, REM=EM-MEAN(EM)
INTER-SAMPLE (ENSEMBLE) POINTS (see table 2):
mm: mean of the means
sm: standard deviation of the mean
mem: mean error of the mean
sem: standard error (of the mean)
rmsem: root-mean-square error of the mean
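The identity relating the three per-sample summaries, sqrt(me^2 + se^2) = rmse, can be checked numerically. A small Python sketch (the width, bias, and noise values are invented for illustration; the divisor is n, matching the set-to-point definitions above):

```python
import math
import random

# one student's 10 biased, noisy readings of a table of known width
rng = random.Random(0)
true_width = 100.0
obs = [true_width + rng.gauss(0.8, 2.0) for _ in range(10)]

errors = [x - true_width for x in obs]                            # E = X - t
me = sum(errors) / len(errors)                                    # mean error
se = math.sqrt(sum((e - me) ** 2 for e in errors) / len(errors))  # standard error of the observations
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))        # root-mean-square error
# identity: rmse**2 == me**2 + se**2 (exactly, with the n divisor)
```

The identity is exact, not approximate: expanding sum((e - me)^2) shows the cross term cancels, leaving sum(e^2)/n = me^2 + se^2.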
|
5,664
|
R - Confused on Residual Terminology
|
I also feel all the terms are very confusing, so I think it is worth explaining why we have so many metrics.
Here is my note on SSE and RMSE:
First metric: Sum of Squared Errors (SSE). Other names: Residual Sum of Squares (RSS), Sum of Squared Residuals (SSR).
In the optimization community, SSE is widely used because it is the objective being minimized:
$$\underset{\beta}{\text{minimize}} ~ \|X\beta-y\|^2$$
The residual/error term is $e=X\beta-y$, and $\|e\|^2=e^Te$ is called the Sum of Squared Errors (SSE).
Second metric: Root-mean-square error (RMSE). Other name: root-mean-square deviation (RMSD).
RMSE is
$$
\left\|\frac 1 {\sqrt N} ({X\beta-y}) \right\|= \sqrt{\frac 1 N e^Te}
$$
where $N$ is the number of data points.
Here is why we have this metric in addition to the SSE above: it is more "normalized". SSE grows with the amount of data, while the MSE does not; the RMSE additionally expresses the error in the same units as $y$.
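As a sketch with toy numbers (a hypothetical four-point least-squares fit, not data from the question), the three metrics differ only in how the squared residuals are normalized:

```python
# Toy least-squares fit y ≈ a*x + b, then the three error metrics on its residuals.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 2.2, 2.9]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# Closed-form simple linear regression: slope from centered cross-products.
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

e = [a * x + b - y for x, y in zip(xs, ys)]  # residuals e = X*beta - y
sse = sum(r ** 2 for r in e)   # SSE: the optimization objective; grows with n
mse = sse / n                  # MSE: normalized by the number of points
rmse = mse ** 0.5              # RMSE: back in the same units as y
print(sse, mse, rmse)
```

Doubling the dataset (repeating each point) would double SSE but leave MSE and RMSE essentially unchanged, which is the "normalization" advantage described above.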
|
5,665
|
Why is multiple comparison a problem?
|
You've stated something that is a classic counter argument to Bonferroni corrections. Shouldn't I adjust my alpha criterion based on every test I will ever make? This kind of ad absurdum implication is why some people do not believe in Bonferroni style corrections at all. Sometimes the kind of data one deals with in their career is such that this is not an issue. For judges who make one, or very few decisions on each new piece of evidence this is a very valid argument. But what about the judge with 20 defendants and who is basing their judgment on a single large set of data (e.g. war tribunals)?
You're ignoring the kicks at the can part of the argument. Generally scientists are looking for something — a p-value less than alpha. Every attempt to find one is another kick at the can. One will eventually find one if one takes enough shots at it. Therefore, they should be penalized for doing that.
The way you harmonize these two arguments is to realize they are both true. The simplest solution is to consider testing of differences within a single dataset as a kicks at the can kind of problem but that expanding the scope of correction outside that would be a slippery slope.
This is a genuinely difficult problem in a number of fields, notably fMRI, where thousands of data points are being compared and some are bound to come up as significant by chance. Given that the field has historically been very exploratory, one has to do something to correct for the fact that hundreds of brain areas will look significant purely by chance. Therefore, many methods for adjusting the criterion have been developed in that field.
On the other hand, in some fields one might at most be looking at 3 to 5 levels of a variable and just test every combination when a significant ANOVA occurs. This is known to have some problems (Type I errors), but it's not particularly terrible.
It depends on your point of view. The FMRI researcher recognizes a real need for a criterion shift. The person looking at a small ANOVA may feel that there's clearly something there from the test. The proper conservative point of view on the multiple comparisons is to always do something about them but only based on a single dataset. Any new data resets the criterion... unless you're a Bayesian...
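The "kicks at the can" effect is easy to quantify. A sketch with hypothetical settings: with m independent tests each run at level alpha, the chance of at least one false positive is 1-(1-alpha)^m, while testing at the Bonferroni-corrected level alpha/m keeps it at or below roughly alpha:

```python
# Family-wise error rate (FWER) under m independent null hypotheses.
alpha = 0.05
for m in (1, 5, 20, 100):
    fwer_raw = 1 - (1 - alpha) ** m        # P(at least one false positive), uncorrected
    fwer_bonf = 1 - (1 - alpha / m) ** m   # same probability after Bonferroni correction
    print(m, round(fwer_raw, 3), round(fwer_bonf, 3))
```

At m = 20 the uncorrected chance of at least one spurious "discovery" already exceeds 64%, which is exactly why taking many shots at the can warrants a penalty.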
|
Why is multiple comparison a problem?
|
You've stated something that is a classic counter argument to Bonferroni corrections. Shouldn't I adjust my alpha criterion based on every test I will ever make? This kind of ad absurdum implication
|
Why is multiple comparison a problem?
You've stated something that is a classic counter argument to Bonferroni corrections. Shouldn't I adjust my alpha criterion based on every test I will ever make? This kind of ad absurdum implication is why some people do not believe in Bonferroni style corrections at all. Sometimes the kind of data one deals with in their career is such that this is not an issue. For judges who make one, or very few decisions on each new piece of evidence this is a very valid argument. But what about the judge with 20 defendants and who is basing their judgment on a single large set of data (e.g. war tribunals)?
You're ignoring the kicks at the can part of the argument. Generally scientists are looking for something — a p-value less than alpha. Every attempt to find one is another kick at the can. One will eventually find one if one takes enough shots at it. Therefore, they should be penalized for doing that.
The way you harmonize these two arguments is to realize they are both true. The simplest solution is to consider testing of differences within a single dataset as a kicks at the can kind of problem but that expanding the scope of correction outside that would be a slippery slope.
This is a genuinely difficult problem in a number of fields, notably FMRI where there are thousands of data points being compared and there are bound to be some come up as significant by chance. Given that the field has been historically very exploratory one has to do something to correct for the fact that hundreds of areas of the brain will look significant purely by chance. Therefore, many methods of adjustment of criterion have been developed in that field.
On the other hand, in some fields one might at most be looking at 3 to 5 levels of a variable and always just test every combination if a significant ANOVA occurs. This is known to have some problems (type 1 errors) but it's not particularly terrible.
It depends on your point of view. The FMRI researcher recognizes a real need for a criterion shift. The person looking at a small ANOVA may feel that there's clearly something there from the test. The proper conservative point of view on the multiple comparisons is to always do something about them but only based on a single dataset. Any new data resets the criterion... unless you're a Bayesian...
|
Why is multiple comparison a problem?
You've stated something that is a classic counter argument to Bonferroni corrections. Shouldn't I adjust my alpha criterion based on every test I will ever make? This kind of ad absurdum implication
|
5,666
|
Why is multiple comparison a problem?
|
Well-respected statisticians have taken a wide variety of positions on multiple comparisons. It's a subtle subject. If someone thinks it's simple, I'd wonder how much they've thought about it.
Here's an interesting Bayesian perspective on multiple testing from Andrew Gelman: Why we don't (usually) worry about multiple comparisons.
|
5,667
|
Why is multiple comparison a problem?
|
Related to the comment earlier, what the fMRI researcher should remember is that clinically-important outcomes are what matter, not the density shift of a single pixel on a fMRI of the brain. If it doesn't result in a clinical improvement/detriment, it doesn't matter. That is one way of reducing the concern about multiple comparisons.
See also:
Bauer, P. (1991). Multiple testing in clinical trials. Stat Med, 10(6), 871-89; discussion 889-90.
Proschan, M. A. & Waclawiw, M. A. (2000). Practical guidelines for multiplicity adjustment in clinical trials. Control Clin Trials, 21(6), 527-39.
Rothman, K. J. (1990). No adjustments are needed for multiple comparisons. Epidemiology (Cambridge, Mass.), 1(1), 43-6.
Perneger, T. V. (1998). What's wrong with bonferroni adjustments. BMJ (Clinical Research Ed.), 316(7139), 1236-8.
|
5,668
|
Why is multiple comparison a problem?
|
To fix ideas, I will take the case where you observe $n$ independent random variables $(X_i)_{i=1,\dots,n}$ such that for $i=1,\dots,n$, $X_i$ is drawn from $\mathcal{N}(\theta_i,1)$. Suppose you want to know which ones have non-zero mean; formally, you want to test:
$H_{0i} : \theta_i=0$ vs. $H_{1i} : \theta_i\neq 0$
Definition of a threshold: You have $n$ decisions to make, and you may have different aims. For a given test $i$ you will choose a threshold $\tau_i$ and decide not to accept $H_{0i}$ if $|X_i|>\tau_i$.
Different options: You have to choose the thresholds $\tau_i$, and for that you have two options:
choose the same threshold for every test, or
choose a different threshold for each test (most often a data-dependent threshold, see below).
Different aims: These options can be driven by different aims, such as
controlling the probability of wrongly rejecting $H_{0i}$ for one or more $i$ (the family-wise error rate), or
controlling the expectation of the false-alarm ratio (the False Discovery Rate).
Whatever your aim is in the end, it is a good idea to use a data-dependent threshold.
My answer to your question: Your intuition is related to the main heuristic for choosing a data-dependent threshold. It is the following (it is at the origin of Holm's procedure, which is more powerful than Bonferroni's):
Imagine you have already made a decision for the $p$ smallest $|X_{i}|$, and the decision is to accept $H_{0i}$ for all of them. Then you only have $n-p$ comparisons left to make, and you haven't taken any risk of wrongly rejecting $H_{0i}$! Since you haven't used your budget, you may take a little more risk on the remaining tests and choose a larger threshold.
In the case of your judges: Assume (as I guess you should) that both judges have the same lifetime budget of false accusations. The 60-year-old judge may be less conservative if, in the past, he did not accuse anyone! But if he has already made a lot of accusations, he will be more conservative, perhaps even more so than the younger judge.
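The budget heuristic above is exactly what Holm's step-down procedure implements. A minimal sketch (the p-values and the function name are hypothetical):

```python
def holm_reject(pvalues, alpha=0.05):
    """Holm's step-down procedure at family-wise level alpha: test p-values from
    smallest to largest, spending alpha/m, alpha/(m-1), ... as hypotheses are handled."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvalues[i] <= alpha / (m - k):  # the threshold loosens at each step
            reject[i] = True
        else:
            break  # step-down: once one hypothesis survives, all larger p-values survive
    return reject

print(holm_reject([0.001, 0.02, 0.04, 0.30]))  # [True, False, False, False]
```

The first step uses the Bonferroni threshold alpha/m, so every Bonferroni rejection is also a Holm rejection; Holm can only reject more, which is the sense in which it is uniformly more powerful.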
|
5,669
|
Why is multiple comparison a problem?
|
An illustrative (and funny) article (http://www.jsur.org/ar/jsur_ben102010.pdf) about the need to correct for multiple testing in practical studies involving many variables, e.g. functional MRI (fMRI). This short quotation contains most of the message:
"[...] we completed an fMRI scanning session with a post-mortem Atlantic Salmon as the subject. The salmon was shown the same social perspective-taking task that was later administered to a group of human subjects."
That is, in my experience, a terrific argument for encouraging users to apply multiple-testing corrections.
|
5,670
|
How to perform two-sample t-tests in R by inputting sample statistics rather than the raw data?
|
You can write your own function based on what we know about the mechanics of the two-sample $t$-test. For example, this will do the job:
# m1, m2: the sample means
# s1, s2: the sample standard deviations
# n1, n2: the sample sizes
# m0: the null value for the difference in means to be tested for. Default is 0.
# equal.variance: whether or not to assume equal variance. Default is FALSE.
t.test2 <- function(m1,m2,s1,s2,n1,n2,m0=0,equal.variance=FALSE)
{
if( equal.variance==FALSE )
{
se <- sqrt( (s1^2/n1) + (s2^2/n2) )
# welch-satterthwaite df
df <- ( (s1^2/n1 + s2^2/n2)^2 )/( (s1^2/n1)^2/(n1-1) + (s2^2/n2)^2/(n2-1) )
} else
{
# pooled standard deviation, scaled by the sample sizes
se <- sqrt( (1/n1 + 1/n2) * ((n1-1)*s1^2 + (n2-1)*s2^2)/(n1+n2-2) )
df <- n1+n2-2
}
t <- (m1-m2-m0)/se
dat <- c(m1-m2, se, t, 2*pt(-abs(t),df))
names(dat) <- c("Difference of means", "Std Error", "t", "p-value")
return(dat)
}
Example usage:
set.seed(0)
x1 <- rnorm(100)
x2 <- rnorm(200)
# you'll find this output agrees with that of t.test when you input x1,x2
(tt2 <- t.test2(mean(x1), mean(x2), sd(x1), sd(x2), length(x1), length(x2)))
Difference of means Std Error t p-value
0.01183358 0.11348530 0.10427416 0.91704542
This matches the result of t.test:
(tt <- t.test(x1, x2))
# Welch Two Sample t-test
#
# data: x1 and x2
# t = 0.10427, df = 223.18, p-value = 0.917
# alternative hypothesis: true difference in means is not equal to 0
# 95 percent confidence interval:
# -0.2118062 0.2354734
# sample estimates:
# mean of x mean of y
# 0.02266845 0.01083487
tt$statistic == tt2[["t"]]
# t
# TRUE
tt$p.value == tt2[["p-value"]]
# [1] TRUE
|
5,671
|
How to perform two-sample t-tests in R by inputting sample statistics rather than the raw data?
|
You just calculate it by hand:
$$
t = \frac{(\text{mean}_f - \text{mean}_m) - \text{expected difference}}{SE} \\
~\\
~\\
SE = \sqrt{\frac{sd_f^2}{n_f} + \frac{sd_m^2}{n_m}} \\
~\\
~\\
\text{where, }~~~df = n_m + n_f - 2
$$
The expected difference is typically zero.
If you want the two-sided p-value, use the pt() function:
2 * pt(-abs(t), df)
Putting the code together (note that the $sd^2/n$ terms call for the variance, so the standard deviation 0.5773503 must be squared):
> t = ((1.666667 - 4.500000) - 0) / sqrt(0.5773503^2/3 + 0.5773503^2/4)
> p = 2 * pt(-abs(t), 3 + 4 - 2)
This uses the pooled degrees of freedom ($n_m + n_f - 2$), which is reasonable here because the two samples have the same standard deviation.
|
5,672
|
How to perform two-sample t-tests in R by inputting sample statistics rather than the raw data?
|
You can do the calculations based on the formula in the book (on the web page), or you can generate random data that has the properties stated (see the mvrnorm function in the MASS package) and use the regular t.test function on the simulated data.
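The same idea can be sketched without MASS::mvrnorm (plain rescaling, shown here in Python; the function name and the numbers are hypothetical): draw arbitrary data, then shift and scale it so the sample mean and sd match the published summary statistics exactly, and pass the result to any standard t-test routine.

```python
import random
import statistics

def sample_with_stats(n, mean, sd, seed=0):
    """Generate n points whose sample mean and sample sd match the targets exactly."""
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(n)]
    m, s = statistics.mean(xs), statistics.stdev(xs)
    # Standardize to mean 0, sd 1, then rescale to the target statistics.
    return [(x - m) / s * sd + mean for x in xs]

data = sample_with_stats(12, mean=4.5, sd=0.577)
print(statistics.mean(data), statistics.stdev(data))
```

Because the t-test looks only at the sample mean, sd, and size, any dataset constructed this way reproduces the same test statistic as the original (unseen) data.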
|
5,673
|
How to perform two-sample t-tests in R by inputting sample statistics rather than the raw data?
|
The question asks about R, but the issue can arise with any other statistical software. Stata for example has various so-called immediate commands, which allow calculations from summary statistics alone. See http://www.stata.com/manuals13/rttest.pdf for the particular case of the ttesti command, which applies here.
|
5,674
|
How to perform two-sample t-tests in R by inputting sample statistics rather than the raw data?
|
Another possible solution is to simulate the datasets and then use the standard t test function. It may be less efficient, computationally speaking, but it is very simple.
t.test.from.summary.data <- function(mean1, sd1, n1, mean2, sd2, n2, ...) {
data1 <- scale(1:n1)*sd1 + mean1
data2 <- scale(1:n2)*sd2 + mean2
t.test(data1, data2, ...)
}
Since the t test depends only on the sample summary statistics and disregards the rest of the sample distribution, this function gives exactly the same results (except for variable names) as the t.test function:
x <- c(1.0, 1.2, 2.3, 4.2, 2.1, 3.0, 1.9, 2.0, 3.2, 1.6)
y <- c(3.5, 4.2, 3.3, 2.0, 1.7, 4.5, 2.7, 2.8, 3.3)
m_x <- mean(x)
m_y <- mean(y)
s_x <- sd(x)
s_y <- sd(y)
t.test.from.summary.data(m_x, s_x, 10, m_y, s_y, 9)
Welch Two Sample t-test
data: data1 and data2
t = -1.9755, df = 16.944, p-value = 0.06474
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-1.78101782 0.05879559
sample estimates:
mean of x mean of y
2.250000 3.111111
t.test(x,y)
Welch Two Sample t-test
data: x and y
t = -1.9755, df = 16.944, p-value = 0.06474
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-1.78101782 0.05879559
sample estimates:
mean of x mean of y
2.250000 3.111111
|
5,675
|
When should I use a variational autoencoder as opposed to an autoencoder?
|
VAE is a framework that was proposed as a scalable way to do variational EM (or variational inference in general) on large datasets. Although it has an AE-like structure, it serves a much larger purpose.
Having said that, one can, of course, use VAEs to learn latent representations. VAEs are known to give representations with disentangled factors [1]. This happens due to isotropic Gaussian priors on the latent variables. Modeling them as Gaussians allows each dimension in the representation to push itself as far as possible from the other factors. Also, [1] added a regularization coefficient that controls the influence of the prior.
While isotropic Gaussians are sufficient for most cases, for specific cases, one may want to model priors differently. For example, in the case of sequences, one may want to define priors as sequential models [2].
Coming back to the question, as one can see, the prior gives significant control over how we want to model our latent distribution. This kind of control does not exist in the usual AE framework. This is actually the power of Bayesian models themselves; VAEs simply make it more practical and feasible for large-scale datasets. So, to conclude, if you want precise control over your latent representations and what you would like them to represent, then choose VAE. Sometimes, precise modeling can capture better representations, as in [2]. However, if an AE suffices for the work you do, then just go with an AE; it is simple and uncomplicated. After all, with AEs we are simply doing non-linear PCA.
[1] Early Visual Concept Learning with Unsupervised Deep Learning, 2016
Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell, Shakir Mohamed, Alexander Lerchner
https://arxiv.org/abs/1606.05579
[2] A Recurrent Latent Variable Model for Sequential Data, 2015
Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, Yoshua Bengio
https://arxiv.org/abs/1506.02216
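The objective being discussed — reconstruction error plus a KL term pulling the posterior toward the isotropic Gaussian prior, with the regularization coefficient from [1] — can be sketched concretely. A hypothetical Python sketch (names are illustrative, not from any particular library):

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, log_var))

def beta_vae_loss(recon_error, mu, log_var, beta=1.0):
    """Negative ELBO: reconstruction error + beta * KL(posterior || prior).

    beta = 1 is the standard VAE; beta > 1 is the variant from [1], which
    strengthens the pull toward the isotropic prior (more disentanglement)."""
    return recon_error + beta * kl_to_standard_normal(mu, log_var)

# A latent unit that exactly matches the prior contributes zero KL
# (an "uninformative"/unused unit); any deviation adds a positive penalty:
assert kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]) == 0.0
loss = beta_vae_loss(recon_error=12.5, mu=[0.4, -0.1], log_var=[-0.2, 0.1], beta=4.0)
```

The closed-form KL is what makes the isotropic Gaussian prior so convenient in practice: no sampling is needed to evaluate the regularizer.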
|
5,676
|
When should I use a variational autoencoder as opposed to an autoencoder?
|
The standard autoencoder can be illustrated using the following graph:
As stated in the previous answers it can be viewed as just a nonlinear extension of PCA.
But compared to the variational autoencoder the vanilla autoencoder has the following drawback:
The fundamental problem with autoencoders, for generation, is that the
latent space they convert their inputs to and where they're encoded
vectors lie, may not be continuous or allow easy interpolation.
That is, the encoding part in the above graph cannot deal with inputs that the encoder has never seen before, because the different classes are clustered bluntly and those unseen inputs end up encoded to something located somewhere in the blank space between clusters:
To tackle this problem, the variational autoencoder was created by adding a layer containing a mean and a standard deviation for each hidden variable in the middle layer:
Then even for the same input the decoded output can vary, and the encoded and clustered inputs become smooth:
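This mean-and-standard-deviation layer is usually realised with the reparameterization trick: the decoder is fed a sample z = mu + sigma * eps with eps drawn from a standard normal. A minimal illustrative sketch in pure Python (function names are hypothetical):

```python
import random

def sample_latent(mu, sigma, rng=random):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1).

    All randomness lives in eps, so gradients can flow through mu and sigma;
    each call returns a different code z even for the same (mu, sigma)."""
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

mu, sigma = [0.5, -1.0], [0.1, 0.2]
z1 = sample_latent(mu, sigma)
z2 = sample_latent(mu, sigma)  # differs from z1: same input, varying output
```

This is exactly why "even for the same input the decoded output can vary": the decoder sees a freshly sampled z on every pass.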
So to denoise data or to classify it (filter out dissimilar data), a standard autoencoder is enough, while for image generation we had better employ a variational autoencoder.
In addition, the latent vector in the variational autoencoder can be manipulated. Say we subtract the latent vector for glasses from the latent vector of a person with glasses; decoding the result, we can get the same person without glasses.
Then for image manipulation, we should also use a variational autoencoder.
Reference:
Intuitively Understanding Variational Autoencoders
|
5,677
|
When should I use a variational autoencoder as opposed to an autoencoder?
|
TenaliRaman had some good points but he missed a lot of fundamental concepts as well. First it should be noted that the primary reason to use an AE-like framework is the latent space that allows us to compress the information and hopefully get independent factors out of it that represent high-level features of the data. An important point is that, while AEs can be interpreted as the nonlinear extension of PCA since "X" hidden units would span the same space as the first "X" number of principal components, an AE does not necessarily produce orthogonal components in the latent space (which would amount to a form of disentanglement). Additionally, from a VAE you can get a semblance of the data likelihood (although approximate) and also sample from it (which can be useful for various different tasks). However, if you just want likelihood, there are better (explicit, tractable) density models out there, and if you want to draw samples... well, GANs or the explicit density models with exact likelihood are a better choice.
The prior distribution imposed on the latent units in a VAE only contributes to model fitting through the KL divergence term; reference [1] simply added a hyperparameter multiplier on that term and got a full paper out of it (most of it is fairly obvious). Essentially an "uninformative" prior is one whose per-unit KL divergence is close to zero and doesn't contribute much to the loss, meaning that particular unit is not used for reconstruction in the decoder. The disentanglement comes into play in a VAE naturally because, in the simplest case of multi-modal data, the KL divergence cost is lower when the model assigns a unique latent Gaussian to each mode than when it tries to capture multiple modes with a single Gaussian (which would diverge further from the prior and be penalized heavily by the KL divergence cost) -- thus leading to disentanglement in the latent units. Therefore the VAE also lends itself naturally to most data sources because of the statistical implications associated with it.
There are sparsity imposing frameworks for AE as well, but unfortunately I'm not aware of any paper out there that compares the VAE vs AE strictly on the basis of latent space representation and disentanglement. I'd really like to see something in that arena though -- since AEs are much easier to train and if they could achieve as good of disentanglement as VAEs in the latent space then they would obviously be preferred. On a related note, I've also seen some promise by ICA (and nonlinear ICA) methods, but the ones I've seen forced the latent space to be of the same dimension as the data, which is not nearly as useful as AEs for extracting high-level features.
|
5,678
|
When should I use a variational autoencoder as opposed to an autoencoder?
|
Choosing the distribution of the code in VAE allows for a better unsupervised representation learning where samples of the same class end up close to each other in the code space. Also this way, finding a semantic for the regions in the code space becomes easier. E.g, you would know from each area what class can be generated.
If you need more in-depth analysis, have a look at Durk Kingma' thesis. It's a great source for variational inference.
|
5,679
|
K-fold vs. Monte Carlo cross-validation
|
$k$-Fold Cross Validation
Suppose you have 100 data points. For $k$-fold cross validation, these 100 points are divided into $k$ equal sized and mutually-exclusive 'folds'. For $k$=10, you might assign points 1-10 to fold #1, 11-20 to fold #2, and so on, finishing by assigning points 91-100 to fold #10. Next, we select one fold to act as the test set, and use the remaining $k-1$ folds to form the training data. For the first run, you might use points 1-10 as the test set and 11-100 as the training set. The next run would then use points 11-20 as the test set and train on points 1-10 plus 21-100, and so forth, until each fold is used once as the test set.
Monte-Carlo Cross Validation
Monte Carlo works somewhat differently. You randomly select (without replacement) some fraction of your data to form the training set, and then assign the rest of the points to the test set. This process is then repeated multiple times, generating (at random) new training and test partitions each time. For example, suppose you chose to use 10% of your data as test data. Then your test set on rep #1 might be points 64, 90, 63, 42, 65, 49, 10, 7, 96, and 48. On the next run, your test set might be 90, 60, 23, 67, 16, 78, 42, 17, 73, and 26. Since the partitions are drawn independently for each run, the same point can appear in the test set multiple times across runs, which is the major difference between Monte Carlo and k-fold cross validation.
Comparison
Each method has its own advantages and disadvantages. Under cross validation, each point gets tested exactly once, which seems fair. However, cross-validation only explores a few of the possible ways that your data could have been partitioned. Monte Carlo lets you explore somewhat more possible partitions, though you're unlikely to get all of them--there are $\binom{100}{50} \approx 10^{29}$ possible ways to 50/50 split a 100 data point set(!).
If you're attempting to do inference (i.e., statistically compare two algorithms), averaging the results of a $k$-fold cross validation run gets you a (nearly) unbiased estimate of the algorithm's performance, but with high variance (as you'd expect from having only 5 or 10 data points). Since you can, in principle, run it for as long as you want/can afford, Monte Carlo cross validation can give you a less variable, but more biased estimate.
Some approaches fuse the two, as in the 5x2 cross validation (see Dietterich (1998) for the idea, though I think there have been some further improvements since then), or by correcting for the bias (e.g., Nadeau and Bengio, 2003).
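The two partitioning schemes described above can be sketched in a few lines of Python (helper names are illustrative):

```python
import random

def kfold_splits(n, k):
    """k-fold CV: partition indices 0..n-1 into k folds; each point is
    tested exactly once across the k train/test runs."""
    idx = list(range(n))
    folds = [idx[i::k] for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def monte_carlo_splits(n, test_frac, reps, rng=random):
    """Monte Carlo CV: independently redraw a random test set each rep
    (no repeats within one rep; the same point may recur across reps)."""
    n_test = int(n * test_frac)
    for _ in range(reps):
        idx = rng.sample(range(n), n)       # a shuffled copy of 0..n-1
        yield idx[n_test:], idx[:n_test]    # train, test

# With 100 points and k = 10, every point lands in exactly one test fold:
tested = sorted(j for _, test in kfold_splits(100, 10) for j in test)
assert tested == list(range(100))
```

The assertion at the end makes the "each point gets tested exactly once" property of k-fold explicit; no such guarantee holds for the Monte Carlo splits.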
|
5,680
|
K-fold vs. Monte Carlo cross-validation
|
Let's assume $ N $ is the size of the dataset, $k$ is the number of the $k$-fold subsets , $n_t$ is the size of the training set and $n_v$ is the size of the validation set. Therefore, $N = k \times n_v$ for $k$-fold cross-validation and $N = n_t + n_v$ for Monte Carlo cross-validation.
$k$-fold cross-validation (kFCV) divides the $N$ data points into $k$ mutually exclusive subsets of equal size. The process then leaves out one of the $k$ subsets as a validation set and trains on the remaining subsets. This process is repeated $k$ times, leaving out one of the $k$ subsets each time. The size of $k$ can range from $N$ to $2$ ($k = N$ is called leave-one-out cross validation). The authors in [2] suggest setting $k = 5$ or $10$.
Monte Carlo cross-validation (MCCV) simply splits the $N$ data points into the two subsets $n_t$ and $n_v$ by sampling, without replacement, $n_t$ data points. The model is then trained on subset $n_t$ and validated on subset $n_v$. There exist $ {N\choose n_t} $ unique training sets, but MCCV avoids the need to run this many iterations. Zhang [3] shows that running MCCV for $N^2$ iterations gives results close to cross validation over all $ {N\choose n_t} $ unique training sets. It should be noted that the literature lacks research for large $N$.
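The gap between the number of unique training sets and Zhang's $N^2$ budget is easy to check numerically; a small Python sketch:

```python
import math

N, n_t = 100, 50
unique_training_sets = math.comb(N, n_t)   # "N choose n_t" distinct splits
mccv_budget = N ** 2                       # Zhang's suggested iteration count

print(f"{unique_training_sets:.3e}")   # about 1.009e+29 possible training sets
print(mccv_budget)                     # 10000 iterations suffice in practice
```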
The choice of $k$ and $n_t$ affects the bias/variance trade-off. The larger $k$ or $n_t$, the lower the bias and the higher the variance. Larger training sets are more similar between iterations, hence the fits overfit more to the training data. See [2] for more on this discussion. The bias and variance of kFCV and MCCV are different, but the bias of the two methods can be made equal by choosing appropriate levels of $k$ and $n_t$. The values of the bias and variance for both methods are shown in [1] (this paper refers to MCCV as the repeated learning-testing model).
[1] Burman, P. (1989). A comparative study of ordinary cross-validation, $v$-fold cross-validation and the repeated learning-testing methods. Biometrika 76 503-514.
[2] Hastie, T., Tibshirani, R. and Friedman, J. (2011). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Second ed. New York: Springer.
[3] Zhang, P. (1993). Model Selection Via Multifold Cross Validation. Ann. Stat. 21 299–313
|
5,681
|
K-fold vs. Monte Carlo cross-validation
|
The other two answers are great; I'll just add two pictures as well as one synonym.
K-fold cross-validation (kFCV):
Monte Carlo cross-validation (MCCV) = Repeated random sub-sampling validation (RRSSV):
References:
The pictures come from (1) (pages 64 and 65), and the synonym is mentioned in (1) and (2).
(1) Remesan, Renji, and Jimson Mathew. Hydrological Data Driven Modelling: A Case Study Approach. Vol. 1. Springer, 2014.
(2) Dubitzky, Werner, Martin Granzow, and Daniel P. Berrar, eds. Fundamentals of data mining in genomics and proteomics. Springer Science & Business Media, 2007.
|
5,682
|
K-fold vs. Monte Carlo cross-validation
|
What about in practice?
In certain situations (smaller data), I combine both Monte Carlo and K-Fold CV, into repeated, nested cross-validation:
1. Inner K-fold CV for hyperparameter selection
2. Outer K-fold CV for estimating generalization performance
3. Repeat steps 1 and 2 many times (Monte Carlo).
If in step 2 you use K = 5 and repeat it 100 times in step 3, you get 5 * 100 = 500 estimates of generalization performance.
This is available in scikit-learn as Repeated (stratified) K-fold cross-validation: scikit-learn docs. Also see nested vs non-nested cross-validation: scikit-learn docs
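For illustration, here is a pure-Python sketch of just the outer bookkeeping of this scheme (the inner hyperparameter search is elided; names are hypothetical, not the scikit-learn API):

```python
import random

def repeated_kfold(n, k, n_repeats, seed=0):
    """Yield (train, test) index pairs for n_repeats independent k-fold runs."""
    rng = random.Random(seed)
    for _ in range(n_repeats):
        idx = rng.sample(range(n), n)           # fresh shuffle per repeat
        folds = [idx[i::k] for i in range(k)]
        for i, test in enumerate(folds):
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            # (an inner k-fold CV on `train` would select hyperparameters here)
            yield train, test

# K = 5 outer folds repeated 100 times -> 5 * 100 = 500 estimates, as above:
estimates = list(repeated_kfold(n=50, k=5, n_repeats=100))
assert len(estimates) == 500
```

Each yielded pair corresponds to one generalization-performance estimate, so the count matches the 5 * 100 = 500 figure in the text.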
|
5,683
|
K-fold vs. Monte Carlo cross-validation
|
k-fold CV trains on more data than are held out for validation; this is a problem. MCCV has great potential to correct this error; for details, see below, but the validation set should always be at least as big as the training set, and not the other way around.
According to Jun Shao's seminal 1993 paper, "Linear Model Selection by Cross-Validation," which appeared in Volume 88, Issue 422 of the Journal of the American Statistical Association, Monte Carlo Cross-Validation (MCCV) would optimally have at least the same number $b$ of folds as points $n$ in the overall data set for training and testing (the paper's simulations use $b = 2n$ repeated a thousand times, with the empirical probability of selecting a known "true" model reported in the last section of the paper).
In Shao's simulations, the size of each training set, whether for LOOCV or MCCV (also included an approximate Balanced Incomplete Block Design or BIBD) was $n_c \approx \sqrt[4]{n^3}$; i.e., the integer part of the fraction (after truncating). For example, with an overall set of $n = 10,000$ data points, each training set would be of size $n_c = 1,000 = \sqrt[4]{10,000^3}$. Additionally, the $n_c$ training exemplars for each of the $b \geq n $ folds for MCCV were randomly drawn without even stratifying by class, and without regard to whether the draws were with or without replacement.
For $k$-fold cross-validation, on the other hand, one arbitrarily determines the number $k$ of folds before starting, as well as the sizes of each training ($n_c$) and testing ($n - n_c$) partition. One does not draw randomly, but in the order of data appearance within the data frame/matrix (although it could be shuffled once before choosing the folds). If the number $n = k \cdot n_c$ of data points is odd, one might be arbitrarily forced to select an odd number $k$ of folds in order to strictly make each training set the same size $n_c = n/k$.
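A rough sketch of the MCCV splitting scheme described above, in plain Python (the function name is made up; truncating $n^{3/4}$ to its integer part and sampling without replacement follow the reading of the paper given in the text):

```python
import random

def shao_mccv_splits(n, b=None, seed=0):
    """Monte Carlo CV with Shao's training-set size n_c ~ n^(3/4).

    Each of the b splits draws n_c training indices at random, without
    stratification; the remaining n - n_c points form the validation
    set, which is larger than the training set.
    """
    rng = random.Random(seed)
    n_c = int(n ** 0.75)    # integer part, e.g. n = 10,000 -> n_c = 1,000
    if b is None:
        b = 2 * n           # the paper's simulations use b = 2n
    for _ in range(b):
        train = set(rng.sample(range(n), n_c))
        test = [i for i in range(n) if i not in train]
        yield sorted(train), test

for train, test in shao_mccv_splits(n=100, b=3):
    print(len(train), len(test))  # 31 training vs. 69 validation points
```

Note how the validation set dominates: for $n = 100$ each split trains on 31 points and validates on 69, in line with the argument above.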
|
5,684
|
Evaluation measures of goodness or validity of clustering (without having truth labels) [duplicate]
|
Wang, Kaijun, Baijie Wang, and Liuqing Peng. "CVAP: Validation for cluster analyses." Data Science Journal 0 (2009): 0904220071.:
To measure the quality of clustering results, there are two kinds of validity indices: external indices and internal indices. An external index is a measure of agreement between two partitions where the first partition is the a priori known clustering structure, and the second results from the clustering procedure (Dudoit et al., 2002). Internal indices are used to measure the goodness of a clustering structure without external information (Tseng et al., 2005).
For external indices, we evaluate the results of a clustering algorithm based on a known cluster structure of a data set (or cluster labels).
For internal indices, we evaluate the results using quantities and features inherent in the data set. The optimal number of clusters is usually determined based on an internal validity index.
(Dudoit et al., 2002): Dudoit, S. & Fridlyand, J. (2002) A prediction-based resampling method for estimating the number of clusters in a dataset. Genome Biology, 3(7): 0036.1-21.
(Tseng et al., 2005): Thalamuthu, A, Mukhopadhyay, I, Zheng, X, & Tseng, G. C. (2006) Evaluation and comparison of gene clustering methods in microarray analysis. Bioinformatics, 22(19):2405-12.
In your case, you need some internal indices since you don't have labelled data. There exist tens of internal indices, like:
Silhouette index (implementation in MATLAB)
Davies-Bouldin
Calinski-Harabasz
Dunn index (implementation in MATLAB)
R-squared index
Hubert-Levin (C-index)
Krzanowski-Lai index
Hartigan index
Root-mean-square standard deviation (RMSSTD) index
Semi-partial R-squared (SPR) index
Distance between two clusters (CD) index
weighted inter-intra index
Homogeneity index
Separation index
Each of them has pros and cons, but at least they'll give you a more formal basis for your comparison. The MATLAB toolbox CVAP might be handy, as it contains many internal validity indices.
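As a sketch of what such an internal index computes, here is a bare-bones (unoptimized) silhouette coefficient in plain Python; for real work a library implementation should be preferred:

```python
import math

def mean_silhouette(points, labels):
    """Mean silhouette coefficient, an internal validity index.

    For each point: a = mean distance to its own cluster's other members,
    b = smallest mean distance to any other cluster; s = (b - a) / max(a, b).
    """
    clusters = {}
    for idx, lab in enumerate(labels):
        clusters.setdefault(lab, []).append(idx)
    scores = []
    for idx, lab in enumerate(labels):
        own = [j for j in clusters[lab] if j != idx]
        if not own:                      # singleton cluster: define s = 0
            scores.append(0.0)
            continue
        a = sum(math.dist(points[idx], points[j]) for j in own) / len(own)
        b = min(sum(math.dist(points[idx], points[j]) for j in members)
                / len(members)
                for other, members in clusters.items() if other != lab)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(round(mean_silhouette(pts, [0, 0, 0, 1, 1, 1]), 2))  # two tight clusters: high
print(round(mean_silhouette(pts, [0, 1, 0, 1, 0, 1]), 2))  # mixed-up labels: low
```

No truth labels are needed: the index compares only within-cluster and between-cluster distances, which is exactly what makes it "internal".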
|
5,685
|
Evaluation measures of goodness or validity of clustering (without having truth labels) [duplicate]
|
An outline of internal clustering criteria (internal cluster validation indices)
This is an excerpt from my documentation of a number of popular internal clustering criteria I've programmed, as a user, for SPSS Statistics (see my web page).
1. Reflections
Internal clustering criteria or indices exist to assess internal validity of a partition of objects into groups (clusters or other classes).
Internal validity: general idea.
Internal validity of a partition of a set of objects is its justifiedness from the perspective of the information about the objects that was used by the procedure that produced the partition. Consequently, internal validity answers the question of whether that information about the objects was accounted for “successfully” or “fully” in the act of partitioning. (Conversely, external validity of a partition is how well the partition corresponds to information about the set of objects that was not used in the act of partitioning.)
Internal validity: operationally.
Internal validity of a grouping is greater the more similar objects fall into the same group while dissimilar ones fall into different groups. In other words, same-group points must, for the most part, be more similar to each other than different-group points are. Or, to formulate it in terms of density: the denser the groups are inside, and the lower the density outside them (or the farther apart the groups draw), the higher the internal validity. Different clustering criteria, depending on their formulas, realize and accentuate that intuitive principle differently when testing internal validity.
What input.
A partition (grouping) of objects, and a set – either data (cases X variables) or a matrix of proximities between objects. The set provides the information about similarity between the objects.
Partition/grouping: what.
Internal clustering criteria are applicable not only to results of clustering. Any partition into classes of any origin (cluster analysis, machine or manual classification), if the groups do not intersect by membership of elements (spatially, the classes might intersect), can be checked for internal validity by those indices. The criteria presented in this document are meant for nonhierarchical classification; that is, groups do not divide into subgroups in the partition being assessed.
Usage: comparing different k.
Most often, internal clustering criteria are used for comparing cluster partitions with different numbers of clusters k obtained via the same method of clustering (or other method of grouping) based on the same input set (same proximity matrix or same data). The purpose of such a comparison is to choose the best k, i.e. the partition with the most valid number of clusters. In that context internal clustering criteria are also sometimes called stopping rules of clusterization. See details further.
Usage: comparing different methods.
You may also compare partitions (with the same or a different number k of clusters/classes) given by different procedures/modes (for example, different methods of cluster analysis) based on the same input set. Generally, a criterion does not care in which way – the same or not – the compared groupings were obtained; you may not even know which way it was. If you are comparing different methods under the same value of k, you are then selecting the “better” method (at that k).
Usage: comparing not identical sets of objects.
This is possible. One should understand that for a clustering criterion, the objects in the set are just anonymous rows. Therefore it will be correct to compare, by the criterion value, partitions P1 and P2 which are partly or completely comprised of different objects. In doing so, k may be the same or different in the partitions. However, if P1 and P2 consist of different numbers of objects, one may use a criterion only if it is insensitive to the number of objects N.
Usage: with different variants of input (not identical features or not identical proximity matrices).
This is possible, but it is a subtle and problematic point. We are speaking here of the direct comparison of a criterion’s values val1 and val2, where val1 was obtained from input dataset (variables or proximities) X1 and partition P1, and val2 was obtained from dataset X2 and partition P2. Specifically:
One might compare partitions with the same k, obtained by the same method, but differing in the proximity measure used between the objects. For example, one partition could be the result of clustering a matrix of euclidean distances (L2 norm), another – of a matrix of Manhattan distances (L1 norm), a third – of a matrix of Minkowski distances with L3 norm. There is nothing formally illegitimate in such a comparison – if you are ready to assume that different types of distances computed on the same data are directly comparable in your case. But if the measures have a systematic difference for you – a difference you want to level out (for instance, different lifting or range among values) – then do the corresponding “standardization” of the matrices before computing the clustering criterion. Considering the question of distance-matrix transforms, it is also useful to ask how this or that clustering criterion reacts to transformations of the matrix elements. “Universal” criteria like point-biserial correlation or C-index don’t react to addition of a constant to the proximities, so the overall level of distance magnitudes in the matrix is not important to them.
One might also compare partitions with the same k and from the same method, but differing in the set of attributes (variables) in the data. Here one has to repeat all the same warnings about the comparability of the values of those different sets of variables: if they are incomparable (e.g. by level or range), take care to bring them to comparability. Also, as a rule, clustering criteria aren’t indifferent to the number of variables: it would be incorrect, in the general case, to directly compare a criterion value obtained on data with 2 variables with a value obtained on data with 5 variables.
Let us say it separately about linear transformations of variables such as z-standardization. May one compare with a clustering criterion partitions (of the same k) of which one was obtained from raw data and the other from these same variables, only standardized? The answer to this question depends on the concrete criterion. If the criterion is insensitive to linear transforms of the variables, then you may.
Comparing different k: two types of criteria.
Most often internal clustering criteria are used to select the optimal number of clusters k. (All those cluster partitions with different k must have been obtained by you and be present in the dataset as cluster membership variables; that is, a criterion assesses already existing, completed partitions.) One looks at a plot where the X axis runs over solutions with different numbers of clusters in ascending or descending order – for example, k from 2 to 20 – and the Y axis shows the index magnitude.
There are extremum criteria and elbow criteria. With extremum criteria, the higher the value (or, inversely, the lower – depending on the concrete criterion), the better the partition; consequently, the absolutely best k corresponds to the maximal (or minimal) criterion value as k runs over consecutive values. With elbow criteria, the value monotonically increases (or, inversely, decreases – depending on the concrete criterion) as k grows, and the absolutely best k corresponds to the edge of this tendency, where a subsequent increase of k is no longer accompanied by steep growth (decline) of the criterion. The advantage of extremum criteria over elbow criteria is that for any two k one can judge which k is better; therefore extremum criteria are applicable for comparisons beyond a series of consecutive values of k. Elbow criteria do not allow comparing nonadjacent k, or pairs of k in general, because it is unclear on which “side” of the two k – or maybe between them – the elbow is located. This is an essential drawback of elbow indices.
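The two decision rules can be sketched as follows (a toy illustration; the function names and the second-difference elbow heuristic are illustrative choices, not prescribed by the text):

```python
def best_k_extremum(profile, maximize=True):
    """Extremum rule: pick the k with the max (or min) criterion value."""
    pick = max if maximize else min
    return pick(profile, key=profile.get)

def best_k_elbow(profile):
    """Elbow rule (one heuristic): for a monotonically increasing profile,
    pick the k where the marginal gain drops most sharply, i.e. the most
    negative discrete second difference value[k-1] - 2*value[k] + value[k+1].
    """
    ks = sorted(profile)
    second = {k: profile[ks[i - 1]] - 2 * profile[k] + profile[ks[i + 1]]
              for i, k in enumerate(ks) if 0 < i < len(ks) - 1}
    return min(second, key=second.get)

# A profile that climbs steeply up to k = 4, then flattens: elbow at 4.
profile = {2: 40, 3: 70, 4: 95, 5: 98, 6: 100, 7: 101}
print(best_k_elbow(profile))                      # 4
print(best_k_extremum({2: 1.2, 3: 2.9, 4: 2.1}))  # 3
```

Note that the elbow rule needs the whole consecutive profile, whereas the extremum rule can compare any subset of k – the asymmetry described above.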
Comparing different k: priority of sharpness over extremum.
It should be said that in practice the sharpness of the bend – of a peak or an elbow – has major importance for extremum-type criteria too. On a plot of such a criterion’s value profile over different consecutive k, one should pay attention not only to the max (or min, depending on the specific criterion) value in the profile, but to sharp bends in the tendency, not necessarily coinciding with the max. If a partition with a given k is much better than the partitions with k-1 and k+1, i.e. there is a peak, then that is a strong argument for that k, even if there exist regions of k on the plot where the criterion is generally “better”. Even a one-sided bend (elbow) may turn out preferable to the absolute max for extremum criteria. The reason for this advice is as follows.
The point is that various clustering indices have their own small, background, inherent biases with respect to the number of clusters: some “prefer” many clusters while others prefer few. And the manifestation of these tendencies depends on the peculiarities of the data: it is almost impossible to invent datasets with different k that would be equi-valid to each other, simultaneously for all possible criteria$^1$. Simulation experiments generating a specified number k of clusters show that all criteria “are wrong” from time to time when the clusters lie close enough to each other: they err in the sense that the overall max value does not match the claimed number of generated clusters. If one pays attention to peaks and elbows, rather than to the max, then the criteria are “wrong” less frequently in such experiments. (One should, however, realize the limitation of such simulation experiments in appraising the bias of clustering criteria: a clustering criterion is not on a mission to discover the cluster structure intended at generation; it simply assesses the sharpness of the structure as it turned out, and it might have turned out not at all as it was conceived at random generation.) Ideally, clustering criteria assisting the selection of a better k should have zero baseline favouritism towards any k. Unfortunately, this ideal is hardly achievable.
[$^1$ For example: let there be 2 round clusters in a 2-variable space (distance between them is 1), or 3 such clusters (a triangle of them, distance between them is 1), or 4 such clusters (a square of them, distance between neighbours is 1). The specifics of the disposition of the clusters are not the same in these three configurations (in the 2-cluster case the data cloud is oblong; in the 4-cluster case there exist between-cluster distances greater than 1), which complicates regarding the three configurations as equally internally valid by some “universal” internal validity. Such universal internal validity simply does not exist. Some clustering criteria will respond to the above non-sameness in the configurations by giving preference to one or another of them (and this is what enters the concept of a criterion’s bias towards k), while others will not.]
Some criteria (for example BIC or PBM) consciously prefer solutions with small number of clusters, then it is said they “penalize for the excess of clusters”. C-Index, contrary, openly tends to reward solutions with a greater number of clusters.
Criterion vs eye.
If the data are interval, clusters are not infrequently discernible visually on scatterplots in the space of the variables or their principal components. But the eye has its own prejudices (apophenia) and is just one of – and not the best of – clustering criteria. Often this or that clustering criterion based on a statistical formula will “uncover” clusters not noticeable to the eye, whose interpretation will afterwards confirm their validity by content.
Choosing criterion: data nature.
Some criteria (1) require as input a set of data (cases x variables), and it is the cases that are partitioned into clusters/classes. Some such criteria demand scale, quantitative variables, while others demand categorical variables or a mix of scale and categorical. Some criteria may be optimal for binary variables. Criteria of the other type (2) are based on the analysis of a proximity matrix between objects. Often such criteria don’t care what – cases/respondents or variables/attributes – constitute the items broken into clusters, because a proximity matrix can exist for items of any nature. Some criteria of type (2) demand specific proximity measures, for example euclidean distances, while to others the kind of proximities is indifferent; the latter are called universal criteria. (But the “universality” question is more delicate than it seems, since these criteria perform, for example, summation of proximities, and a theoretical question arises whether just any kind of proximities may be summed.) Some criteria (3) can be calculated equivalently from variables (scale) as well as from a matrix (of euclidean distances).
Number of objects, or hilliness.
There are criteria reacting to a (proportionally equal) increase or decrease of frequency in clusters. That seems natural, because adding objects to clusters amplifies the relief of the distributional shape in the data when the clusters don’t coincide much, and so the criterion value will expectedly be enhanced. But there are criteria that don’t react to such an alteration of N: although it is important to such criteria that the density inside clusters be higher than outside, they do not reward a strengthening of density through an increase of the number of objects in clusters.
Spatial shape.
If a criterion requires scale data or euclidean distances, clusters might be of one or another configuration in the space. Here different clustering criteria have their own preferences, i.e. they may reward, moderately, clusters exhibiting a specific spatial shape or relative position in the cluster solution. This quite intricate question can be split into three sub-questions: is the criterion sensitive, and how, (1) to the shape of cluster contours (round, oblong, or curved); (2) to the rotation of oblong clusters relative to one another, i.e. about their centroids; (3) to the rotation of the whole data cloud about its general centre (in the space of scale variables)?
Remark for (1): a false impression of preference for round clusters can arise. No existing clustering criterion demands that clusters not overlap by their margins in space, but the majority of cluster analysis methods output clusters that do not overlap in space. Under these conditions (clusters are not allowed to superimpose physically), round clusters can settle closer to one another in space than oblong clusters with uncontrolled rotation, owing to which the latter simply have fewer chances to be encountered or formed by clusterization in real investigation data – where, as we know, clusters are usually next to one another. Due to that phenomenon, clustering criteria which are insensitive to a cluster’s outline, such as Calinski-Harabasz, more often run into “good” solutions with round, rather than elongated, clusters. This doesn’t mean that these criteria themselves prefer round clusters.
Distributional shape in clusters.
There are criteria giving preference to clusters with uniform, flat distribution inside (for example, hyperball), and there are criteria giving preference to clusters with bell-shape distribution inside (like normal distribution); while other criteria don’t take the shape of density distribution in a cluster as important.
Space dimension.
One more not-easy question is the reaction of different clustering criteria to an increase in the dimensionality of the space “spanned” by the data split into clusters. That question is connected, among other things, to the curse of dimensionality that “hangs over” the euclidean distances on which many clustering criteria are based.
Statistical significance.
Internal clustering criteria are not accompanied by a probabilistic p-value, since they don’t make inferences about a population and are concerned only with the dataset at hand. Of course, a good cluster solution in the form of a high criterion value may be the consequence of contingent peculiarities of the concrete sample – overfitting. Cross-validation on an equivalent dataset (in the form of stability and generalizability checks) will always be helpful.
2. Example
Application of two clustering criteria for deciding about the number of clusters in cluster analysis. Here are five fairly contiguous clusters; the eye does not recognize them at once.
With this data cloud, hierarchical cluster analysis by average linkage was done based on euclidean distances, and all cluster partitions from 15-cluster to 2-cluster were saved. Then 2 clustering criteria, Calinski–Harabasz and C-Index, were used in an attempt to choose the best solution.
As seen on the left plot, Calinski–Harabasz quite easily (in the given example) managed the task, indicating the 5-cluster solution as absolutely the best. C-Index, however, recommends the 15- or 9-cluster solutions (C-Index is “better” when lower). Nevertheless, this needs to be ignored, and attention should be paid to the bend which C-Index gives at 5 clusters: the 5-cluster solution is still good, but the 4-cluster one is much worse. Therefore the best solution to select is the 5-cluster one on the right plot as well.
Of course, one should understand that if cluster structure in your data is almost entirely absent, then none of the criteria will help choose the “correct” cluster solution, for there is none. In such instances there will be no peaks or bends, but relatively smooth line trends – ascending, descending, or horizontal – depending on the data and the criterion.
3. Some internal clustering criteria
[I'm not giving the formulas: please meet them as well as comments on each criterion's idea in the full document on the web page, collection "Internal clustering criteria"]
A. Clustering criteria based on ideology of analysis-of-variance in euclidean space. Based on ratios of sums-of-squares of deviations within and between clusters: B/W, B/T or W/T.
Calinski–Harabasz is a multivariate analogue of Fisher’s F statistic. It recognizes well any convex clusters.
Davies–Bouldin is similar to the former but without its tendency towards approximately same-sized, by the number of objects inside, clusters; Davies–Bouldin rather prefers clusters equally-distanced from each other.
Cubic clustering criterion is like Calinski–Harabasz. It was (questionably) standardized for comparing results obtained on different data. Prefers spherical clusters.
Log SS Ratio is akin to Calinski–Harabasz, but instead of normalizing B/W it uses logarithm.
Log Det Ratio – logarithm of the inverted Wilks’ lambda; it is a MANOVA criterion considering volumetric property of the data cloud.
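For concreteness, the Calinski–Harabasz ratio from group A can be written out in plain Python (a didactic sketch; the formula (B/(k-1)) / (W/(N-k)) is the standard one, everything else here is illustrative):

```python
def calinski_harabasz(points, labels):
    """Calinski-Harabasz index: (B/(k-1)) / (W/(N-k)), where B and W are
    the between- and within-cluster sums of squared deviations.
    Higher values indicate a better (more valid) partition."""
    n, dims = len(points), len(points[0])
    grand = [sum(p[d] for p in points) / n for d in range(dims)]
    clusters = {}
    for p, lab in zip(points, labels):
        clusters.setdefault(lab, []).append(p)
    k = len(clusters)
    B = W = 0.0
    for members in clusters.values():
        centroid = [sum(p[d] for p in members) / len(members)
                    for d in range(dims)]
        B += len(members) * sum((centroid[d] - grand[d]) ** 2
                                for d in range(dims))
        W += sum((p[d] - centroid[d]) ** 2
                 for p in members for d in range(dims))
    return (B / (k - 1)) / (W / (n - k))

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(calinski_harabasz(pts, [0, 0, 0, 1, 1, 1]))  # 450 (up to rounding)
print(calinski_harabasz(pts, [0, 1, 0, 1, 0, 1]))  # far lower for mixed-up labels
```

The analysis-of-variance heritage is visible directly: the index is a multivariate F-like ratio of between- to within-cluster dispersion.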
B. Clustering criteria professing univariate approach: analysis goes by each variable. These are fixed attributes: the data are not considered as lying in space where they might be arbitrarily rotated.
Ratkowsky–Lance is designed for scale features (where it is based on the analysis-of-variance idea) as well as for categorical features (based on the chi-square statistic idea). Ratkowsky–Lance can also be used to assess the contribution of individual features to the quality of a clustering partition.
AIC and BIC clustering criteria also allow for both scale and categorical attributes. These indices are linked to the idea of variational entropy. They impose a penalty for an excess of clusters and thus make it possible to justifiably prefer a parsimonious (few-clusters) solution.
C. Clustering criteria based on ideology of “cophenetic” correlation (correlation between likeness of objects and their falling into same cluster).
Point-biserial correlation is usual Pearson r.
Goodman–Kruskal Gamma is nonparametric, monotonic correlation.
C-Index assesses how much close the cluster partition is to (unreachable) ideal one in the current setting. This criterion is equivalent to the rescaled Pearson r.
D. Other criteria:
Dunn seeks a cluster solution with maximally demarcated, separated clusters – if possible, of approximately the same physical size (diameter). The macro computes different versions of the criterion.
McClain–Rao is the ratio of the average same-cluster distance to the average between-cluster distance among objects.
PBM is eclectic criterion taking into account sums of deviations (not their squares) from centroids and separation between centroids.
Silhouette statistic (the macro computes several versions of it) is able to assess the quality of clusterization of each separate object, not just of the entire cluster solution. The criterion measures the justifiedness of putting objects into their clusters.
Summary of some properties of the criteria:
|
Evaluation measures of goodness or validity of clustering (without having truth labels) [duplicate]
|
An outline of internal clustering criteria (internal cluster validation indices)
This is the excerpt from my documentation of a number of popular internal clustering criteria I've programmed, as a use
|
Evaluation measures of goodness or validity of clustering (without having truth labels) [duplicate]
An outline of internal clustering criteria (internal cluster validation indices)
This is the excerpt from my documentation of a number of popular internal clustering criteria I've programmed, as a user, for SPSS Statistics (see my web page).
1. Reflections
Internal clustering criteria or indices exist to assess internal validity of a partition of objects into groups (clusters or other classes).
Internal validity: general idea.
Internal validity of a partition of a set of objects is its justifiedness from the perspective of that information about the set of objects which was used by the procedure having done the partition. Consequently, internal validity answers the question, was that characteristic of the objects information accounted of “successfully” or “fully” in the act of partition. (And contrary, external validity of a partition is how well the partition corresponds to that information about the set of objects which was not used in the act of partition.)
Internal validity: operationally.
Internal validity of a grouping is greater the more similar objects fall in the same group and dissimilar objects in different groups. In other words, same-group points must, in the majority, be more similar to each other than different-group points are. Or, formulated in terms of density: the denser the groups are inside, and the lower the density outside them (or the farther apart the groups draw), the higher the internal validity. Different clustering criteria, depending on their formula, realize and accentuate that intuitive principle differently when assessing internal validity.
What input.
Partition (grouping) of objects, and a set – data (cases X variables) or a matrix of proximities between objects. The set provides information about similarity between the objects.
Partition/grouping: what.
Internal clustering criteria are applicable not only to results of clustering. Any partition into classes of any origin (cluster analysis, machine or manual classification) can be checked for internal validity by these indices, provided the groups do not intersect by membership of elements (spatially, the classes might intersect). The criteria presented in this document are meant for nonhierarchical classification; that is, groups do not divide into subgroups in the partition being assessed.
Usage: comparing different k.
Most often, internal clustering criteria are used for comparing cluster partitions with different numbers of clusters k obtained via the same method of clustering (or another method of grouping) based on the same input (same proximity matrix or same data). The purpose of such comparison is to choose the best k, i.e. the partition with the most valid number of clusters. In that context internal clustering criteria are sometimes also called stopping rules of clusterization. See details further below.
Usage: comparing different methods.
You may also compare partitions (with the same or a different number k of clusters/classes) given by different procedures/modes (for example, different methods of cluster analysis) based on the same input set. Generally, it is all the same to a criterion in which way – the same or not – the compared groupings were obtained; you need not even know which way it was. If you are comparing different methods under the same value of k, you are then selecting the "better" method (at that k).
Usage: comparing not identical sets of objects.
This is possible. One should understand that, for a clustering criterion, the objects "i" in the set are just anonymous rows. Therefore it is correct to compare, by the criterion value, partitions P1 and P2 which partly or completely comprise different objects. The number of clusters k may be the same or different in the two partitions. However, if P1 and P2 consist of different numbers of objects, a criterion may be used only if it is insensitive to the number of objects N.
Usage: with different variants of input (not identical features or not identical proximity matrices).
This is possible, but it is a subtle and problematic point. We speak here of the direct comparison of a criterion's values val1 and val2, where val1 was obtained from input dataset (variables or proximities) X1 and partition P1, while val2 was obtained from dataset X2 and partition P2. Specifically:
One might compare partitions with the same k, obtained by the same method, but differing in the proximity measure used between the objects. For example, one partition could be the result of clustering a matrix of euclidean distances (L2 norm), another of a matrix of Manhattan distances (L1 norm), a third of a matrix of Minkowski distances with L3 norm. There is nothing formally illegitimate in such a comparison – if you are ready to assume that the different types of distances computed on the same data are directly comparable in your case. But if the measures have a systematic difference you want to level out (for instance, a different elevation or range of values), then perform the corresponding "standardization" of the matrices before computing the clustering criterion. Considering the question of distance-matrix transformation, it is also useful to inquire how this or that clustering criterion reacts to transforming the matrix elements. "Universal" criteria like point-biserial correlation or C-Index do not react to the addition of a constant to the proximities, so the overall level of distance magnitudes in the matrix is not important to them.
One might also compare partitions with the same k and from the same method, but differing in the set of attributes (variables) in the data. Here one has to repeat all the same warnings about comparability of the values of those different sets of variables: if they are incomparable (e.g. by level or range), take care to bring them to comparability. Also, as a rule, clustering criteria are not indifferent to the number of variables: it would be incorrect, in the general case, to directly compare a criterion value obtained on data with 2 variables with a value obtained on data with 5 variables.
Let us say it separately about linear transformations of variables such as z-standardization. May one compare with a clustering criterion partitions (of the same k) of which one was obtained from raw data and the other from the same variables, only standardized? The answer depends on the concrete criterion. If the criterion is insensitive to linear transformation of the variables, then you may.
Comparing different k: two types of criteria.
Most often internal clustering criteria are used to select the optimal number of clusters k. (All the cluster partitions with different k must already have been obtained and be present in the dataset as cluster membership variables; that is, a criterion assesses existing, completed partitions.) One then looks at a plot where solutions with different numbers of clusters go along the X axis in ascending or descending order – for example, k from 2 to 20 – and the index magnitude is plotted on the Y axis.
There are extremum criteria and elbow criteria. With extremum criteria, the higher the value (or, inversely, the lower – depending on the concrete criterion), the better the partition; consequently, the absolutely best k corresponds to the maximal (or minimal) criterion value as k runs through consecutive values. With elbow criteria, the value monotonically increases (or, inversely, decreases – depending on the concrete criterion) as k grows, and the absolutely best k corresponds to the edge of this tendency, where a subsequent increase of k is no longer accompanied by steep growth (decline) of the criterion. The advantage of extremum criteria over elbow criteria is that for any two k one can judge which k is better; therefore extremum criteria are applicable for comparisons not only of a series of consecutive values of k. Elbow criteria do not allow comparing nonadjacent k, or pairs of k generally, because it is unclear on which "side" of the two k – or maybe between them – the elbow is located. This is an essential drawback of elbow indices.
Comparing different k: priority of sharpness over extremum.
It should be said that in practice the sharpness of the bend – of a peak or an elbow – has major importance for extremum-type criteria too. On a plot of the value profile of such a criterion over consecutive k, one should pay attention not only to the max (or min, depending on the specific criterion) value in the profile, but also to sharp bends in the tendency, not necessarily coinciding with the max. If a partition with a given k is much better than the partitions with k-1 and with k+1, i.e. there is a peak, then that is a strong argument for that k, even if regions of k exist on the plot where the criterion is generally "better". Even a one-sided bend (elbow) may turn out preferable to the absolute max for extremum criteria. The reason for this advice is as follows.
The point is that various clustering indices have their own small, background, inherent biases with respect to the number of clusters: some "prefer" many clusters while others prefer few. The manifestation of these tendencies depends on peculiarities of the data: it is nearly impossible to invent datasets with different k that would be equally valid to each other simultaneously for all possible criteria$^1$. Simulation experiments generating a specified number k of clusters show that all criteria "are wrong" from time to time when the clusters are mutually tight enough: they err in the sense that the overall max value does not match the claimed number of generated clusters. If one pays attention to peaks and elbows, rather than to the max, then criteria are "wrong" less frequently in such experiments. (One should, however, realize the limitation of such simulation experiments in appraising the bias of clustering criteria: a clustering criterion is not on a mission to discover the cluster structure intended at generation; it simply assesses the sharpness of the structure as it turned out, which might be not at all what was conceived at random generation.) Ideally, clustering criteria meant to assist in selecting a better k should have zero baseline favouritism towards any k. Unfortunately, this ideal is hardly achievable.
[$^1$ For example: let there be 2 round clusters in a 2-variable space (distance between them is 1), or 3 such clusters (a triangle of them, distance between them is 1), or 4 such clusters (a square of them, distance between neighbours is 1). The specifics of the disposition of the clusters are not the same in these three configurations (in the 2-cluster one the data cloud is oblong; in the 4-cluster one there exist between-cluster distances greater than 1), which complicates regarding the three configurations as equally internally valid by some "universal" internal validity. Such universal internal validity simply does not exist. Some clustering criteria will respond to the above non-sameness of the configurations by giving preference to one or another of them (and this is what enters the concept of the bias of a criterion towards k), while others will not.]
Some criteria (for example BIC or PBM) consciously prefer solutions with a small number of clusters; it is then said they "penalize for the excess of clusters". C-Index, on the contrary, openly tends to reward solutions with a greater number of clusters.
Criterion vs eye.
If data are interval, clusters are not infrequently discernible visually on scatterplots in the space of the variables or their principal components. But the eye has its own prejudices (apophenia) and is just one clustering criterion among others – and not the best one. Often a clustering criterion based on a statistical formula will "uncover" clusters not noticeable to the eye, and interpretation will afterwards confirm their validity by content.
Choosing criterion: data nature.
Some criteria (1) require as input a set of data (cases x variables), and it is the cases that are the objects partitioned into clusters/classes. Some such criteria demand scale, quantitative variables, while others demand categorical variables or a mix of scale and categorical. Some criteria may be optimal for binary variables. Criteria of another type (2) are based on the analysis of a proximity matrix between objects. Often such criteria do not care what constitutes the items broken into clusters – cases/respondents or variables/attributes – because a proximity matrix might exist for items of any nature. Some criteria of type (2) demand specific proximity measures, for example euclidean distances, while to other criteria the kind of proximities is indifferent; the latter are called universal criteria. (But the "universality" question is more delicate than it seems, since these criteria perform, for example, summation of proximities, and a theoretical question arises whether any kind of proximities may be summed.) Some criteria (3) can be calculated equivalently from variables (scale) as well as from a matrix (of euclidean distances).
Number of objects, or hilliness.
There are criteria that react to a (proportionally equal) increase or decrease of frequencies in clusters. That seems natural, because adding objects to clusters amplifies the relief of the distributional shape in the data (when the clusters do not coincide much), and so the criterion value will expectedly improve. But there are criteria that do not react to such alteration of N: although it is important to such criteria that density inside clusters be higher than outside, they do not reward strengthening of density through an increase in the number of objects in the clusters.
Spatial shape.
If a criterion requires scale data or euclidean distances, clusters might be of this or that configuration in space. Here different clustering criteria have their own preferences, i.e. they may moderately reward clusters exhibiting a specific spatial shape or relative position in the cluster solution. This quite intricate question can be split into three sub-questions: is the criterion sensitive, and how, (1) to the shape of cluster contours (round, oblong or curved); (2) to the rotation of oblong clusters relative to one another, i.e. about their centroids; (3) to the rotation of the whole data cloud about its general centre (in the space of the scale variables)?
Remark for (1): a false impression of preference for round clusters may arise. No existing clustering criterion demands that clusters not overlap by their margins in space, but the majority of cluster analysis methods output clusters that do not overlap in space. Under these conditions (clusters are not allowed to superimpose physically), round clusters can be seated closer to one another in space than oblong clusters with uncontrolled rotation, because of which the latter simply have fewer chances to be encountered, or formed by clusterization, in real investigation data – where, as we know, clusters are usually next to one another. Due to that phenomenon, clustering criteria insensitive to a cluster's outline, such as Calinski-Harabasz, more often run into "good" solutions with round, rather than elongated, clusters. This does not mean that these criteria themselves prefer round clusters.
Distributional shape in clusters.
There are criteria giving preference to clusters with uniform, flat distribution inside (for example, hyperball), and there are criteria giving preference to clusters with bell-shape distribution inside (like normal distribution); while other criteria don’t take the shape of density distribution in a cluster as important.
Space dimension.
One more difficult question is the reaction of different clustering criteria to an increase in the dimensionality of the space "spanned" by the data split into clusters. This question is connected, among other things, to the curse of dimensionality that "hangs over" the euclidean distances on which many clustering criteria are based.
Statistical significance.
Internal clustering criteria are not accompanied by a probabilistic p-value, since they do not make inferences about a population; they deal only with the dataset at hand. Of course, a good cluster solution in the form of a high criterion value may be the consequence of contingent peculiarities of the concrete sample – overfitting. Cross-validation on an equivalent dataset (in the form of stability and generalizability checks) will always be helpful.
2. Example
Application of two clustering criteria to decide the number of clusters in cluster analysis. Here are five closely contacting clusters; the eye does not recognize them at once.
With this data cloud, hierarchical cluster analysis with average linkage was done based on euclidean distances, and all cluster partitions from 15-cluster to 2-cluster were saved. Then 2 clustering criteria, Calinski–Harabasz and C-Index, were used in an attempt to choose the best solution.
As seen on the left plot, Calinski–Harabasz quite easily (in this example) managed the task, indicating the 5-cluster solution as the absolute best. C-Index, however, recommends the 15- or 9-cluster solutions (C-Index is "better" when lower). Nevertheless this should be ignored in favour of the bend which C-Index shows at 5 clusters: the 5-cluster solution is still good while the 4-cluster one is much worse. Therefore the best solution to select is the 5-cluster one on the right plot as well.
Of course, one should understand that if cluster structure in your data is almost entirely absent, then no criterion will help choose the "correct" cluster solution, for there is none. In such instances there will be no peaks or bends, but relatively smooth line trends – ascending, descending or horizontal – depending on the data and the criterion.
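The workflow in this example can also be sketched in code; a minimal illustration with scikit-learn rather than the SPSS macros, and with invented data standing in for the figure's actual cloud:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
# five synthetic 2-D clusters standing in for the example's data cloud
centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]])
X = np.vstack([c + rng.normal(scale=0.5, size=(60, 2)) for c in centers])

# hierarchical clustering with average linkage on euclidean distances,
# scoring every partition from 2 to 10 clusters
scores = {}
for k in range(2, 11):
    labels = AgglomerativeClustering(n_clusters=k, linkage="average").fit_predict(X)
    scores[k] = calinski_harabasz_score(X, labels)

best_k = max(scores, key=scores.get)  # extremum criterion: take the peak
```

On well-separated data the profile typically peaks near the true number of clusters; on tightly contacting clusters, as discussed above, one should also inspect the profile for sharp bends rather than rely on the maximum alone.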
3. Some internal clustering criteria
[I'm not giving the formulas: please meet them as well as comments on each criterion's idea in the full document on the web page, collection "Internal clustering criteria"]
A. Clustering criteria based on ideology of analysis-of-variance in euclidean space. Based on ratios of sums-of-squares of deviations within and between clusters: B/W, B/T or W/T.
Calinski–Harabasz is a multivariate analogue of Fisher’s F statistic. It recognizes well any convex clusters.
Davies–Bouldin is similar to the former but without its tendency towards approximately same-sized, by the number of objects inside, clusters; Davies–Bouldin rather prefers clusters equally-distanced from each other.
Cubic clustering criterion is like Calinski–Harabasz. It was (questionably) standardized for comparing results obtained on different data. Prefers spherical clusters.
Log SS Ratio is akin to Calinski–Harabasz, but instead of normalizing B/W it uses logarithm.
Log Det Ratio – logarithm of the inverted Wilks’ lambda; it is a MANOVA criterion considering volumetric property of the data cloud.
B. Clustering criteria professing univariate approach: analysis goes by each variable. These are fixed attributes: the data are not considered as lying in space where they might be arbitrarily rotated.
Ratkowsky–Lance is designed for scale features (where it is based on the analysis-of-variance idea) as well as for categorical features (based on the chi-square statistic idea). Ratkowsky–Lance can also be used to assess the contribution of individual features to the quality of a clustering partition.
AIC and BIC clustering criteria also allow for both scale and categorical attributes. These indices are linked to the idea of variational entropy. They penalize an excess of clusters and thus make it possible to justifiably prefer a parsimonious (few-cluster) solution.
C. Clustering criteria based on ideology of “cophenetic” correlation (correlation between likeness of objects and their falling into same cluster).
Point-biserial correlation is usual Pearson r.
Goodman–Kruskal Gamma is nonparametric, monotonic correlation.
C-Index assesses how close the cluster partition is to the (unreachable) ideal one in the current setting. This criterion is equivalent to the rescaled Pearson r.
D. Other criteria:
Dunn seeks for cluster solution with maximally demarcated, separated clusters – if possible, of approximately same physical size (diameter). The macro computes different versions of the criterion.
McClain–Rao is the ratio of the average same-cluster distance to the average between-cluster distance among objects.
PBM is an eclectic criterion taking into account sums of deviations (not their squares) from centroids and the separation between centroids.
The Silhouette statistic (the macro computes several versions) is able to assess the quality of clusterization of each separate object, not just of the entire cluster solution. The criterion measures the justifiedness of putting objects in their clusters.
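As a concrete illustration of the per-object character of the Silhouette statistic, a sketch with scikit-learn (not the SPSS macro; the data are invented):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score

rng = np.random.default_rng(1)
# two well-separated synthetic 2-D clusters
X = np.vstack([rng.normal(loc, 0.4, size=(50, 2)) for loc in ([0, 0], [3, 3])])
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)

s = silhouette_samples(X, labels)      # one value per object, in [-1, 1]
overall = silhouette_score(X, labels)  # mean of the per-object values
worst = np.argsort(s)[:5]              # objects least justified in their clusters
```

The per-object values let one inspect exactly which objects sit uncomfortably in their clusters, while their mean gives the overall solution quality.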
Summary of some properties of the criteria: (see the summary table in the full document on the web page)
Evaluation measures of goodness or validity of clustering (without having truth labels) [duplicate]
There are some internal clustering methods. In particular with respect to the distances of objects in the data set. See for example Silhouette coefficient [on Wikipedia].
You must however be aware that there are algorithms such as k-means that try to optimize exactly these parameters, and as such you introduce a particular type of bias; essentially this is prone to overfitting.
So when using internal evaluation methods, you need to be well aware of the properties of your algorithm and of the actual measures. I'd even try to do some kind of cross-validation, using only part of the data for clustering and another part of the data set for validation. For the silhouette coefficient this will probably not be enough to make anything except k-means look good, but at least it should help compare different k-means results with each other. Which - for this reason - is actually the main use of such a coefficient: comparing different results of the same algorithm with each other.
Sorry for only half-answering your question. I do not know if there is an "online version" of any such method available.
Have a look at your objectives, and see if you can derive any quality measure from this. In general, there is no such thing as the best clustering result for real data. It will always be only relative to a certain objective; and as such it can also overfit. k-means optimizes distances from centers; and supervised learners optimize for labels and are thus prone to overfit in reproducing the labels.
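The split-sample idea suggested above might be sketched like this (a hypothetical recipe, not an established API: fit k-means on one half, assign the held-out half to the learned centers, and score the silhouette there; data and cluster layout are invented):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# three synthetic 2-D clusters
X = np.vstack([rng.normal(loc, 0.5, size=(80, 2)) for loc in ([0, 0], [4, 0], [2, 3])])
X_fit, X_val = train_test_split(X, test_size=0.5, random_state=2)

held_out = {}
for k in range(2, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=2).fit(X_fit)
    val_labels = km.predict(X_val)  # held-out points assigned to learned centers
    held_out[k] = silhouette_score(X_val, val_labels)
```

Comparing `held_out` across k still mainly compares k-means solutions with each other, which, as noted above, is the coefficient's main legitimate use.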
Evaluation measures of goodness or validity of clustering (without having truth labels) [duplicate]
If your problem is to evaluate the clustering result among a list of clustering algorithms (i.e. choosing the best clustering algorithm for a certain input dataset), another idea is to use an evaluation metric that someone else used as an evaluation function to maximize when creating their clustering algorithm.
A very good example is given by this paper: Rock: a robust clustering algorithm for categorical attributes. In the section 3.3 (page 5) the authors present a criterion function to maximize.
In this case, the function considers the number of "neighbors" that a certain point has in common with another point. A neighbor of a point x is a point n very similar to x (i.e. such that a user-defined similarity metric between x and n returns a very high score).
So the idea is: if two points have a lot of "neighbors" in common, then it is right to consider them in the same cluster.
In this way, using that evaluation function on the clustering results of two different algorithms, you can choose the higher-scoring one.
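The neighbor-counting idea can be sketched as follows — a toy version using a euclidean-based similarity purely for illustration (ROCK itself uses Jaccard similarity on categorical data, and this is not the paper's full criterion function):

```python
import numpy as np

def link_counts(X, theta):
    """links[i, j] = number of neighbors that points i and j have in common.

    A neighbor of x is any other point whose similarity to x is at least
    theta; here similarity = 1 / (1 + euclidean distance)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sim = 1.0 / (1.0 + d)
    A = (sim >= theta).astype(int)  # adjacency matrix of the neighbor graph
    np.fill_diagonal(A, 0)          # a point is not its own neighbor here
    return A @ A                    # matrix product counts common neighbors

# two tight groups: points 0-2 together, points 3-4 together
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0]])
links = link_counts(X, theta=0.8)
```

Points 0 and 1 share point 2 as a neighbor, while cross-group pairs share none; a criterion built on such counts rewards putting points with many common neighbors in the same cluster.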
Fake uniform random numbers: More evenly distributed than true uniform data
Yes, there are many ways to produce a sequence of numbers that are more evenly distributed than random uniforms. In fact, there is a whole field dedicated to this question; it is the backbone of quasi-Monte Carlo (QMC). Below is a brief tour of the absolute basics.
Measuring uniformity
There are many ways to do this, but the most common way has a strong, intuitive, geometric flavor. Suppose we are concerned with generating $n$ points $x_1,x_2,\ldots,x_n$ in $[0,1]^d$ for some positive integer $d$. Define
$$\newcommand{\I}{\mathbf 1}
D_n := \sup_{R \in \mathcal R}\,\left|\frac{1}{n}\sum_{i=1}^n \I_{(x_i \in R)} - \mathrm{vol}(R)\right| \>,
$$
where $R$ is a rectangle $[a_1, b_1] \times \cdots \times [a_d, b_d]$ in $[0,1]^d$ such that $0 \leq a_i \leq b_i \leq 1$ and $\mathcal R$ is the set of all such rectangles. The first term inside the modulus is the "observed" proportion of points inside $R$ and the second term is the volume of $R$, $\mathrm{vol}(R) = \prod_i (b_i - a_i)$.
The quantity $D_n$ is often called the discrepancy or extreme discrepancy of the set of points $(x_i)$. Intuitively, we find the "worst" rectangle $R$ where the proportion of points deviates the most from what we would expect under perfect uniformity.
This is unwieldy in practice and difficult to compute. For the most part, people prefer to work with the star discrepancy,
$$
D_n^\star = \sup_{R \in \mathcal A} \,\left|\frac{1}{n}\sum_{i=1}^n \I_{(x_i \in R)} - \mathrm{vol}(R)\right| \>.
$$
The only difference is the set $\mathcal A$ over which the supremum is taken. It is the set of anchored rectangles (at the origin), i.e., where $a_1 = a_2 = \cdots = a_d = 0$.
Lemma: $D_n^\star \leq D_n \leq 2^d D_n^\star$ for all $n$, $d$.
Proof. The left hand bound is obvious since $\mathcal A \subset \mathcal R$. The right-hand bound follows because every $R \in \mathcal R$ can be composed via unions, intersections and complements of no more than $2^d$ anchored rectangles (i.e., in $\mathcal A$).
Thus, we see that $D_n$ and $D_n^\star$ are equivalent in the sense that if one is small as $n$ grows, the other will be too. Here is a (cartoon) picture showing candidate rectangles for each discrepancy.
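In one dimension the star discrepancy even has a closed form, $D_n^\star = \frac{1}{2n} + \max_{1 \le i \le n}\left|x_{(i)} - \frac{2i-1}{2n}\right|$, where $x_{(1)} \leq \cdots \leq x_{(n)}$ are the sorted points. A short sketch:

```python
import random

def star_discrepancy_1d(points):
    """Exact 1-D star discrepancy via the sorted-points formula."""
    xs = sorted(points)
    n = len(xs)
    return 1 / (2 * n) + max(abs(x - (2 * i - 1) / (2 * n))
                             for i, x in enumerate(xs, 1))

n = 8
# the midpoint set {1/2n, 3/2n, ...} attains the minimum possible value 1/(2n)
midpoints = [(2 * i - 1) / (2 * n) for i in range(1, n + 1)]
d_mid = star_discrepancy_1d(midpoints)

random.seed(0)
d_rand = star_discrepancy_1d([random.random() for _ in range(n)])  # typically larger
```

The midpoint set is as uniform as a 1-D set can be; random uniforms almost always do worse.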
Examples of "good" sequences
Sequences with verifiably low star discrepancy $D_n^\star$ are often called, unsurprisingly, low discrepancy sequences.
van der Corput. This is perhaps the simplest example. For $d=1$, the van der Corput sequences are formed by expanding the integer $i$ in binary and then "reflecting the digits" around the decimal point. More formally, this is done with the radical inverse function in base $b$,
$$\newcommand{\rinv}{\phi}
\rinv_b(i) = \sum_{k=0}^\infty a_k b^{-k-1} \>,
$$
where $i = \sum_{k=0}^\infty a_k b^k$ and $a_k$ are the digits in the base $b$ expansion of $i$. This function forms the basis for many other sequences as well. For example, $41$ in binary is $101001$ and so $a_0 = 1$, $a_1 = 0$, $a_2 = 0$, $a_3 = 1$, $a_4 = 0$ and $a_5 = 1$. Hence, the 41st point in the van der Corput sequence is $x_{41} = \rinv_2(41) = 0.100101\,\text{(base 2)} = 37/64$.
Note that because the least significant bit of $i$ oscillates between $0$ and $1$, the points $x_i$ for odd $i$ are in $[1/2,1)$, whereas the points $x_i$ for even $i$ are in $(0,1/2)$.
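The radical inverse is only a few lines of code; a sketch reproducing the $x_{41} = 37/64$ computation above:

```python
from fractions import Fraction

def radical_inverse(i, b=2):
    """phi_b(i): reflect the base-b digits of i about the radix point."""
    result, denom = Fraction(0), b
    while i > 0:
        i, digit = divmod(i, b)  # peel off the least significant digit
        result += Fraction(digit, denom)
        denom *= b
    return result

x41 = radical_inverse(41)  # Fraction(37, 64)
```

Exact rational arithmetic via `Fraction` makes the digit reflection easy to check; a float version just replaces `Fraction(digit, denom)` with `digit / denom`.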
Halton sequences. Among the most popular of classical low-discrepancy sequences, these are extensions of the van der Corput sequence to multiple dimensions. Let $p_j$ be the $j$th smallest prime. Then, the $i$th point $x_i$ of the $d$-dimensional Halton sequence is
$$
x_i = (\rinv_{p_1}(i), \rinv_{p_2}(i),\ldots,\rinv_{p_d}(i)) \>.
$$
For low $d$ these work quite well, but have problems in higher dimensions.
Halton sequences satisfy $D_n^\star = O(n^{-1} (\log n)^d)$. They are also nice because they are extensible in that the construction of the points does not depend on an a priori choice of the length of the sequence $n$.
Hammersley sequences. This is a very simple modification of the Halton sequence. We instead use
$$
x_i = (i/n, \rinv_{p_1}(i), \rinv_{p_2}(i),\ldots,\rinv_{p_{d-1}}(i)) \>.
$$
Perhaps surprisingly, the advantage is that they have better star discrepancy $D_n^\star = O(n^{-1}(\log n)^{d-1})$.
Here is an example of the Halton and Hammersley sequences in two dimensions.
Faure-permuted Halton sequences. A special set of permutations (fixed as a function of $i$) can be applied to the digit expansion $a_k$ for each $i$ when producing the Halton sequence. This helps remedy (to some degree) the problems alluded to in higher dimensions. Each of the permutations has the interesting property of keeping $0$ and $b-1$ as fixed points.
Lattice rules. Let $\beta_1, \ldots, \beta_{d-1}$ be integers. Take
$$
x_i = (i/n, \{i \beta_1 / n\}, \ldots, \{i \beta_{d-1}/n\}) \>,
$$
where $\{y\}$ denotes the fractional part of $y$. Judicious choice of the $\beta$ values yields good uniformity properties. Poor choices can lead to bad sequences. They are also not extensible. Here are two examples.
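A minimal rank-1 lattice sketch for $d = 2$; the choice $n = 55$, $\beta_1 = 34$ (consecutive Fibonacci numbers) is a classical good pick in two dimensions, while other settings require searching for good $\beta$ values:

```python
# rank-1 lattice: x_i = (i/n, {i * beta / n}) for i = 0, ..., n-1
n, beta = 55, 34
points = [(i / n, (i * beta / n) % 1.0) for i in range(n)]
```

Because gcd(beta, n) = 1, each coordinate runs through all n distinct values i/n, a one-dimensional projection property that poor choices of beta destroy.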
$(t,m,s)$ nets. $(t,m,s)$ nets in base $b$ are sets of points such that every rectangle of volume $b^{t-m}$ in $[0,1]^s$ contains $b^t$ points. This is a strong form of uniformity. Small $t$ is your friend, in this case. Halton, Sobol' and Faure sequences are examples of $(t,m,s)$ nets. These lend themselves nicely to randomization via scrambling. Random scrambling (done right) of a $(t,m,s)$ net yields another $(t,m,s)$ net. The MinT project keeps a collection of such sequences.
Simple randomization: Cranley-Patterson rotations. Let $x_i \in [0,1]^d$ be a sequence of points. Let $U \sim \mathcal U(0,1)$. Then the points $\hat x_i = \{x_i + U\}$ are uniformly distributed in $[0,1]^d$.
Here is an example with the blue dots being the original points and the red dots being the rotated ones with lines connecting them (and shown wrapped around, where appropriate).
Completely uniformly distributed sequences. This is an even stronger notion of uniformity that sometimes comes into play. Let $(u_i)$ be the sequence of points in $[0,1]$ and now form overlapping blocks of size $d$ to get the sequence $(x_i)$. So, if $s = 3$, we take $x_1 = (u_1,u_2,u_3)$ then $x_2 = (u_2,u_3,u_4)$, etc. If, for every $s \geq 1$, $D_n^\star(x_1,\ldots,x_n) \to 0$, then $(u_i)$ is said to be completely uniformly distributed. In other words, the sequence yields a set of points of any dimension that have desirable $D_n^\star$ properties.
As an example, the van der Corput sequence is not completely uniformly distributed since for $s = 2$, the points $x_{2i}$ are in the square $(0,1/2) \times [1/2,1)$ and the points $x_{2i-1}$ are in $[1/2,1) \times (0,1/2)$. Hence there are no points in the square $(0,1/2) \times (0,1/2)$ which implies that for $s=2$, $D_n^\star \geq 1/4$ for all $n$.
Standard references
The Niederreiter (1992) monograph and the Fang and Wang (1994) text are places to go for further exploration.
|
Fake uniform random numbers: More evenly distributed than true uniform data
|
Yes, there are many ways to produce a sequence of numbers that are more evenly distributed than random uniforms. In fact, there is a whole field dedicated to this question; it is the backbone of quasi
|
Fake uniform random numbers: More evenly distributed than true uniform data
Yes, there are many ways to produce a sequence of numbers that are more evenly distributed than random uniforms. In fact, there is a whole field dedicated to this question; it is the backbone of quasi-Monte Carlo (QMC). Below is a brief tour of the absolute basics.
Measuring uniformity
There are many ways to do this, but the most common way has a strong, intuitive, geometric flavor. Suppose we are concerned with generating $n$ points $x_1,x_2,\ldots,x_n$ in $[0,1]^d$ for some positive integer $d$. Define
$$\newcommand{\I}{\mathbf 1}
D_n := \sup_{R \in \mathcal R}\,\left|\frac{1}{n}\sum_{i=1}^n \I_{(x_i \in R)} - \mathrm{vol}(R)\right| \>,
$$
where $R$ is a rectangle $[a_1, b_1] \times \cdots \times [a_d, b_d]$ in $[0,1]^d$ such that $0 \leq a_i \leq b_i \leq 1$ and $\mathcal R$ is the set of all such rectangles. The first term inside the modulus is the "observed" proportion of points inside $R$ and the second term is the volume of $R$, $\mathrm{vol}(R) = \prod_i (b_i - a_i)$.
The quantity $D_n$ is often called the discrepancy or extreme discrepancy of the set of points $(x_i)$. Intuitively, we find the "worst" rectangle $R$ where the proportion of points deviates the most from what we would expect under perfect uniformity.
This is unwieldy in practice and difficult to compute. For the most part, people prefer to work with the star discrepancy,
$$
D_n^\star = \sup_{R \in \mathcal A} \,\left|\frac{1}{n}\sum_{i=1}^n \I_{(x_i \in R)} - \mathrm{vol}(R)\right| \>.
$$
The only difference is the set $\mathcal A$ over which the supremum is taken. It is the set of anchored rectangles (at the origin), i.e., where $a_1 = a_2 = \cdots = a_d = 0$.
Lemma: $D_n^\star \leq D_n \leq 2^d D_n^\star$ for all $n$, $d$.
Proof. The left hand bound is obvious since $\mathcal A \subset \mathcal R$. The right-hand bound follows because every $R \in \mathcal R$ can be composed via unions, intersections and complements of no more than $2^d$ anchored rectangles (i.e., in $\mathcal A$).
Thus, we see that $D_n$ and $D_n^\star$ are equivalent in the sense that if one is small as $n$ grows, the other will be too. Here is a (cartoon) picture showing candidate rectangles for each discrepancy.
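In one dimension, $D_n^\star$ coincides with the Kolmogorov-Smirnov distance to the uniform CDF and can be computed exactly from the sorted points. A small illustrative sketch (Python rather than this thread's R, purely for brevity):

```python
# Exact 1-D star discrepancy via the classical formula:
# D_n* = max over sorted x_(1) <= ... <= x_(n) of
#        max(i/n - x_(i), x_(i) - (i-1)/n).
def star_discrepancy_1d(points):
    xs = sorted(points)
    n = len(xs)
    return max(
        max(i / n - x, x - (i - 1) / n)
        for i, x in enumerate(xs, start=1)
    )

# The centred regular grid (2i-1)/(2n) attains the minimum 1/(2n).
grid = [(2 * i - 1) / 10 for i in range(1, 6)]   # n = 5
print(star_discrepancy_1d(grid))                 # 1/(2*5) = 0.1
```

Any other placement of five points gives a strictly larger value, which is one way to see why evenly spread deterministic points beat random ones on this criterion.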
Examples of "good" sequences
Sequences with verifiably low star discrepancy $D_n^\star$ are often called, unsurprisingly, low discrepancy sequences.
van der Corput. This is perhaps the simplest example. For $d=1$, the van der Corput sequences are formed by expanding the integer $i$ in binary and then "reflecting the digits" around the decimal point. More formally, this is done with the radical inverse function in base $b$,
$$\newcommand{\rinv}{\phi}
\rinv_b(i) = \sum_{k=0}^\infty a_k b^{-k-1} \>,
$$
where $i = \sum_{k=0}^\infty a_k b^k$ and $a_k$ are the digits in the base $b$ expansion of $i$. This function forms the basis for many other sequences as well. For example, $41$ in binary is $101001$ and so $a_0 = 1$, $a_1 = 0$, $a_2 = 0$, $a_3 = 1$, $a_4 = 0$ and $a_5 = 1$. Hence, the 41st point in the van der Corput sequence is $x_{41} = \rinv_2(41) = 0.100101\,\text{(base 2)} = 37/64$.
Note that because the least significant bit of $i$ oscillates between $0$ and $1$, the points $x_i$ for odd $i$ are in $[1/2,1)$, whereas the points $x_i$ for even $i$ are in $(0,1/2)$.
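The radical inverse is easy to implement by peeling off base-$b$ digits; a short Python sketch reproducing the worked example:

```python
# Radical inverse in base b: reflect the base-b digits of i about
# the radix point, e.g. 41 = 101001 (base 2) -> 0.100101 (base 2).
def radical_inverse(i, b):
    x, f = 0.0, 1.0 / b
    while i > 0:
        i, digit = divmod(i, b)   # digit = next base-b digit of i
        x += digit * f
        f /= b
    return x

print(radical_inverse(41, 2))   # 0.578125 = 37/64
```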
Halton sequences. Among the most popular of classical low-discrepancy sequences, these are extensions of the van der Corput sequence to multiple dimensions. Let $p_j$ be the $j$th smallest prime. Then, the $i$th point $x_i$ of the $d$-dimensional Halton sequence is
$$
x_i = (\rinv_{p_1}(i), \rinv_{p_2}(i),\ldots,\rinv_{p_d}(i)) \>.
$$
For low $d$ these work quite well, but have problems in higher dimensions.
Halton sequences satisfy $D_n^\star = O(n^{-1} (\log n)^d)$. They are also nice because they are extensible in that the construction of the points does not depend on an a priori choice of the length of the sequence $n$.
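Combining one radical inverse per coordinate, with the $j$th prime as base, gives Halton points. A Python sketch (the helper is repeated so the snippet stands alone; the prime table is truncated for illustration):

```python
# Coordinate j of Halton point i is the radical inverse of i in
# the j-th prime base.
def radical_inverse(i, b):
    x, f = 0.0, 1.0 / b
    while i > 0:
        i, digit = divmod(i, b)
        x += digit * f
        f /= b
    return x

PRIMES = (2, 3, 5, 7)   # enough for d <= 4 in this sketch

def halton(n, d):
    return [tuple(radical_inverse(i, PRIMES[j]) for j in range(d))
            for i in range(1, n + 1)]

pts = halton(4, 2)   # (1/2,1/3), (1/4,2/3), (3/4,1/9), (1/8,4/9)
```

Extensibility is visible here: computing point $i$ never uses $n$.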
Hammersley sequences. This is a very simple modification of the Halton sequence. We instead use
$$
x_i = (i/n, \rinv_{p_1}(i), \rinv_{p_2}(i),\ldots,\rinv_{p_{d-1}}(i)) \>.
$$
Perhaps surprisingly, the advantage is that they have better star discrepancy $D_n^\star = O(n^{-1}(\log n)^{d-1})$.
Here is an example of the Halton and Hammersley sequences in two dimensions.
Faure-permuted Halton sequences. A special set of permutations (fixed as a function of $i$) can be applied to the digit expansion $a_k$ for each $i$ when producing the Halton sequence. This helps remedy (to some degree) the problems alluded to in higher dimensions. Each of the permutations has the interesting property of keeping $0$ and $b-1$ as fixed points.
Lattice rules. Let $\beta_1, \ldots, \beta_{d-1}$ be integers. Take
$$
x_i = (i/n, \{i \beta_1 / n\}, \ldots, \{i \beta_{d-1}/n\}) \>,
$$
where $\{y\}$ denotes the fractional part of $y$. Judicious choice of the $\beta$ values yields good uniformity properties. Poor choices can lead to bad sequences. They are also not extensible. Here are two examples.
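A rank-1 lattice in $d=2$ is only a few lines; in this Python sketch the generator $\beta = 3$ and $n = 8$ are toy values for illustration, not a recommended choice:

```python
# Rank-1 lattice in two dimensions: x_i = (i/n, {i*beta/n}).
def lattice_points(n, beta):
    return [(i / n, (i * beta / n) % 1.0) for i in range(n)]

pts = lattice_points(8, 3)   # e.g. pts[3] = (3/8, 9/8 mod 1) = (0.375, 0.125)
```

Note the non-extensibility: changing $n$ changes every point, unlike the Halton construction.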
$(t,m,s)$ nets. A $(t,m,s)$ net in base $b$ is a set of $b^m$ points such that every elementary interval of volume $b^{t-m}$ in $[0,1)^s$ contains exactly $b^t$ points. This is a strong form of uniformity. Small $t$ is your friend, in this case. Halton, Sobol' and Faure sequences are examples of $(t,m,s)$ nets. These lend themselves nicely to randomization via scrambling. Random scrambling (done right) of a $(t,m,s)$ net yields another $(t,m,s)$ net. The MinT project keeps a collection of such sequences.
Simple randomization: Cranley-Patterson rotations. Let $x_i \in [0,1]^d$ be a sequence of points and let $U \sim \mathcal U([0,1]^d)$ be a single uniform random shift. Then the points $\hat x_i = \{x_i + U\}$, with the fractional part taken coordinate-wise, are each uniformly distributed in $[0,1]^d$.
Here is an example with the blue dots being the original points and the red dots being the rotated ones with lines connecting them (and shown wrapped around, where appropriate).
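The rotation itself is a one-liner per coordinate; a Python sketch (the three sample points are made up for illustration):

```python
import random

# Cranley-Patterson rotation: every point shares ONE random shift
# U, and the fractional part is taken coordinate-wise, so each
# rotated point stays in [0,1)^d and is marginally uniform.
def cp_rotate(points, rng=random):
    d = len(points[0])
    u = [rng.random() for _ in range(d)]
    return [tuple((x + u[j]) % 1.0 for j, x in enumerate(p))
            for p in points]

pts = [(0.0, 0.0), (0.5, 0.25), (0.9, 0.8)]
rotated = cp_rotate(pts)
```

Because the shift is shared, the relative geometry of the point set (and hence its discrepancy behaviour) is essentially preserved while each point becomes random.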
Completely uniformly distributed sequences. This is an even stronger notion of uniformity that sometimes comes into play. Let $(u_i)$ be a sequence of points in $[0,1]$ and form overlapping blocks of size $s$ to get the sequence $(x_i)$. So, if $s = 3$, we take $x_1 = (u_1,u_2,u_3)$, then $x_2 = (u_2,u_3,u_4)$, etc. If, for every $s \geq 1$, $D_n^\star(x_1,\ldots,x_n) \to 0$, then $(u_i)$ is said to be completely uniformly distributed. In other words, the sequence yields a set of points of any dimension that has desirable $D_n^\star$ properties.
As an example, the van der Corput sequence is not completely uniformly distributed since for $s = 2$, the points $x_{2i}$ are in the square $(0,1/2) \times [1/2,1)$ and the points $x_{2i-1}$ are in $[1/2,1) \times (0,1/2)$. Hence there are no points in the square $(0,1/2) \times (0,1/2)$ which implies that for $s=2$, $D_n^\star \geq 1/4$ for all $n$.
Standard references
The Niederreiter (1992) monograph and the Fang and Wang (1994) text are places to go for further exploration.
5,689
Fake uniform random numbers: More evenly distributed than true uniform data
One way to do this would be to generate uniform random numbers, test for "closeness" using any method you like, and then delete random items that are too close to others, drawing additional random uniforms to make up for them.
Would such a distribution pass every test of uniformity? I sure hope not! It's no longer uniformly distributed, it is now some other distribution.
One unintuitive aspect of probability is that chance is clumpy. There are longer runs in random data than people think there will be. I think Tversky did some research on this (he researched so much, though, that it's hard to remember).
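A minimal sketch of the procedure described above, in Python; keeping only draws that clear every accepted point is equivalent to deleting close points and redrawing, and `min_gap` is an assumed tuning parameter:

```python
import random

# Draw uniforms, but accept a draw only if it is at least min_gap
# away from every point accepted so far. The result is, as noted,
# deliberately NOT uniform any more.
def spread_uniforms(n, min_gap, rng=random):
    pts = []
    while len(pts) < n:
        x = rng.random()
        if all(abs(x - p) >= min_gap for p in pts):
            pts.append(x)
    return pts

sample = spread_uniforms(20, 0.02)
```

For this to terminate, `n * min_gap` must stay comfortably below 1; here 20 points with gap 0.02 need only 0.38 of the unit interval.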
5,690
Fake uniform random numbers: More evenly distributed than true uniform data
This is known as a "hard-core" poisson point process - so named by Brian Ripley in the 1970s; i.e. you want it to be random, but you don't want any points to be too close together. The "hard-core" can be imagined as a buffer zone around which other points cannot intrude.
Imagine you're recording the position of some cars in a city - but you're only recording the point at the nominal centre of the car. While they're on the streets no two point pairs can come close together because the points are protect by the "hard-core" of the bodywork - we'll ignore the potential super-position in multi-storey car parks :-)
There are procedures for generating such point processes - one way is just to generate points uniformly and then remove any that are too close together!
For some detail on such processes, refer for example to this
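One simple variant of "generate uniformly and remove points that are too close" keeps a candidate only if it clears every previously kept point by the hard-core radius. A Python sketch on the unit square, with `r = 0.05` an assumed illustration value:

```python
import random

# Hard-core thinning: each kept point claims a disc of radius r
# (the "bodywork"); candidates intruding on any claimed disc are
# discarded.
def hard_core(n_candidates, r, rng=random):
    kept, r2 = [], r * r
    for _ in range(n_candidates):
        p = (rng.random(), rng.random())
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r2
               for q in kept):
            kept.append(p)
    return kept

pts = hard_core(500, 0.05)
```

The retained pattern looks random but never clusters, which is exactly the hard-core property.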
5,691
Fake uniform random numbers: More evenly distributed than true uniform data
With respect to batch generation in advance, I would generate a large number of sets of pseudorandom variates, and then test them with a test such as the Kolmogorov-Smirnov test. You will want to select the set that has the highest p-value (i.e., $p \approx 1$ is ideal). Note that this will be slow, but as $N$ gets larger it probably becomes less necessary.
With respect to incremental generation, you essentially are looking for a series with a moderately negative autocorrelation. I'm not sure what the best way to do that would be, since I have very limited experience with time-series, but I suspect there are existing algorithms for this.
With respect to a test for "too even", any test for whether a sample follows a specific distribution (such as the KS noted above) will do, you just want to check if $p > (1-\alpha)$, rather than the standard approach. I wrote about an example of this alternative approach here: chi-squared always a one-sided test.
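Since the KS p-value is a monotone (decreasing) function of the statistic, picking the batch with the highest p-value is the same as picking the smallest KS distance. A Python sketch of the batch-selection idea that avoids any stats library by computing the KS distance to the Uniform(0,1) CDF directly:

```python
import random

# KS distance of a sample to the Uniform(0,1) CDF, via the sorted
# points. Smallest statistic <=> highest p-value.
def ks_uniform(sample):
    xs = sorted(sample)
    n = len(xs)
    return max(max(i / n - x, x - (i - 1) / n)
               for i, x in enumerate(xs, start=1))

# Generate many candidate batches and keep the most "uniform" one.
def best_batch(n_batches, size, rng=random):
    batches = [[rng.random() for _ in range(size)]
               for _ in range(n_batches)]
    return min(batches, key=ks_uniform)

chosen = best_batch(200, 50)
```

As the answer warns, this brute-force search is slow; the cost grows linearly in the number of candidate batches.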
5,692
Fake uniform random numbers: More evenly distributed than true uniform data
I would formalize your problem this way: You want a distribution over $[0,1]^n$ such that the density is $f(x) \propto e^{\left(\frac1k\sum_{ij}\lvert x_i-x_j \rvert^{k}\right)^{\frac1k}}$ for some $k<0$ quantifying the repulsion of points.
One easy way to generate such vectors is to do Gibbs sampling.
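One plausible reading of the target (interpreting the exponent as the power mean, with exponent $k<0$, of the pairwise distances, so that small gaps are penalized) can be sampled with coordinate-at-a-time Metropolis updates, a Gibbs-style scan. A hedged Python sketch under that reading; the parameter values and the tiny distance floor are assumptions:

```python
import math
import random

# log f(x) = M_k(x), the power mean of pairwise distances with
# exponent k < 0; any small gap drags M_k down. The 1e-12 floor
# is a numerical guard against coincident coordinates.
def log_f(x, k):
    n = len(x)
    s = sum(max(abs(x[i] - x[j]), 1e-12) ** k
            for i in range(n) for j in range(i + 1, n))
    return (s / (n * (n - 1) // 2)) ** (1.0 / k)

# Gibbs-style scan: for each coordinate, propose a fresh uniform
# and accept with probability min(1, f(new)/f(old)).
def sample_repulsive(n, k=-1.0, sweeps=200, rng=random):
    x = [rng.random() for _ in range(n)]
    cur = log_f(x, k)
    for _ in range(sweeps):
        for i in range(n):
            old = x[i]
            x[i] = rng.random()
            new = log_f(x, k)
            if rng.random() < math.exp(min(new - cur, 0.0)):
                cur = new        # accept the move
            else:
                x[i] = old       # reject, restore

    return x

pts = sample_repulsive(10)
```

A full Gibbs sampler would draw each coordinate from its exact conditional; the Metropolis step above is a common stand-in when that conditional has no closed form.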
5,693
What is the difference between finite and infinite variance
$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\var}{var}$
What does it mean for a random variable to have "infinite variance"? What does it mean for a random variable to have infinite expectation? The explanation in both cases is rather similar, so let us start with the case of expectation, and then variance after that.
Let $X$ be a continuous random variable (RV) (our conclusions will be valid more generally; for the discrete case, replace the integral by a sum). To simplify exposition, let's assume $X \ge 0$.
Its expectation is defined by the integral
$$
\E X = \int_0^\infty x f(x) \, d x
$$
when that integral exists, that is, is finite. Else we say the expectation does not exist.
That is an improper integral, and by definition is
$$
\int_0^\infty x f(x) \, d x = \lim_{a \rightarrow \infty} \int_0^a x f(x) \, d x
$$
For that limit to be finite, the contribution from the tail must vanish, that is, we must have
$$
\lim_{a \rightarrow \infty} \int_a^\infty x f(x) \, d x =0
$$
A necessary (but not sufficient) condition for that to be the case is $\lim_{x\rightarrow \infty} x f(x) =0 $. What the displayed condition above says is that the contribution to the expectation from the (right) tail must be vanishing. If that is not the case, the expectation is dominated by contributions from arbitrarily large realized values.
In practice, that will mean that empirical means will be very unstable, because they will be dominated by the infrequent very large realized values. And note that this instability of sample means will not disappear with large samples --- it is a built-in part of the model!
In many situations, that seems unrealistic. Take a (life) insurance model, where $X$ models some (human) lifetime. We know that, say, $X > 1000$ doesn't occur, but in practice we use models without an upper limit. The reason is clear: no hard upper limit is known; if a person is (say) 110 years old, there is no reason he cannot live one more year! So a model with a hard upper limit seems artificial. Still, we do not want the extreme upper tail to have much influence.
If $X$ has a finite expectation, then we can change the model to have a hard upper limit without undue influence to the model. In situations with a fuzzy upper limit that seems good. If the model has infinite expectation, then, any hard upper limit we introduce to the model will have dramatic consequences! That is the real importance of infinite expectation.
With finite expectation, we can be fuzzy about upper limits. With infinite expectation, we cannot.
Now, much the same can be said about infinite variance, mutatis mutandis.
To make this clearer, let us look at an example. We use the Pareto distribution, implemented in the R package (on CRAN) actuar as pareto1 --- the single-parameter Pareto distribution, also known as the Pareto type 1 distribution. It has probability density function given by
$$
f(x) = \begin{cases} \frac{\alpha m^\alpha}{x^{\alpha+1}} &, x\ge m \\
0 &, x<m \end{cases}
$$
for some parameters $m>0, \alpha>0$. When $\alpha > 1 $ the expectation exists and is given by $\frac{\alpha}{\alpha-1}\cdot m$. When $\alpha \le 1$ the expectation does not exist, or as we say, it is infinite, because the integral defining it diverges to infinity. We can define the First moment distribution (see the post When would we use tantiles and the medial, rather than quantiles and the median? for some information and references) as
$$
E(M) = \int_m^M x f(x) \, d x = \frac{\alpha}{\alpha-1} \left( m - \frac{m^\alpha}{M^{\alpha-1}} \right)
$$
(this exists regardless of whether the expectation itself exists). (Later edit: I invented the name "first moment distribution"; later I learned this is related to what is "officially" named partial moments.)
When the expectation exists ($\alpha> 1$) we can divide by it to get the relative first moment distribution, given by
$$
Er(M) = E(M)/E(\infty) = 1-\left(\frac{m}{M}\right)^{\alpha-1}
$$
When $\alpha$ is just a little bit larger than one, so the expectation "just barely exists", the integral defining the expectation converges slowly. Let us look at the example with $m=1, \alpha=1.2$, and then plot $Er(M)$ with the help of R:
### Function for opening new plot file:
open_png <- function(filename)
png(filename=filename, type="cairo-png")
library(actuar) # from CRAN
### Code for Pareto type I distribution:
# First plotting density and "graphical
# moments" using ideas from
# http://www.quantdec.com/envstats/notes/class_06/properties.htm
# and used some times at cross validated
m <- 1.0
alpha <- 1.2
# Expectation:
E <- m * (alpha/(alpha-1))
# upper limit for plots:
upper <- qpareto1(0.99, alpha, m)
#
open_png("first_moment_dist1.png")
Er <- function(M, m, alpha) 1.0 - (m/M)^(alpha-1.0)
### Inverse relative first moment distribution
### function, giving what we may call
### "expectation quantiles":
Er_inv <- function(eq, m, alpha) m*exp(log(1.0-eq)/(1-alpha))
plot(function(M) Er(M, m, alpha), from=1.0, to=upper)
plot(function(M) ppareto1(M, alpha, m), from=1.0, to=upper, add=TRUE, col="red")
dev.off()
which produces this plot:
For example, from this plot you can read that about 50% of the contribution to the expectation comes from observations above about 32. Given that the expectation $\mu$ of this distribution is 6, that is astounding! (This distribution does not have a finite variance. For that we need $\alpha > 2$.)
The function Er_inv defined above is the inverse relative first moment distribution, an analogue to the quantile function. We have:
### What this plot shows very clearly is that
### most of the contribution to the expectation
### come from the very extreme right tail!
# Example
eq <- Er_inv(0.5, m, alpha)
ppareto1(eq, alpha, m)
eq
[1] 0.984375
[1] 32
This shows that 50% of the contributions to the expectation comes from the upper 1.5% tail of the distribution!
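The two numbers can be re-derived directly from the closed forms; a quick Python check of the R output above, with $m=1$, $\alpha=1.2$:

```python
import math

m, alpha = 1.0, 1.2

# Inverse relative first moment distribution: Er_inv from the R code.
def er_inv(eq):
    return m * math.exp(math.log(1.0 - eq) / (1.0 - alpha))

# CDF of the Pareto type 1 distribution, for x >= m.
def pareto1_cdf(x):
    return 1.0 - (m / x) ** alpha

eq = er_inv(0.5)
print(eq, pareto1_cdf(eq))   # approximately 32.0 and 0.984375
```

The second value confirms that the "50% expectation quantile" sits past the 98.4th percentile of the distribution.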
So, especially in small samples, where there is a high probability that the extreme tail is not represented, the arithmetic mean, while still being an unbiased estimator of the expectation $\mu$, must have a very skewed distribution. We will investigate this by simulation: first we use a sample size $n=5$.
set.seed(1234)
n <- 5
N <- 10000000 # Number of simulation replicas
means <- replicate(N, mean(rpareto1(n, alpha, m)))
mean(means)
[1] 5.846645
median(means)
[1] 2.658925
min(means)
[1] 1.014836
max(means)
[1] 633004.5
length(means[means <=100])
[1] 9970136
To get a readable plot we only show the histogram for the part of the sample with values below 100, which is a very large part of the sample.
open_png("mean_sim_hist1.png")
hist(means[means<=100], breaks=100, probability=TRUE)
dev.off()
The distribution of the arithmetic means is very skewed:
sum(means <= 6)/N
[1] 0.8596413
Almost 86% of the empirical means are less than or equal to the theoretical mean, the expectation. That is what we should expect, since most of the contribution to the mean comes from the extreme upper tail, which is unrepresented in most samples.
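A scaled-down re-run of this experiment in Python (far fewer replicas than the R code, with Pareto type 1 draws by inversion, $x = m\,U^{-1/\alpha}$ for $U$ uniform on $(0,1]$):

```python
import random

m, alpha, n, N = 1.0, 1.2, 5, 200_000
rng = random.Random(1234)

# Inversion sampler: 1 - random() lies in (0,1], avoiding a
# zero-division at u = 0.
def rpareto1():
    return m * (1.0 - rng.random()) ** (-1.0 / alpha)

# Fraction of sample means (n = 5) at or below the expectation 6.
below = sum(
    1 for _ in range(N)
    if sum(rpareto1() for _ in range(n)) / n <= 6.0
)
frac = below / N
print(frac)   # roughly 0.86, matching the R simulation
```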
We need to go back to reassess our earlier conclusion.
While the existence of the mean makes it possible to be fuzzy about upper limits, we see that when "the mean just barely exists", meaning that the integral is slowly convergent, we cannot really be that fuzzy about upper limits. A slowly convergent integral has the consequence that it might be better to use methods that do not assume that the expectation exists.
When the integral converges very slowly, it is in practice as if it did not converge at all. The practical benefits that follow from a convergent integral are a chimera in the slowly convergent case!
That is one way to understand N N Taleb's conclusion in http://fooledbyrandomness.com/complexityAugust-06.pdf
|
What is the difference between finite and infinite variance
|
$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\var}{var}$
What does it mean for a random variable to have "infinite variance"? What does it mean for a random variable to have infinite expectation
|
What is the difference between finite and infinite variance
$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\var}{var}$
What does it mean for a random variable to have "infinite variance"? What does it mean for a random variable to have infinite expectation? The explanation in both cases are rather similar, so let us start with the case of expectation, and then variance after that.
Let $X$ be a continuous random variable (RV) (our conclusions will be valid more generally, for the discrete case, replace integral by sum). To simplify exposition, lets assume $X \ge 0$.
Its expectation is defined by the integral
$$
\E X = \int_0^\infty x f(x) \, d x
$$
when that integral exists, that is, is finite. Else we say the expectation does not exist.
That is an improper integral, and by definition is
$$
\int_0^\infty x f(x) \, d x = \lim_{a \rightarrow \infty} \int_0^a x f(x) \, d x
$$
For that limit to be finite, the contribution from the tail must vanish, that is, we must have
$$
\lim_{a \rightarrow \infty} \int_a^\infty x f(x) \, d x =0
$$
A necessary (but not sufficient) condition for that to be the case is $\lim_{x\rightarrow \infty} x f(x) =0 $. What the above displayed condition says, is that, the contribution to the expectation from the (right) tail must be vanishing. If so is not the case, the expectation is dominated by contributions from arbitrarily large realized values.
In practice, that will mean that empirical means will be very unstable, because they will be dominated by the infrequent very large realized values. And note that this instability of sample means will not disappear with large samples --- it is a built-in part of the model!
In many situations, that seems unrealistic. Lets say an (life) insurance model, so $X$ models some (human) lifetime. We know that, say $X > 1000$ doesn't occur, but in practice we use models without an upper limit. The reason is clear: No hard upper limit is known, if a person is (say) 110 years old, there is no reason he cannot live one more year! So a model with a hard upper limit seems artificial. Still, we do not want the extreme upper tail to have much influence.
If $X$ has a finite expectation, then we can change the model to have a hard upper limit without undue influence to the model. In situations with a fuzzy upper limit that seems good. If the model has infinite expectation, then, any hard upper limit we introduce to the model will have dramatic consequences! That is the real importance of infinite expectation.
With finite expectation, we can be fuzzy about upper limits. With infinite expectation, we cannot.
Now, much the same can be said about infinite variance, mutatis mutandi.
To make clearer, let us see at an example. For the example we use the Pareto distribution, implemented in the R package (on CRAN) actuar as pareto1 --- single-parameter Pareto distribution also known as Pareto type 1 distribution. It has probability density function given by
$$
f(x) = \begin{cases} \frac{\alpha m^\alpha}{x^{\alpha+1}} &, x\ge m \\
0 &, x<m \end{cases}
$$
for some parameters $m>0, \alpha>0$. When $\alpha > 1 $ the expectation exists and is given by $\frac{\alpha}{\alpha-1}\cdot m$. When $\alpha \le 1$ the expectation do not exist, or as we say, it is infinite, because the integral defining it diverges to infinity. We can define the First moment distribution (see the post When would we use tantiles and the medial, rather than quantiles and the median? for some information and references) as
$$
E(M) = \int_m^M x f(x) \, d x = \frac{\alpha}{\alpha-1} \left( m - \frac{m^\alpha}{M^{\alpha-1}} \right)
$$
(this exists without regard to if the expectation itself exists). (Later edit: I invented the name "first moment distribution, later I learned this is related to what is "officially" named partial moments).
When the expectation exists ($\alpha> 1$) we can divide by it to get the relative first moment distribution, given by
$$
Er(M) = E(m)/E(\infty) = 1-\left(\frac{m}{M}\right)^{\alpha-1}
$$
When $\alpha$ is just a little bit larger than one, so the expectation "just barely exists", the integral defining the expectation will converge slowly. Let us look at the example with $m=1, \alpha=1.2$. Let us plot then $Er(M)$ with the help of R:
### Function for opening new plot file:
open_png <- function(filename)
png(filename=filename, type="cairo-png")
library(actuar) # from CRAN
### Code for Pareto type I distribution:
# First plotting density and "graphical
# moments" using ideas from
# http://www.quantdec.com/envstats/notes/class_06/properties.htm
# and used some times at cross validated
m <- 1.0
alpha <- 1.2
# Expectation:
E <- m * (alpha/(alpha-1))
# upper limit for plots:
upper <- qpareto1(0.99, alpha, m)
#
open_png("first_moment_dist1.png")
Er <- function(M, m, alpha) 1.0 -
(m/M)^(alpha-1.0)
### Inverse relative first moment
### distribution function, giving
# what we may call "expectation quantiles":
Er_inv <- function(eq, m, alpha)
m*exp(log(1.0-eq)/(1-alpha))
plot(function(M) Er(M, m, alpha), from=1.0,
to=upper)
plot(function(M) ppareto1(M, alpha, m),
from=1.0, to=upper, add=TRUE, col="red")
dev.off()
which produces this plot:
For example, from this plot you can read that about 50% of the contribution to the expectation come from observations above around 40. Given that the expectation $\mu$ of this distribution is 6, that is astounding! (this distribution do not have existing variance. For that we need $\alpha > 2$).
The function Er_inv defined above is the inverse relative first moment distribution, an analogue to the quantile function. We have:
### What this plot shows very clearly is that
### most of the contribution to the expectation
### come from the very extreme right tail!
# Example
eq <- Er_inv(0.5, m, alpha)
ppareto1(eq, alpha, m)
eq
[1] 0.984375
[1] 32
This shows that 50% of the contributions to the expectation comes from the upper 1.5% tail of the distribution!
So, especially in small samples where there is a high probability that the extreme tail is not represented, the arithmetic mean, while still being an unbiased estimator of the expectation $\mu$, must have a very skew distribution. We will investigate this by simulation: First we use a sample size $n=5$.
set.seed(1234)
n <- 5
N <- 10000000 # Number of simulation replicas
means <- replicate(N, mean(rpareto1(n, alpha,
m) ))
mean(means)
[1] 5.846645
median(means)
[1] 2.658925
min(means)
[1] 1.014836
max(means)
[1] 633004.5
length(means[means <=100])
[1] 9970136
To get a readable plot we only show the histogram for the part of the sample with values below 100, which is a very large part of the sample.
open_png("mean_sim_hist1.png")
hist(means[means<=100], breaks=100, probability=TRUE)
dev.off()
The distribution of the arithmetic means is very skewed:
sum(means <= 6)/N
[1] 0.8596413
Almost 86% of the empirical means are less than or equal to the theoretical mean, the expectation. That is what we should expect, since most of the contribution to the mean comes from the extreme upper tail, which is unrepresented in most samples.
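For readers without R at hand, the same experiment can be sketched in Python with the standard library only (fewer replicas than above, so the figure will wobble slightly around the one reported):

```python
import random
import statistics

random.seed(1234)
alpha, m = 1.2, 1.0
mu = m * alpha / (alpha - 1)   # = 6, the theoretical expectation
n, N = 5, 100_000              # sample size and number of replicas

def rpareto(alpha, m):
    # Inverse-CDF sampling for Pareto type I: F^{-1}(u) = m * (1 - u)^(-1/alpha)
    return m * (1.0 - random.random()) ** (-1.0 / alpha)

means = [statistics.fmean(rpareto(alpha, m) for _ in range(n)) for _ in range(N)]
frac_below = sum(x <= mu for x in means) / N
print(frac_below)  # roughly 0.86: most sample means fall below the expectation
```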
We need to go back to reassess our earlier conclusion.
While the existence of the mean makes it possible to be fuzzy about upper limits, we see that when "the mean just barely exists", meaning that the integral is slowly convergent, we cannot really be that fuzzy about upper limits. Slowly convergent integrals have the consequence that it might be better to use methods that do not assume that the expectation exists.
When the integral is very slowly converging, it is in practice as if it didn't converge at all. The practical benefits that follow from a convergent integral are a chimera in the slowly convergent case!
That is one way to understand N N Taleb's conclusion in http://fooledbyrandomness.com/complexityAugust-06.pdf
|
What is the difference between finite and infinite variance
$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\var}{var}$
What does it mean for a random variable to have "infinite variance"? What does it mean for a random variable to have infinite expectation
|
5,694
|
What is the difference between finite and infinite variance
|
Variance is a measure of the dispersion of the distribution of values of a random variable. It's not the only such measure; e.g., mean absolute deviation is one of the alternatives.
Infinite variance means that the random values don't tend to concentrate around the mean too tightly. It means there is a large enough probability that the next random number will be very far away from the mean.
Distributions like the Normal (Gaussian) can produce random numbers very far away from the mean, but the probability of such events decreases very rapidly with the magnitude of the deviation.
In that regard, when you look at the plot of a Cauchy distribution or a Gaussian (normal) distribution, they don't look very different visually. However, if you try to compute the variance of the Cauchy distribution it'll be infinite, while the Gaussian's is finite. So the normal distribution is tighter around its mean compared to the Cauchy.
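One way to make this concrete is that averaging helps for the Gaussian but not for the Cauchy: the mean of $n$ standard Cauchy variables is again standard Cauchy. A small Python sketch, standard library only (the exact numbers depend on the seed):

```python
import math
import random
import statistics

random.seed(0)

def rcauchy():
    # Standard Cauchy draw via the inverse CDF: tan(pi * (U - 1/2))
    return math.tan(math.pi * (random.random() - 0.5))

def iqr(xs):
    q = statistics.quantiles(xs, n=4)
    return q[2] - q[0]

n, N = 100, 10_000  # average n draws, repeat N times
gauss_means = [statistics.fmean(random.gauss(0, 1) for _ in range(n)) for _ in range(N)]
cauchy_means = [statistics.fmean(rcauchy() for _ in range(n)) for _ in range(N)]

print(iqr(gauss_means))   # about 0.13: shrinks like 1/sqrt(n)
print(iqr(cauchy_means))  # about 2: same spread as a single Cauchy draw
```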
Btw, if you talk to mathematicians, they'll insist that the Cauchy distribution doesn't have a well-defined mean at all, because the defining integral diverges. This sounds ridiculous to physicists, who'd point to the fact that the Cauchy distribution is symmetric and hence is bound to have a mean. In this case they'd argue the problem is with your definition of the mean, not with the Cauchy distribution.
|
What is the difference between finite and infinite variance
|
Variance is a measure of the dispersion of the distribution of values of a random variable. It's not the only such measure; e.g., mean absolute deviation is one of the alternatives.
The infinite variance mea
|
What is the difference between finite and infinite variance
Variance is a measure of the dispersion of the distribution of values of a random variable. It's not the only such measure; e.g., mean absolute deviation is one of the alternatives.
Infinite variance means that the random values don't tend to concentrate around the mean too tightly. It means there is a large enough probability that the next random number will be very far away from the mean.
Distributions like the Normal (Gaussian) can produce random numbers very far away from the mean, but the probability of such events decreases very rapidly with the magnitude of the deviation.
In that regard, when you look at the plot of a Cauchy distribution or a Gaussian (normal) distribution, they don't look very different visually. However, if you try to compute the variance of the Cauchy distribution it'll be infinite, while the Gaussian's is finite. So the normal distribution is tighter around its mean compared to the Cauchy.
Btw, if you talk to mathematicians, they'll insist that the Cauchy distribution doesn't have a well-defined mean at all, because the defining integral diverges. This sounds ridiculous to physicists, who'd point to the fact that the Cauchy distribution is symmetric and hence is bound to have a mean. In this case they'd argue the problem is with your definition of the mean, not with the Cauchy distribution.
|
What is the difference between finite and infinite variance
Variance is a measure of the dispersion of the distribution of values of a random variable. It's not the only such measure; e.g., mean absolute deviation is one of the alternatives.
The infinite variance mea
|
5,695
|
What is the difference between finite and infinite variance
|
An alternative way to look at it is via the quantile function.
$$Q(F(x)) = x$$
Then we can compute a moment or expectation
$$E(T(x)) = \int_{-\infty}^\infty T(x) f(x) dx$$
alternatively as (replacing $f(x)dx = dF$):
$$E(T(x)) = \int_{0}^1 T(Q(F)) dF$$
Say we wish to compute the first moment then $T(x) = x$. In the image below this corresponds to the area between F and the vertical line at $x=0$ (where the area on the left side may count as negative when $T(x)<0$). The second moment would correspond to the volume that the same area sweeps when it is rotated along the line at $x=0$ (with a factor $\pi$ difference).
The curves in the image show how much each quantile contributes in the computation.
For the normal curve there are only very few quantiles with a large contribution. But for the Cauchy curve there are many more quantiles with a large contribution. If the curve $T(Q(F))$ goes to infinity sufficiently fast when $F$ approaches zero or one, then the area can be infinite.
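As a sanity check, the quantile form of the expectation can be evaluated numerically. A Python sketch for a normal distribution, using the standard library's quantile function and a midpoint rule over $F \in (0,1)$:

```python
from statistics import NormalDist

d = NormalDist(mu=2.0, sigma=1.0)
K = 10_000  # number of midpoint cells on (0, 1)

# E[T(X)] = integral over F in (0,1) of T(Q(F)) dF, midpoint rule
Fs = [(k + 0.5) / K for k in range(K)]
first = sum(d.inv_cdf(F) for F in Fs) / K        # T(x) = x,   exact value is mu = 2
second = sum(d.inv_cdf(F) ** 2 for F in Fs) / K  # T(x) = x^2, exact value is mu^2 + sigma^2 = 5
print(first, second)
```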
This infinity may not be so strange, since the integrand itself, the distance (mean) or the squared distance (variance), can become infinite. It is only a question of how much weight, how much percent of $F$, those infinite tails have.
In the summation/integration of distance from zero (mean) or squared distance from the mean (variance) a single point that is very far away will have more influence on the average distance (or squared distance) than a lot of points nearby.
Thus when we move towards infinity the density may decrease, but the influence on the sum of some (increasing) quantity, e.g. distance or squared distance does not necessarily change.
If for each amount of mass at some distance $x$ there is half or more mass at a distance $\sqrt{2}x$, then the sum of total mass $\sum \frac{1}{2^n}$ will converge because the contribution of mass decreases, but the variance becomes infinite since that contribution does not decrease: $\sum \left((\sqrt{2})^n x\right)^2 \frac{1}{2^n} = \sum x^2 \to \infty$
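This geometric construction can be tabulated directly; a small Python sketch placing mass $1/2^n$ at distance $(\sqrt{2})^n$, taking $x = 1$:

```python
import math

# Point mass 1/2^n at distance (sqrt(2))^n from the origin, n = 1..40, with x = 1
N = 40
total_mass = sum(2.0 ** -n for n in range(1, N + 1))
second_moment_terms = [(math.sqrt(2.0) ** n) ** 2 * 2.0 ** -n for n in range(1, N + 1)]

print(total_mass)                # ~1: the masses sum to a finite total
print(second_moment_terms[0])    # ~1: each term contributes the same amount
print(sum(second_moment_terms))  # ~N: the second-moment sum diverges linearly
```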
|
What is the difference between finite and infinite variance
|
An alternative way to look at it is via the quantile function.
$$Q(F(x)) = x$$
Then we can compute a moment or expectation
$$E(T(x)) = \int_{-\infty}^\infty T(x) f(x) dx\\$$
alternatively as (replacing
|
What is the difference between finite and infinite variance
An alternative way to look at it is via the quantile function.
$$Q(F(x)) = x$$
Then we can compute a moment or expectation
$$E(T(x)) = \int_{-\infty}^\infty T(x) f(x) dx$$
alternatively as (replacing $f(x)dx = dF$):
$$E(T(x)) = \int_{0}^1 T(Q(F)) dF$$
Say we wish to compute the first moment then $T(x) = x$. In the image below this corresponds to the area between F and the vertical line at $x=0$ (where the area on the left side may count as negative when $T(x)<0$). The second moment would correspond to the volume that the same area sweeps when it is rotated along the line at $x=0$ (with a factor $\pi$ difference).
The curves in the image show how much each quantile contributes in the computation.
For the normal curve there are only very few quantiles with a large contribution. But for the Cauchy curve there are many more quantiles with a large contribution. If the curve $T(Q(F))$ goes to infinity sufficiently fast when $F$ approaches zero or one, then the area can be infinite.
This infinity may not be so strange, since the integrand itself, the distance (mean) or the squared distance (variance), can become infinite. It is only a question of how much weight, how much percent of $F$, those infinite tails have.
In the summation/integration of distance from zero (mean) or squared distance from the mean (variance) a single point that is very far away will have more influence on the average distance (or squared distance) than a lot of points nearby.
Thus when we move towards infinity the density may decrease, but the influence on the sum of some (increasing) quantity, e.g. distance or squared distance does not necessarily change.
If for each amount of mass at some distance $x$ there is half or more mass at a distance $\sqrt{2}x$, then the sum of total mass $\sum \frac{1}{2^n}$ will converge because the contribution of mass decreases, but the variance becomes infinite since that contribution does not decrease: $\sum \left((\sqrt{2})^n x\right)^2 \frac{1}{2^n} = \sum x^2 \to \infty$
|
What is the difference between finite and infinite variance
An alternative way to look at it is via the quantile function.
$$Q(F(x)) = x$$
Then we can compute a moment or expectation
$$E(T(x)) = \int_{-\infty}^\infty T(x) f(x) dx\\$$
alternatively as (replacing
|
5,696
|
What is the difference between finite and infinite variance
|
Most distributions you encounter probably have finite variance. Here is a discrete example $X$ that has infinite variance but finite mean:
Let its probability mass function be $p(k) = c/|k|^3$, for $k \in \mathbb{Z} \setminus\{0\}$, $p(0) = 0$, where $c = (2\zeta(3))^{-1} := (2\sum_{k=1}^\infty 1/k^3)^{-1} < \infty$. It has finite mean because $\mathbb{E} \mid X \mid = 2c\sum_{k=1}^\infty k \cdot k^{-3} = 2c\sum_{k=1}^\infty k^{-2} < \infty$. It has infinite variance because $\mathbb{E} X^2 = 2c\sum_{k=1}^\infty k^2 / k^3 = 2c\sum_{k=1}^\infty k^{-1} = \infty$.
Note: $\zeta(x) :=\sum_{k=1}^\infty k^{-x}$ is the Riemann zeta function. There are many other examples, just not so pleasant to write down.
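Partial sums make the contrast concrete. A Python sketch, where the normalizer $c$ is computed from a truncated $\zeta(3)$:

```python
# p(k) = c/|k|^3 on the nonzero integers, with c = 1/(2*zeta(3))
K = 10_000
zeta3 = sum(k ** -3 for k in range(1, 200_000))  # truncated zeta(3), ~1.202
c = 1.0 / (2.0 * zeta3)

# E|X| = 2c * sum k^-2: a convergent p-series
mean_abs = 2.0 * c * sum(k * k ** -3 for k in range(1, K + 1))
# E[X^2] = 2c * sum k^-1: the harmonic series, divergent
second = 2.0 * c * sum(k ** 2 * k ** -3 for k in range(1, K + 1))

print(mean_abs)  # ~1.37, essentially converged already at K = 10_000
print(second)    # ~2c*ln(K): keeps growing without bound as K increases
```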
|
What is the difference between finite and infinite variance
|
Most distributions you encounter probably have finite variance. Here is a discrete example $X$ that has infinite variance but finite mean:
Let its probability mass function be $ p(k) = c/|k|^3$, for
|
What is the difference between finite and infinite variance
Most distributions you encounter probably have finite variance. Here is a discrete example $X$ that has infinite variance but finite mean:
Let its probability mass function be $ p(k) = c/|k|^3$, for $k \in \mathbb{Z} \setminus\{0\}$, $p(0) = 0$, where $c = (2\zeta(3))^{-1} := (2\sum_{k=1}^\infty 1/k^3)^{-1} < \infty$. First of all because $\mathbb{E} \mid X\mid < \infty$ it has finite mean. Also it has infinite variance because $2 \sum_{k=1}^\infty k^2 / |k|^3 = 2\sum_{k=1}^\infty k^{-1} = \infty$.
Note: $\zeta(x) :=\sum_{k=1}^\infty k^{-x}$ is the Riemann zeta function. There are many other examples, just not so pleasant to write down.
|
What is the difference between finite and infinite variance
Most distributions you encounter probably have finite variance. Here is a discrete example $X$ that has infinite variance but finite mean:
Let its probability mass function be $ p(k) = c/|k|^3$, for
|
5,697
|
How is Naive Bayes a Linear Classifier?
|
In general the naive Bayes classifier is not linear, but if the likelihood factors $p(x_i \mid c)$ are from exponential families, the naive Bayes classifier corresponds to a linear classifier in a particular feature space. Here is how to see this.
You can write any naive Bayes classifier as*
$$p(c = 1 \mid \mathbf{x}) = \sigma\left( \sum_i \log \frac{p(x_i \mid c = 1)}{p(x_i \mid c = 0)} + \log \frac{p(c = 1)}{p(c = 0)} \right),$$
where $\sigma$ is the logistic function. If $p(x_i \mid c)$ is from an exponential family, we can write it as
$$p(x_i \mid c) = h_i(x_i)\exp\left(\mathbf{u}_{ic}^\top \phi_i(x_i) - A_i(\mathbf{u}_{ic})\right),$$
and hence
$$p(c = 1 \mid \mathbf{x}) = \sigma\left( \sum_i \mathbf{w}_i^\top \phi_i(x_i) + b \right),$$
where
\begin{align}
\mathbf{w}_i &= \mathbf{u}_{i1} - \mathbf{u}_{i0}, \\
b &= \log \frac{p(c = 1)}{p(c = 0)} - \sum_i \left( A_i(\mathbf{u}_{i1}) - A_i(\mathbf{u}_{i0}) \right).
\end{align}
Note that this is similar to logistic regression – a linear classifier – in the feature space defined by the $\phi_i$. For more than two classes, we analogously get multinomial logistic (or softmax) regression.
If $p(x_i \mid c)$ is Gaussian, then $\phi_i(x_i) = (x_i, x_i^2)$ and we should have
\begin{align}
w_{i1} &= \sigma_1^{-2}\mu_1 - \sigma_0^{-2}\mu_0, \\
w_{i2} &= \tfrac{1}{2}\sigma_0^{-2} - \tfrac{1}{2}\sigma_1^{-2}, \\
b_i &= \log \sigma_0 - \log \sigma_1 + \frac{\mu_0^2}{2\sigma_0^2} - \frac{\mu_1^2}{2\sigma_1^2},
\end{align}
assuming $p(c = 1) = p(c = 0) = \frac{1}{2}$.
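This can be verified numerically: expand the two Gaussian log-densities by hand and compare against the log-odds computed directly from the densities. A Python sketch with illustrative parameters, equal priors assumed:

```python
import math

mu0, s0 = 0.0, 1.0  # class 0 parameters (illustrative)
mu1, s1 = 2.0, 1.5  # class 1 parameters

def logpdf(x, mu, s):
    return -math.log(s * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * s ** 2)

def log_odds_direct(x):
    # log p(x | c=1) - log p(x | c=0), equal priors
    return logpdf(x, mu1, s1) - logpdf(x, mu0, s0)

# Coefficients of the same log-odds in the feature space phi(x) = (x, x^2)
w1 = mu1 / s1 ** 2 - mu0 / s0 ** 2
w2 = 0.5 / s0 ** 2 - 0.5 / s1 ** 2
b = math.log(s0) - math.log(s1) + mu0 ** 2 / (2 * s0 ** 2) - mu1 ** 2 / (2 * s1 ** 2)

for x in (-2.0, 0.0, 0.7, 3.0):
    assert math.isclose(log_odds_direct(x), w1 * x + w2 * x ** 2 + b,
                        rel_tol=1e-9, abs_tol=1e-12)
print("Gaussian NB log-odds is linear in (x, x^2)")
```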
*Here is how to derive this result:
\begin{align}
p(c = 1 \mid \mathbf{x})
&= \frac{p(\mathbf{x} \mid c = 1) p(c = 1)}{p(\mathbf{x} \mid c = 1) p(c = 1) + p(\mathbf{x} \mid c = 0) p(c = 0)} \\
&= \frac{1}{1 + \frac{p(\mathbf{x} \mid c = 0) p(c = 0)}{p(\mathbf{x} \mid c = 1) p(c = 1)}} \\
&= \frac{1}{1 + \exp\left( -\log\frac{p(\mathbf{x} \mid c = 1) p(c = 1)}{p(\mathbf{x} \mid c = 0) p(c = 0)} \right)} \\
&= \sigma\left( \sum_i \log \frac{p(x_i \mid c = 1)}{p(x_i \mid c = 0)} + \log \frac{p(c = 1)}{p(c = 0)} \right)
\end{align}
|
How is Naive Bayes a Linear Classifier?
|
In general the naive Bayes classifier is not linear, but if the likelihood factors $p(x_i \mid c)$ are from exponential families, the naive Bayes classifier corresponds to a linear classifier in a par
|
How is Naive Bayes a Linear Classifier?
In general the naive Bayes classifier is not linear, but if the likelihood factors $p(x_i \mid c)$ are from exponential families, the naive Bayes classifier corresponds to a linear classifier in a particular feature space. Here is how to see this.
You can write any naive Bayes classifier as*
$$p(c = 1 \mid \mathbf{x}) = \sigma\left( \sum_i \log \frac{p(x_i \mid c = 1)}{p(x_i \mid c = 0)} + \log \frac{p(c = 1)}{p(c = 0)} \right),$$
where $\sigma$ is the logistic function. If $p(x_i \mid c)$ is from an exponential family, we can write it as
$$p(x_i \mid c) = h_i(x_i)\exp\left(\mathbf{u}_{ic}^\top \phi_i(x_i) - A_i(\mathbf{u}_{ic})\right),$$
and hence
$$p(c = 1 \mid \mathbf{x}) = \sigma\left( \sum_i \mathbf{w}_i^\top \phi_i(x_i) + b \right),$$
where
\begin{align}
\mathbf{w}_i &= \mathbf{u}_{i1} - \mathbf{u}_{i0}, \\
b &= \log \frac{p(c = 1)}{p(c = 0)} - \sum_i \left( A_i(\mathbf{u}_{i1}) - A_i(\mathbf{u}_{i0}) \right).
\end{align}
Note that this is similar to logistic regression – a linear classifier – in the feature space defined by the $\phi_i$. For more than two classes, we analogously get multinomial logistic (or softmax) regression.
If $p(x_i \mid c)$ is Gaussian, then $\phi_i(x_i) = (x_i, x_i^2)$ and we should have
\begin{align}
w_{i1} &= \sigma_1^{-2}\mu_1 - \sigma_0^{-2}\mu_0, \\
w_{i2} &= \tfrac{1}{2}\sigma_0^{-2} - \tfrac{1}{2}\sigma_1^{-2}, \\
b_i &= \log \sigma_0 - \log \sigma_1 + \frac{\mu_0^2}{2\sigma_0^2} - \frac{\mu_1^2}{2\sigma_1^2},
\end{align}
assuming $p(c = 1) = p(c = 0) = \frac{1}{2}$.
*Here is how to derive this result:
\begin{align}
p(c = 1 \mid \mathbf{x})
&= \frac{p(\mathbf{x} \mid c = 1) p(c = 1)}{p(\mathbf{x} \mid c = 1) p(c = 1) + p(\mathbf{x} \mid c = 0) p(c = 0)} \\
&= \frac{1}{1 + \frac{p(\mathbf{x} \mid c = 0) p(c = 0)}{p(\mathbf{x} \mid c = 1) p(c = 1)}} \\
&= \frac{1}{1 + \exp\left( -\log\frac{p(\mathbf{x} \mid c = 1) p(c = 1)}{p(\mathbf{x} \mid c = 0) p(c = 0)} \right)} \\
&= \sigma\left( \sum_i \log \frac{p(x_i \mid c = 1)}{p(x_i \mid c = 0)} + \log \frac{p(c = 1)}{p(c = 0)} \right)
\end{align}
|
How is Naive Bayes a Linear Classifier?
In general the naive Bayes classifier is not linear, but if the likelihood factors $p(x_i \mid c)$ are from exponential families, the naive Bayes classifier corresponds to a linear classifier in a par
|
5,698
|
How is Naive Bayes a Linear Classifier?
|
It is linear only if the class-conditional variance matrices are the same for both classes. To see this, write down the ratio of the log posteriors, and you'll only get a linear function out of it if the corresponding variances are the same.
Otherwise it is quadratic.
|
How is Naive Bayes a Linear Classifier?
|
It is linear only if the class-conditional variance matrices are the same for both classes. To see this, write down the ratio of the log posteriors, and you'll only get a linear function out of it if t
|
How is Naive Bayes a Linear Classifier?
It is linear only if the class-conditional variance matrices are the same for both classes. To see this, write down the ratio of the log posteriors, and you'll only get a linear function out of it if the corresponding variances are the same.
Otherwise it is quadratic.
|
How is Naive Bayes a Linear Classifier?
It is linear only if the class-conditional variance matrices are the same for both classes. To see this, write down the ratio of the log posteriors, and you'll only get a linear function out of it if t
|
5,699
|
How is Naive Bayes a Linear Classifier?
|
I'd like to add one additional point: the reason for some of the confusion rests on what it means to be performing "Naive Bayes classification".
Under the broad topic of "Gaussian Discriminant Analysis (GDA)" there are several techniques: QDA, LDA, GNB, and DLDA (quadratic DA, linear DA, Gaussian naive Bayes, diagonal LDA). [UPDATED] LDA and DLDA should be linear in the space of the given predictors (see, e.g., Murphy, 4.2, pg. 101 for DA and pg. 82 for NB); note that GNB is not necessarily linear.
Discrete NB, which uses a multinomial distribution under the hood, is linear (you can also check out Duda, Hart & Stork, section 2.6). QDA is quadratic, as other answers have pointed out (and which I think is what is happening in your graphic; see below).
These techniques form a lattice with a nice set of constraints on the "class-wise covariance matrices" $\Sigma_c$:
QDA: $\Sigma_c$ arbitrary: arbitrary ftr. cov. matrix per class
LDA: $\Sigma_c = \Sigma$: shared cov. matrix (over classes)
GNB: $\Sigma_c = {diag}_c$: class wise diagonal cov. matrices (the assumption of ind. in the model $\rightarrow$ diagonal cov. matrix)
DLDA: $\Sigma_c = diag$: shared & diagonal cov. matrix
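In one dimension the effect of the constraint is easy to demonstrate: the log-odds of two Gaussian classes is a quadratic in $x$ whose quadratic coefficient vanishes exactly when the variances are shared. A hedged Python sketch with illustrative parameters:

```python
import math

def logpdf(x, mu, s):
    return -math.log(s * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * s ** 2)

def log_odds(x, mu0, s0, mu1, s1):
    # Posterior log-odds for two univariate Gaussian classes with equal priors
    return logpdf(x, mu1, s1) - logpdf(x, mu0, s0)

def curvature(f, x, h=1e-3):
    # Central second difference: zero for a linear function, 2a for a*x^2 + ...
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

shared = curvature(lambda x: log_odds(x, 0.0, 1.0, 2.0, 1.0), 0.5)
distinct = curvature(lambda x: log_odds(x, 0.0, 1.0, 2.0, 2.0), 0.5)
print(shared)    # ~0: shared variance (LDA-style) gives a linear log-odds
print(distinct)  # ~0.75 = sigma0^-2 - sigma1^-2: class-wise variances give a quadratic
```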
While the docs for e1071 claim that it assumes class-conditional independence (i.e., GNB), I'm suspicious that it is actually doing QDA. Some people conflate "naive Bayes" (making independence assumptions) with the "simple Bayesian classification rule". All of the GDA methods are derived from the latter; but only GNB and DLDA use the former.
A big warning, I haven't read the e1071 source code to confirm what it is doing.
|
How is Naive Bayes a Linear Classifier?
|
I'd like to add one additional point: the reason for some of the confusion rests on what it means to be performing "Naive Bayes classification".
Under the broad topic of "Gaussian Discriminant Analysis
|
How is Naive Bayes a Linear Classifier?
I'd like to add one additional point: the reason for some of the confusion rests on what it means to be performing "Naive Bayes classification".
Under the broad topic of "Gaussian Discriminant Analysis (GDA)" there are several techniques: QDA, LDA, GNB, and DLDA (quadratic DA, linear DA, Gaussian naive Bayes, diagonal LDA). [UPDATED] LDA and DLDA should be linear in the space of the given predictors (see, e.g., Murphy, 4.2, pg. 101 for DA and pg. 82 for NB); note that GNB is not necessarily linear.
Discrete NB, which uses a multinomial distribution under the hood, is linear (you can also check out Duda, Hart & Stork, section 2.6). QDA is quadratic, as other answers have pointed out (and which I think is what is happening in your graphic; see below).
These techniques form a lattice with a nice set of constraints on the "class-wise covariance matrices" $\Sigma_c$:
QDA: $\Sigma_c$ arbitrary: arbitrary ftr. cov. matrix per class
LDA: $\Sigma_c = \Sigma$: shared cov. matrix (over classes)
GNB: $\Sigma_c = {diag}_c$: class wise diagonal cov. matrices (the assumption of ind. in the model $\rightarrow$ diagonal cov. matrix)
DLDA: $\Sigma_c = diag$: shared & diagonal cov. matrix
While the docs for e1071 claim that it assumes class-conditional independence (i.e., GNB), I'm suspicious that it is actually doing QDA. Some people conflate "naive Bayes" (making independence assumptions) with the "simple Bayesian classification rule". All of the GDA methods are derived from the latter; but only GNB and DLDA use the former.
A big warning, I haven't read the e1071 source code to confirm what it is doing.
|
How is Naive Bayes a Linear Classifier?
I'd like to add one additional point: the reason for some of the confusion rests on what it means to be performing "Naive Bayes classification".
Under the broad topic of "Gaussian Discriminant Analysis
|
5,700
|
Test for bimodal distribution
|
Another possible approach to this issue is to think about what might be going on behind the scenes that is generating the data you see. That is, you can think in terms of a mixture model, for example, a Gaussian mixture model. For instance, you might believe that your data are drawn from either a single normal population, or from a mixture of two normal distributions (in some proportion), with differing means and variances. Of course, you don't have to believe that there are only one or two, nor do you have to believe that the populations from which the data are drawn need to be normal.
There are (at least) two R packages that allow you to estimate mixture models. One package is flexmix, and another is mclust. Having estimated two candidate models, I believe it may be possible to conduct a likelihood ratio test. Alternatively, you could use the parametric bootstrap cross-fitting method (pdf).
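For readers not using R, the same comparison can be sketched from scratch in Python: fit one- and two-component Gaussian mixtures by maximum likelihood (plain EM for the mixture) and compare log-likelihoods. This is only an illustration on simulated data, not a substitute for the formal tests mentioned above:

```python
import math
import random
import statistics

random.seed(42)
# Simulated clearly bimodal sample: equal mixture of N(-2, 1) and N(2, 1)
data = [random.gauss(-2, 1) for _ in range(250)] + [random.gauss(2, 1) for _ in range(250)]

def npdf(x, mu, s):
    return math.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# One component: the MLE is the sample mean and (population) standard deviation
mu_hat, s_hat = statistics.fmean(data), statistics.pstdev(data)
ll1 = sum(math.log(npdf(x, mu_hat, s_hat)) for x in data)

# Two components: plain EM, means initialized at the lower/upper quartiles
q = statistics.quantiles(data, n=4)
p1, m1, m2, s1, s2 = 0.5, q[0], q[2], 1.0, 1.0
for _ in range(100):
    # E-step: responsibility of component 1 for each point
    r = [p1 * npdf(x, m1, s1) / (p1 * npdf(x, m1, s1) + (1 - p1) * npdf(x, m2, s2))
         for x in data]
    # M-step: weighted updates of the mixing weight, means, and sds
    w = sum(r)
    p1 = w / len(data)
    m1 = sum(ri * x for ri, x in zip(r, data)) / w
    m2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - w)
    s1 = math.sqrt(sum(ri * (x - m1) ** 2 for ri, x in zip(r, data)) / w)
    s2 = math.sqrt(sum((1 - ri) * (x - m2) ** 2 for ri, x in zip(r, data)) / (len(data) - w))
ll2 = sum(math.log(p1 * npdf(x, m1, s1) + (1 - p1) * npdf(x, m2, s2)) for x in data)

print(ll2 > ll1)  # the two-component model fits the bimodal sample far better
```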
|
Test for bimodal distribution
|
Another possible approach to this issue is to think about what might be going on behind the scenes that is generating the data you see. That is, you can think in terms of a mixture model, for example
|
Test for bimodal distribution
Another possible approach to this issue is to think about what might be going on behind the scenes that is generating the data you see. That is, you can think in terms of a mixture model, for example, a Gaussian mixture model. For instance, you might believe that your data are drawn from either a single normal population, or from a mixture of two normal distributions (in some proportion), with differing means and variances. Of course, you don't have to believe that there are only one or two, nor do you have to believe that the populations from which the data are drawn need to be normal.
There are (at least) two R packages that allow you to estimate mixture models. One package is flexmix, and another is mclust. Having estimated two candidate models, I believe it may be possible to conduct a likelihood ratio test. Alternatively, you could use the parametric bootstrap cross-fitting method (pdf).
|
Test for bimodal distribution
Another possible approach to this issue is to think about what might be going on behind the scenes that is generating the data you see. That is, you can think in terms of a mixture model, for example
|