Linear Combination of multivariate t distribution
It is true that the components of a multivariate-$t$ vector, and linear combinations thereof, are $t$-distributed. But linear combinations of arbitrary $t$-variables are not necessarily $t$-distributed. In fact, nontrivial linear combinations of independent $t_\nu$ variables are not $t$-distributed.
The comment by Joram Soch starts from a correct result but then makes a subtle error by claiming that two independent $t$-distributed vectors have a joint $t$-distribution. To see that this is not so, let $X$ and $Y$ be independent scalar random variables, each $t_\nu$-distributed. Their joint density is the product of the marginal densities and thus is proportional to $((1+x^2/\nu)(1+y^2/\nu))^{-(\nu+1)/2} = (1+x^2/\nu+y^2/\nu + x^2y^2/\nu^2)^{-(\nu+1)/2}$. But the expression in parentheses is not a quadratic form in $x$ and $y$, so this is not a multivariate $t$-density.
This example also demonstrates that the components of a multivariate-$t$ variable are always dependent.
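The expansion above can be checked symbolically. This is a small sketch (using sympy; the variable names are mine) confirming that the product of the two kernels contains a quartic cross term, so it is not a function of a quadratic form in $x$ and $y$:

```python
import sympy as sp

x, y, nu = sp.symbols('x y nu', positive=True)

# Product of the two univariate t-density kernels (constants dropped)
expanded = sp.expand((1 + x**2/nu) * (1 + y**2/nu))
print(expanded)

# A quadratic form in (x, y) has total degree 2; the product contains
# a quartic cross term x^2*y^2 with coefficient 1/nu^2, so the bracket
# is not a quadratic form and the joint density is not bivariate t.
cross_coeff = expanded.coeff(x**2 * y**2)
print(cross_coeff)
```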
Linear Combination of multivariate t distribution
Please have a look at
Walker, Glenn A., and John G. Saw. "The distribution of linear combinations of t-variables." Journal of the American Statistical Association 73.364 (1978): 876-878.
The resulting PDF is described as a weighted sum of Student-$t$ densities, and the paper shows how to obtain the weights. The authors start from the observation that, for odd degrees of freedom, the characteristic function of a Student-$t$ random variable is expressible in closed form, i.e. it is proportional to a modified Bessel function of the third kind. As I read it, the paper gives the solution only when all the degrees of freedom are odd; even degrees of freedom are not covered.
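To illustrate the shape of such a result: any convex combination of $t$ densities is itself a valid density. The weights and degrees of freedom below are made up for illustration, NOT the ones Walker and Saw derive:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Illustrative weights only; the paper derives the actual weights.
weights = [0.25, 0.75]
dfs = [3, 5]  # odd degrees of freedom, matching the paper's setting

def mixture_pdf(x):
    # Weighted sum of Student-t densities
    return sum(w * stats.t.pdf(x, df) for w, df in zip(weights, dfs))

# Any convex combination of densities integrates to 1
total, _ = quad(mixture_pdf, -np.inf, np.inf)
print(total)
```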
Linear Combination of multivariate t distribution
P15 of Multivariate t Distributions and Their Applications (Kotz and Nadarajah) says:
"If $X$ has the $p$-variate $t$ distribution with degrees of freedom $v$, mean vector $\mu$, and correlation matrix $R$, then, for any nonsingular scalar matrix $C$ and for any $a$, $CX + a$ has the $p$-variate $t$ distribution with degrees of freedom $v$, mean vector $C\mu + a$, and correlation matrix $CRC'$. This result is of importance in applications and is similar to the corresponding result for the multivariate normal distribution."
edit:
In univariate language: consider $aT_1+bT_2$, where $T_i$ is $t$-distributed with mean $m_i$, scale $S_i$ (on the same order as a variance), and a common degree of freedom. Define $T := [T_1, T_2]'$ by stacking $T_1$ and $T_2$ into a vector, which is a bivariate $t$ distribution with zero covariance. Then $aT_1+bT_2 = [a, b]\,T$. Applying the conclusion above, $aT_1+bT_2$ is univariate $t$-distributed with the same degree of freedom, mean $am_1+bm_2$, and scale $a^2S_1+b^2S_2$.
An R simulation that supports this theory (rgt/dgt come from the 'mas3321' teaching package) is:
require('mas3321')
n = 10000
mean1 = 0.6
mean2 = 1.2
scale1 = 0.5
scale2 = 1
p1 = 10
p2 = 20
# weighted sum of two independent t draws with 10 degrees of freedom
samples_combt = p1*rgt(n, 10, mean1, scale1) + p2*rgt(n, 10, mean2, scale2)
hist(samples_combt, probability = TRUE)
# claimed parameters of the combined distribution
mean_comb = p1*mean1 + p2*mean2
scale_comb = p1^2*scale1 + p2^2*scale2
curve(dgt(x, 10, mean_comb, scale_comb), add = TRUE)
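The mas3321 package does not appear to be on CRAN, so here is a rough Python analogue of the simulation above (my translation using scipy.stats.t; I read the scale parameters as variance-like, hence the square roots):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 10_000
mean1, mean2 = 0.6, 1.2
scale1, scale2 = 0.5, 1.0  # variance-like scales, as in the R code
p1, p2 = 10, 20
df = 10

# Weighted sum of two independent t draws
t1 = stats.t.rvs(df, loc=mean1, scale=np.sqrt(scale1), size=n, random_state=rng)
t2 = stats.t.rvs(df, loc=mean2, scale=np.sqrt(scale2), size=n, random_state=rng)
samples_combt = p1 * t1 + p2 * t2

# Parameters claimed for the combined variable
mean_comb = p1 * mean1 + p2 * mean2            # 30.0
scale_comb = p1**2 * scale1 + p2**2 * scale2   # 450.0
print(samples_combt.mean(), mean_comb)
```

Note that the empirical mean matches $p_1 m_1 + p_2 m_2$ by linearity of expectation regardless of the distributional claim, so a histogram overlay alone is not a sensitive test of the claim itself.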
Linear Combination of multivariate t distribution
The Student-t distribution is a special case of the generalised hyperbolic distribution, which is closed under affine transformations according to the Wikipedia page (all linear transforms are affine transforms). Hence I would think all linear transformations of Student-t random variables (with the same degrees of freedom) are Student-t distributed.
I believe if $$X \sim t_d(\nu, \mathbf{\mu}, \Sigma)$$ then $$\mathbf{w}^T X + c \sim t_1(\nu, \mathbf{w}^T \mathbf{\mu} + c, \mathbf{w}^T \Sigma \mathbf{w})$$
This is just my understanding after reading the references below, and it adds to the answer posted by @user31575.
https://en.wikipedia.org/wiki/Generalised_hyperbolic_distribution
Also in "Quantitative risk management: Concepts, techniques and tools" section 2.3.1 equation 2.31
Also referenced in this paper Hu, W. and Kercheval, A.N., 2010. Portfolio optimization for student t and skewed t returns.
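For a single multivariate $t$ vector this closure under linear maps can be checked by simulation. A sketch using scipy's multivariate_t (available in scipy 1.6+; the particular numbers are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

df = 5
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
w = np.array([0.3, 0.7])
c = 4.0

# Sample from the bivariate t and project: Y = w'X + c
X = stats.multivariate_t(loc=mu, shape=Sigma, df=df).rvs(size=20_000, random_state=rng)
Y = X @ w + c

# Claimed univariate law: same df, location w'mu + c, scale sqrt(w' Sigma w)
loc = w @ mu + c
scale = np.sqrt(w @ Sigma @ w)
ks = stats.kstest(Y, stats.t(df, loc=loc, scale=scale).cdf)
print(ks.statistic)  # small, consistent with the claimed distribution
```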
Linear Combination of multivariate t distribution
If $X$ follows a multivariate t-distribution, then any linear combination of $X$ also follows a multivariate t-distribution with the same degrees of freedom:
$$
X \sim t(\mu, \Sigma, \nu) \quad \Rightarrow \quad Y = AX + b \sim t(A\mu + b, A\Sigma A^\mathrm{T}, \nu) \; .
$$
EDIT: Following @GeorgiBoshnakov's answer, I learned that everything enclosed between the horizontal bars is not correct, because it falsely assumes that independent multivariate t-variates have a joint multivariate t-distribution. What follows therefore only holds in the limiting case $\nu \to \infty$, or for very large $\nu$, i.e. when the multivariate t-distribution becomes, or can be approximated by, a multivariate normal distribution. In that case there is in fact a theorem stating joint normality with zero covariance, given marginal normality and statistical independence.
Thus, the intended combination only works if the two multivariate t-distributions have the same dimensions and degrees of freedom. Let us assume that
$$
\begin{split}
X_1 &\sim t(\mu_1, \Sigma_1, \nu) \\
X_2 &\sim t(\mu_2, \Sigma_2, \nu)
\end{split}
$$
where $X_1$ and $X_2$ are independent $n \times 1$ random vectors. If that is the case, we have:
$$
X = \left[ \begin{array}{c} X_1 \\ X_2 \end{array} \right] \sim t\left( \left[ \begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right], \; \left[ \begin{array}{cc} \Sigma_1 & 0_{nn} \\ 0_{nn} & \Sigma_2 \end{array} \right], \; \nu \right) \; .
$$
The random variable you seem to have in mind is probably this:
$$
Y = c_1 X_1 + c_2 X_2 \; .
$$
Note that $Y$ can be emulated from $X$ by specifying an appropriate linear combination:
$$
A = \left[ \begin{array}{cc} c_1 I_n & c_2 I_n \end{array} \right], \; b = 0_{n} \quad \Rightarrow \quad Y = AX + b = c_1 X_1 + c_2 X_2 \; .
$$
Thus, we can apply the linear transformation theorem from above:
$$
Y \sim t\left( \left[ \begin{array}{cc} c_1 I_n & c_2 I_n \end{array} \right] \left[ \begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right] + 0_{n}, \; \left[ \begin{array}{cc} c_1 I_n & c_2 I_n \end{array} \right] \left[ \begin{array}{cc} \Sigma_1 & 0_{nn} \\ 0_{nn} & \Sigma_2 \end{array} \right] \left[ \begin{array}{cc} c_1 I_n & c_2 I_n \end{array} \right]^\mathrm{T}, \; \nu \right) \; .
$$
This gives:
$$
Y \sim t\left( c_1 \mu_1 + c_2 \mu_2, \; c_1^2 \Sigma_1 + c_2^2 \Sigma_2, \; \nu \right) \; .
$$
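The block-matrix algebra in this last step can be verified numerically. This checks only the algebra of the transformation theorem, not the distributional claim about stacking independent t vectors, which the EDIT above retracts:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
c1, c2 = 0.4, -1.5

# Random symmetric positive-definite scale matrices and random means
M1 = rng.standard_normal((n, n)); Sigma1 = M1 @ M1.T + n * np.eye(n)
M2 = rng.standard_normal((n, n)); Sigma2 = M2 @ M2.T + n * np.eye(n)
mu1, mu2 = rng.standard_normal(n), rng.standard_normal(n)

# A = [c1 I_n, c2 I_n]; block-diagonal scale of the stacked vector
A = np.hstack([c1 * np.eye(n), c2 * np.eye(n)])
Sigma = np.block([[Sigma1, np.zeros((n, n))],
                  [np.zeros((n, n)), Sigma2]])
mu = np.concatenate([mu1, mu2])

# The transformation theorem reduces to these two identities
assert np.allclose(A @ mu, c1 * mu1 + c2 * mu2)
assert np.allclose(A @ Sigma @ A.T, c1**2 * Sigma1 + c2**2 * Sigma2)
```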
Fun Fact: The above theorem can also be used to prove the relationship between the multivariate t-distribution and the F-distribution.
How to prove that $X^Te = 0$
$$\mathbf X'\mathbf e = \mathbf X'(\mathbf y -\mathbf {\hat y})= \mathbf X'(\mathbf y -\mathbf X\hat \beta) =...$$
ADDENDUM
$$=\mathbf X'\left(\mathbf y -\mathbf X (\mathbf X'\mathbf X)^{-1}\mathbf X' \mathbf y\right) =\mathbf X'\mathbf y -\mathbf X'\mathbf X (\mathbf X'\mathbf X)^{-1}\mathbf X' \mathbf y$$
$$= \mathbf X'\mathbf y -\mathbf X' \mathbf y = \mathbf 0 \; .$$
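A quick numerical check of the identity with simulated data (the dimensions and names are mine):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 4

X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# OLS fit: beta_hat = (X'X)^{-1} X'y, residuals e = y - X beta_hat
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta_hat

# X'e vanishes up to floating-point error
print(X.T @ e)
assert np.allclose(X.T @ e, 0.0)
```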
How to sample using MCMC from a posterior distribution in general?
We don't use MCMC to calculate $p(\theta | y)$ for each value (or many values) of $\theta$. What MCMC (or the special case of Gibbs sampling) does is generate a (large) random sample from $p(\theta | y)$. Note that $p(\theta | y)$ is not being calculated; you have to do something with that vector (or matrix) of random numbers to estimate $p(\theta | y)$. Since you're not calculating $p(\theta | y)$ for lots of values of $\theta$, you don't need a Gibbs (or MCMC) loop inside a $\theta$ loop - just one (long) Gibbs (or MCMC) loop.
EDIT in response to an update to the question: We do not need to integrate the distribution to get the constant of integration (CoI)! The whole value of MCMC is found in situations where we can't calculate the CoI. Using MCMC, we can still generate random numbers from the distribution. If we could calculate the CoI, we could just calculate the probabilities directly, without the need to resort to simulation.
Once again, we are NOT calculating $p(\theta|y)$ using MCMC, we are generating random numbers from $p(\theta|y)$ using MCMC. A very different thing.
Here's an example from a simple case: the posterior distribution for the scale parameter from an Exponential distribution with a uniform prior. The data is in x, and we generate N <- 10000 samples from the posterior distribution. Observe that we are only calculating $p(x|\theta)$ in the program.
x <- rexp(100)
N <- 10000
theta <- rep(0, N)
theta[1] <- cur_theta <- 1  # starting value
for (i in 1:N) {
  prop_theta <- runif(1, 0, 5)  # "independence" sampler
  # acceptance ratio: likelihood ratio only (uniform proposal and prior cancel)
  alpha <- exp(sum(dexp(x, prop_theta, log=TRUE)) - sum(dexp(x, cur_theta, log=TRUE)))
  if (runif(1) < alpha) cur_theta <- prop_theta
  theta[i] <- cur_theta
}
hist(theta)
And the histogram (figure not reproduced here).
Note that the logic is simplified by our choice of sampler (the prop_theta line), as a couple of other terms in the next line (alpha <- ...) cancel out, so don't need to be calculated at all. It's also simplified by our choice of a uniform prior. Obviously we can improve this code a lot, but this is for expository rather than functional purposes.
Here's a link to a question with several answers giving sources for learning more about MCMC.
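For readers more comfortable in Python, here is a rough translation of the R program above (same independence sampler and flat prior; the exponential log-likelihood is written out in the rate parameterization, as in R's dexp):

```python
import numpy as np

rng = np.random.default_rng(4)

# Data: 100 draws from Exponential(rate = 1), as in the R code
x = rng.exponential(scale=1.0, size=100)
n_obs, sx = len(x), x.sum()

def loglik(rate):
    # Exponential log-likelihood, rate parameterization
    return n_obs * np.log(rate) - rate * sx

N = 10_000
theta = np.empty(N)
cur = 1.0  # starting value
for i in range(N):
    prop = rng.uniform(0.0, 5.0)  # "independence" proposal
    # Acceptance ratio reduces to the likelihood ratio here
    alpha = np.exp(loglik(prop) - loglik(cur))
    if rng.uniform() < alpha:
        cur = prop
    theta[i] = cur

# With a flat prior the chain should concentrate near the MLE n/sum(x)
print(theta[N // 2:].mean(), n_obs / sx)
```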
How to sample using MCMC from a posterior distribution in general?
MCMC is a family of sampling methods (Gibbs, MH, etc.). The point of MCMC is that you cannot sample directly from the posterior distribution that you mentioned. The way MCMC works is that a Markov chain (the first "MC" in MCMC) is constructed whose stationary distribution is the posterior you are interested in. You can sample from this Markov chain, and once it has converged to its equilibrium distribution, you are essentially sampling from the posterior distribution you are interested in.
Is it correct that no statistical simulation is done using $2^{256}$ (or more) outputs from a MWC prng?
It's a bit hard to say 'no statistical simulation will ever', since forever is a very long time and we may find ways to do things we can't see any way to do now.
However, for the foreseeable future, $2^{256}$ or $\sim 10^{77}$ simulations is so many orders of magnitude beyond what we'd be able to generate in reasonable time with current understanding of computation (let alone what we could ever really need for statistical purposes, which may well be even less) that I doubt it will even come remotely close before MWC is a long-forgotten footnote in the annals of random number generation. It may be difficult to appreciate quite how large that quantity is.
Imagine you had $7\times 10^{18}$ cores (a billion cores for each and every person on the planet), each screaming along at a billion simulations a second. Let it run for a century (about $\pi$ billion seconds). That would be a simulation of size roughly $10^{-40}$ times $2^{256}$. You'd have to let it run for $10^{40}$ centuries to get there.
(The earth has been around for a bit over $4\times 10^7$ centuries)
So not any time soon.
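The arithmetic is easy to reproduce (rounded figures from the answer above):

```python
# Rounded figures from the answer above
cores = 7e18          # a billion cores for each person on the planet
per_second = 1e9      # a billion simulations per core per second
century = 3.14e9      # roughly pi billion seconds in a century

draws_per_century = cores * per_second * century   # about 2.2e37
target = 2.0**256                                  # about 1.16e77

print(draws_per_century / target)  # on the order of 1e-40
```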
(And if we suppose technologies like quantum computers... why would we use MWC?)
Practically speaking, I think the person you quote is correct. Long before the time we get anywhere near that issue, we would have ceased using MWC.
(My answer deliberately avoids addressing any issues of the suitability of MWC in general, since it's not directly relevant to the question, though would be a consideration if one comes to use it for something.)
As to why anyone would develop a generator with a period far longer than you might need for any one simulation: one advantage is that you don't have to worry about dependence between two simulations with different seeds - if I do a simulation today with one seed and another tomorrow with a second seed, the sets of random numbers I get will almost never have any overlap.
Normalizing constant irrelevant in Bayes theorem? [duplicate]
NOT all the MCMC methods avoid the need for the normalising constant. However, many of them do (such as the Metropolis-Hastings algorithm), since the iteration process is based on the ratio $R(\theta_1,\theta_2)=\dfrac{\pi(\theta_1\vert x)}{\pi(\theta_2\vert x)}$, where
$$\pi(\theta\vert x) = \dfrac{\pi(x\vert \theta)\pi(\theta)}{\int \pi(x\vert \theta)\pi(\theta) d\theta} = \dfrac{\pi(x\vert \theta)\pi(\theta)}{\pi(x)},$$
is the posterior distribution of $\theta$ given the sample $x$. Therefore, the normalising constant $\pi(x)$ in the denominator does not depend on $\theta$ and it cancels out when you calculate $R(\theta_1,\theta_2)$. That is,
$$R(\theta_1,\theta_2)= \dfrac{\pi(x\vert \theta_1)\pi(\theta_1)}{\pi(x\vert \theta_2)\pi(\theta_2)},$$
which does not involve the normalising constant, only the likelihood $\pi(x\vert \theta)$ and the prior $\pi(\theta)$.
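The cancellation is easy to see numerically in a conjugate example where the normalising constant is known. With a Binomial likelihood and a flat prior the posterior is Beta($k+1$, $n-k+1$), and the ratio computed from the unnormalised likelihood-times-prior agrees with the ratio of normalised posterior densities (the particular numbers are arbitrary):

```python
import numpy as np
from scipy import stats

# Binomial likelihood, flat prior: posterior is Beta(k+1, n-k+1)
n, k = 20, 7
a, b = k + 1, n - k + 1

def unnormalized(theta):
    # likelihood * prior; the constant pi(x) is deliberately omitted
    return theta**k * (1 - theta)**(n - k)

t1, t2 = 0.3, 0.6
ratio_unnorm = unnormalized(t1) / unnormalized(t2)
ratio_norm = stats.beta.pdf(t1, a, b) / stats.beta.pdf(t2, a, b)

# The normalising constant cancels: both ratios agree
print(ratio_unnorm, ratio_norm)
```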
Normalizing constant irrelevant in Bayes theorem? [duplicate]
When you ignore the probability of the evidence, you obtain something that is proportional to the proper posterior distribution.
In many situations you can easily normalize your improper (unnormalized) posterior after calculating it.
This is because, once you have your result (e.g. a marginal over some random variable), it is easy to compute the normalizing constant by summing over the improper posterior values.
For example, if you obtain improper marginal probabilities 0.2 and 0.4 for some binary random variable, you can easily calculate the normalizing constant (0.6) and adjust to obtain the proper distribution with probabilities 1/3 and 2/3.
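In code, the renormalisation in the example is one line:

```python
# Improper marginal probabilities from the example above
improper = [0.2, 0.4]

Z = sum(improper)                    # normalizing constant, 0.6
proper = [p / Z for p in improper]   # [1/3, 2/3]
print(proper)
```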
Finding appropriate distribution that fit to for a frequency distribution of a variable
|
Agree with Dmitry and others in the above discussion. I have the following general comments that might help.
We can identify 4 steps in fitting distributions:
1) Model/function choice: hypothesize families of distributions;
2) Estimate parameters;
3) Evaluate quality of fit;
4) Goodness of fit statistical tests.
The first step in fitting distributions consists in choosing the mathematical model or function that represents the data in the best way. Sometimes the type of model or function can be argued from some hypothesis concerning the nature of the data; often histograms and other graphical techniques can help in this step (just like you plotted), but graphics can be quite subjective, so there are also methods based on analytical expressions, such as Pearson's K criterion. Solving a particular differential equation, we can obtain several families of functions able to represent nearly all empirical distributions. Those curves depend only on the mean, variability, skewness and kurtosis.
R has several functions that might be helpful:
ad.test(): Anderson-Darling test for normality (nortest)
chisq.test(): chi-squared test (stats)
cut: divides the range of data vector into intervals
cvm.test(): Cramer-von Mises test for normality (nortest)
ecdf(): computes an empirical cumulative distribution function (stats)
fitdistr(): Maximum-likelihood fitting of univariate distributions (MASS)
goodfit(): fits a discrete (count data) distribution for goodness-of-fit tests (vcd)
hist(): computes a histogram of the given data values (stats)
jarque.bera.test(): Jarque-Bera test for normality (tseries)
ks.test(): Kolmogorov-Smirnov test (stats)
kurtosis(): returns value of kurtosis (fBasics)
lillie.test(): Lilliefors test for normality (nortest)
mle(): estimate parameters by the method of maximum likelihood (stats4)
pearson.test(): Pearson chi-square test for normality (nortest)
plot(): generic function for plotting of R objects (stats)
qqnorm(): produces a normal QQ plot (stats)
qqline(), qqplot(): produce a QQ plot of two datasets (stats)
sf.test(): Shapiro-Francia test for normality (nortest)
shapiro.test(): Shapiro-Wilk test for normality (stats)
skewness(): returns value of skewness (fBasics)
table(): builds a contingency table (stats)
Please read this for details on fitting distribution.
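As a rough sketch of steps 2 and 3 outside R (Python here, purely for illustration; the normal model and simulated sample are assumptions), one can fit by maximum likelihood and compute the Kolmogorov-Smirnov statistic $D$ by hand — the largest gap between the empirical CDF and the fitted CDF, which is what ks.test() reports:

```python
import math
import numpy as np

# Step 2: estimate parameters (for the normal, the ML estimates are just
# the sample mean and sd). Step 3: measure fit quality via the KS gap.
rng = np.random.default_rng(1)
x = np.sort(rng.normal(loc=5.0, scale=2.0, size=200))

mu, sigma = x.mean(), x.std()

def norm_cdf(t):
    # standard-normal CDF via the error function
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

fitted = np.array([norm_cdf(t) for t in x])
n = len(x)
ecdf_hi = np.arange(1, n + 1) / n      # ECDF just after each data point
ecdf_lo = np.arange(0, n) / n          # ECDF just before each data point
D = max(np.max(ecdf_hi - fitted), np.max(fitted - ecdf_lo))
```

For data actually drawn from the fitted family, $D$ should be small (for step 4, one would compare it to the appropriate reference distribution).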
|
44,713
|
Finding appropriate distribution that fit to for a frequency distribution of a variable
|
First of all, I need to say that I do agree with @whuber that just explaining the data with some "commonly used distribution" is probably not the best idea. A good idea would be to find the underlying model and parametrize it. And it does not need to be a distribution at all.
However, if your question is just about a distribution that looks similar to your data, I would suggest the Gamma distribution. It handles the non-negativity of your data and it incorporates skewness. It has two parameters, $k$ and $\theta$, which one can estimate numerically with maximum likelihood estimation; an initial solution, however, can be obtained with the method of moments. Have a look here for further information: http://en.wikipedia.org/wiki/Gamma_distribution#Parameter_estimation
Hope it will help for your immediate goals, but use it carefully :)
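A minimal sketch of the method-of-moments starting values mentioned above (the simulated sample is an assumption for illustration): since a Gamma$(k,\theta)$ variable has mean $k\theta$ and variance $k\theta^2$, we get $\hat{k} = \bar{x}^2/s^2$ and $\hat{\theta} = s^2/\bar{x}$.

```python
import numpy as np

# Method-of-moments estimates for Gamma(k, theta), usable as starting
# values for a numerical maximum likelihood fit.
rng = np.random.default_rng(2)
x = rng.gamma(shape=3.0, scale=1.5, size=5000)  # true k = 3, theta = 1.5

mean, var = x.mean(), x.var()
k_hat = mean ** 2 / var        # shape estimate
theta_hat = var / mean         # scale estimate
```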
|
44,714
|
Interpreting $R^2$, F-statistic & p-value of a model
|
The F-statistic tells you whether the model fits the data better than the mean alone or, in other words, whether $H_0:\;R^2=0$ should be rejected.
See: Wikipedia
To illustrate that the formula given in the link is indeed used by summary.lm:
x1 <- 1:10
set.seed(42)
x2 <- rnorm(10)
y <- x1+2*x2+rnorm(10)
fit0 <- lm(y~1)
fit1 <- lm(y~x1+x2)
summary(fit1)
#F-statistic: 14.1 on 2 and 7 DF, p-value: 0.003507
RSS0 <- sum(residuals(fit0)^2)
RSS1 <- sum(residuals(fit1)^2)
Fvalue <- (RSS0-RSS1)/(3-1)/RSS1*(10-3)
#14.10014
pf(Fvalue,2,7,lower.tail=FALSE)
#0.00350697
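The same check can be written in terms of $R^2$ alone, since $F = \frac{(RSS_0-RSS_1)/k}{RSS_1/(n-k-1)} = \frac{R^2/k}{(1-R^2)/(n-k-1)}$. A sketch in Python (the simulated data and shapes are assumptions mirroring the R example above):

```python
import numpy as np

# Verify that the F-statistic computed from the residual sums of squares
# equals the one computed from R^2, for a regression with k = 2 predictors.
rng = np.random.default_rng(3)
n, k = 10, 2
X = np.column_stack([np.ones(n), np.arange(1, n + 1), rng.normal(size=n)])
y = X @ np.array([0.0, 1.0, 2.0]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss1 = np.sum((y - X @ beta) ** 2)                 # full model
rss0 = np.sum((y - y.mean()) ** 2)                 # intercept-only model
r2 = 1.0 - rss1 / rss0
f_from_rss = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
f_from_r2 = (r2 / k) / ((1.0 - r2) / (n - k - 1))  # algebraically identical
```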
|
44,715
|
Solving the Kolmogorov forward equation for transition probabilities
|
This is a continuous time Markov process; $\mathbb{Q}$ is an infinitesimal generator of the transition matrices $\mathbb{P}(t)$ giving the transition probabilities over a span of time $t \ge 0$, the primes denote differentiation with respect to $t$, and the "Kronecker delta" is the initial condition $\mathbb{P}(0) = \mathbb{Id}_2$ corresponding to making no transitions at all during zero time.
Think of the process (beginning in any state $\mathbb{x}_0$) as reaching a state $\mathbb{x}_t = (a_t, b_t)$ after time $t$ by means of many tiny jumps over short intervals $0=t_0 \lt t_1 \lt t_2 \lt \cdots \lt t_n = t$. The infinitesimal generator is a first-order approximation to the changes. To write this down, use the convenience notation $a(i) = a_{t_i}$, $b(i) = b_{t_i}$, and $\Delta t_i = t_{i+1} - t_i$:
$$(a(i+1), b(i+1)) = (a(i), b(i)) + (a(i), b(i)) \cdot \mathbb{Q}\,\Delta t_i + O(\Delta t_i^2).$$
(The big-O notation refers to a term that is proportional to the square of the time elapsed during the interval from $t_i$ to $t_{i+1}$: for very small intervals, this square becomes negligible.) In matrix notation this statement is
$$\mathbb{x}(i+1) \approx \mathbb{x}(i) \cdot (\mathbb{Id}_2 + \mathbb{Q}\,\Delta t_i).$$
To focus on the change, subtract $\mathbb{x}(i)$ from both sides:
$$\mathbb{x}(i+1) - \mathbb{x}(i) \approx \mathbb{x}(i) \cdot \mathbb{Q}\,\Delta t_i.$$
If we ignore the fact that the $\mathbb{x}$'s are vectors and $\mathbb{Q}$ is a matrix, and pretend they act like real-valued functions and a constant, respectively, and consider only "infinitesimal" changes in $\mathbb{x}$, we would write down a differential equation
$$\frac{d\mathbb{x}}{\mathbb{x}} = \mathbb{Q} dt$$
whose formal solution--paralleling the usual treatment in elementary Calculus--is
$$\mathbb{x}(t) = \mathbb{x}(0) \cdot \exp(\mathbb{Q} t),$$
indicating that
$$\mathbb{P}(t) = \exp(\mathbb{Q} t),$$
whatever that might mean!
It turns out that all this makes sense and can be justified rigorously. Not only that, the exponential of a matrix can be found in various ways: you can use the series expansion for the exponential,
$$\exp(\mathbb{Q}t) = \sum_{i=0}^{\infty} \frac{(\mathbb{Q}t)^i}{i!},$$
or--this tends to be much easier in practice--when you can find a basis in which $\mathbb{Q}$ is diagonal, its exponential is obtained by exponentiating the diagonal entries (in the usual way: they are just numbers). Specifically, I have computed that
$$\mathbb{Q}t = \pmatrix{-\mu t &\mu t \\ \lambda t & -\lambda t} = \pmatrix{1 & -\frac{\mu }{\lambda } \\ 1 & 1} \cdot \pmatrix{0 & 0 \\ 0 & -t(\lambda + \mu)} \cdot \pmatrix{1 & -\frac{\mu }{\lambda } \\ 1 & 1}^{-1}.$$
In this new basis, the infinitesimal generator is represented by the middle (diagonal) matrix and its exponential is
$$\exp{ \pmatrix{0 & 0 \\ 0 & -t(\lambda + \mu)}} = \pmatrix{\exp(0) & 0 \\ 0 & \exp(-t(\lambda + \mu))} = \pmatrix{1 & 0 \\ 0 & \exp(-t(\lambda + \mu))}.$$
Changing back to the original basis gives
$$\mathbb{P}(t) = \exp(\mathbb{Q} t) = \frac{1}{\lambda + \mu}\pmatrix{\lambda + \mu \exp(-t(\lambda+\mu)) & \mu - \mu \exp(-t(\lambda+\mu)) \\ \lambda - \lambda\exp(-t(\lambda+\mu)) & \mu + \lambda\exp(-t(\lambda+\mu))}.$$
To check, it's easy to verify the rows sum to unity. We had better assume $\mu$ and $\lambda$ have values that make all the entries non-negative, too: that's where the condition $\lambda \mu \gt 0$ comes in. So at least we have a bona fide transition matrix. To check that it has $\mathbb{Q}$ as its infinitesimal generator, let $t$ be such a small time interval that we can neglect all terms of order $t^2$ or greater in the calculations. In particular, when computing the exponentials we can stop at the first term of the series: this is the approximation $\exp(x) \approx 1 + x$ for very small numbers $x$. So, when $t$ is sufficiently small, we may estimate
$$\exp(-t(\lambda+\mu)) \approx 1 - t(\lambda+\mu)$$
and plugging this in to our formula for $\mathbb{P}(t)$ yields
$$\mathbb{P}(t) = \pmatrix{1 - t\mu & t \mu \\ t \lambda & 1 - t \lambda} + O(t^2) = \pmatrix{1 & 0 \\ 0 & 1} + t \mathbb{Q} + O(t^2).$$
It is also evident that $\mathbb{P}(0) = \mathbb{Id}_2$ (the identity matrix or "Kronecker delta"), exactly as intended. Returning full circle to the initial setting, if we begin with the distribution $\mathbb{x}$, then after time $t$ the distribution will be
$$\mathbb{x} \cdot \mathbb{P}(t) = \mathbb{x} \cdot \left(\pmatrix{1 & 0 \\ 0 & 1} + t \mathbb{Q} + O(t^2)\right) = \mathbb{x} + \mathbb{x Q} t + O(t^2).$$
That is, to first order in the time elapsed $t$, the change in $\mathbb{x}$ is proportional to $\mathbb{x Q}$ and to $t$: that's precisely what an infinitesimal generator tells us. So even if you don't believe (or even understand) the manipulations with the matrix differential equation and the matrix exponential, this check fully justifies the answer.
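As a numerical sanity check of the closed form for $\mathbb{P}(t)$, one can compute $\exp(\mathbb{Q}t)$ by the diagonalisation route described above and compare (the rates $\mu=1$, $\lambda=2$ and $t=0.7$ are arbitrary illustrative values):

```python
import numpy as np

# Compute exp(Qt) by eigen-decomposition and compare with the closed-form
# transition matrix P(t) derived in the answer.
mu, lam, t = 1.0, 2.0, 0.7
Q = np.array([[-mu, mu], [lam, -lam]])

vals, vecs = np.linalg.eig(Q * t)                       # diagonalise Qt
P_numeric = (vecs @ np.diag(np.exp(vals)) @ np.linalg.inv(vecs)).real

e = np.exp(-t * (lam + mu))
P_closed = np.array([[lam + mu * e, mu - mu * e],
                     [lam - lam * e, mu + lam * e]]) / (lam + mu)
```

Both matrices agree, and each row sums to one, as a transition matrix must.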
|
44,716
|
Derivation of the effect of unmodeled confounders on OLS estimates
|
I don’t know where you got this last expression from but the derivation of the omitted variable bias formula goes as follows. You want to regress the long regression equation that includes both $X_i$ and $W_i$ but for some reason you do not observe $W_i$. Then your model becomes:
$$
\begin{align}
y_i &= \alpha + \beta X_i + \gamma W_i + e_i \newline
&= \alpha + \beta X_i + u_i
\end{align}
$$
where $u_i = \gamma W_i + e_i$ and $e_i$ are error terms, with ${\rm Cov}(X_i, e_i ) = 0$. By the usual OLS formula for a simple regression, the estimate of $\beta$ in the short regression will be:
$$
\begin{align}
\widehat{\beta} &= \frac{{\rm Cov}(y_i, X_i)}{{\rm Var}(X_i)} \newline
\ \newline
&= \frac{{\rm Cov}(\alpha + \beta X_i + u_i , X_i)}{{\rm Var}(X_i)} \newline
\ \newline
&= \beta + \frac{{\rm Cov}(u_i, X_i)}{{\rm Var}(X_i)} \newline
\ \newline
&= \beta + \frac{{\rm Cov}(\gamma W_i + e_i, X_i)}{{\rm Var}(X_i)} \newline
\ \newline
&= \beta + \gamma \frac{{\rm Cov}(W_i, X_i)}{{\rm Var}(X_i)} \newline
\end{align}
$$
In the second line, replace $y_i = \alpha + \beta X_i + u_i$. In the third line, the covariance is split into a sum of covariances, where ${\rm Cov}(\alpha , X_i) = 0$ because the intercept is a constant, and ${\rm Cov}(\beta X_i, X_i) = \beta {\rm Var}(X_i)$. In the fourth line, substitute $u_i = \gamma W_i + e_i$; in the last line, use ${\rm Cov}(e_i, X_i) = 0$. This yields the result.
If in the long regression $\gamma = 0$ (i.e., not significantly different from zero) or if ${\rm Cov}(W_i, X_i) = 0$, omitting $W_i$ is not a problem; you can see this from the last expression derived here. Otherwise your coefficient on $X_i$ will be biased. Reasoning about the sign of $\gamma$ and about whether ${\rm Cov}(W_i, X_i)$ is positive or negative, you can get an idea of the direction of the bias and sometimes even of its magnitude.
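A quick simulation sketch of the formula $\widehat{\beta} \approx \beta + \gamma\,{\rm Cov}(W_i,X_i)/{\rm Var}(X_i)$ (all coefficients and the sample size below are illustrative assumptions):

```python
import numpy as np

# Omitted variable bias: regress y on X alone while the data were generated
# with a confounder W, and compare the short-regression slope to the formula.
rng = np.random.default_rng(4)
n = 200_000
X = rng.normal(size=n)
W = 0.5 * X + rng.normal(size=n)                    # confounder correlated with X
y = 1.0 + 2.0 * X + 3.0 * W + rng.normal(size=n)    # beta = 2, gamma = 3

A = np.column_stack([np.ones(n), X])                # short regression omitting W
(_, beta_hat), *_ = np.linalg.lstsq(A, y, rcond=None)

bias_formula = 3.0 * np.cov(W, X)[0, 1] / np.var(X)  # gamma*Cov(W,X)/Var(X)
```

Here ${\rm Cov}(W,X) = 0.5\,{\rm Var}(X)$, so the short-regression slope lands near $2 + 3 \cdot 0.5 = 3.5$ rather than the true $\beta = 2$.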
|
44,717
|
Formatting graphs and figures: why and when is it bad to include horizontal lines?
|
I think about it this way: When I prepare a figure for a paper, I usually want to both show data and make some point about the data. Anything that helps these goals in a simple clear way is a worthwhile addition, anything else should be removed (without distorting the data of course).
In the case of horizontal (or vertical) lines, I would use them only if: (1) it is difficult to tell the exact values of data points and I think that this is important information; (2) I want the viewer to be able to compare the exact y-axis (or x-axis) positions of data points (e.g. to see that two points have exactly the same value, or differ by a specific amount).
Specifically in the figure you show, it seems that the data can only have a limited amount of y values and thus it is easy to tell the value by eye, making the horizontal lines useless in this case.
|
44,718
|
Formatting graphs and figures: why and when is it bad to include horizontal lines?
|
It's wrong because the default behaviour of Excel is highly prominent gridlines, which are distracting and "chartjunky", and because it violates the formatting rules for the journal. Journals often have lowest-common-denominator formatting rules. They're there so that it's harder to screw things up, not because they're the best way to do things. I remember that only a few years ago the new APA submission format required an indented paragraph for each reference, when the prior version required hanging paragraphs, because people couldn't figure out their newfangled word processors. The prior version of APA was written for people with typewriters. The rule was in place not because indented paragraphs for each reference were better but because it made it more likely that all the submissions would look the same.
In short, don't try to derive best principles for graphs from rules for submission to journals. They're a combination of journal consistency, graph quality, and what they can most easily get submitters to consistently do.
|
44,719
|
How to add outliers to an existing data?
|
You could add random noise to the existing data objects, i.e. changing a given percentage of the data entries to random values within the data range, or swapping some entries between two data objects (which won't change the value distribution in this dimension). This method is often used to test the robustness of algorithms. It could be useful in your case, too.
The method is described here: Assessing data mining results via swap randomization.
Maybe this paper is useful: A synthetic data generator for clustering and outlier analysis
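A minimal sketch of both perturbations (the fractions and matrix shape below are illustrative assumptions):

```python
import numpy as np

# (1) Replace a small fraction of entries with uniform noise drawn from the
#     column's observed range; (2) swap an entry between two rows, which
#     leaves that column's value distribution unchanged.
rng = np.random.default_rng(5)
data = rng.normal(size=(100, 4))
noisy = data.copy()

# (1) random noise within the data range, on 5% of entries per column
n_rows, n_cols = data.shape
for j in range(n_cols):
    lo, hi = data[:, j].min(), data[:, j].max()
    idx = rng.choice(n_rows, size=n_rows // 20, replace=False)
    noisy[idx, j] = rng.uniform(lo, hi, size=len(idx))

# (2) swap one column's entries between two rows
i1, i2 = 0, 1
noisy[[i1, i2], 2] = noisy[[i2, i1], 2]
```

Note that neither perturbation moves any column outside its original observed range, which is the sense in which the swap variant preserves the per-dimension value distribution.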
|
44,720
|
How to add outliers to an existing data?
|
There are two commonly seen approaches:
Add outliers to real data by randomization methods.
In order to obtain a rare class, downsample a class to desired sparsity (usually, this should be <<1%)
For 1 there are some variants - modifying single attributes, drawing each attribute but from different instances, etc.; personally, I'm not at all convinced by these methods, because they simulate a particular effect of data dilution and thus often favor algorithms designed around the same concept of outlierness. A method that does well on such data sets will then often fail badly when your real outliers are not caused by this very specific kind of error.
For 2, you will have to face the fact that some data sets are just too hard. The fact that one class is more rare than the others doesn't mean they are really outliers; even if you downsample it to the extreme. Plus, this approach is also quite naive: it assumes that the majority class does not contain outliers. In any real data set that I have seen every class will have outliers within the class, too. So do not expect your method to be able to go to 90% on these data sets. If you can improve from 70% to 80%, then your method already works quite well. Anything beyond 80% may be indicative of some bias IMHO.
When reviewing outlier detection papers, I consider any result higher than 0.80 to be suspicious: either the data set was too much designed for the algorithm, the algorithm parameters were systematically tweaked to find the best possible result, or maybe the result is just fake altogether.
In most cases where I've seen the WBC data set being used, they downsampled the cancer class to like 10 instances. But then, you shouldn't tell your algorithm to get the top 10 results. In a real scenario, you do not know there are 10 outliers to be found...
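A minimal sketch of approach 2 (Python; the helper name and numbers are illustrative): keep the majority class intact and subsample the rare class until it makes up roughly the desired fraction of the data set.

```python
import random

def downsample_class(X, y, rare_label, target_fraction=0.01, seed=0):
    """Keep all majority instances, but retain only enough `rare_label`
    instances to make up about `target_fraction` of the resulting data."""
    rng = random.Random(seed)
    majority = [(x, l) for x, l in zip(X, y) if l != rare_label]
    rare = [(x, l) for x, l in zip(X, y) if l == rare_label]
    # Solve n / (len(majority) + n) = f for the number n of rare points.
    n_keep = max(1, round(target_fraction * len(majority) / (1 - target_fraction)))
    kept = majority + rng.sample(rare, min(n_keep, len(rare)))
    rng.shuffle(kept)
    xs, labels = zip(*kept)
    return list(xs), list(labels)

X = list(range(2000))
y = [1] * 1000 + [0] * 1000               # two balanced classes
Xd, yd = downsample_class(X, y, rare_label=0, target_fraction=0.01)
print(sum(l == 0 for l in yd), len(yd))   # 10 rare points among 1010 total
```

Note that, as argued above, this only changes class sparsity; it does not make the retained rare points "outliers" in any deeper sense.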
|
How to add outliers to an existing data?
|
There are two commonly seen approaches:
Add outliers to real data by randomization methods.
In order to obtain a rare class, downsample a class to desired sparsity (usually, this should be <<1%)
For
|
How to add outliers to an existing data?
There are two commonly seen approaches:
Add outliers to real data by randomization methods.
In order to obtain a rare class, downsample a class to desired sparsity (usually, this should be <<1%)
For 1 there are some variants - modifying single attributes, drawing each attribute but from different instances, etc.; personally, I'm not at all convinced by these methods, because they simulate a particular effect of data dilution and thus often favor algorithms designed around the same concept of outlierness. A method that does well on such data sets will then often fail badly when your real outliers are not caused by this very specific kind of error.
For 2, you will have to face the fact that some data sets are just too hard. The fact that one class is more rare than the others doesn't mean they are really outliers; even if you downsample it to the extreme. Plus, this approach is also quite naive: it assumes that the majority class does not contain outliers. In any real data set that I have seen every class will have outliers within the class, too. So do not expect your method to be able to go to 90% on these data sets. If you can improve from 70% to 80%, then your method already works quite well. Anything beyond 80% may be indicative of some bias IMHO.
When reviewing outlier detection papers, I consider any result higher than 0.80 to be suspicious: either the data set was too much designed for the algorithm, the algorithm parameters were systematically tweaked to find the best possible result, or maybe the result is just fake altogether.
In most cases where I've seen the WBC data set being used, they downsampled the cancer class to like 10 instances. But then, you shouldn't tell your algorithm to get the top 10 results. In a real scenario, you do not know there are 10 outliers to be found...
|
How to add outliers to an existing data?
There are two commonly seen approaches:
Add outliers to real data by randomization methods.
In order to obtain a rare class, downsample a class to desired sparsity (usually, this should be <<1%)
For
|
44,721
|
How to add outliers to an existing data?
|
Outliers are usually thought of in relation to the model, as the comments already discuss. But that does not say anything about how they are generated: They can be rare events by the very process described by the model (roughly 1 in 10⁹ standard normally distributed numbers will be < -6) or they can be generated by a process that is not included in your model.
Usually, one doesn't care about the former, as the model is adequate for them.
But with respect to the latter, you can simulate only things where you have an idea of the generating process. If you want unexpected rare events, there is no way but collecting data and waiting for them to occur. And IMHO it doesn't make sense to discuss this without discussing the underlying process/problem/task (not only the model). It is the very nature of these things that you cannot give a typical outlier. And you need to discuss the relevance of your outlier generating process to the model and problem. IMHO, the no free lunch theorem applies very much for outlier detection.
Recommended reading ;-): the "Journal of Machine Learning Gossip" paper, of which a few copies still float around on the net.
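(As a quick sanity check of the "1 in 10⁹" figure above, the standard normal tail probability can be computed directly from the complementary error function in Python's standard library:)

```python
import math

# P(Z < -6) for a standard normal: Phi(-6) = 0.5 * erfc(6 / sqrt(2))
p = 0.5 * math.erfc(6 / math.sqrt(2))
print(p)  # about 9.9e-10, i.e. roughly 1 in a billion
```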
|
How to add outliers to an existing data?
|
Outliers are usually thought of in relation to the model, as the comments already discuss. But that does not say anything about how they are generated: They can be rare events by the very process desc
|
How to add outliers to an existing data?
Outliers are usually thought of in relation to the model, as the comments already discuss. But that does not say anything about how they are generated: They can be rare events by the very process described by the model (roughly 1 in 10⁹ standard normally distributed numbers will be < -6) or they can be generated by a process that is not included in your model.
Usually, one doesn't care about the former, as the model is adequate for them.
But with respect to the latter, you can simulate only things where you have an idea of the generating process. If you want unexpected rare events, there is no way but collecting data and waiting for them to occur. And IMHO it doesn't make sense to discuss this without discussing the underlying process/problem/task (not only the model). It is the very nature of these things that you cannot give a typical outlier. And you need to discuss the relevance of your outlier generating process to the model and problem. IMHO, the no free lunch theorem applies very much for outlier detection.
Recommended reading ;-): the "Journal of Machine Learning Gossip" paper, of which a few copies still float around on the net.
|
How to add outliers to an existing data?
Outliers are usually thought of in relation to the model, as the comments already discuss. But that does not say anything about how they are generated: They can be rare events by the very process desc
|
44,722
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
I looked at the answers on SO. I don't think they are satisfactory. People often argue for the normal distribution because of the central limit theorem. That may be okay in large samples when the problem involves averages. But machine learning problems can be more complex and sample sizes are not always large enough for normal approximations to apply. Some argue for mathematical convenience. That is no justification especially when computers can easily handle added complexity and computer-intensive resampling approaches.
But I think the question should be challenged. Who says the Gaussian distribution is "always" used, or even just predominantly used, in machine learning? Taleb claimed that statistics is dominated by the Gaussian distribution, especially when applied to finance. He was very wrong about that!
In machine learning aren't kernel density classification approaches, tree classifiers and other nonparametric methods sometimes used? Aren't nearest neighbor methods used for clustering and classification? I think they are and I know statisticians use these methods very frequently.
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
I looked at the answers on SO. I don't think they are satisfactory. People often argue for the normal distribution because of the central limit theorem. That may be okay in large samples when the
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
I looked at the answers on SO. I don't think they are satisfactory. People often argue for the normal distribution because of the central limit theorem. That may be okay in large samples when the problem involves averages. But machine learning problems can be more complex and sample sizes are not always large enough for normal approximations to apply. Some argue for mathematical convenience. That is no justification especially when computers can easily handle added complexity and computer-intensive resampling approaches.
But I think the question should be challenged. Who says the Gaussian distribution is "always" used, or even just predominantly used, in machine learning? Taleb claimed that statistics is dominated by the Gaussian distribution, especially when applied to finance. He was very wrong about that!
In machine learning aren't kernel density classification approaches, tree classifiers and other nonparametric methods sometimes used? Aren't nearest neighbor methods used for clustering and classification? I think they are and I know statisticians use these methods very frequently.
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
I looked at the answers on SO. I don't think they are satisfactory. People often argue for the normal distribution because of the central limit theorem. That may be okay in large samples when the
|
44,723
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
Machine learning (and statistics as well) treats data as a mix of deterministic (causal) and random parts. The random part of data usually has a normal distribution. (Really, the causal relation is the reverse: the distribution of the random part of a variable is what we call normal.) The central limit theorem says that the sum of a large number of variables, each having a small influence on the result, approximates a normal distribution.
1. Why is data treated as normally distributed? In machine learning we want to express the dependent variable as some function of a number of independent variables. If this function is a sum (or is expressed as a sum of some other functions) and we assume that the number of independent variables is really high, then the dependent variable should have a normal distribution (due to the central limit theorem).
2. Why are errors expected to be normally distributed? The dependent variable ($Y$) consists of deterministic and random parts. In machine learning we try to express the deterministic part as a sum of deterministic independent variables: $$deterministic + random = func(deterministic(1))+...+func(deterministic(n))+model\_error$$ If the whole deterministic part of $Y$ is explained by $X$, then the $model\_error$ captures only the $random$ part and thus should have a normal distribution. So if the error distribution is normal, we may conclude that the model is successful. Otherwise there are some other features that are absent from the model but have a large enough influence on $Y$ (the model is incomplete), or the model is incorrect.
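A quick numerical illustration of the central-limit argument (illustrative Python, not part of the original answer): sums of many small independent contributions behave approximately normally, e.g. about 68% of them fall within one standard deviation of the mean.

```python
import random
import statistics

random.seed(0)
# Each observation is the sum of 50 small independent contributions.
sums = [sum(random.uniform(-1, 1) for _ in range(50)) for _ in range(20000)]
m, s = statistics.mean(sums), statistics.stdev(sums)
# A hallmark of normality: roughly 68% of values lie within one std. dev.
inside = sum(abs(v - m) < s for v in sums) / len(sums)
print(round(inside, 2))  # close to 0.68
```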
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
Machine learning (and statistics as well) treats data as the mix of deterministic (causal) and random parts. The random part of data usually has normal distribution. (Really, the causal relation is re
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
Machine learning (and statistics as well) treats data as a mix of deterministic (causal) and random parts. The random part of data usually has a normal distribution. (Really, the causal relation is the reverse: the distribution of the random part of a variable is what we call normal.) The central limit theorem says that the sum of a large number of variables, each having a small influence on the result, approximates a normal distribution.
1. Why is data treated as normally distributed? In machine learning we want to express the dependent variable as some function of a number of independent variables. If this function is a sum (or is expressed as a sum of some other functions) and we assume that the number of independent variables is really high, then the dependent variable should have a normal distribution (due to the central limit theorem).
2. Why are errors expected to be normally distributed? The dependent variable ($Y$) consists of deterministic and random parts. In machine learning we try to express the deterministic part as a sum of deterministic independent variables: $$deterministic + random = func(deterministic(1))+...+func(deterministic(n))+model\_error$$ If the whole deterministic part of $Y$ is explained by $X$, then the $model\_error$ captures only the $random$ part and thus should have a normal distribution. So if the error distribution is normal, we may conclude that the model is successful. Otherwise there are some other features that are absent from the model but have a large enough influence on $Y$ (the model is incomplete), or the model is incorrect.
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
Machine learning (and statistics as well) treats data as the mix of deterministic (causal) and random parts. The random part of data usually has normal distribution. (Really, the causal relation is re
|
44,724
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
One reason that normal distributions are often (but not always!) assumed: the nature of the distribution often leads to extremely efficient computation. For example, in generalized linear regression, the solution is technically in closed form when your distribution is Gaussian:
$\hat \beta = (X^T X)^{-1} X^T Y$
whereas for other distributions, iterative algorithms must be used. Technical note: using this direct computation to find $\hat \beta$ is both inefficient and numerically unstable.
Quite often, both the theoretical math and the numerical methods required are substantially easier if the distribution is a linear transformation of normal variables. Because of this, methods are frequently first developed under the assumption that the data are normal, as the problem is considerably more tractable. Later, the more difficult problem of addressing non-normality is taken up by statistical/machine learning researchers.
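As a toy illustration (pure Python, made-up numbers): for simple linear regression $y \approx a + bx$ the normal equations $(X^TX)\beta = X^TY$ reduce to a 2×2 system that can be solved in closed form.

```python
# Closed-form least squares for y ~ a + b*x: solve the 2x2 normal
# equations (X^T X) beta = X^T Y by hand. (Sketch only: as noted above,
# direct use of the normal equations is inefficient and unstable; real
# solvers use a QR or similar decomposition instead.)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]           # roughly y = 1 + 2*x

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

det = n * sxx - sx * sx                  # determinant of X^T X
a = (sxx * sy - sx * sxy) / det          # intercept
b = (n * sxy - sx * sy) / det            # slope
print(round(a, 2), round(b, 2))          # 1.06 1.97
```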
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
One reason that normal distributions are often (but not always!) assumed: the nature of the distribution often leads to extremely efficient computation. For example, in generalized linear regression,
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
One reason that normal distributions are often (but not always!) assumed: the nature of the distribution often leads to extremely efficient computation. For example, in generalized linear regression, the solution is technically in closed form when your distribution is Gaussian:
$\hat \beta = (X^T X)^{-1} X^T Y$
whereas for other distributions, iterative algorithms must be used. Technical note: using this direct computation to find $\hat \beta$ is both inefficient and numerically unstable.
Quite often, both the theoretical math and the numerical methods required are substantially easier if the distribution is a linear transformation of normal variables. Because of this, methods are frequently first developed under the assumption that the data are normal, as the problem is considerably more tractable. Later, the more difficult problem of addressing non-normality is taken up by statistical/machine learning researchers.
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
One reason that normal distributions are often (but not always!) assumed: the nature of the distribution often leads to extremely efficient computation. For example, in generalized linear regression,
|
44,725
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
I had the same question: "what is the advantage of doing a Gaussian transformation on predictors or the target?" In fact, the caret package has a pre-processing step that enables this transformation.
I tried reasoning this out and am summarizing my understanding -
Usually the data distribution in nature follows a normal distribution (a few examples: age, income, height, weight, etc.). So it's the best approximation when we are not aware of the underlying distribution pattern.
Most often the goal in ML/AI is to strive to make the data linearly separable, even if it means projecting the data into a higher-dimensional space so as to find a fitting "hyperplane" (for example, SVM kernels, neural net layers, softmax, etc.). The reason for this is that "linear boundaries always help in reducing variance and are the most simplistic, natural and interpretable", besides reducing mathematical/computational complexity. And when we aim for linear separability, it is always good to reduce the effect of outliers, influential points and leverage points. Why? Because the hyperplane is very sensitive to influential points and leverage points (a.k.a. outliers). To understand this, let's shift to a 2D space where we have one predictor (X) and one target (y), and assume there exists a good positive correlation between X and y. Given this, if our X is normally distributed and y is also normally distributed, you are most likely to fit a straight line that has many points centered in the middle of the line rather than at the end points (a.k.a. outliers, leverage/influential points). So the predicted regression line will most likely suffer little variance when predicting on unseen data.
Extrapolating the above understanding to an n-dimensional space, fitting a hyperplane to make things linearly separable does in fact make sense because it helps in reducing the variance.
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
I had the same question: "what is the advantage of doing a Gaussian transformation on predictors or the target?" In fact, the caret package has a pre-processing step that enables this transformation.
I tried r
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
I had the same question: "what is the advantage of doing a Gaussian transformation on predictors or the target?" In fact, the caret package has a pre-processing step that enables this transformation.
I tried reasoning this out and am summarizing my understanding -
Usually the data distribution in nature follows a normal distribution (a few examples: age, income, height, weight, etc.). So it's the best approximation when we are not aware of the underlying distribution pattern.
Most often the goal in ML/AI is to strive to make the data linearly separable, even if it means projecting the data into a higher-dimensional space so as to find a fitting "hyperplane" (for example, SVM kernels, neural net layers, softmax, etc.). The reason for this is that "linear boundaries always help in reducing variance and are the most simplistic, natural and interpretable", besides reducing mathematical/computational complexity. And when we aim for linear separability, it is always good to reduce the effect of outliers, influential points and leverage points. Why? Because the hyperplane is very sensitive to influential points and leverage points (a.k.a. outliers). To understand this, let's shift to a 2D space where we have one predictor (X) and one target (y), and assume there exists a good positive correlation between X and y. Given this, if our X is normally distributed and y is also normally distributed, you are most likely to fit a straight line that has many points centered in the middle of the line rather than at the end points (a.k.a. outliers, leverage/influential points). So the predicted regression line will most likely suffer little variance when predicting on unseen data.
Extrapolating the above understanding to an n-dimensional space, fitting a hyperplane to make things linearly separable does in fact make sense because it helps in reducing the variance.
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
I had the same question: "what is the advantage of doing a Gaussian transformation on predictors or the target?" In fact, the caret package has a pre-processing step that enables this transformation.
I tried r
|
44,726
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
I'm currently studying machine learning and the same question popped into my mind. What I think the reason should be is that in every machine learning problem we assume we have abundant observational data available, and whenever the amount of data tends to infinity it gets normally distributed around its mean; that is what the normal (Gaussian) distribution describes. It is not necessary, though, that a Gaussian distribution will always be a perfect fit to any data that tends to infinity. Take the case when your data are always positive: if you try to fit a Gaussian distribution to them, it will give some weight to negative values of x as well (negligible, but still some weight), so in such a case a distribution like the Zeta is better suited.
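A small sketch of that last point (illustrative Python): fit a normal distribution by moment matching to strictly positive data and measure how much probability it places below zero.

```python
import math
import random
import statistics

random.seed(1)
# Strictly positive data (exponential with rate 1).
data = [random.expovariate(1.0) for _ in range(50000)]
mu, sd = statistics.mean(data), statistics.stdev(data)
# Probability mass a fitted N(mu, sd) assigns to impossible negative values:
p_neg = 0.5 * math.erfc(mu / (sd * math.sqrt(2)))
print(round(p_neg, 3))  # a noticeable chunk of probability below zero
```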
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
|
I'm currently studying machine learning and the same question popped into my mind. What I think the reason should be is that in every machine learning problem we assume we have abundant observational da
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
I'm currently studying machine learning and the same question popped into my mind. What I think the reason should be is that in every machine learning problem we assume we have abundant observational data available, and whenever the amount of data tends to infinity it gets normally distributed around its mean; that is what the normal (Gaussian) distribution describes. It is not necessary, though, that a Gaussian distribution will always be a perfect fit to any data that tends to infinity. Take the case when your data are always positive: if you try to fit a Gaussian distribution to them, it will give some weight to negative values of x as well (negligible, but still some weight), so in such a case a distribution like the Zeta is better suited.
|
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
I'm currently studying machine learning and the same question popped into my mind. What I think the reason should be is that in every machine learning problem we assume we have abundant observational da
|
44,727
|
Understanding regression results when data are subsetted
|
To expand on Peter Flom's answer (which is echoed in Michael Chernick's subsequent reply), this graphic may help the intuition.
The following R code shows how it was produced. Briefly, it generates 400 data points per year, with values of variable $x$ ranging variously from $0$ to $2$ through $2$ to $4$, shifting upwards each year: this is one (mild) form of confounding of $x$ and year. It creates probabilities $y$ according to a logistic model and, for later use, generates binary observations $z$ according to those probabilities. The probabilities are plotted, with color differentiating the years. Finally, the logistic fit to those 2000 observations is overplotted with a dashed line.
# Create sample data
#
logistic <- function(x) 1 / (1 + exp(-x))
n <- 400 # Values per year
offset <- 80 # Shift in x-values year-to-year
year <- as.factor(floor(seq(from=2006,to=2011-1/n, by=1/n)))
x <- as.vector(sapply(1:5, function(i) offset*i + 1:n)) / n * 2
y <- logistic((4 - 2/3 * x - unclass(year)))
set.seed(17)
z <- rbinom(length(y), 1, prob=y)
data <- data.frame(cbind(x, year, y, z))
#
# Plot the *probabilities* which underlie the data.
#
par(mfrow=c(1,1))
plot(x, y, col=year, pch=19, cex=0.75, ylab="Probability",
main="Individual and Overall Fits")
#
# Plot the overall fit.
#
b <- summary(glm(z ~ x, data=data, family=binomial(link="logit")))$coefficients
curve(logistic(b[1] + b[2]*x), add=TRUE, lwd=3, lty=2, col="Gray")
Apparently, the effect of this "staircase" of falling curves is to cause the overall model to average its way through the middle, suggesting a much steeper fall than any individual step (year) exhibits.
We can do the calculations of the odds ratios and their 95% CIs, too:
output <- function(d) {
b <- summary(glm(z ~ x, data=d, family=binomial(link="logit")))$coefficients
a <- b["x", "Estimate"]
u <- b["x", "Std. Error"]
z <- qnorm(c(.025, .975))
exp(c(a, a + z*u))
}
output(data)
by(data, data$year, output)
The (cleaned up) output, which is in the form (estimate, lower limit, upper limit) for each model, is
Overall
0.17 0.14 0.20
year: 1
0.54 0.31 0.94
year: 2
0.53 0.37 0.78
year: 3
0.36 0.25 0.53
year: 4
0.61 0.36 1.03
year: 5
0.47 0.22 1.01
The overall coefficient of 0.17 is lower than the lower confidence limits of any of the annual (year-specific) fits, several of which are evidently not even significant (because their confidence intervals include 1.0).
Comments
This phenomenon is not special to logistic regression: it is seen in many ordinary regressions and, in that form, has been discussed and illustrated elsewhere on this site. (Finding that discussion might take some clever searching, I'm afraid.) Many textbooks use examples like this to illustrate the value of including and controlling for important variables in statistical models and to discuss confounding.
This example uses fairly large (sub)sample sizes: this suggests that the changes in widths of confidence intervals are not wholly due to changes in sample size. In fact, most of the change is because each "stair" is much shallower than the overall trend, and therefore is not as easy to discriminate from no trend at all.
There is no multiple testing going on: we're looking only at one covariate.
|
Understanding regression results when data are subsetted
|
To expand on Peter Flom's answer (which is echoed in Michael Chernick's subsequent reply), this graphic may help the intuition.
The following R code shows how it was produced. Briefly, it generates
|
Understanding regression results when data are subsetted
To expand on Peter Flom's answer (which is echoed in Michael Chernick's subsequent reply), this graphic may help the intuition.
The following R code shows how it was produced. Briefly, it generates 400 data points per year, with values of variable $x$ ranging variously from $0$ to $2$ through $2$ to $4$, shifting upwards each year: this is one (mild) form of confounding of $x$ and year. It creates probabilities $y$ according to a logistic model and, for later use, generates binary observations $z$ according to those probabilities. The probabilities are plotted, with color differentiating the years. Finally, the logistic fit to those 2000 observations is overplotted with a dashed line.
# Create sample data
#
logistic <- function(x) 1 / (1 + exp(-x))
n <- 400 # Values per year
offset <- 80 # Shift in x-values year-to-year
year <- as.factor(floor(seq(from=2006,to=2011-1/n, by=1/n)))
x <- as.vector(sapply(1:5, function(i) offset*i + 1:n)) / n * 2
y <- logistic((4 - 2/3 * x - unclass(year)))
set.seed(17)
z <- rbinom(length(y), 1, prob=y)
data <- data.frame(cbind(x, year, y, z))
#
# Plot the *probabilities* which underlie the data.
#
par(mfrow=c(1,1))
plot(x, y, col=year, pch=19, cex=0.75, ylab="Probability",
main="Individual and Overall Fits")
#
# Plot the overall fit.
#
b <- summary(glm(z ~ x, data=data, family=binomial(link="logit")))$coefficients
curve(logistic(b[1] + b[2]*x), add=TRUE, lwd=3, lty=2, col="Gray")
Apparently, the effect of this "staircase" of falling curves is to cause the overall model to average its way through the middle, suggesting a much steeper fall than any individual step (year) exhibits.
We can do the calculations of the odds ratios and their 95% CIs, too:
output <- function(d) {
b <- summary(glm(z ~ x, data=d, family=binomial(link="logit")))$coefficients
a <- b["x", "Estimate"]
u <- b["x", "Std. Error"]
z <- qnorm(c(.025, .975))
exp(c(a, a + z*u))
}
output(data)
by(data, data$year, output)
The (cleaned up) output, which is in the form (estimate, lower limit, upper limit) for each model, is
Overall
0.17 0.14 0.20
year: 1
0.54 0.31 0.94
year: 2
0.53 0.37 0.78
year: 3
0.36 0.25 0.53
year: 4
0.61 0.36 1.03
year: 5
0.47 0.22 1.01
The overall coefficient of 0.17 is lower than the lower confidence limits of any of the annual (year-specific) fits, several of which are evidently not even significant (because their confidence intervals include 1.0).
Comments
This phenomenon is not special to logistic regression: it is seen in many ordinary regressions and, in that form, has been discussed and illustrated elsewhere on this site. (Finding that discussion might take some clever searching, I'm afraid.) Many textbooks use examples like this to illustrate the value of including and controlling for important variables in statistical models and to discuss confounding.
This example uses fairly large (sub)sample sizes: this suggests that the changes in widths of confidence intervals are not wholly due to changes in sample size. In fact, most of the change is because each "stair" is much shallower than the overall trend, and therefore is not as easy to discriminate from no trend at all.
There is no multiple testing going on: we're looking only at one covariate.
|
Understanding regression results when data are subsetted
To expand on Peter Flom's answer (which is echoed in Michael Chernick's subsequent reply), this graphic may help the intuition.
The following R code shows how it was produced. Briefly, it generates
|
44,728
|
Understanding regression results when data are subsetted
|
In this case, it does not appear to have to do with sample sizes, since the CIs for the individual years do not even overlap with the CI for the whole period.
It's hard to say exactly what's going on. Your code would help - did the model for the full data set include year as an IV? What is your dependent variable? What is your independent variable?
It certainly seems that year is a confounding variable. In other words, you have
DV ~ IV
but, in addition, DV is related to year and IV is related to year.
Confounds with time are pretty common. If DV becomes more likely over time, and IV increases with time, then a set of relationships like the one you found would exist.
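A toy numerical sketch of such a time confound (illustrative Python, not the OP's data): within each year the IV-DV relationship is flat, yet both drift upward with year, so the pooled slope is clearly positive.

```python
# Least-squares slope of y on x for a list of (x, y) pairs.
def slope(pairs):
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Five "years": within each year the DV is constant in the IV,
# but both the IV and the DV shift upward from year to year.
years = {yr: [(yr * 10 + i, yr * 5.0) for i in range(5)] for yr in range(1, 6)}
pooled = [p for pts in years.values() for p in pts]
print([slope(pts) for pts in years.values()])  # within-year slopes: all 0.0
print(round(slope(pooled), 2))                 # pooled slope: clearly positive
```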
|
Understanding regression results when data are subsetted
|
In this case, it does not appear to have to do with sample sizes, since the CIs for the individual years do not even overlap with the CI for the whole period.
It's hard to say exactly what's going on.
|
Understanding regression results when data are subsetted
In this case, it does not appear to have to do with sample sizes, since the CIs for the individual years do not even overlap with the CI for the whole period.
It's hard to say exactly what's going on. Your code would help - did the model for the full data set include year as an IV? What is your dependent variable? What is your independent variable?
It certainly seems that year is a confounding variable. In other words, you have
DV ~ IV
but, in addition, DV is related to year and IV is related to year.
Confounds with time are pretty common. If DV becomes more likely over time, and IV increases with time, then a set of relationships like the one you found would exist.
|
Understanding regression results when data are subsetted
In this case, it does not appear to have to do with sample sizes, since the CIs for the individual years do not even overlap with the CI for the whole period.
It's hard to say exactly what's going on.
|
44,729
|
Understanding regression results when data are subsetted
|
I think the smaller sample size explains why some years are significant and others are not. Actually, if you do a multiplicity correction for doing 5 different tests, you may find that none of them are significant given a proper p-value adjustment.
But Peter has hit on an important observation. The individual years give odds ratios that are close to 1 (in three cases 1 is included in the interval). But when the years are pooled, the ratio is much further away from 1 than in any of the individual years. This suggests to me that some factor influencing the outcome differs from year to year. If that is the case, the data are really not poolable without including this confounding factor as another covariate in the model. However, to make sense of this we would need to know more specifics. You haven't told us what covariate you are using or the values of the covariate that you are comparing to get an odds ratio. If we knew that, and more about the nature of the data and your problem, it might be possible to figure out what the confounding covariate is and whether or not it is observable and can therefore be used as a covariate in the model.
But as to your question about intuition: the results you see are due to differences in sample size, multiple testing, and some cause for the model to depend on the year and hence not be poolable.
|
44,730
|
A question about notation of Bayes' Theorem
|
In fact, in the notation $$p(\theta|y)\propto p(y,\theta)$$ it is understood that the symbol "$\propto$" means that the two members are proportional as functions of the variable $\theta$. This is not ambiguous because it is clearly understood that we are dealing with a distribution on the space of the parameter $\theta$. This notation could become ambiguous when dealing with a two-parameter model, say $\theta$ and $\mu$. In such a case I personally use the notation $\underset{\theta}{\propto}$, $\underset{\mu}{\propto}$ or $\underset{\mu,\theta}{\propto}$ to make precise which variable is considered in the proportionality statement.
|
44,731
|
A question about notation of Bayes' Theorem
|
You are right: the posterior distribution is proportional to the joint distribution of $y$ and $\theta$, and the proportionality constant is the inverse of the marginal distribution $p(y)$ (which is a constant with respect to $\theta$).
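Written out in full (a standard identity, added here for completeness):

```latex
p(\theta \mid y)
  = \frac{p(y,\theta)}{p(y)}
  = \frac{p(y \mid \theta)\,p(\theta)}{\int p(y \mid \theta')\,p(\theta')\,d\theta'}
  \;\propto\; p(y \mid \theta)\,p(\theta),
```

since the marginal $p(y)$ does not involve $\theta$.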
|
44,732
|
What is the meaning of operators in regression or anova formulas in R
|
The formulas in R have their own mini-language. You can have some detailed information in the R session with
help(formula)
which you can also find here.
For the sake of the example, let's say that you predict $Z$ from $X$ and $Y$ and let's drop the error terms.
$Z \sim X + Y$ means that you fit an additive model $Z_i = X_i + Y_i$
$Z \sim X * Y$ means that you fit a model with interactions $Z_i = X_i + Y_i + X_i Y_i$
$Z \sim X / Y$ means that you fit a model with "nested" interactions $Z_i = X_i + X_i Y_i$
$Z \sim X : Y$ means that you fit a model with only interactions $Z_i = X_i Y_i$
You also have the $-$, ^ and $\%in\%$ operators described in the help page. The $-$ operator is mostly used for removing the intercept and the others are redundant with the ones shown above.
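As a language-neutral illustration of which design-matrix columns each formula expands to (tiny made-up data; intercept column included, coefficients omitted as above):

```python
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]

# Z ~ X + Y : intercept, X, Y
additive   = [[1.0, a, b]        for a, b in zip(x, y)]
# Z ~ X * Y : intercept, X, Y, X:Y (main effects plus interaction)
crossed    = [[1.0, a, b, a * b] for a, b in zip(x, y)]
# Z ~ X : Y : intercept plus the interaction column only
inter_only = [[1.0, a * b]       for a, b in zip(x, y)]
# Z ~ X / Y : intercept, X, X:Y ("Y nested within X")
nested     = [[1.0, a, a * b]    for a, b in zip(x, y)]
```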
|
44,733
|
Predict probabilities from Firth logistic regression in R
|
You can probably compute any predictions you want with a little algebra. Let's consider the example dataset,
data(sex2)
fm <- case ~ age+oc+vic+vicl+vis+dia
fit <- logistf(fm, data=sex2)
A design matrix is the only missing piece to compute predicted probabilities once we get the regression coefficients, given by
betas <- coef(fit)
So, let's try to get prediction for the observed data, first:
X <- model.matrix(fm, data=sex2) # add a column of 1's to sex2[,-1]
pi.obs <- 1 / (1 + exp(-X %*% betas)) # in case there's an offset, δ, it
# should be subtracted as exp(-Xβ - δ)
We can check that we get the correct result
> pi.obs[1:5]
[1] 0.3389307 0.9159945 0.9159945 0.9159945 0.9159945
> fit$predict[1:5]
[1] 0.3389307 0.9159945 0.9159945 0.9159945 0.9159945
Now, you can put in the above design matrix, X, values you are interested in. For example, with all covariates set to one
new.x <- c(1, rep(1, 6))
1 / (1 + exp(-new.x %*% betas))
we get an individual probability of 0.804, while when all covariates are set to 0 (new.x <- c(1, rep(0, 6))), the estimated probability is 0.530.
|
44,734
|
Predict probabilities from Firth logistic regression in R
|
An alternative approach is the brglm package. For example, using the same data/model as @chl's Answer
data(sex2)
fm <- case ~ age + oc + vic + vicl + vis + dia
fit <- brglm(fm, data = sex2)
predict(fit, newdata = sex2[1:5, ], type = "response")
That yields:
> predict(fit, newdata = sex2[1:5, ], type = "response")
1 2 3 4 5
0.3389307 0.9159945 0.9159945 0.9159945 0.9159945
Note that in the brglm() case, because of the way the function works, what you see above is simply the result of the standard predict.glm() function/method in R.
|
44,735
|
SVM options in scikit-learn
|
I realize this is a super old question, but I ran into this same thing today, and found this document. Section 7.3, which describes shrinkage as implemented in libSVM (around which sklearn's SVM is a wrapper), begins with the following useful blurb:
The shrinking technique reduces the size of the problem by temporarily eliminating variables α_i that are unlikely to be selected in the SMO working set because they have reached their lower or upper bound (Joachims, 1999). The SMO iterations then continues on the remaining variables. Shrinking reduces the number of kernel values needed to update the gradient vector (see algorithm 6.2, line 8). The hit rate of the kernel cache is therefore improved.
So basically libSVM optimizes over a subset of the Lagrange multipliers α_i. As many of these are typically zero in a given problem, this is often a safe heuristic to adopt. Much more information can be found in the linked document.
|
44,736
|
SVM options in scikit-learn
|
scale_C=True means that the C parameter of the SVM problem is scaled with the number of samples. This is the default in libSVM and liblinear; however, if you train models with a widely varying number of samples, it means that a single value of C will not be adequate for all the models. For this reason, we advocate using scale_C=False. It is documented in the list of parameters of the SVM objects.
I must confess that I do not know what the shrinking heuristic is, and it is not properly documented in the libSVM documentation.
|
44,737
|
Bayesian prior corresponding to penalized regression coefficients
|
The L2 penalty penalizes the sum of squared betas, but not via a constraint such as $< C$. The L1 penalty gives the lasso. For the Bayesian lasso, see the 2008 JASA paper by Trevor Park and George Casella.
|
44,738
|
Bayesian prior corresponding to penalized regression coefficients
|
For the lasso penalty this corresponds to a double-exponential prior - so long as you are taking the posterior mode as your estimate. If you constrain the betas to be positive then you have an exponential prior. The parameter $\lambda$ of the exponential distribution has a correspondence with your $C$ in that you can choose a value of each such that the same maximum is achieved. The Bayesian prior is the Lagrangian form (on the log scale) of the constraint. For the ridge penalty, constraining the parameter to be positive just gives a truncated normal distribution.
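To make the correspondence explicit (a standard derivation; $\lambda$ here denotes the rate of the double-exponential prior): with likelihood $y \sim N(X\beta, \sigma^2 I)$ and independent priors $p(\beta_j) \propto e^{-\lambda|\beta_j|}$, the negative log-posterior is

```latex
-\log p(\beta \mid y)
  = \frac{1}{2\sigma^2}\,\lVert y - X\beta \rVert_2^2
    + \lambda \sum_j \lvert \beta_j \rvert + \text{const},
```

so the posterior mode is exactly the lasso estimate, and each constraint level $C$ in $\sum_j|\beta_j| \le C$ corresponds to some value of $\lambda$.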
|
44,739
|
Bayesian prior corresponding to penalized regression coefficients
|
The $L_2$ constraint on the coefficients is Tikhonov regularization.
It turns out that when the prior is multivariate normal and the model is linear, the posterior is also multivariate normal. The mean of the posterior distribution occurs at a point in parameter space that can also be obtained by Tikhonov regularization, and the relationship between the Tikhonov regularization parameter and the equivalent multivariate normal prior is pretty simple.
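Concretely (a standard result, with notation chosen here): for a linear model $y = Gm + \varepsilon$ with $\varepsilon \sim N(0, \sigma^2 I)$ and prior $m \sim N(0, \tau^2 I)$, the posterior mean (and mode) is

```latex
\hat m = \left( G^{\mathsf T} G + \tfrac{\sigma^2}{\tau^2} I \right)^{-1} G^{\mathsf T} y,
```

which is Tikhonov regularization with parameter $\alpha^2 = \sigma^2/\tau^2$; a general prior covariance $\Sigma_m$ simply replaces $(\sigma^2/\tau^2)I$ by $\sigma^2 \Sigma_m^{-1}$.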
This is textbook material that can be found in Tarantola's textbook among other places.
When the model being fit is nonlinear, or there are additional constraints on the parameters (such as your nonnegativity constraint), or the prior isn't MVN, it all becomes much more complicated and this simple equivalence breaks down.
|
44,740
|
Bayesian prior corresponding to penalized regression coefficients
|
If you want your $\beta$s to be non-negative and sum to a given value then it seems a scaled Dirichlet prior would make sense.
|
44,741
|
What is the difference between tests to check homogeneity of variance and ANOVA?
|
Fligner-Killeen's and Levene's tests are two ways to test the ANOVA assumption of "equal variances in the population" before conducting the ANOVA test. Levene's is widely used and is typically the default in programs like SPSS, but either test (or even Brown-Forsythe) is acceptable. ANOVA is the omnibus test of mean differences among groups. While, in name, ANOVA analyzes the variance (between, within, and overall) among three or more groups, its hypotheses actually make statements about the equality of means versus at least two means differing.
|
44,742
|
What is the difference between tests to check homogeneity of variance and ANOVA?
|
Just thought I'd post a little more about the Fligner-Killeen test. It is a nonparametric way of comparing the variances of more than two groups that is very robust against non-normal data. Essentially, it starts off the same way as a Brown-Forsythe test for the ANOVA, obtaining the absolute deviations of each observation from its respective group median. Rather than performing an ANOVA on these residuals, the FK test ranks them from low to high (where a rank of 1 is given to the lowest data point), assigning the average value to any tied ranks. By dividing each of the resulting ranks by the value 2(n+1), where n is the total number of data points across all groups, and then adding 0.5 to each result, each of the ranked residuals is "normalized" into an area under the normal curve.
Using the inverse normal distribution, we then convert these areas back into z-scores, taking the absolute value of any negative z-scores. We obtain the average z-score for each group, as well as the overall average z-score and the overall variance of the z-scores. We then find a "mean square" for each group by taking its average z-score, subtracting the overall average z-score, squaring the difference, and multiplying by the respective sample size of the group. Do this for all the groups, add them up, and divide by the total variance of all the z-scores. This is your FK statistic, which is evaluated against a chi-square distribution with degrees of freedom equal to (number of groups - 1). If the result is significant, the groups have statistically different variances.
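The recipe above translates almost line-for-line into code. Here is a standard-library Python sketch (the function name and layout are mine): absolute deviations from group medians, mid-ranks with ties averaged, normal scores $\Phi^{-1}(1/2 + r/(2(n+1)))$, then the weighted between-group sum of squares of mean scores divided by the overall score variance:

```python
from statistics import NormalDist, median

def fligner_killeen(*groups):
    # Absolute deviations of each observation from its group median
    resid = [[abs(x - median(g)) for x in g] for g in groups]
    flat = [r for g in resid for r in g]
    n = len(flat)

    # Mid-ranks (1-based), averaging any ties
    order = sorted(range(n), key=lambda i: flat[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and flat[order[j + 1]] == flat[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        i = j + 1

    # Normal scores: a_i = Phi^{-1}(1/2 + rank_i / (2(n+1)))
    inv = NormalDist().inv_cdf
    a = [inv(0.5 + r / (2 * (n + 1))) for r in ranks]
    abar = sum(a) / n
    v = sum((ai - abar) ** 2 for ai in a) / (n - 1)  # variance of the scores

    # Sum over groups of n_j * (group mean score - overall mean score)^2, / v
    stat, pos = 0.0, 0
    for g in resid:
        nj = len(g)
        aj = sum(a[pos:pos + nj]) / nj
        stat += nj * (aj - abar) ** 2
        pos += nj
    return stat / v  # compare to chi-square with (number of groups - 1) df
```

In practice you would compare the returned statistic to a chi-square quantile with (number of groups − 1) degrees of freedom; R's `fligner.test` computes the same quantity.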
Hope this helps!
|
44,743
|
What is the difference between tests to check homogeneity of variance and ANOVA?
|
Don't know about Fligner, but Levene's test is actually an ANOVA of absolute deviations from group means (or from group medians, which would be the Brown-Forsythe test).
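That equivalence can be sketched directly (standard-library Python; the function name is mine): Levene's W is the one-way ANOVA F statistic computed on absolute deviations from the group centers, and swapping means for medians gives Brown-Forsythe:

```python
from statistics import mean, median

def levene_W(*groups, center=mean):
    # Transform each observation to its absolute deviation from the group center
    z = [[abs(x - center(g)) for x in g] for g in groups]
    k = len(z)
    n = sum(len(g) for g in z)
    zbar = sum(v for g in z for v in g) / n
    # Ordinary one-way ANOVA F on the transformed values
    between = sum(len(g) * (mean(g) - zbar) ** 2 for g in z) / (k - 1)
    within = sum((v - mean(g)) ** 2 for g in z for v in g) / (n - k)
    return between / within  # compare to F(k - 1, n - k)
```

`levene_W(g1, g2)` is Levene's test; `levene_W(g1, g2, center=median)` is the Brown-Forsythe variant.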
|
44,744
|
What is the difference between tests to check homogeneity of variance and ANOVA?
|
ANOVA is called "analysis of variance" because it decomposes the total variance into variance within groups (the "error") and variance among the group means. So it tests whether group means are equal by comparing the variance among them to that expected based solely on the within-group variance: is the variation among group means "greater than expected by chance alone" i.e. purely from sampling variability.
This is totally different from Levene's or other such tests, which test whether the variances of the groups are equal. Heuristically, Levene's and Brown-Forsythe's tests (I'm not sure about Fligner; sorry Mike) are like ANOVA on the squares or absolute values of the within-group residuals, so they test, roughly, whether the mean magnitude of the residuals -- and thus the within-group variability -- differs among groups.
|
What is the difference between test to check homogenity of variance and ANOVA?
|
ANOVA is called "analysis of variance" because it decomposes the total variance into variance within groups (the "error") and variance among the group means. So it tests whether group means are equal
|
What is the difference between test to check homogenity of variance and ANOVA?
ANOVA is called "analysis of variance" because it decomposes the total variance into variance within groups (the "error") and variance among the group means. So it tests whether group means are equal by comparing the variance among them to that expected based solely on the within-group variance: is the variation among group means "greater than expected by chance alone" i.e. purely from sampling variability.
This is totally different than Levene's or other such tests which test whether the variances of the groups are equal. Heuristically, Levene's and Brown-Forsythe's tests (I'm not sure about Fligner; sorry Mike) are like ANOVA on the squares or absolute values of the within-group residuals, so they test, roughly, whether the mean magnitude of the residuals -- thus the within-group variability -- differs among groups.
|
What is the difference between a test to check homogeneity of variance and ANOVA?
ANOVA is called "analysis of variance" because it decomposes the total variance into variance within groups (the "error") and variance among the group means. So it tests whether group means are equal
|
44,745
|
How to interpret the margin of error in a poll?
|
The claim that the margin of error is $4.9$% follows from assuming that the poll was conducted as if a box had been filled with tickets--one for each member of the entire population (of "hardcore Republican voters")--thoroughly mixed, $400$ of those were blindly taken out, and each of the associated $400$ voters had written complete answers to all the poll questions on their tickets. These $400$ poll results are the "sample."
The "as if" raises plenty of practical questions that go to whether the poll really can be viewed as arising in such a way. (Can we really think of the population as represented by a definite set of tickets? Is it fair to assume all tickets are completely filled out? Was the sampling conducted in a manner akin to drawing from a thoroughly mixed box? Etc.) Other respondents have listed some of those questions. Granting, however, that this is an adequate model of the poll leads us to the crux of the question: to what extent do these $400$ tickets represent the entire population? We never know for sure, but we can develop some expectations by studying this process of sampling from a box of tickets.
To do this, we focus on one question at a time. We might as well view each ticket as bearing either the "yes" or "no" answer for that question. We now compare the true survey results (that is, the true proportions of yeses among all tickets in the box) to the results of the myriad possible samples of $400$ tickets. (There are more than $1.9 \times 10^{1475}$ such samples.) We have to make the comparison for any possible true proportion, but even so, it's merely a matter of mathematical calculation. This calculation shows that the observed response in at least $95$% of all such samples lies within $\pm 4.9$% of the population value no matter what that population value might be. For example, if exactly $50$% of the tickets in the box are "yes," then $95$% of the possible samples of $400$ tickets will contain between $50-4.9$% = $45.1$% and $50+4.9$% = $54.9$% yeses.
(That computed value of $95$% actually depends on the true proportion of yeses in the population: if that proportion is very small or very large, we find that quite a bit more than $95$% of all samples will give results accurate to within the margin of error. A true proportion of $50$% is the worst case, which is used because we don't know the true proportion!)
This is all the margin of error means. Because $95$% is a substantial fraction of all possible samples, we feel it's highly likely that the one sample that was actually obtained will be among these $95$%. A doubter is allowed to suppose the sample could be one of the remaining $5$%: we cannot prove him wrong (based only on the poll results, anyway). Yet, similar calculations show (for instance) that the proportion of yeses will differ from the true proportion by more than $12.2$% in only one of every million possible samples. It's still possible the poll is among these one-in-a-million samples, but we have very shaky grounds to believe that. Thus, there is usually a limit to what constitutes a "reasonable" amount of doubt about what the true proportion may be, and it's rarely as extreme as $\pm 100$%.
The fundamental insight afforded by these calculations is that once the number of tickets in the box becomes moderately large (a few thousand in this case), the margin of error does not depend on how many tickets are in the box. It should be intuitively clear that the only thing that really matters for a relatively small sample is the proportion of yeses in the box, because the proportion determines the chance of drawing a "yes" or "no" and that proportion doesn't appreciably change between drawing the first and drawing the last of the $400$ tickets.
In summary, assuming it's accurate to view the poll as acting like drawing tickets from a box, our right to "extrapolate" from the poll to the population (a process more formally known as statistical inference) is an uncertain one, because we can always be wrong; but when the sample is just a small fraction of the population, the amount by which we might be in error in making that extrapolation depends primarily on the size of the sample, not the size of the population. This is why most credible polls, whether of local or international scope, use samples of a few hundred to a few thousand. It is rare that larger samples are needed to achieve a high chance of getting reasonable accuracy.
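The $4.9$% and $12.2$% figures quoted above can be reproduced from the normal approximation to the worst-case ($p = 0.5$) sampling distribution of a proportion; a quick sketch (Python, not part of the original poll write-up):

```python
import math

n, p = 400, 0.5                  # worst case: p*(1-p) is maximized at p = 0.5
se = math.sqrt(p * (1 - p) / n)  # standard error of the sample proportion (0.025)
margin95 = 1.96 * se             # 95% margin of error
print(f"{margin95:.1%}")         # 4.9%

# one-in-a-million two-sided tail corresponds to z of about 4.89 standard errors
print(f"{4.89 * se:.1%}")        # ~12.2%
```

Note how the population size (700,000) never enters the calculation — only the sample size $n$ does, which is the fundamental insight described above.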
|
How to interpret the margin of error in a poll?
|
The claim that the margin of error is $4.9$% follows from assuming that the poll was conducted as if a box had been filled with tickets--one for each member of the entire population (of "hardcore Repu
|
How to interpret the margin of error in a poll?
The claim that the margin of error is $4.9$% follows from assuming that the poll was conducted as if a box had been filled with tickets--one for each member of the entire population (of "hardcore Republican voters")--thoroughly mixed, $400$ of those were blindly taken out, and each of the associated $400$ voters had written complete answers to all the poll questions on their tickets. These $400$ poll results are the "sample."
The "as if" raises plenty of practical questions that go to whether the poll really can be viewed as arising in such a way. (Can we really think of the population as represented by a definite set of tickets? Is it fair to assume all tickets are completely filled out? Was the sampling conducted in a manner akin to drawing from a thoroughly mixed box? Etc.) Other respondents have listed some of those questions. Granting, however, that this is an adequate model of the poll leads us to the crux of the question: to what extent do these $400$ tickets represent the entire population? We never know for sure, but we can develop some expectations by studying this process of sampling from a box of tickets.
To do this, we focus on one question at a time. We might as well view each ticket as bearing either the "yes" or "no" answer for that question. We now compare the true survey results (that is, the true proportions of yeses among all tickets in the box) to the results of the myriad possible samples of $400$ tickets. (There are more than $1.9 \times 10^{1475}$ such samples.) We have to make the comparison for any possible true proportion, but even so, it's merely a matter of mathematical calculation. This calculation shows that the observed response in at least $95$% of all such samples lies within $\pm 4.9$% of the population value no matter what that population value might be. For example, if exactly $50$% of the tickets in the box are "yes," then $95$% of the possible samples of $400$ tickets will contain between $50-4.9$% = $45.1$% and $50+4.9$% = $54.9$% yeses.
(That computed value of $95$% actually depends on the true proportion of yeses in the population: if that proportion is very small or very large, we find that quite a bit more than $95$% of all samples will give results accurate to within the margin of error. A true proportion of $50$% is the worst case, which is used because we don't know the true proportion!)
This is all the margin of error means. Because $95$% is a substantial fraction of all possible samples, we feel it's highly likely that the one sample that was actually obtained will be among these $95$%. A doubter is allowed to suppose the sample could be one of the remaining $5$%: we cannot prove him wrong (based only on the poll results, anyway). Yet, similar calculations show (for instance) that the proportion of yeses will differ from the true proportion by more than $12.2$% in only one of every million possible samples. It's still possible the poll is among these one-in-a-million samples, but we have very shaky grounds to believe that. Thus, there is usually a limit to what constitutes a "reasonable" amount of doubt about what the true proportion may be, and it's rarely as extreme as $\pm 100$%.
The fundamental insight afforded by these calculations is that once the number of tickets in the box becomes moderately large (a few thousand in this case), the margin of error does not depend on how many tickets are in the box. It should be intuitively clear that the only thing that really matters for a relatively small sample is the proportion of yeses in the box, because the proportion determines the chance of drawing a "yes" or "no" and that proportion doesn't appreciably change between drawing the first and drawing the last of the $400$ tickets.
In summary, assuming it's accurate to view the poll as acting like drawing tickets from a box, our right to "extrapolate" from the poll to the population (a process more formally known as statistical inference) is an uncertain one, because we can always be wrong; but when the sample is just a small fraction of the population, the amount by which we might be in error in making that extrapolation depends primarily on the size of the sample, not the size of the population. This is why most credible polls, whether of local or international scope, use samples of a few hundred to a few thousand. It is rare that larger samples are needed to achieve a high chance of getting reasonable accuracy.
|
How to interpret the margin of error in a poll?
The claim that the margin of error is $4.9$% follows from assuming that the poll was conducted as if a box had been filled with tickets--one for each member of the entire population (of "hardcore Repu
|
44,746
|
How to interpret the margin of error in a poll?
|
I won't try to deliver my own answer, but I would refer you to the "What Is a Survey?" booklet compiled by the Survey Research Methods Section of the American Statistical Association. (Fritz Scheuren, who endorses it on the title page, is a former President of the ASA from about five years ago. He used to be a high-profile statistician in federal agencies such as the Social Security Administration and the Internal Revenue Service, and is now semi-retired from government, continuing to work as a VP of the National Opinion Research Center at the University of Chicago.) The booklet delivers a clear and concise explanation of when and why you can, or cannot, extrapolate the survey findings to the target population.
|
How to interpret the margin of error in a poll?
|
I won't try to deliver my own answer, but I would refer you to the "What Is a Survey?" booklet compiled by the Survey Research Methods Section of the American Statistical Association. (Fritz Scheuren
|
How to interpret the margin of error in a poll?
I won't try to deliver my own answer, but I would refer you to the "What Is a Survey?" booklet compiled by the Survey Research Methods Section of the American Statistical Association. (Fritz Scheuren, who endorses it on the title page, is a former President of the ASA from about five years ago. He used to be a high-profile statistician in federal agencies such as the Social Security Administration and the Internal Revenue Service, and is now semi-retired from government, continuing to work as a VP of the National Opinion Research Center at the University of Chicago.) The booklet delivers a clear and concise explanation of when and why you can, or cannot, extrapolate the survey findings to the target population.
|
How to interpret the margin of error in a poll?
I won't try to deliver my own answer, but I would refer you to the "What Is a Survey?" booklet compiled by the Survey Research Methods Section of the American Statistical Association. (Fritz Scheuren
|
44,747
|
How to interpret the margin of error in a poll?
|
To answer your question:
It is possible to extrapolate from a sample of 400 to the views of all 700,000. This is contingent on the sample being random. Statistical power is the topic you'd want to look into to confirm this. If I ask 400 of my closest friends, this doesn't work. To get a truly random sample, I'd have to get the list of all 700,000 people, and use a random number generator to pick 400 from it. Even so, there might be some selection biases. For example, if we're only calling landline telephones, then young people (who often only have cell phones) would be underrepresented in the sample. It's still possible to correct for these issues, but you have to be pretty careful.
Nate Silver's blog has some really good posts on the reliability of different polling firms, problems with their techniques, and correct inference for US political polls.
|
How to interpret the margin of error in a poll?
|
To answer your question:
It is possible to extrapolate from a sample of 400 to the views of all 700,000. This is contingent on the sample being random. Statistical Power is the topic you'd want to loo
|
How to interpret the margin of error in a poll?
To answer your question:
It is possible to extrapolate from a sample of 400 to the views of all 700,000. This is contingent on the sample being random. Statistical power is the topic you'd want to look into to confirm this. If I ask 400 of my closest friends, this doesn't work. To get a truly random sample, I'd have to get the list of all 700,000 people, and use a random number generator to pick 400 from it. Even so, there might be some selection biases. For example, if we're only calling landline telephones, then young people (who often only have cell phones) would be underrepresented in the sample. It's still possible to correct for these issues, but you have to be pretty careful.
Nate Silver's blog has some really good posts on the reliability of different polling firms, problems with their techniques, and correct inference for US political polls.
|
How to interpret the margin of error in a poll?
To answer your question:
It is possible to extrapolate from a sample of 400 to the views of all 700,000. This is contingent on the sample being random. Statistical Power is the topic you'd want to loo
|
44,748
|
How to interpret the margin of error in a poll?
|
The short answer is yes, you can extrapolate.
Longer answer: The key question is whether the pollsters took a random sample of a population. They claim to have taken a random sample of Republican primary voters. But this is difficult. People refuse to answer polls, or they aren't home or other things can go wrong; even worse, the people who answer are not a random sample of the whole population (for instance, younger people are less likely to have land line telephones). Most pollsters therefore try to weight the sample they get to match a known population. Exit polls of Republican primaries give good estimates of various traits of this population.
Reputable pollsters (such as PPP) try hard to do this in a balanced way.
So, can you extrapolate from a relatively small sample to a large population? Yes you can, but there are some caveats.
|
How to interpret the margin of error in a poll?
|
The short answer is yes, you can extrapolate.
Longer answer: The key question is whether the pollsters took a random sample of a population. They claim to have taken a random sample of Republican prim
|
How to interpret the margin of error in a poll?
The short answer is yes, you can extrapolate.
Longer answer: The key question is whether the pollsters took a random sample of a population. They claim to have taken a random sample of Republican primary voters. But this is difficult. People refuse to answer polls, or they aren't home or other things can go wrong; even worse, the people who answer are not a random sample of the whole population (for instance, younger people are less likely to have land line telephones). Most pollsters therefore try to weight the sample they get to match a known population. Exit polls of Republican primaries give good estimates of various traits of this population.
Reputable pollsters (such as PPP) try hard to do this in a balanced way.
So, can you extrapolate from a relatively small sample to a large population? Yes you can, but there are some caveats.
|
How to interpret the margin of error in a poll?
The short answer is yes, you can extrapolate.
Longer answer: The key question is whether the pollsters took a random sample of a population. They claim to have taken a random sample of Republican prim
|
44,749
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is non-normal?
|
With such a small sample size the normality assumption is rather important. You may consider the Wilcoxon signed rank test if you think this assumption is faulty.
If the population is normally distributed, there is no minimum sample size. If the mean difference is small relative to the population variance, then you will have very little power as well. However, it is possible to get good power even with a very small sample size.
As an example, suppose your pairwise differences were normally distributed with (unknown) variance $\sigma^{2} = 1$. Below are Monte Carlo estimates (using 10000 sims) of the power for incrementally larger values $0, .5, 1, ..., 5$ of the mean pairwise difference:
Mean Difference Power
[1,] 0.0 0.0512
[2,] 0.5 0.1097
[3,] 1.0 0.2934
[4,] 1.5 0.5250
[5,] 2.0 0.7467
[6,] 2.5 0.8975
[7,] 3.0 0.9648
[8,] 3.5 0.9925
[9,] 4.0 0.9976
[10,] 4.5 0.9998
[11,] 5.0 0.9999
So we can see that it is possible for the paired $t$-test to still have good power when the mean difference is pretty large in comparison to the variance of the differences (at least 2x as large in this case), even if $n=4$. Please keep in mind that this all goes directly out of the window if the differences are not normally distributed.
You can look at these powers for other values of the mean difference and variance if you like using the R code below (note: the critical value for the $t$-test when $n=4$ using the usual .05 cutoff is 3.182446. The null value to be tested is assumed to be 0).
U = seq(0, 5, by = .5)     # mean differences to examine
V = numeric(length(U))     # estimated power at each mean difference
sig = 1                    # SD of the pairwise differences
for(k in 1:length(U))
{
  Z = rep(0, 10000)
  for(i in 1:10000)
  {
    diffs = rnorm(4, mean = U[k], sd = sig)       # 4 simulated pairwise differences
    z = (mean(diffs) - 0)/(sd(diffs)/sqrt(4))     # one-sample t statistic against a null of 0
    Z[i] = z
  }
  V[k] = mean(abs(Z) > 3.182446)   # rejection rate; 3.182446 = qt(.975, df = 3)
}
X = cbind(U, V)
colnames(X) = c("Mean Difference", "Power")
X
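For comparison, the same powers are available exactly (no simulation) from the noncentral $t$ distribution; a sketch in Python/SciPy rather than R, shown for the mean-difference-2 row of the table:

```python
from scipy import stats

n, delta, sigma, alpha = 4, 2.0, 1.0, 0.05
df = n - 1
tc = stats.t.ppf(1 - alpha / 2, df)    # two-sided critical value, ~3.182446
ncp = delta / (sigma / n ** 0.5)       # noncentrality parameter = 4 here

# power = probability the t statistic exceeds the critical value in either tail
power = (1 - stats.nct.cdf(tc, df, ncp)) + stats.nct.cdf(-tc, df, ncp)
print(round(power, 4))                 # close to the Monte Carlo estimate 0.7467
```

Looping this over the same grid of mean differences reproduces the whole table to within simulation error.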
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is
|
With such a small sample size the normality assumption is rather important. You may consider the Wilcoxon signed rank test if you think this assumption is faulty.
If the population is normally distri
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is non-normal?
With such a small sample size the normality assumption is rather important. You may consider the Wilcoxon signed rank test if you think this assumption is faulty.
If the population is normally distributed, there is no minimum sample size. If the mean difference is small relative to the population variance, then you will have very little power as well. However, it is possible to get good power even with a very small sample size.
As an example, suppose your pairwise differences were normally distributed with (unknown) variance $\sigma^{2} = 1$. Below are Monte Carlo estimates (using 10000 sims) of the power for incrementally larger values $0, .5, 1, ..., 5$ of the mean pairwise difference:
Mean Difference Power
[1,] 0.0 0.0512
[2,] 0.5 0.1097
[3,] 1.0 0.2934
[4,] 1.5 0.5250
[5,] 2.0 0.7467
[6,] 2.5 0.8975
[7,] 3.0 0.9648
[8,] 3.5 0.9925
[9,] 4.0 0.9976
[10,] 4.5 0.9998
[11,] 5.0 0.9999
So we can see that it is possible for the paired $t$-test to still have good power when the mean difference is pretty large in comparison to the variance of the differences (at least 2x as large in this case), even if $n=4$. Please keep in mind that this all goes directly out of the window if the differences are not normally distributed.
You can look at these powers for other values of the mean difference and variance if you like using the R code below (note: the critical value for the $t$-test when $n=4$ using the usual .05 cutoff is 3.182446. The null value to be tested is assumed to be 0).
U = seq(0, 5, by = .5)     # mean differences to examine
V = numeric(length(U))     # estimated power at each mean difference
sig = 1                    # SD of the pairwise differences
for(k in 1:length(U))
{
  Z = rep(0, 10000)
  for(i in 1:10000)
  {
    diffs = rnorm(4, mean = U[k], sd = sig)       # 4 simulated pairwise differences
    z = (mean(diffs) - 0)/(sd(diffs)/sqrt(4))     # one-sample t statistic against a null of 0
    Z[i] = z
  }
  V[k] = mean(abs(Z) > 3.182446)   # rejection rate; 3.182446 = qt(.975, df = 3)
}
X = cbind(U, V)
colnames(X) = c("Mean Difference", "Power")
X
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is
With such a small sample size the normality assumption is rather important. You may consider the Wilcoxon signed rank test if you think this assumption is faulty.
If the population is normally distri
|
44,750
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is non-normal?
|
There is no minimum sample size for a t-test.
But as @shabbychef noted, you will have very little power.
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is
|
There is no minimum sample size for a t-test.
But as @shabbychef noted, you will have very little power.
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is non-normal?
There is no minimum sample size for a t-test.
But as @shabbychef noted, you will have very little power.
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is
There is no minimum sample size for a t-test.
But as @shabbychef noted, you will have very little power.
|
44,751
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is non-normal?
|
What is the minimum sample size for a paired t-test?
Generally speaking for the ordinary paired t-test, two pairs is the smallest, yielding 1 d.f.
Which assumption should I check for paired t-test?
Normally, I'd try to assess all of them, but if you only have 4 pairs, it's just about hopeless to try. You have four pair-differences, from which two d.f. would go to estimating the mean and variance of the differences (the location and scale not mattering for the assumptions), in essence leaving two d.f. to assess changing variance, dependence (in whatever form occurs to you to look for, if any) and normality.
If my data is non-normal, what is an alternative non-parametric test?
Paired data: Wilcoxon signed rank test; or sign test; or any number of varieties of permutation test or bootstrap test (depending on how you construct your statistic/what exactly you want to test). All of them still have assumptions, of course.
But the t-test is at least reasonably robust to mild non-normality of the differences (and it's the differences that are supposed to be normal). If the observations are, say, mildly right-skewed and not very heavy-tailed, the differences may be indistinguishable from normal even at large sample sizes. That said, there's little reason to avoid the signed rank test if non-normality is the main concern, but with 4 pairs you're pretty much stuck with a significance level of 12.5%.
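The 12.5% floor follows because with $n = 4$ pairs even the most extreme possible outcome (all four differences the same sign) has exact two-sided $p = 2/2^4 = 0.125$ under the signed rank test. A quick check (Python/SciPy, made-up differences):

```python
import numpy as np
from scipy import stats

d = np.array([1.0, 2.0, 3.0, 4.0])  # four pair differences, all positive: the most extreme case
res = stats.wilcoxon(d)             # exact method is used automatically for small n without ties
print(res.pvalue)                   # 0.125 -- the smallest two-sided p attainable with n = 4
```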
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is
|
What is the minimum sample size for a paired t-test?
Generally speaking for the ordinary paired t-test, two pairs is the smallest, yielding 1 d.f.
Which assumption should I check for paired t-test?
N
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is non-normal?
What is the minimum sample size for a paired t-test?
Generally speaking for the ordinary paired t-test, two pairs is the smallest, yielding 1 d.f.
Which assumption should I check for paired t-test?
Normally, I'd try to assess all of them, but if you only have 4 pairs, it's just about hopeless to try. You have four pair-differences, from which two d.f. would go to estimating the mean and variance of the differences (the location and scale not mattering for the assumptions), in essence leaving two d.f. to assess changing variance, dependence (in whatever form occurs to you to look for, if any) and normality.
If my data is non-normal, what is an alternative non-parametric test?
Paired data: Wilcoxon signed rank test; or sign test; or any number of varieties of permutation test or bootstrap test (depending on how you construct your statistic/what exactly you want to test). All of them still have assumptions, of course.
But the t-test is at least reasonably robust to mild non-normality of the differences (and it's the differences that are supposed to be normal). If the observations are, say, mildly right-skewed and not very heavy-tailed, the differences may be indistinguishable from normal even at large sample sizes. That said, there's little reason to avoid the signed rank test if non-normality is the main concern, but with 4 pairs you're pretty much stuck with a significance level of 12.5%.
|
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is
What is the minimum sample size for a paired t-test?
Generally speaking for the ordinary paired t-test, two pairs is the smallest, yielding 1 d.f.
Which assumption should I check for paired t-test?
N
|
44,752
|
Which distribution to use with MCMC and empirical data?
|
Kolmogorov–Smirnov is always a good test to see if an arbitrary distribution fits. You can use the test cited below to see if two sets of data came from the same distribution:
Li, Q., E. Maasoumi, and J. S. Racine (2009), “A Nonparametric Test for Equality of Distributions with Mixed Categorical and Continuous Data,” Journal of Econometrics, 148, pp. 186-200.
This test is available in the np package in R as the npdeneqtest() function.
Choosing a good distribution is always difficult; what does your data look like? Gamma distributions are rather flexible for positive data, most data can be reasonably approximated with mixtures of Gaussians, the Beta distribution is extremely flexible for data between zero and one.
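For the plain two-sample case, SciPy's `ks_2samp` gives the same kind of check (a sketch with made-up data; the np/R test cited above additionally handles mixed categorical-continuous data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0, 1, 500)
b = rng.normal(0, 1, 500)         # drawn from the same distribution as a
c = rng.gamma(2.0, 1.0, 500)      # drawn from a clearly different distribution

_, p_same = stats.ks_2samp(a, b)
_, p_diff = stats.ks_2samp(a, c)
print(p_same, p_diff)             # the normal-vs-gamma comparison is rejected far more strongly
```

As with any goodness-of-fit test, a large p-value fails to rule the distribution out; it does not prove the two samples share a distribution.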
|
Which distribution to use with MCMC and empirical data?
|
Kolmogorov–Smirnov is always a good test to see if an arbitrary distribution fits. You can use the test cited below to see if two sets of data came from the same distribution:
Li, Q. and E. Maasoum
|
Which distribution to use with MCMC and empirical data?
Kolmogorov–Smirnov is always a good test to see if an arbitrary distribution fits. You can use the test cited below to see if two sets of data came from the same distribution:
Li, Q., E. Maasoumi, and J. S. Racine (2009), “A Nonparametric Test for Equality of Distributions with Mixed Categorical and Continuous Data,” Journal of Econometrics, 148, pp. 186-200.
This test is available in the np package in R as the npdeneqtest() function.
Choosing a good distribution is always difficult; what does your data look like? Gamma distributions are rather flexible for positive data, most data can be reasonably approximated with mixtures of Gaussians, the Beta distribution is extremely flexible for data between zero and one.
|
Which distribution to use with MCMC and empirical data?
Kolmogorov–Smirnov is always a good test to see if an arbitrary distribution fits. You can use the test cited below to see if two sets of data came from the same distribution:
Li, Q. and E. Maasoum
|
44,753
|
Which distribution to use with MCMC and empirical data?
|
Note that goodness of fit tests can only rule out distributions, they don't prove which distribution the data came from. And in many cases they may have low power to rule out some distributions, so you really don't know if the data comes from that distribution, or you just don't have the power.
Note, though, that you can have a population that follows a normal distribution exactly (or at least closely enough), yet data sampled randomly from that distribution may not look nicely bell-shaped (or neatly match any other distribution). The population distribution is more important than the sample distribution. One thing to try is to plot several samples and see how different they are, then see if your data fits into that variation scheme. This idea is detailed in:
Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D. F., and Wickham, H. (2009), “Statistical inference for exploratory data analysis and model diagnostics,” Phil. Trans. R. Soc. A, 367, 4361-4383. doi: 10.1098/rsta.2009.0120
If you still feel the need to find a transformation to get to normality, then consider the Box-Cox family of transformations. The boxcox function in the MASS package for R will find an optimal transform, and it also gives a confidence interval so that you can bring outside knowledge into the decision: for example, the "best" value of lambda may be 0.4, but if a square-root transform has scientific merit and 0.5 is in the confidence interval, then that is probably more reasonable than going with the 0.4.
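In Python, `scipy.stats.boxcox` plays the role of MASS::boxcox; passing `alpha` returns the confidence interval for lambda mentioned above (illustrated on made-up skewed data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=3.0, size=200)  # positive, right-skewed data

# y is the transformed data, lmbda the MLE of lambda, (lo, hi) a 95% CI for it
y, lmbda, (lo, hi) = stats.boxcox(x, alpha=0.05)
print(round(lmbda, 2), (round(lo, 2), round(hi, 2)))
# if a scientifically meaningful value such as 0.5 (square root)
# falls inside (lo, hi), prefer it over the raw optimum
```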
A lot of this also depends on what you plan to do with your data or the transform of it. Often we can apply the Central Limit Theorem, and then the distribution of the population does not matter (as long as we believe that it is not overly skewed and has no extreme outliers). Or there are non-parametric methods that don't rely on assumptions about the population distribution. So the best approach depends on what you plan to do with this data.
|
Which distribution to use with MCMC and empirical data?
|
Note that goodness of fit tests can only rule out distributions, they don't prove which distribution the data came from. And in many cases they may have low power to rule out some distributions, so y
|
Which distribution to use with MCMC and empirical data?
Note that goodness of fit tests can only rule out distributions, they don't prove which distribution the data came from. And in many cases they may have low power to rule out some distributions, so you really don't know if the data comes from that distribution, or you just don't have the power.
Note, though, that you can have a population that follows a normal distribution exactly (or at least closely enough), yet data sampled randomly from that distribution may not look nicely bell-shaped (or neatly match any other distribution). The population distribution is more important than the sample distribution. One thing to try is to plot several samples and see how different they are, then see if your data fits into that variation scheme. This idea is detailed in:
Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D. F., and Wickham, H. (2009), “Statistical inference for exploratory data analysis and model diagnostics,” Phil. Trans. R. Soc. A, 367, 4361-4383. doi: 10.1098/rsta.2009.0120
If you still feel the need to find a transformation to get to normality, then consider the Box-Cox family of transformations. The boxcox function in the MASS package for R will find an optimal transform, and it also gives a confidence interval so that you can bring outside knowledge into the decision: for example, the "best" value of lambda may be 0.4, but if a square-root transform has scientific merit and 0.5 is in the confidence interval, then that is probably more reasonable than going with the 0.4.
A lot of this also depends on what you plan to do with your data or the transform of it. Often we can apply the Central Limit Theorem, and then the distribution of the population does not matter (as long as we believe that it is not overly skewed and has no extreme outliers). Or there are non-parametric methods that don't rely on assumptions about the population distribution. So the best approach depends on what you plan to do with this data.
|
44,754
|
Which distribution to use with MCMC and empirical data?
|
There is no definitive answer to your second question, since all the methods in statistics are dedicated to developing distributions that fit empirical data. So the "best practice" would be finding an appropriate statistical model that might have generated the data.
|
44,755
|
Which distribution to use with MCMC and empirical data?
|
Without some extra context the question is difficult to answer. What is your real-world data? Models (a theoretical distribution for your data) come from applications, not vacuums. There isn't one best way to approximate an unknown distribution in practice. There isn't even one "best". As a general comment, you can get a long way mixing normal distributions. But without assuming something you're going to have a hard time, particularly when the data aren't iid.
|
44,756
|
Converting arbitrary distribution to uniform one
|
If $X$ has the (cumulative) distribution function $F(x)=P(X<x)$, then $F(X)$ has a uniform distribution on $[0,1]$. You don't know what $F$ is, but with N = 500,000 data points you could simply use the empirical distribution function:
$$\hat{F}(x) = \frac{1}{N} \sum_{i=1}^N 1[x_i\leq x]$$
where $1[A]$ is the indicator function: $1[A]=1$ if $A$ is true and $1[A]=0$ if $A$ is false. The inverse $F^{-1}$ is often called the quantile function.
In coding terms, once you've written your function F you now have two objects, x containing your data and q containing the transformed data, so you could write a function Finv which takes a number in [0,1] and returns the value of your sample distribution at that quantile (using linear interpolation or some other appropriate method for filling in the gaps).
Now if you want to take e.g. 5% of the data either side of the value x0, your range will be Finv(F(x0) - 0.05) to Finv(F(x0) + 0.05).
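A minimal pure-Python sketch of this recipe (the Gaussian sample is only a stand-in for the 500,000 real data points, and Finv here uses the simplest order-statistic lookup rather than interpolation):

```python
import bisect
import random

random.seed(0)
data = sorted(random.gauss(0, 1) for _ in range(10_000))  # stand-in sample
n = len(data)

def F(x):
    """Empirical CDF: fraction of sample values <= x."""
    return bisect.bisect_right(data, x) / n

def Finv(p):
    """Empirical quantile: sample value at (roughly) the p-th quantile."""
    p = min(max(p, 0.0), 1.0)
    return data[min(int(p * n), n - 1)]

# the transformed data q is approximately uniform on [0, 1]
q = [F(x) for x in data]

# take ~5% of the data either side of x0 (10% of the data in total)
x0 = 0.5
lo, hi = Finv(F(x0) - 0.05), Finv(F(x0) + 0.05)
```

With real data you would replace the simulated `data` with your sorted sample; everything else stays the same.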
|
44,757
|
Converting arbitrary distribution to uniform one
|
Suppose you have a cumulative distribution function $F$ of the variable in question. Suppose the value given is $x$, and the range is $[r_1,r_2]$ with $x\in[r_1,r_2]$. Then if you select the amount of values falling into that range $N$, the following should hold:
$$F(r_2)-F(r_1)=\frac{N}{500\,000}$$
This is an equation with 2 unknown variables, so we need some restrictions to solve it. A popular one would be setting $r_1=x-\varepsilon/2$ and $r_2=x+\varepsilon/2$. This gives an equation in one variable, which can be solved fairly easily using any root-finding or optimisation algorithm.
The only thing you need is the function $F$. You can either model it, or use some non-parametric estimate. In the latter case an optimisation algorithm may even be unnecessary, as it should be possible to read off the solution directly.
Note: This is only one possible approach, depending on data it might not work.
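A minimal sketch of the one-variable version in Python, using bisection on $\varepsilon$ and an empirical CDF as a stand-in for $F$ (the Gaussian sample is invented for illustration):

```python
import bisect
import random

random.seed(0)
data = sorted(random.gauss(0, 1) for _ in range(50_000))  # stand-in sample
n = len(data)

def F(x):
    """Empirical CDF standing in for the (unknown) model F."""
    return bisect.bisect_right(data, x) / n

def width_for_mass(x, mass, lo=0.0, hi=10.0, tol=1e-6):
    """Bisect on eps so that F(x + eps/2) - F(x - eps/2) ~ mass.
    The mass in the interval grows monotonically with eps, so bisection works."""
    while hi - lo > tol:
        eps = (lo + hi) / 2
        if F(x + eps / 2) - F(x - eps / 2) < mass:
            lo = eps
        else:
            hi = eps
    return (lo + hi) / 2

eps = width_for_mass(0.0, 0.10)   # interval around x=0 containing ~10% of the mass
r1, r2 = -eps / 2, eps / 2
```

If $F$ is a fitted parametric model instead, only the body of `F` changes.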
|
44,758
|
In relation to clinical trials, what is clinical reasoning in contrast to statistical reasoning?
|
I like @nico's response because it makes clear that statistical and pragmatic thinking shall come hand in hand; this also has the merit to bring out issues like statistical vs. clinical significance. But about your specific question, I would say this is clearly detailed in the two sections that directly follow your quote (p. 10).
Rereading Piantadosi's textbook, it appears that the author means that clinical thinking applies to the situation where a physician has to interpret the results of RCTs or other studies in order to decide on the best treatment to give a new patient. This has to do with the extent to which (population-based) conclusions drawn from previous RCTs might generalize to new, unobserved samples. In a certain sense, such decisions or judgments call for some form of clinical experience, which does not necessarily fit within a consistent statistical framework. The author then says that "the solution offered by statistical reasoning is to control the signal-to-noise ratio by design." In other words, this is a way to reduce uncertainty, and "the chance of drawing incorrect conclusions from either good or bad data." In sum, both lines of reasoning are required in order to draw valid conclusions from previous (and 'localized') studies and to choose the right treatment to administer to a new individual, given his history, his current medication, etc. -- treatment efficacy follows from a good balance between statistical facts and clinical experience.
I like to think of a statistician as someone who is able to mark off the extent to which we can draw firm inferences from the observed data, whereas the clinician is the one that will have a more profound insight onto the implications or consequences of the results at the individual or population level.
|
44,759
|
In relation to clinical trials, what is clinical reasoning in contrast to statistical reasoning?
|
I have not read the book, but my best guess would be that the author wants to point out that sometimes critical reasoning has to be applied when using statistics on biological and medical questions.
The sole fact that, for instance, a treatment does not have a "statistically significant" effect does not imply that the treatment has no biological effect, and vice versa. Statistics can tell you whether a certain event is likely or unlikely to be happening, but it does not give you any hint as to whether something is biologically plausible.
|
44,760
|
How to fix the threshold for statistical validity of p-values produced by ANOVAs?
|
Hey, but it seems you already looked at the results!
Usually, the risk of falsely rejecting the null (Type I error, or $\alpha$) should be decided before starting the analysis. Power might also be fixed to a given value (e.g., 0.80). At least, this is the "Neyman-Pearson" approach. For example, you might consider a risk of 5% ($\alpha=0.05$) for all your hypotheses, and if the tests are not independent you should consider correcting for multiple comparisons, using any single-step or step-down methods you like.
When reporting your results, you should indicate the Type I (and II, if applicable) error you considered (before seeing the results!), corrected or not for multiple comparisons, and give your p-values as p<.001 or p=.0047 for example.
Finally, I would say that your tests allow you to reject a given null hypothesis, not to prove Hypothesis A or B. Moreover, what you describe as 0.001 being a somewhat stronger indication of an interesting deviation from the null than 0.05 is more in line with the Fisher approach to statistical hypothesis testing.
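For the multiple-comparisons step, one common step-down method is Holm's; a small, self-contained Python sketch (the raw p-values below are invented for illustration):

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values; rejecting where the adjusted
    p-value is below alpha controls the family-wise Type I error at alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # step-down multiplier (m - rank), kept monotone via the running max
        running_max = max(running_max, (m - rank) * pvals[i])
        adj[i] = min(1.0, running_max)
    return adj

raw = [0.001, 0.012, 0.035, 0.040]   # hypothetical ANOVA p-values
adj = holm_adjust(raw)               # [0.004, 0.036, 0.07, 0.07] up to rounding
```

With $\alpha=0.05$ the first two hypotheses survive the correction here, while the last two do not, even though all four raw p-values were below 0.05.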
|
44,761
|
How to fix the threshold for statistical validity of p-values produced by ANOVAs?
|
My advice would be to tread carefully with p-values if you didn't have a specific hypothesis in mind before you started the experiment. Adjusting p-values for multiple and "vaguely specified" hypotheses (e.g. not specifying the alternative hypothesis) is difficult.
I suppose the "purist" would tell you that this should be fixed prior to looking at the data (one of my lecturers call not doing this intellectual dishonesty), but I would only say this is appropriate for "confirmatory analysis" where a well defined model (or set of models) has been set prior to the data being seen.
If the analysis is more "exploratory", then I would not worry about the precise level so much; rather, try to find relationships and try to explain why they may be there (i.e. use the analysis to build a model). Tentative hypothesis testing may be useful as an initial guide, but you would need to get more data to confirm your hypothesis.
A useful way to "get more data" without running another experiment is to "lock up" some portion of your data and use the rest to "explore"; then, once you are confident of a potentially useful model, "test" your theory with the data you "locked up". NOTE: you can only do the "test" once!
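A trivial sketch of the "lock up" idea in Python (the 80/20 split and the placeholder dataset are illustrative choices, not a rule):

```python
import random

random.seed(42)                 # fix the split so it cannot be re-rolled later
records = list(range(1000))     # placeholder for your real dataset
random.shuffle(records)

held_out = records[:200]        # locked up: look at these exactly once, at the end
explore = records[200:]         # free to mine for hypotheses and model building
```

The seed matters: it makes the split reproducible and removes the temptation to reshuffle until the held-out set "behaves".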
|
44,762
|
Can statistical prediction be asymmetric?
|
The coefficient of $X_n$ (and its significance) in the regression of $X_1$ on $X_2, \ldots, X_n$ can be computed by first obtaining the residuals $Y_1$ for the regression of $X_1$ on $X_2, \ldots, X_{n-1}$ and obtaining the residuals $Y_n$ for the regression of $X_n$ on $X_2, \ldots, X_{n-1}$. Then you regress $Y_1$ on $Y_n$.
Similarly, the coefficient of $X_1$ in the regression of $X_n$ on $X_1, \ldots, X_{n-1}$ is computed by regressing $Y_n$ on $Y_1$.
This reduces the question to one of ordinary regression (of one variable against another). The coefficients are not related to one another in a simple manner, because the variance of $Y_1$ does not have to equal the variance of $Y_n$, but the t-statistics will be identical.
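A quick simulation (plain Python, invented data; a single control variable $Z$ plays the role of $X_2,\ldots,X_{n-1}$) illustrating both points — the two partialled-out slopes differ, but the t-statistics agree to machine precision:

```python
import random

random.seed(0)
n = 500
Z = [random.gauss(0, 1) for _ in range(n)]            # the common controls
X1 = [z + random.gauss(0, 1) for z in Z]
Xn = [0.5 * z + random.gauss(0, 1) for z in Z]

def resid(y, x):
    """Residuals of the simple regression of y on x (with intercept)."""
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def t_stat(y, x):
    """t statistic for the slope of y on x through the origin."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    b = sxy / sxx
    sse = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    return b / (sse / (n - 1) / sxx) ** 0.5

Y1, Yn = resid(X1, Z), resid(Xn, Z)   # partial out the controls from each side
t1, t2 = t_stat(Y1, Yn), t_stat(Yn, Y1)
print(t1, t2)   # the two slopes differ in general, but t1 == t2
```

So if the software reports different t-statistics for the two directions, something other than these two regressions was computed.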
It seems the software has not done what you expected.
|
44,763
|
Can statistical prediction be asymmetric?
|
You are trying to estimate the model
\begin{align}
X_1&=\alpha_0+\alpha_2X_2+...+\alpha_nX_n+\varepsilon,\\
X_n&=\beta_0+\beta_1X_1+...+\beta_{n-1}X_{n-1}+\eta.
\end{align}
For such model ordinary least squares will give biased estimates. Assuming that $X_2,...,X_{n-1}$ are either deterministic or independent from $\varepsilon$ and $\eta$ we have
\begin{align}
EX_n\varepsilon&=E\left(\beta_0+\beta_1X_1+...+\beta_{n-1}X_{n-1}+\eta\right)\varepsilon\\
&=\beta_1EX_1\varepsilon+E\varepsilon\eta\\
&=\beta_1E(\alpha_0+\alpha_2X_2+...+\alpha_nX_n+\varepsilon)\varepsilon+E\varepsilon\eta\\
&=\alpha_n\beta_1EX_n\varepsilon+\beta_1E\varepsilon^2 +E\varepsilon\eta.
\end{align}
So
\begin{align}
EX_n\varepsilon=\frac{\beta_1E\varepsilon^2+E\varepsilon\eta}{1-\alpha_n\beta_1}
\end{align}
The main assumption of linear regression is that the error is uncorrelated with the regressors. Without this assumption the estimates are biased and inconsistent. As we see, in this case the assumption is violated, so it is entirely natural to expect unexpected results.
There are ways to get statistically sound estimates of the coefficients. This is an extremely common problem in econometrics, called endogeneity. The most popular solution is two-stage least squares. In general these types of models are called simultaneous equations.
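A short simulation (plain Python, made-up coefficients) showing the bias in the two-variable version of this setup, $Y=\beta X+\varepsilon$, $X=\gamma Y+\eta$:

```python
import random

random.seed(1)
beta, gamma = 0.5, 0.5          # true structural coefficients (illustrative)
n = 100_000
eps = [random.gauss(0, 1) for _ in range(n)]
eta = [random.gauss(0, 1) for _ in range(n)]
d = 1 - beta * gamma
# reduced form of the simultaneous system  Y = beta*X + eps,  X = gamma*Y + eta
Y = [(e + beta * h) / d for e, h in zip(eps, eta)]
X = [(gamma * e + h) / d for e, h in zip(eps, eta)]

# OLS slope of Y on X converges to (beta + gamma) / (1 + gamma**2) = 0.8,
# not to the true beta = 0.5, because X is correlated with eps
b_ols = sum(x * y for x, y in zip(X, Y)) / sum(x * x for x in X)
```

Here OLS is not merely noisy but inconsistent: no amount of extra data moves `b_ols` toward 0.5, which is exactly why instruments / two-stage least squares are needed.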
Update
In the comments below the attention was drawn to the fact that the OP is just trying to fit two regressions and the model given above might be inappropriate. @onestop provided excellent reference about data modelling vs algorithmic modelling cultures. Nonetheless there is an important point I want to make in this case.
Taking into account @whuber answer we can restrict ourselves to the case where we have only two variables, $Y$ and $X$. Suppose having the sample $(y_i,x_i)$ we fit two linear regressions $Y$ vs $X$ and $X$ vs $Y$. The least squares estimates are the following:
\begin{align}
\beta_{YX}&=\frac{\sum_{i=1}^nx_iy_i}{\sum_{i=1}^nx_i^2} \\
\beta_{XY}&=\frac{\sum_{i=1}^nx_iy_i}{\sum_{i=1}^ny_i^2}
\end{align}
Now from the formulas it is clear that $\beta_{XY}$ and $\beta_{YX}$ coincide only in the case when $\sum_{i=1}^ny_i^2=\sum_{i=1}^nx_i^2$.
The corresponding $t$-statistics are then
\begin{align}
t_{YX}&=\frac{\beta_{YX}}{\sigma_{\beta_{YX}}}, \quad \sigma_{\beta_{YX}}^2=\frac{ \sigma_{YX}^2}{\sum_{i=1}^nx_i^2}, \quad \sigma_{YX}^2=\frac{1}{n-1}\sum_{i=1}^n(y_i-\beta_{YX}x_i)^2 \\
t_{XY}&=\frac{\beta_{XY}}{\sigma_{\beta_{XY}}}, \quad \sigma_{\beta_{XY}}^2=\frac{ \sigma_{XY}^2}{\sum_{i=1}^ny_i^2}, \quad \sigma_{XY}^2=\frac{1}{n-1}\sum_{i=1}^n(x_i-\beta_{XY}y_i)^2
\end{align}
After a bit of manipulation we get the amazing fact (in my opinion) that
\begin{align}
t_{XY}=t_{YX}=\frac{\sum_{i=1}^nx_iy_i}{\sqrt{\frac{1}{n-1}\left(\sum_{i=1}^ny_i^2\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_iy_i\right)^2\right)}}
\end{align}
So this illustrates that OP clearly has a problem, since the $t$-statistics must coincide in this case. This of course was pointed out by @whuber.
So far so good. The problem arises when we want to get the distribution of this statistic. This is needed since the $t$-statistic formally tests the hypothesis that the coefficient $\beta_{XY}$ is zero. Without assuming any model, suppose that we want to test the null hypothesis that $cov(X,Y)=0$. We can also assume that $EX=EY=0$. Rewrite the $t$ statistic as follows:
\begin{align}
t=\frac{\frac{1}{\sqrt{n-1}}\sum_{i=1}^nx_iy_i}{\sqrt{\frac{1}{n-1}\sum_{i=1}^ny_i^2\frac{1}{n-1}\sum_{i=1}^nx_i^2-\left(\frac{1}{n-1}\sum_{i=1}^nx_iy_i\right)^2}}
\end{align}
Now, since we have a sample, by the law of large numbers we have
\begin{align*}
\frac{1}{n-1}\sum_{i=1}^ny_i^2\frac{1}{n-1}\sum_{i=1}^nx_i^2\xrightarrow{P}var(X)var(Y)\\
\frac{1}{n-1}\sum_{i=1}^nx_iy_i\xrightarrow{P}cov(X,Y),
\end{align*}
where $\xrightarrow{P}$ denotes convergence in probability.
So denominator of $t$-statistic converges to $\sqrt{var(X)var(Y)}$ under null hypothesis of $cov(X,Y)=0$.
Under null hypothesis due to central limit theorem we have that
\begin{align}
\frac{1}{\sqrt{n-1}}\sum_{i=1}^nx_iy_i\xrightarrow{D}N(0,var(XY)),
\end{align}
where $\xrightarrow{D}$ denotes convergence in distribution. So we get that under null-hypothesis of no correlation the $t$-statistic converges to
\begin{align}
t\xrightarrow{D}N\left(0,\frac{var(XY)}{var(X)var(Y)}\right)
\end{align}
Now if $(X,Y)$ are bivariate normal we have that $\frac{var(XY)}{var(X)var(Y)}=1$, under null hypothesis of $cov(X,Y)=0$, since zero correlation implies independence for normal random variables. The quantity $\frac{var(XY)}{var(X)var(Y)}$ is one, when $X$ is deterministic as is usually the case in linear regression, but then the null hypothesis $cov(X,Y)=0$ makes no sense.
Now if we assume that we have a model $Y=X\beta+\varepsilon$, under null hypothesis that $\beta=0$ and usual assumption $E(\varepsilon^2|X)=\sigma^2$ we get that
\begin{align}
t\xrightarrow{D}N(0,1),
\end{align}
since
\begin{align}
\frac{var(\varepsilon X)}{var(X)var(\varepsilon)}=\frac{\sigma^2var(X)}{\sigma^2var(X)}=1
\end{align}
But in the OP's question we have two regressions, so if we allow a model for one, we must allow a model for the other, and we arrive at the problem which I described in my initial answer.
So I hope that my lengthy update illustrates that if we do not assume usual model in doing regression, we cannot assume that usual statistics will have the same distributions.
|
Can statistical prediction be asymmetric?
|
Your are trying to estimate model
\begin{align}
X_1&=\alpha_0+\alpha_2X_2+...+\alpha_nX_n+\varepsilon,\\
X_n&=\beta_0+\beta_1X_1+...+\beta_{n-1}X_{n-1}+\eta.
\end{align}
For such model ordinary least
|
Can statistical prediction be asymmetric?
Your are trying to estimate model
\begin{align}
X_1&=\alpha_0+\alpha_2X_2+...+\alpha_nX_n+\varepsilon,\\
X_n&=\beta_0+\beta_1X_1+...+\beta_{n-1}X_{n-1}+\eta.
\end{align}
For such model ordinary least squares will give biased estimates. Assuming that $X_2,...,X_{n-1}$ are either deterministic or independent from $\varepsilon$ and $\eta$ we have
\begin{align}
EX_n\varepsilon&=E\left(\beta_0+\beta_1X_1+...+\beta_{n-1}X_{n-1}+\eta\right)\varepsilon\\
&=\beta_1EX_1\varepsilon+E\varepsilon\eta=\\
&=\beta_1E(\alpha_0+\alpha_2X_2+...+\alpha_nX_n+\varepsilon)\varepsilon+E\varepsilon\eta\\
&=\alpha_n\beta_1EX_n\varepsilon+\beta_1E\varepsilon^2 +E\varepsilon\eta.
\end{align}
So
\begin{align}
EX_n\varepsilon=\frac{\beta_1E\varepsilon^2+E\varepsilon\eta}{1-\alpha_n\beta_1}
\end{align}
The main assumption of linear regression is that the error is not correlated to the regressors. Without this assumption the estimates are biased and inconsistent. As we see in this case the assumption is violated, so it is entirely natural to expect unexpected results.
The are ways to get statistically sound estimates of the coefficients. This is an extremely common problem in econometrics called endogeneity. Most popular solution is two-stage least squares. In general these type of models are called simultaneous equations.
Update
In the comments below the attention was drawn to the fact that the OP is just trying to fit two regressions and the model given above might be inappropriate. @onestop provided excellent reference about data modelling vs algorithmic modelling cultures. Nonetheless there is an important point I want to make in this case.
Taking into account @whuber answer we can restrict ourselves to the case where we have only two variables, $Y$ and $X$. Suppose having the sample $(y_i,x_i)$ we fit two linear regressions $Y$ vs $X$ and $X$ vs $Y$. The least squares estimates are the following:
\begin{align}
\beta_{YX}&=\frac{\sum_{i=1}^nx_iy_i}{\sum_{i=1}x_i^2} \\
\beta_{XY}&=\frac{\sum_{i=1}^nx_iy_i}{\sum_{i=1}y_i^2}
\end{align}
Now from the formulas it is clear that $\beta_{XY}$ and $\beta_{YX}$ coincide only in the case when $\sum_{i=1}^ny_i^2=\sum_{i=1}^nx_i^2$.
The corresponding $t$-statistics are then
\begin{align}
t_{YX}&=\frac{\beta_{YX}}{\sigma_{\beta_{XY}}}, \quad \sigma_{\beta_{YX}}^2=\frac{ \sigma_{YX}^2}{\sum_{i=1}^nx_i^2}, \quad \sigma_{YX}^2=\frac{1}{n-1}\sum_{i=1}^n(y_i-\beta_{YX}x_i)^2 \\
t_{XY}&=\frac{\beta_{XY}}{\sigma_{XY}}, \quad \quad \sigma_{\beta_{XY}}^2=\frac{ \sigma_{XY}^2}{\sum_{i=1}^nx_i^2}, \quad \sigma_{XY}^2=\frac{1}{n-1}\sum_{i=1}^n(y_i-\beta_{XY}x_i)^2
\end{align}
After a bit of manipulation we get the amazing fact (in my opinion) that
\begin{align}
t_{XY}=t_{YX}=\frac{\sum_{i=1}^nx_iy_i}{\sqrt{\frac{1}{n-1}\left(\sum_{i=1}^ny_i^2\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_iy_i\right)^2\right)}}
\end{align}
So this illustrates that OP clearly has a problem, since the $t$-statistics must coincide in this case. This of course was pointed out by @whuber.
So far so good. The problem arises when we want to get the distribution of this statistic. This is needed since $t$-statistic formally test the hypothesis that the coefficient $\beta_{XY}$ is zero. Without assuming any model, suppose that we want to test the null hypothesis that $cov(X,Y)=0$. We can also assume that we have $EX=EY=0$. Rewrite the $t$ statistic as follows:
\begin{align}
t=\frac{\frac{1}{\sqrt{n-1}}\sum_{i=1}^nx_iy_i}{\sqrt{\frac{1}{n-1}\sum_{i=1}^ny_i^2\frac{1}{n-1}\sum_{i=1}^nx_i^2-\left(\frac{1}{n-1}\sum_{i=1}^nx_iy_i\right)^2}}
\end{align}
Now since we have a sample due to law of large numbers we have
\begin{align*}
\frac{1}{n-1}\sum_{i=1}^ny_i^2\frac{1}{n-1}\sum_{i=1}^nx_i^2\xrightarrow{P}var(X)var(Y)\\
\frac{1}{n-1}\sum_{i=1}^nx_iy_i\xrightarrow{P}cov(X,Y),
\end{align*}
where $\xrightarrow{P}$ denotes convergence in probability.
So the denominator of the $t$-statistic converges to $\sqrt{var(X)var(Y)}$ under the null hypothesis $cov(X,Y)=0$.
Under the null hypothesis, due to the central limit theorem, we have that
\begin{align}
\frac{1}{\sqrt{n-1}}\sum_{i=1}^nx_iy_i\xrightarrow{D}N(0,var(XY)),
\end{align}
where $\xrightarrow{D}$ denotes convergence in distribution. So we get that under the null hypothesis of no correlation the $t$-statistic converges to
\begin{align}
t\xrightarrow{D}N\left(0,\frac{var(XY)}{var(X)var(Y)}\right)
\end{align}
Now if $(X,Y)$ are bivariate normal we have that $\frac{var(XY)}{var(X)var(Y)}=1$, under null hypothesis of $cov(X,Y)=0$, since zero correlation implies independence for normal random variables. The quantity $\frac{var(XY)}{var(X)var(Y)}$ is one, when $X$ is deterministic as is usually the case in linear regression, but then the null hypothesis $cov(X,Y)=0$ makes no sense.
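A quick Monte Carlo check of this claim (my own sketch, assuming independent standard normals for $X$ and $Y$):

```python
import numpy as np

# For independent zero-mean X and Y, var(XY) = E[X^2]E[Y^2] = var(X)var(Y),
# so the asymptotic variance of the t-statistic equals 1.
rng = np.random.default_rng(42)
x = rng.normal(size=200_000)
y = rng.normal(size=200_000)  # independent of x, so cov(X, Y) = 0

ratio = np.var(x * y) / (np.var(x) * np.var(y))
print(ratio)  # close to 1
```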
Now if we assume that we have a model $Y=X\beta+\varepsilon$, under null hypothesis that $\beta=0$ and usual assumption $E(\varepsilon^2|X)=\sigma^2$ we get that
\begin{align}
t\xrightarrow{D}N(0,1),
\end{align}
since
\begin{align}
\frac{var(\varepsilon X)}{var(X)var(\varepsilon)}=\frac{\sigma^2var(X)}{\sigma^2var(X)}=1
\end{align}
But in the OP question we have two regressions, so if we allow model for one, we must allow model for the other and we arrive at the problem which I described in my initial answer.
So I hope that my lengthy update illustrates that if we do not assume usual model in doing regression, we cannot assume that usual statistics will have the same distributions.
|
Can statistical prediction be asymmetric?
You are trying to estimate the model
\begin{align}
X_1&=\alpha_0+\alpha_2X_2+...+\alpha_nX_n+\varepsilon,\\
X_n&=\beta_0+\beta_1X_1+...+\beta_{n-1}X_{n-1}+\eta.
\end{align}
For such model ordinary least
|
44,764
|
Estimating the probability that a software change fixed a problem
|
This question asks for a prediction limit. This tests whether a future statistic is "consistent" with previous data. (In this case, the future statistic is the post-fix value of 223.) It accounts for a chance mechanism or uncertainty in three ways:
The data themselves can vary by chance.
Because of this, any estimates made from the data are uncertain.
The future statistic can also vary by chance.
Estimating a probability distribution from the data handles (1). But if you simply compare the future value to predictions from that distribution you are ignoring (2) and (3). This will exaggerate the significance of any difference that you note. This is why it can be important to use a prediction limit method rather than some ad hoc method.
Failure times are often taken to be exponentially distributed (which is essentially a continuous version of a geometric distribution). The exponential is a special case of the Gamma distribution with "shape parameter" 1. Approximate prediction limit methods for gamma distributions have been worked out, as published by Krishnamoorthy, Mathew, and Mukherjee in a 2008 Technometrics article. The calculations are relatively simple. I won't discuss them here because there are more important issues to attend to first.
Before applying any parametric procedure you should check that the data at least approximately conform to the procedure's assumptions. In this case we can check whether the data look exponential (or geometric) by making an exponential probability plot. This procedure matches the sorted data values $k_1, k_2, \ldots, k_7$ = $22, 24, 36, 44, 74, 89, 100$ to percentage points of (any) exponential distribution, which can be computed as the negative logarithms of $1 - (1 - 1/2)/7, 1 - (2 - 1/2)/7, \ldots, 1 - (7 - 1/2)/7$. When I do that the plot looks decidedly curved, suggesting that these data are not drawn from an exponential (or geometric) distribution. With either of those distributions you should see a cluster of shorter failure times and a straggling tail of longer failure times. Here, the initial clustering is apparent at $22, 24, 36, 44$, but after a relatively long gap from $44$ to $74$ there is another cluster at $74, 89, 100$. This should cause us to mistrust the results of our parametric models.
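The plotting positions described above can be computed directly; here is a small sketch of that calculation (the code is mine, not from the answer):

```python
import numpy as np

# Sorted failure times and the matching exponential percentage points
k = np.array([22, 24, 36, 44, 74, 89, 100], dtype=float)
i = np.arange(1, 8)
positions = -np.log(1 - (i - 0.5) / 7)

# For truly exponential data the points (positions, k) fall near a line
# through the origin; curvature shows up as a depressed correlation.
r = np.corrcoef(positions, k)[0, 1]
print(np.round(positions, 3), r)
```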
One approach in this situation is to use a nonparametric prediction limit. That's a dead simple procedure in this case: if the post-fix value is the largest of all the values, that should be evidence that the fix actually lengthened the failure times. If all eight values (the seven pre-fix data and the one post-fix value) come from the same distribution and are independent, there is only a $1/8$ chance that the eighth value will be the largest. Therefore, we can say with $1 - 1/8 = 87.5$% confidence that the fix has improved the failure times. This procedure also correctly handles the censoring in the last value, which really records a failure time of some unknown value greater than 223. (If a parametric prediction limit happens to exceed 223--and I suspect [based on experience and on the result of @Owe Jessen's bootstrap] it would be close if we were to calculate it with 95% confidence--we would determine that the number 223 is not inconsistent with the other data, but that would leave unanswered the question concerning the true time to failure, for which 223 is only an underestimate.)
Based on @csgillespie's calculations, which--as I argued above--likely overestimate the confidence as $98.3$%, we nevertheless have found a window in which the actual confidence is likely to lie: it's at least $87.5$% and somewhat less than $98.3$% (assuming we have any faith in the geometric distribution model).
I will conclude by sharing my greatest concern: the question as stated could easily be misinterpreted as an appeal to use statistics to make an impression or sanctify a conclusion, rather than provide genuinely useful information about uncertainty. If there are additional reasons to suppose that the fix has worked, then the best course is to invoke them and don't bother with statistics. Make the case on its technical merits. If, on the other hand, there is little assurance that the fix was effective--we just don't know for sure--and the objective here is to decide whether the data warrant proceeding as if it did work, then a prudent decision maker will likely prefer the conservative confidence level afforded by the non-parametric procedure.
Edit
For (hypothetical) data {22, 24, 36, 44, 15, 20, 23} the exponential probability plot is not terrifically non-linear.
(If this looks non-linear to you, generate probability plots for a few hundred realizations of seven draws from an Exponential[25] distribution to see how much they will wiggle by chance alone.)
Therefore with this modified dataset you can feel more comfortable using the equations in Krishnamoorthy et al. (op. cit.) to compute a prediction limit. However, the harmonic mean of 25.08 and relatively small SD (around 10) indicate the prediction limit for any typical confidence level (e.g., 95% or 99%) will be much less than 223. The principle in play here is that one uses statistics for insight and to make difficult decisions. Statistical procedures are of little (additional) help when the results are obvious.
|
44,765
|
Estimating the probability that a software change fixed a problem
|
There are a few ways of doing this problem. The way I would tackle this problem is as follows.
The data you have comes from a geometric distribution. That is, the number of Bernoulli trials before a failure. The geometric distribution has one parameter p, which is the probability of failure at each point. For your data set, we estimate p as follows:
\begin{equation}
\hat p^{-1} = \frac{100 + 22 + 36 + 44 + 89 + 24 + 74}{7} = 55.57
\end{equation}
So $\hat p = 1/55.57 = 0.018$. From the CDF, the probability of having a run of 223 iterations and observing a failure is:
\begin{equation}
1-(1-\hat p)^{223} = 0.983
\end{equation}
So the probability of running 223 iterations and not having a failure is
\begin{equation}
1- 0.983 = 0.017
\end{equation}
So it seems likely (but not overwhelmingly so) that you have fixed the problem. If you have a run of about 300 iterations then the probability goes down to 0.004.
Some notes
A Bernoulli trial is just tossing a coin, i.e. there are only two outcomes.
The geometric distribution is usually phrased in terms of success (rather than failure). For you a success is when the machine breaks!
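The arithmetic above is easy to reproduce; a minimal sketch, assuming the geometric model:

```python
# Observed pre-fix run lengths from the question
runs = [100, 22, 36, 44, 89, 24, 74]

# Estimate of the per-iteration failure probability: 1 / (mean run length)
p_hat = 1 / (sum(runs) / len(runs))  # about 0.018

# Probability of surviving 223 iterations without a failure
p_no_fail = (1 - p_hat) ** 223  # about 0.017
print(p_hat, p_no_fail)
```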
|
44,766
|
Estimating the probability that a software change fixed a problem
|
I think you could torture your data a bit with bootstrapping. Following @csgillespie's calculations with the geometric distribution, I played around a bit and came up with the following R-code - any corrections greatly appreciated:
fails <- c(100, 22, 36, 44, 89, 24, 74) # Observed data
N <- 100000 # Number of replications
Ncol <- length(fails) # Number of columns in the data-matrix
boot.m <- matrix(sample(fails, N*Ncol, replace=TRUE), ncol=Ncol) # The bootstrap data matrix
# it draws a vector of Ncol results from the original data, and replicates this N-times
p.hat <- function(x){1/mean(x)} # Function to calculate the
# probability of failure
p.vec <- apply(boot.m,1,p.hat) # calculates the probabilities for each of the
# replications
quant.p <- quantile(p.vec,probs=0.01) # calculates the 1%-quantile of the probs.
hist(p.vec) # draws a histogram of the probabilities
abline(v=quant.p,col="red") # adds a line where quant.p is
no.fail <- 223 # Repetitions without a fail after the repair
(prob.fail <- 1 - pgeom(no.fail,prob=quant.p)) # Prob of no fail after 223 reps with
# failure prob qant.p
The idea was to get a worst-case value for the probability, and then use it to calculate the probability of observing no fail after 223 iterations, given the prior failure probability. The worst case of course being a low failure probability to begin with, which would raise the likelihood of observing no failure after 223 iterations without fixing the problem.
The result was 6.37% - as I understand it, you would have had a 6%-probability of not observing a failure after 223 trials if the problem still exists.
Of course, you could generate samples of trials and calculate the probability from that:
boot.fails <- rbinom(N, size=no.fail, prob=quant.p) # repeats draws with success-rate
                                                    # quant.p N times.
mean(boot.fails==0) # Ratio of no successes
with the result of 6.51%.
|
44,767
|
Estimating the probability that a software change fixed a problem
|
I faced this problem myself and decided to try Fisher's exact test. This has the advantage that the arithmetic boils down to something you can do with JavaScript. I put this on a web page - this should work either from there or if you download it to your computer (which you are welcome to do). I think you have a total of 382 successes and 7 failures in the old version, and 223 successes and 0 failures in the new one, and that you could get this at random with probability about 4% even if the new version was no better.
I suggest that you run it a bit more. You can play about with the web page to see how the probability changes if you survive longer - I would go for something over 1000 - in fact I'd try hard to turn it into something I could run automatically and then let it run overnight to really blitz the problem.
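The "about 4%" figure can be checked without any special software: with the margins fixed, the one-sided Fisher p-value here is just the probability that all 7 failures fall among the 389 old-version trials (since the observed table is the most extreme one). A small sketch of that arithmetic, assuming 389 old and 223 new trials:

```python
# P(all 7 failures land in the 389 old-version trials out of 612 total)
# = C(389,7) / C(612,7), computed as a running product.
p = 1.0
for i in range(7):
    p *= (389 - i) / (612 - i)
print(p)  # roughly 0.04
```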
|
44,768
|
Should I reverse score items before running reliability analyses (item-total correlation) and factor analysis?
|
Yes, you should reverse score all items as needed to ensure that a particular score means the same thing on all items. You should do this for all types of analysis.
For example, you have 'propensity to shoplift' measured via 3 items on a scale of 1 to 5 (where 1 is low propensity to shoplift and 5 is high). Suppose that you reversed item 1 on the survey so that 1 is high and 5 is low. Then you should reverse the score for item one so that 5 means the same thing across all three items (i.e., 5 is high propensity to shoplift).
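For a 1-to-5 scale, reversing is just new = (min + max) - old = 6 - old; a minimal sketch with hypothetical responses:

```python
# Reverse-score a 1-5 Likert item: new = (min + max) - old = 6 - old
item1 = [1, 2, 3, 4, 5]  # hypothetical responses to the reversed item
item1_rescored = [6 - v for v in item1]
print(item1_rescored)  # [5, 4, 3, 2, 1]
```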
|
44,769
|
Should I reverse score items before running reliability analyses (item-total correlation) and factor analysis?
|
Reliability Analysis: Yes, you should reverse score the reversed items.
Factor Analysis: It does not matter so much. Eigenvalues and associated indices (e.g., variance explained by factors, rules of thumb regarding number of factors to extract, etc.) should be the same. The sign of factor loadings will flip based on whether you reverse reversed items.
|
44,770
|
Why don't we use normal distribution in every problem? [closed]
|
The CLT does not give one permission to assume the mean is normally distributed in any and all circumstances. The mean of a sample, when viewed as a random variable obtained from many different samples from a distribution, has its own distribution. And the CLT gives criteria when that distribution is normal or when it approaches normality.
First, if the underlying distribution is normally distributed, then the distribution of the means is normal...regardless of sample size.
Second, if the underlying distribution is not normally distributed, then the distribution of the means approaches a normal distribution for large enough sample sizes.
Thus, there are instances when the distribution of the means is most definitely not normally distributed.
|
44,771
|
Why don't we use normal distribution in every problem? [closed]
|
Under certain regularity conditions the CLT does indeed guarantee that a properly normalized sum of random variables converges to a Gaussian limit. But even in classical problems those conditions aren't always met.
The "law of rare events" gives one example of where a sum of independent random variables converges to a non-Gaussian limit.
Suppose we have a random process where at each step $n$, we observe $n$ independent binary random variables $X_{n1}, X_{n2}, \dots, X_{nn}$ where $P(X_{nk} = 1) = p_{nk}$ and $P(X_{nk} = 0) = 1 - p_{nk}$ (so they are Bernoulli with success probability $p_{nk}$). It turns out that if $\sum_{k=1}^n p_{nk} \to \lambda \in (0,\infty)$ as $n\to\infty$ and $\max_{1\leq k \leq n} p_{nk} \to 0$ then $\sum_{k=1}^n X_{nk} \stackrel{\text d}\to \text{Pois}(\lambda)$. This means that if we have a collection of binary random variables where the probability that any one of them is 1 goes to zero, but the collection as a whole maintains a steady expected number of 1s, then the sum will have a Poisson limit, not a Gaussian limit. This is theorem 3.6.1 in Durrett's Probability: Theory and Examples, available here.
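This convergence is easy to see numerically in the homogeneous case $p_{nk} = \lambda/n$, where the sum is exactly Binomial$(n, \lambda/n)$; a small sketch (my own, not from the theorem) comparing its pmf to the Poisson pmf:

```python
from math import comb, exp, factorial

lam, n = 3.0, 10_000
# Binomial(n, lam/n) pmf vs Poisson(lam) pmf for small counts
binom = [comb(n, k) * (lam / n) ** k * (1 - lam / n) ** (n - k) for k in range(10)]
pois = [exp(-lam) * lam ** k / factorial(k) for k in range(10)]

max_gap = max(abs(b - p) for b, p in zip(binom, pois))
print(max_gap)  # shrinks as n grows
```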
Beyond this, we aren't always interested in means. Suppose we have $X_1, \dots, X_n \stackrel{\text{iid}}\sim \text{Unif}(\theta, \theta+1)$ and we want to estimate $\theta$. It turns out that $X_{(1)} := \min_{1\leq k \leq n} X_k$ is about the best estimator we could consider if we're using squared loss (in a minimax sense). If we normalize $X_{(1)}$ by subtracting $\theta$ and dividing by $1/n$ we get a non-Gaussian limit:
$$
\begin{aligned}
P\left(\frac{X_{(1)} - \theta}{1/n} \leq t\right) &= 1 - P\left(X_{(1)} - \theta > t/n\right) \\
&= 1 - P\left(X_1 - \theta > t/n\right)^n \\
&= 1 - (1 - t/n)^n \\
&\to e^{-t}
\end{aligned}
$$
as $n\to\infty$ hence $n(X_{(1)} - \theta) \stackrel{\text d}\to \text{Exp}(1)$, i.e. we get an Exponential distribution as our limit rather than a Gaussian.
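A quick simulation sketch (my own, with arbitrary $\theta = 2$) confirms that $n(X_{(1)} - \theta)$ behaves like an Exp(1) variable rather than a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 200, 10_000
samples = rng.uniform(theta, theta + 1, size=(reps, n))
z = n * (samples.min(axis=1) - theta)

# An Exp(1) variable has mean 1 and standard deviation 1, and is skewed right.
print(z.mean(), z.std())
```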
These are two very classical examples of tidy problems where we don't get a Gaussian limit. If we're doing "real world" modeling then all bets can be off. We may have deep dependence relationships that prevent the CLT from applying, like if our data have temporal or spatial correlations. Or maybe it's a non-stationary process so there isn't one "mean" for things to be Gaussian around. Or we might be predicting/forecasting or studying non-asymptotic problems where there is no sense of "converging" to a limit. The CLT and friends are great but there is a lot of behavior that they fail to describe.
|
44,772
|
Why don't we use normal distribution in every problem? [closed]
|
First, the CLT doesn't guarantee that the mean of samples will be normally distributed.
But more importantly, what you seem to be getting at is that the CLT is sufficient for the expected value of your estimator to be equal to the population parameter of the mean. This is known as being "unbiased". Taking the population mean (assuming it's known) does result in an unbiased estimator, but in a trivial and not very useful way.
Just having an estimator that's unbiased is an exceedingly unimpressive accomplishment. The mean is the absolute floor of machine learning performance. It's a benchmark that every other method has to improve upon; otherwise the method is pointless. For instance, suppose you're trying to predict what a student's college grades will be based on their high school grades and SAT score. The simplest machine learning model would be to simply take the mean of all the students at the college, and give that as output. That would be an unbiased estimate, but in a pointless way.
The goal of machine learning is to see how much better you can do than just taking the mean of all the Xs overall. It's about getting as educated a guess as possible about each individual in a sample, not predicting what the average over the whole sample will be. It's about identifying features of particular Xs that are as informative as possible, and getting the mean of the Xs that have those particular features, rather than of all the Xs. For instance, if you have a student with a high school GPA of 3.4 and an SAT of 1400, you want to know what the mean college GPA of all students with high school GPA of 3.4 and an SAT of 1400 is, not what the mean college GPA over all students is.
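As a toy illustration (with entirely hypothetical numbers), predicting with the mean of the Xs sharing a feature beats predicting with the overall mean in squared error:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: outcome depends on one observed feature (say, an SAT band)
band = rng.integers(0, 3, size=10_000)
gpa = 2.0 + 0.5 * band + rng.normal(0, 0.3, size=10_000)

baseline = gpa.mean()                                   # predict the overall mean
group_means = np.array([gpa[band == b].mean() for b in range(3)])
conditional = group_means[band]                         # mean of Xs with this feature

mse_baseline = np.mean((gpa - baseline) ** 2)
mse_conditional = np.mean((gpa - conditional) ** 2)
print(mse_baseline, mse_conditional)                    # conditional MSE is much smaller
```

The overall mean is still unbiased here; it just wastes the information the feature carries.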
|
44,773
|
Is there a name for a distribution where I can take log of its histogram and get back the same histogram?
|
To put this in more mathematical form, you want to be able to start with a random variable $X$, take a logarithm, then perhaps add and multiply by some numbers and get back the same distribution. $Y=a+b\log X$ and $X$ have the same distribution.
This is possible for a distribution that puts all its probability on two points. The logarithm puts all the probability on two different points, and you can rescale it back to the original distribution.
It's not possible otherwise. Let $x_1$ and $x_2$ be two values of $X$ and consider the point halfway between them: $x_m=\frac{1}{2}(x_1+x_2)$. Because $\log$ is a concave function, $\log x_m$ will always be greater than $\frac{1}{2}(\log x_1+\log x_2)$. If we write $y_1$ and $y_2$ for the transformed values of $x_1$ and $x_2$, we can choose the scaling so that $y_1=x_1$ and $y_2=x_2$. But then $y_m>x_m$, rather than $y_m=x_m$. So we can only get the same shape back if there is no probability at $x_m$. Basically the same argument works for a point 1/3 of the way between $x_1$ and $x_2$, or any other intermediate position.
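A quick numeric check of this argument in Python (the endpoints $x_1, x_2$ are arbitrary choices):

```python
import math

x1, x2 = 1.0, 4.0
xm = (x1 + x2) / 2            # 2.5, the midpoint

# Pick a, b so that y = a + b*log(x) matches the endpoints: y1 = x1, y2 = x2
b = (x2 - x1) / (math.log(x2) - math.log(x1))
a = x1 - b * math.log(x1)

ym = a + b * math.log(xm)
print(xm, ym)                 # 2.5 vs ~2.983: the midpoint is not preserved
```

However we rescale, the interior points move, so the distribution's shape changes.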
|
44,774
|
Is there a name for a distribution where I can take log of its histogram and get back the same histogram?
|
A logarithm can be thought of as a function that stretches/squeezes along the number line. Given a series of values, taking the log will stretch the lower end of the range, and squash the upper end of the range - small values like 0.001, 0.01, and 0.1 that are "nearby" one another get stretched to cover a larger range of -3, -2, and -1, while large values like 1000, 10000, and 100000 that are "distant" from one another get squeezed to cover a smaller range of 3, 4, and 5.
When taking the log of a histogram, you're modifying distances between bars, but are doing so unevenly along the length of the histogram - you're stretching the left half of the histogram and squeezing the right half. Except in the case of very simple histograms with few values, this will always result in a different shape from what you started with. To preserve the shape, you'd need to shift or scale all values by the same amount, but the log effectively applies a scale factor that varies with the value being scaled.
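This uneven stretching is easy to see numerically. A small sketch with numpy, using the values from the paragraph above:

```python
import numpy as np

vals = np.array([0.001, 0.01, 0.1, 1000.0, 10_000.0, 100_000.0])
logs = np.log10(vals)     # -> [-3, -2, -1, 3, 4, 5]

print(np.diff(vals))      # raw-scale gaps explode: 0.009, 0.09, 999.9, 9000, 90000
print(np.diff(logs))      # log-scale gaps: equal *ratios* become equal distances
```

Wildly different raw gaps collapse to the same log-scale gap, which is exactly the value-dependent rescaling described above.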
|
44,775
|
Misunderstanding the chi squared distribution
|
Sampling one value from
$$
\sum_{i=1}^k Z_i^2
$$
requires making one draw from $Z_1$, one draw from $Z_2$, and so forth. In other words, you must make $k$ independent draws from the $N(0, 1)$ distribution.
On the other hand, sampling one value from
$$
kZ^2
$$
requires making one single draw from $Z$, squaring it, and multiplying it by $k$.
sample1 <- rnorm(n = 1e4)^2 + rnorm(n = 1e4)^2 + rnorm(n = 1e4)^2
sample2 <- 3 * rnorm(n = 1e4)^2
curve(dchisq(x, df = 3), from = 0, to = 40, col = "red", lwd = 2)
lines(density(sample1), col = "blue")
lines(density(sample2), col = "green")
from seaborn import displot
import numpy.random as dists
import pandas as pd
sample_size = 10**4
sample1 = dists.normal(size = sample_size)**2 + dists.normal(size = sample_size)**2 + dists.normal(size = sample_size)**2
sample2 = 3 * dists.normal(size = sample_size)**2
sample3 = dists.chisquare(df = 3, size = sample_size)
plot_data = pd.concat([pd.DataFrame({'label': '3 independent chi_sq_1',
'data': sample1}),
pd.DataFrame({'label': '3 times chi_sq_1',
'data': sample2}),
pd.DataFrame({'label': 'chi_sq_3',
'data': sample3})],
ignore_index = True)
displot(data = plot_data, x = 'data', hue = 'label')
displot(data = plot_data, x = 'data', hue = 'label', kind = 'ecdf')
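One way to see why the two samples look so different: both quantities have mean $k$, but $\sum_{i=1}^k Z_i^2$ has variance $2k$ while $kZ^2$ has variance $2k^2$. A quick numpy check, independent of the plotting code above:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 3, 200_000

chi2_k = (rng.normal(size=(n, k)) ** 2).sum(axis=1)   # sum of k independent Z_i^2
k_z2 = k * rng.normal(size=n) ** 2                    # k times a single Z^2

print(chi2_k.mean(), chi2_k.var())   # mean ~ 3, variance ~ 2k = 6
print(k_z2.mean(), k_z2.var())       # mean ~ 3, variance ~ 2k^2 = 18
```

The tripled single draw is far more spread out, which matches the heavier tail visible in the plots.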
|
44,776
|
What are the downsides of ARIMA models?
|
One key downside is that ARIMA models tend not to forecast very well. (I'm sure I will get my share of pushback for that statement. And yes, it is too broad in a sense, but it serves as - I believe - a useful first-order approximation.)
This came as something of a surprise at the earlier forecasting competitions, at least to the statisticians who had gone through (or written originally) all the proofs about optimality and similar properties of ARIMA models - though only under the assumption that the true data generating process in fact does follow an ARIMA process, and usually also using asymptotic results.
More information at my answer to Why is non-iid noise so important to traditional time-series approaches?, where I also linked to Rob Hyndman's "Brief history of forecasting competitions" (2020, IJF), which is extremely enlightening reading.
|
44,777
|
What are the downsides of ARIMA models?
|
In my answer, I respectfully disagree with the accepted answer.
First of all, the fact that ARIMA models do not forecast well in forecasting competitions is not a weakness of ARIMA but is evidence that the stochastic process that produced the time series in question was one other than ARIMA and ARIMA should not have been used in the first place. A time series with nonlinear dependence, for example, will obviously not be forecast well by ARIMA, but that is hardly a shortcoming of ARIMA. If you simulate a time series from an ARIMA process, then an ARIMA model will do a spectacular job at prediction. If using the wrong tool results in poor performance - this is not evidence that the tool is flawed.
Secondly, if ARIMA models are to be faulted for their performance in forecasting then in my opinion a strong case can be made about them doing poorly in long-term forecasting only. They could also be faulted for assuming that the error term is white noise with a constant variance, which translates into a constant prediction error. But the GARCH family of models can help accommodate time-varying, autocorrelated variance.
Another point pertains to the fact that it is trendy to discard old methodologies in favor of new, recently minted methods. In the forecasting competitions the message seems to be that we should discard ARIMA for machine learning methods. But
there have been plenty of fads in statistics/econometrics. Check out this article titled "Economists are prone to fads, and the latest is machine learning". Saying the words "machine learning" in an interview these days might help you get the job, but that is hardly evidence of substance
I could organize a forecasting competition where I will select the time series so as to champion any particular family of models - you name it; I could select these time series so that the ARIMA models will do best and the machine learning methods will do poorly
If, as whuber points out, the case could be made that there are hardly any real-world phenomena that are driven by ARIMA processes, then a case could be made about the limited applicability of ARIMA. And to me the biggest finding from the forecasting competitions is that combinations of various methods have consistently outperformed (across competitions), on average, any particular method. This seems to support the statement that real time series come from much more complicated processes than those in our arsenal of models (which includes both ARIMA and machine learning models). However, this is an indictment against any individual model type - be it an ARIMA model or a machine learning model. But somehow the conclusion is erroneously translated into machine learning - good; ARIMA - bad.
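The claim above - that data truly generated by an ARIMA process is easy to model - can be checked with a minimal sketch in plain numpy (a least-squares AR(1) fit rather than a full ARIMA routine):

```python
import numpy as np

rng = np.random.default_rng(4)
phi, n = 0.7, 20_000

# Simulate an AR(1) process: x_t = phi * x_{t-1} + e_t
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Regressing x_t on x_{t-1} by least squares recovers phi almost exactly
phi_hat = float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))
print(phi_hat)   # close to the true 0.7
```

On data from the matching process the simple model is close to optimal; the trouble in competitions arises when the data come from something else entirely.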
|
44,778
|
What are the downsides of ARIMA models?
|
Most of the processes in real applications (including financial data) are not pure ARIMA processes, or are not ARIMA processes at all. That is why using this model to forecast those series leads to poor results.
Furthermore, this model has some important limitations:
It can capture only linear dependencies on the past.
It forecasts the average value of the series.
It assumes the errors to be independent and identically normally distributed.
It is a univariate model, meaning it cannot take advantage of exogenous variables.
It relies on strong assumptions, such as stationarity, invertibility, independence of residuals, and so on...
|
44,779
|
How is a Bimodal distribution platykurtic?
|
Graphical comment per @whuber's Comment.
Here is a histogram of a sample of a million observations from a
beta distribution with shape parameters $\alpha=\beta = 0.5.$ The Wikipedia link has formulas for the mean, variance, skewness, and kurtosis of beta distributions (for given $\alpha,\beta)$. The superimposed normal density curve matches the mean and SD of
the sample.
R code for figure:
set.seed(2021)
y = rbeta(10^6, .5, .5)
mean(y); sd(y)
[1] 0.500134
[1] 0.3535411
hdr = "BETA(.5, .5) Sample with Normal PDF"
hist(y, prob=T, br=30, xlim=c(-.5,1.5), col="skyblue2", main=hdr)
curve(dnorm(x, mean(y), sd(y)), add=T, col="orange", lwd=2)
abline(h=0, col="green2")
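As a check on the "platykurtic" label, the kurtosis formula from the Wikipedia page referenced above gives an excess kurtosis of exactly $-1.5$ for $\alpha=\beta=0.5$. A small Python sketch:

```python
# Excess kurtosis of a Beta(alpha, beta) distribution, using the closed-form
# expression from the Wikipedia article linked above
def beta_excess_kurtosis(a, b):
    num = 6 * ((a - b) ** 2 * (a + b + 1) - a * b * (a + b + 2))
    den = a * b * (a + b + 2) * (a + b + 3)
    return num / den

print(beta_excess_kurtosis(0.5, 0.5))   # -1.5: strongly negative, hence platykurtic
```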
|
44,780
|
How is a Bimodal distribution platykurtic?
|
While the incorrect "peakedness" interpretation of kurtosis is finally fading away, it has been replaced by other, slightly less egregious misinterpretations. One is that high kurtosis means "a lot of data in the tails." This may have been started by Balanda and MacGillivray, who "defined" kurtosis "vaguely as the location- and scale-free movement of probability mass from the shoulders of a distribution into its center and tails".
This "interpretation" is a reversal of the implication in the Finucan result, which proves that as mass moves away from the "shoulders" to the center and tails, then kurtosis increases. (Incidentally, there is no contradiction here in the bimodal case because there is no mass in the center). Unfortunately, the Finucan conditions do not tell you what larger kurtosis implies about the distribution. To infer that larger kurtosis implies "more mass in the tails" is akin to stating "well, I know all bears are mammals, so it must be the case that all mammals are bears."
For a simple counterexample of a family of probability distributions where kurtosis tends to infinity, but tail mass decreases, see here: https://math.stackexchange.com/a/2510884/472987
Rather than "mass in the tails," kurtosis precisely measures tail leverage, a combination of mass and extension. Greater extension implies greater leverage, even with little mass (Archimedes boasted that he could move the earth with a long enough lever). A single outlier, sufficiently distant from the pack of data, is enough to create great leverage. Thus, while a comment above seemed to say that kurtosis is "distorted" by outliers (suggesting a nod to the incorrect "peakedness" definition?), the more correct statement is that kurtosis measures outliers.
While outliers are sometimes defined as "mistakes," I am referring to them here as "rare, extreme values." The two-point equiprobable bimodal distribution is the least outlier-prone distribution in the universe of distributions. That is to say, it has the least tail leverage.
Besides my 2014 paper "Kurtosis as Peakedness: 1905-2014. R.I.P.," here are some posts that explain the precise nature of the "tail leverage" meaning of kurtosis.
https://stats.stackexchange.com/a/532055/102879
https://stats.stackexchange.com/a/481022/102879
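A quick numeric illustration of that last claim, computing the (non-excess) kurtosis $E[Z^4]/E[Z^2]^2$ of the standardized two-point distribution:

```python
import numpy as np

# Equiprobable two-point distribution at -1 and +1: mean 0, variance 1
x = np.array([-1.0, 1.0])
kurtosis = np.mean(x ** 4) / np.mean(x ** 2) ** 2
print(kurtosis)   # 1.0, the minimum possible kurtosis (excess kurtosis -2)
```

No distribution can have kurtosis below 1, and only the equiprobable two-point distribution attains it: zero tail leverage.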
|
44,781
|
Testing the difference in distribution between two groups
|
By inspection, it is pretty clear that Cat is
under-represented in the second database. Let's see
how that plays out in a chi-squared test of your
$2\times 4$ contingency matrix.
db1 = c(22000, 2300, 42009, 106000)
db2 = c( 380, 30, 7, 260)
MAT= rbind(db1,db2); MAT
[,1] [,2] [,3] [,4]
db1 22000 2300 42009 106000
db2 380 30 7 260
chisq.test(MAT)
Pearson's Chi-squared test
data: MAT
X-squared = 1238, df = 3, p-value < 2.2e-16
The null hypothesis that the proportions are the
same in the two databases is very strongly rejected
with P-value near $0.$
The sum of the squares of the Pearson Residuals
is the chi-squared statistic $1238.$ Residuals
with the largest absolute value point the way to
the cells in which the observed and expected counts
differed most.
chisq.test(MAT)$resi
[,1] [,2] [,3] [,4]
db1 -1.958478 -0.4334418 0.7695618 0.4790736
db2 31.244842 6.9149717 -12.2773080 -7.6429657
So it's birds and cats that have the greatest differences
in proportions. Ad hoc, we can look at the $2 \times 2$
contingency matrix for just birds and cats.
chisq.test(MAT[,c(1,3)], cor=F)
Pearson's Chi-squared test
data: MAT[, c(1, 3)]
X-squared = 690.98, df = 1, p-value < 2.2e-16
Based on your interests, you could look at other sub-matrices
as well. Ordinarily, one would be concerned about
false discovery, doing multiple tests on the same data,
but with P-values as small as this, one can do several
ad hoc tests without using methods such as Bonferroni's
to adjust significance levels.
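The same computation can be reproduced outside R. Here is a Python sketch using scipy.stats.chi2_contingency (the counts come from the question; everything else is standard SciPy):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts: two databases (rows) by animal category (columns).
MAT = np.array([[22000, 2300, 42009, 106000],
                [  380,   30,     7,    260]])

# No Yates correction is applied for tables larger than 2x2.
chi2, p, dof, expected = chi2_contingency(MAT)
print(chi2, dof, p)           # about 1238 on 3 df, P-value essentially 0

# Pearson residuals: (observed - expected) / sqrt(expected).
resid = (MAT - expected) / np.sqrt(expected)
print(np.round(resid, 2))     # cell [2,1] is about +31, matching R's residuals
```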
Addendum per question in Comment. Suppose we had
Db1 = c(22000, 2300, 42009, 106000)
Db2 = c( 380, 30, 4, 260)
MTR = rbind(Db1,Db2); MTR
The chi-squared test works OK with the smaller cell
you proposed. Maybe you have read about needing
counts to be above $5.$ That's for 'expected counts' computed from row and column totals, based on $H_0.$
The counts in MTR are 'observed counts'. In R you can look at expected counts using the $ extractor:
chisq.test(MTR, cor=F)$exp
[,1] [,2] [,3] [,4]
Db1 22292.79999 2320.921536 41849.3032 105845.9753
Db2 87.20001 9.078464 163.6968 414.0247
Because of relatively large row and column totals,
cell [2,3] is OK (along with all the others).
If not, chisq.test would show a warning
message in the output, saying that the P-value may not be accurate. Then you could use the parameter simulate.p.value=TRUE (which partial-matches as sim=T) in chisq.test to simulate a more useful
P-value.
chisq.test(MTR, cor=F)
Pearson's Chi-squared test
data: MTR
X-squared = 1249.3, df = 3, p-value < 2.2e-16
|
44,782
|
Testing the difference in distribution between two groups
|
A $\chi^2$-test would be the obvious choice, especially since you do not seem to have a problem with small cell counts.
|
44,783
|
An intuitive explanation of the instrumental variable
|
I think the most intuitive explanation lies in the causal Directed Acyclic Graph (DAG) approach taken by Judea Pearl, where $A\to B$ means $A$ causes $B$. The typical setup for an instrumental variable is as follows:
Here the unmeasured variable $E$ is your variable causing the problem, because it sets up a backdoor path from $X$ to $Y,$ and is thus a true confounder. You cannot condition on it because it is unmeasured. So you find an instrumental variable $Z$ that causally affects only $X$, and has no causal effect on $E$ or $Y$ in either direction. The reason $Z$ and $E$ can be uncorrelated is that if you examine the path $Z\to X\leftarrow E,$ the collider at $X$ prevents causal information flow from $Z$ to $E.$ Similarly, if you examine the path $Z\to X\to Y\leftarrow E,$ the collider at $Y$ prevents information flow. There are no other paths from $Z$ to $E,$ so there can be no information flow from $Z$ to $E$ or vice versa.
Incidentally, it is also sometimes possible to correct for $E$ using the frontdoor approach. If you can insert a variable $Z$ in-between $X$ and $Y$ thus:
Then you can invoke the frontdoor adjustment formula:
$$P(Y|\operatorname{do}(X))=\sum_zP(Z=z|X)\,\sum_xP(Y|X=x, Z=z)\,P(X=x).$$
This is not always possible, unfortunately.
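As a sanity check on the frontdoor formula, here is a Python sketch on a toy discrete model (all probabilities are invented for illustration): an unmeasured $E$ confounds $X$ and $Y$, and $Z$ sits between them. The frontdoor adjustment, computed only from observational quantities, matches the true interventional distribution computed from the structural model:

```python
from itertools import product

# Hypothetical discrete structural model (all numbers invented):
#   E -> X, E -> Y (E unmeasured), and X -> Z -> Y (Z is the frontdoor variable).
p_e1 = 0.6                                    # P(E = 1)
p_x1 = {0: 0.2, 1: 0.8}                       # P(X = 1 | E = e)
p_z1 = {0: 0.3, 1: 0.9}                       # P(Z = 1 | X = x)
p_y1 = {(0, 0): 0.1, (0, 1): 0.4,             # P(Y = 1 | Z = z, E = e)
        (1, 0): 0.5, (1, 1): 0.7}

def bern(p, v):
    return p if v == 1 else 1 - p

# Observational joint P(e, x, z, y) by enumeration.
joint = {(e, x, z, y): bern(p_e1, e) * bern(p_x1[e], x)
                       * bern(p_z1[x], z) * bern(p_y1[z, e], y)
         for e, x, z, y in product((0, 1), repeat=4)}

def P(**fix):
    """Marginal probability over the observed variables x, z, y."""
    return sum(pr for (e, x, z, y), pr in joint.items()
               if all(dict(x=x, z=z, y=y)[k] == v for k, v in fix.items()))

def do_truth(x0):
    """P(Y=1 | do(X=x0)) straight from the structural model."""
    return sum(bern(p_e1, e) * bern(p_z1[x0], z) * p_y1[z, e]
               for e, z in product((0, 1), repeat=2))

def frontdoor(x0):
    """P(Y=1 | do(X=x0)) via the frontdoor adjustment formula."""
    return sum((P(x=x0, z=z) / P(x=x0))
               * sum(P(x=x, z=z, y=1) / P(x=x, z=z) * P(x=x) for x in (0, 1))
               for z in (0, 1))

print(do_truth(1), frontdoor(1))   # both 0.586: the formula matches the truth
```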
|
44,784
|
An intuitive explanation of the instrumental variable
|
I also struggled with an intuitive understanding of the IV method. There are a few explanations worth considering. Let me present the one I found by myself, which I find quite convincing.
First, for me the DAG paradigm, perfectly described by @Adrian Keister, is the key to understanding what is going on here. It helps to understand both the exclusion restriction assumption and the other variants of instruments described in Brito and Pearl (2012).
Causal graphs introduce mediation in a convenient way. Let's look at the classical causal model for an instrumental variable in terms of mediation:
The effect we want to identify is $\beta$, the "true" effect of X on Y. As described in the previous answer, it cannot be identified directly because of the unmeasured confounder C.
However, we can understand the graph as saying that X mediates the effect of Z on Y.
In this setting the effect of Z on X is identified correctly; it equals $\alpha$. Likewise, the effect of Z on Y is identified correctly; it equals $\alpha \cdot \beta$.
Therefore, if we want to calculate $\beta$, we can simply divide the estimator from the second model by the estimator from the first model. The procedure is similar to estimating the effect under the front-door criterion.
What happens then in the 2SLS procedure? In my intuitive understanding we still estimate the effect of Z on Y, but the first stage has already accounted for the $\alpha$ parameter, so what is left in the second-stage regression is only $\beta$.
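The ratio logic can be checked with a quick simulation. In this Python sketch (all coefficients are invented for illustration), a naive regression of Y on X is biased by the confounder C, while the ratio of the two reduced-form estimates, cov(Z,Y)/cov(Z,X), recovers $\beta$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
alpha, beta = 1.0, 2.0            # Z -> X and X -> Y effects (hypothetical)

C = rng.normal(size=n)            # unmeasured confounder of X and Y
Z = rng.normal(size=n)            # instrument: affects Y only through X
X = alpha * Z + C + rng.normal(size=n)
Y = beta * X + C + rng.normal(size=n)

ols = np.cov(X, Y)[0, 1] / np.var(X)          # biased: picks up C as well
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]  # (alpha*beta)/alpha = beta

print(ols, iv)   # ols is near 2.33 (biased upward), iv is near 2.0
```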
Brito, Carlos, and Judea Pearl. "Generalized instrumental variables." arXiv preprint arXiv:1301.0560 (2012).
|
44,785
|
Prove that the OLS estimator of the intercept is BLUE
|
This is one of those theorems that is easier to prove in greater generality using vector algebra than it is to prove with scalar algebra. To do this, consider the multiple linear regression model $\mathbf{Y} = \mathbf{x} \boldsymbol{\beta} + \boldsymbol{\varepsilon}$ and consider the general linear estimator:
$$\hat{\boldsymbol{\beta}}_\mathbf{A}
= \hat{\boldsymbol{\beta}}_\text{OLS} + \mathbf{A} \mathbf{Y}
= [(\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} + \mathbf{A}] \mathbf{Y}.$$
Since the OLS estimator is unbiased and $\mathbb{E}(\mathbf{Y}) = \mathbf{x} \boldsymbol{\beta}$ this general linear estimator has bias:
$$\begin{align}
\text{Bias}(\hat{\boldsymbol{\beta}}_\mathbf{A}, \boldsymbol{\beta})
&\equiv \mathbb{E}(\hat{\boldsymbol{\beta}}_\mathbf{A}) - \boldsymbol{\beta} \\[6pt]
&= \mathbb{E}(\hat{\boldsymbol{\beta}}_\text{OLS} + \mathbf{A} \mathbf{Y}) - \boldsymbol{\beta} \\[6pt]
&= \boldsymbol{\beta} + \mathbf{A} \mathbf{x} \boldsymbol{\beta} - \boldsymbol{\beta} \\[6pt]
&= \mathbf{A} \mathbf{x} \boldsymbol{\beta}, \\[6pt]
\end{align}$$
and so the requirement of unbiasedness imposes the restriction that $\mathbf{A} \mathbf{x} = \mathbf{0}$. The variance of the general linear estimator is:
$$\begin{align}
\mathbb{V}(\hat{\boldsymbol{\beta}}_\mathbf{A})
&= \mathbb{V}([(\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} + \mathbf{A}] \mathbf{Y}) \\[6pt]
&= [(\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} + \mathbf{A}] \mathbb{V}(\mathbf{Y}) [(\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} + \mathbf{A}]^\text{T} \\[6pt]
&= \sigma^2 [(\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} + \mathbf{A}] [(\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} + \mathbf{A}]^\text{T} \\[6pt]
&= \sigma^2 [(\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} + \mathbf{A}] [\mathbf{x} (\mathbf{x}^\text{T} \mathbf{x})^{-1} + \mathbf{A}^\text{T}] \\[6pt]
&= \sigma^2 [(\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} \mathbf{x} (\mathbf{x}^\text{T} \mathbf{x})^{-1} + (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} \mathbf{A}^\text{T} + \mathbf{A} \mathbf{x} (\mathbf{x}^\text{T} \mathbf{x})^{-1} + \mathbf{A} \mathbf{A}^\text{T}] \\[6pt]
&= \sigma^2 [(\mathbf{x}^\text{T} \mathbf{x})^{-1} + (\mathbf{x}^\text{T} \mathbf{x})^{-1} (\mathbf{A} \mathbf{x})^\text{T} + (\mathbf{A} \mathbf{x}) (\mathbf{x}^\text{T} \mathbf{x})^{-1} + \mathbf{A} \mathbf{A}^\text{T}] \\[6pt]
&= \sigma^2 [(\mathbf{x}^\text{T} \mathbf{x})^{-1} + (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{0}^\text{T} + \mathbf{0} (\mathbf{x}^\text{T} \mathbf{x})^{-1} + \mathbf{A} \mathbf{A}^\text{T}] \\[6pt]
&= \sigma^2 [(\mathbf{x}^\text{T} \mathbf{x})^{-1} + \mathbf{A} \mathbf{A}^\text{T}]. \\[6pt]
\end{align}$$
Hence, we have:
$$\mathbb{V}(\hat{\boldsymbol{\beta}}_\mathbf{A}) - \mathbb{V}(\hat{\boldsymbol{\beta}}_\text{OLS}) = \sigma^2 \mathbf{A} \mathbf{A}^\text{T}.$$
Now, since $\mathbf{A} \mathbf{A}^\text{T}$ is a positive semi-definite matrix, we can see that the variance of the general linear estimator is minimised when $\mathbf{A} = \mathbf{0}$, which yields the OLS estimator.
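A quick numerical check of the argument in Python (the design matrix and perturbation below are arbitrary): construct any $\mathbf{A}$ satisfying $\mathbf{A}\mathbf{x} = \mathbf{0}$ and verify that the excess variance $\sigma^2 \mathbf{A}\mathbf{A}^\text{T}$ is positive semi-definite, so every coordinate of the OLS estimator has the smallest variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
x = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix

# Any A with A @ x = 0: project a random matrix onto the complement of col(x).
B = rng.normal(size=(p, n))
M = np.eye(n) - x @ np.linalg.solve(x.T @ x, x.T)   # annihilator of col(x)
A = B @ M
assert np.allclose(A @ x, 0)                         # unbiasedness constraint

# Excess variance of the general linear estimator over OLS: sigma^2 * A @ A.T.
excess = A @ A.T
eigvals = np.linalg.eigvalsh(excess)
print(eigvals)     # all nonnegative (up to rounding): OLS variance is minimal
```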
|
44,786
|
Prove that the OLS estimator of the intercept is BLUE
|
I eventually figured out where I was going wrong - so I'm going to post my work here in case anyone else gets stuck down the same rabbit hole. Start by defining an alternative estimator:
$$\tilde{\alpha} = \sum_{i=1}^n c_i y_i$$
and define $c_i = k_i + d_i$, where $k_i$ are the weights on the OLS estimator $\hat{\alpha}$, so $k_i = \Big[\frac{1}{n} - \frac{(x_i - \bar{x})}{\sum_{i=1}^n (x_i - \bar{x})^2}\bar{x}\Big]$. Then:
$$\hat{\alpha} = \sum_{i=1}^n k_i y_i = \frac{1}{n} \sum_{i=1}^n y_i - \frac{\sum_{i=1}^n (x_i - \bar{x})y_i}{\sum_{i=1}^n (x_i - \bar{x})^2}\bar{x} = \bar{y} - \hat{\beta} \bar{x}$$
Hence, our new alternative estimator is:
$$\tilde{\alpha} = \sum_{i=1}^n k_i y_i + \sum_{i=1}^n d_i y_i = \hat{\alpha} + \sum_{i=1}^n d_i y_i$$
So far so good. The next step is to find $Var(\tilde{\alpha})$:
$$Var(\tilde{\alpha}) = Var\Big(\sum_{i=1}^n k_i y_i + \sum_{i=1}^n d_i y_i\Big) = Var\Big(\sum_{i=1}^n k_i y_i\Big) + \sum_{i=1}^n d_i^2 Var(y_i) + 2\sum_{i=1}^n k_i d_i Var(y_i)$$
Alternatively,
$$Var(\tilde{\alpha}) = Var(\hat{\alpha}) + \sum_{i=1}^n d_i^2 Var(y_i) + 2\sum_{i=1}^n k_i d_i Var(y_i)$$
So, to show that $\hat{\alpha}$ is at least as efficient as any alternative (linear unbiased) estimator, i.e. $Var(\hat{\alpha}) \leq Var(\tilde{\alpha})$, we want to show that this third term drops out. Here's where I got tripped up: before, we did this with the condition of unbiasedness for $\beta$, which gave the conditions that $\sum_{i=1}^n c_i = 0$ and $\sum_{i=1}^n c_i x_i = 1$. But since we require that our new estimator is unbiased for $\alpha$, we get a different set of conditions. We need $\mathbb{E}(\tilde{\alpha}) = \alpha$, so: \begin{align*}
\mathbb{E}(\tilde{\alpha}) &= \mathbb{E}\Big(\sum_{i=1}^n c_i y_i\Big) \\
&= \mathbb{E}\Big(\alpha \sum_{i=1}^n c_i + \beta \sum_{i=1}^n c_i x_i + \sum_{i=1}^n c_i u_i\Big)
\end{align*}
Since we want to keep the first term and we require the rest of the expression to drop out (to yield $\mathbb{E}(\tilde{\alpha}) = \alpha$), unbiasedness now imposes the condition that $\sum_{i=1}^n c_i = 1$ and $\sum_{i=1}^n c_i x_i = 0$: the opposite of the conditions for $\beta$. With these conditions updated, we can now show that:
\begin{align*}
\sum_{i=1}^n k_i d_i &= \sum_{i=1}^n k_i (c_i - k_i) \\
&= \sum_{i=1}^n k_i c_i - \sum_{i=1}^n k_i^2 \\
&= \Big[\frac{1}{n} \sum_{i=1}^n c_i - \frac{\bar{x} \sum_{i=1}^n c_i x_i - \bar{x}^2 \sum_{i=1}^n c_i}{\sum_{i=1}^n (x_i - \bar{x})^2} \Big] - \Big[\frac{n}{n^2} + \frac{\sum_{i=1}^n (x_i - \bar{x})^2 \bar{x}^2}{[\sum_{i=1}^n (x_i - \bar{x})^2]^2} - 2 \frac{\bar{x}^2 - \bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2} \Big] \\
&= \Big[\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2} \Big] - \Big[\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2} \Big] \\
&= 0
\end{align*}
So, now we have shown that:
$$Var(\tilde{\alpha}) = Var(\hat{\alpha}) + \sum_{i=1}^n d_i^2 Var(y_i)$$
And since we've got a sum of squares (positive) multiplied by the variance of $y_i$ (also positive), this is sufficient to conclude that $Var(\tilde{\alpha}) \geq Var(\hat{\alpha})$, or in other words, $\hat{\alpha}$ is BLUE. As Ben's answer says, it's certainly quicker to give the proof with vector algebra - but in case anyone else has tricky examiners, then here's the scalar proof.
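The key cancellation $\sum_{i=1}^n k_i d_i = 0$ can also be verified numerically. In this Python sketch (the data are arbitrary), $d$ is any perturbation consistent with the unbiasedness conditions, i.e. $\sum d_i = 0$ and $\sum d_i x_i = 0$:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 25
x = rng.normal(size=n)
xbar, Sxx = x.mean(), ((x - x.mean())**2).sum()

# OLS weights for the intercept estimator.
k = 1/n - (x - xbar) * xbar / Sxx
assert np.isclose(k.sum(), 1) and np.isclose((k * x).sum(), 0)

# Random d with sum(d) = 0 and sum(d*x) = 0 (so c = k + d stays unbiased):
# project a random vector onto the orthogonal complement of span{1, x}.
Q, _ = np.linalg.qr(np.column_stack([np.ones(n), x]))
d = rng.normal(size=n)
d -= Q @ (Q.T @ d)

print((k * d).sum())   # ~0: the cross term in Var(alpha-tilde) vanishes
```

The cancellation is exact here because $k$ lies in the span of $\{1, x\}$ while $d$ is orthogonal to that span, which is just the scalar algebra above in vector form.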
|
44,787
|
Sample standard deviation is a biased estimator: Details in calculating the bias of $s$
|
Making the substitution $x = \frac{n}{2}-1$, you essentially want to control
$$1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}}$$
as $x \to \infty$.
Gautschi's inequality (applied with $s=\frac{1}{2}$) implies
$$
1 - \sqrt{\frac{x+1}{x+\frac{1}{2}}}
<1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}}
< 1 - \sqrt{\frac{x}{x+\frac{1}{2}}}$$
The upper and lower bounds can be rearranged as
$$
\left|1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}}\right|
< \frac{1}{2x+1} \cdot \frac{1}{1 + \sqrt{1 - \frac{1}{2x+1}}}
\approx \frac{1}{2(2x+1)}.$$
Plugging in $x=\frac{n}{2}-1$ gives a bound of $\frac{1}{2(n-1)}$. This is weaker than the author's claim of asymptotic equivalence with $\frac{1}{4n}$, but at least it is of the same order.
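The bound is easy to verify numerically. Here is a Python sketch using the log-gamma function to avoid overflow for large $n$ (the sample values of $n$ are my own choice):

```python
import math

def bias(n):
    """1 - E[s]/sigma = 1 - Gamma(n/2) / (Gamma((n-1)/2) * sqrt((n-1)/2))."""
    log_ratio = math.lgamma(n / 2) - math.lgamma((n - 1) / 2)
    return 1 - math.exp(log_ratio) / math.sqrt((n - 1) / 2)

for n in (5, 10, 100, 1000):
    b = bias(n)
    print(n, b, 1 / (2 * (n - 1)), 1 / (4 * n))
    assert 0 < b < 1 / (2 * (n - 1))   # the bound derived above
```

The printed values also show the bias tracking $\frac{1}{4n}$ much more closely than the cruder $\frac{1}{2(n-1)}$ bound, consistent with the author's asymptotic claim.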
Responses to comments:
When $x=\frac{n}{2}-1$ you have $x+1 = \frac{n}{2}$ and $x + \frac{1}{2} = \frac{n}{2} - 1 + \frac{1}{2} = \frac{n}{2} - \frac{1}{2} = \frac{n-1}{2}$. So $\frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}} = \frac{\Gamma(n/2)}{\Gamma((n-1)/2) \sqrt{(n-1)/2}}$.
|
44,788
|
Sample standard deviation is a biased estimator: Details in calculating the bias of $s$
|
The default approach for analyzing expressions involving Gamma functions is Stirling's asymptotic expansion
$$\log \Gamma(z) = \frac{1}{2}\log(2\pi) + \left(z - \frac{1}{2}\right)\log(z) - z + \frac{1}{12z} - \frac{1}{360z^3} + \cdots$$
(and usually you don't even need that final term). This gives us some intuition about how $\Gamma$ behaves and a basis for working out approximate values. Although this series is not a topic in an elementary Calculus course, the following analysis based on it uses only the most elementary facts about power series expansions (Taylor series) and so is something anybody can learn to do.
Calling this an "asymptotic expansion" means that when you fix the number of terms you use, then eventually -- for any $z$ with a suitably large size -- the approximation becomes extremely good. (This is in contrast to a power series in $1/z,$ which for a fixed $z$ ought to get better and better as more terms in the series are included.)
This expansion is so good that it is used in almost all computing software to compute values of $\Gamma.$ For example, here is a comparison of computations of $\Gamma(z)$ for $z=2,4,6,8:$
                            2           4             6      8
Stirling            0.9999787   5.9999956   119.9999880   5040
R                   1.0000000   6.0000000   120.0000000   5040
Ratio (Stirling/R)  0.9999787   0.9999993     0.9999999      1
"R" refers to the value returned by the gamma function in the R software. Look how close the approximation is even for $z=2!$
To apply this expansion, take the logarithm of the expression you wish to analyze, focusing on product terms that will simplify:
$$w=\log\left(\sqrt\frac{2}{n-1}\frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n-1}{2}\right)}\right) = \frac{1}{2}\left(\log 2 - \log(n-1)\right) + \log \Gamma\left(\frac{n}{2}\right) - \log\Gamma\left(\frac{n-1}{2}\right)$$
(You can find many accounts of Stirling's approximation in terms of $\Gamma$ itself. These are less useful than the log Gamma series because working with the logs amounts to doing some algebraic addition and subtraction, which is relatively simple.)
Now just substitute a suitable number of terms of the asymptotic series for the $\log \Gamma$ components. Sometimes you can get away with carrying the series out to the $-z$ term, but often there is so much cancellation that you need the $1/(12z)$ term to learn anything useful. Focusing on the log Gamma functions in the foregoing, it is clear the constant terms $(1/2)\log(2\pi)$ will cancel. Write down the rest:
$$\begin{aligned}
\log \Gamma\left(\frac{n}{2}\right) - \log\Gamma\left(\frac{n-1}{2}\right)&\approx \left(\frac{n}{2} - \frac{1}{2}\right)\log\left(\frac{n}{2}\right) - \frac{n}{2} + \frac{1}{12\left(\frac{n}{2}\right)}\\
&- \left[\left(\frac{n-1}{2} - \frac{1}{2}\right)\log\left(\frac{n-1}{2}\right) - \frac{n-1}{2} + \frac{1}{12\left(\frac{n-1}{2}\right)}\right]
\end{aligned}$$
Now we add the $\frac{1}{2}\left(\log 2 - \log(n-1)\right)$ terms back in and simplify as much as we can, freely using approximations for large $n$ (that is, small $\epsilon=1/(n-1)$) using the power series $\log(1 + \epsilon) = \epsilon - \epsilon^2/2 + O(\epsilon^3):$
$$\begin{aligned}
w &\approx \frac{n-1}{2}\log\left(\frac{n}{n-1}\right) - \frac{1}{2} - \frac{1}{6n(n-1)} \\
&= \frac{n-1}{2}\left(\frac{1}{n-1} - \frac{1}{2(n-1)^2} + O((n-1)^{-3})\right) - \frac{1}{2} - \frac{1}{6n(n-1)} \\
&= -\frac{1}{4(n-1)} + O(n^{-2}).
\end{aligned}$$
That wasn't particularly painful. The $O(n^{-p})$ analysis of $\log$ and the extensive cancellation are characteristic of calculations with Gamma functions.
Returning to the original question, it concerns an expression we may readily work out using the Taylor series $\exp(\epsilon) = 1 + \epsilon + O(\epsilon^2):$
$$\sigma(1 - \exp(w)) = \sigma\left(1 - \left(1 - \frac{1}{4(n-1)} + O\left(n^{-2}\right)\right)\right) = \frac{\sigma}{4(n-1)} + O(n^{-2}).$$
This agrees with the equality in the question (because $1/(n-1)=1/n$ modulo $O(n^{-2})$).
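The approximation can also be checked numerically against the exact bias $\sigma(1-\exp(w))$ computed from the Gamma ratio. A short Python check (an illustration, using `math.lgamma` to evaluate the ratio stably):

```python
import math

def exact_bias(n, sigma=1.0):
    # sigma * (1 - sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)), via log Gamma
    log_ratio = (0.5 * math.log(2 / (n - 1))
                 + math.lgamma(n / 2) - math.lgamma((n - 1) / 2))
    return sigma * (1 - math.exp(log_ratio))

for n in (10, 100, 1000):
    print(n, exact_bias(n), 1 / (4 * (n - 1)))   # the two columns converge
```

Already at $n=100$ the exact bias and $\sigma/(4(n-1))$ agree to about three significant figures.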
It should now be clear that by taking more terms in the asymptotic expansion and in the Taylor series of $\log$ and $\exp$ you can obtain a higher-order approximation of the form $\sigma\left(\frac{1}{4}(n-1)^{-1} + a_2(n-1)^{-2} + \cdots + a_p(n-1)^{-p}\right).$ Just don't go overboard with this: for small $n,$ using these additional terms will make the approximation worse; the improvement is only for extremely large values of $n.$
|
44,789
|
Sample standard deviation is a biased estimator: Details in calculating the bias of $s$
|
Comment: Using R to visualize the speed of convergence.
n <- seq(5, 300, by = 5)
# 4n times the bias of s (relative to sigma); this ratio converges to 1
bias_scaled <- 4 * n * (1 - sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2))
plot(n, bias_scaled); abline(h = 1, col = "green2", lwd = 2)
|
44,790
|
MCMC: long burn in vs re-initialization of the chain?
|
If you have only a single chain (or if you want all your chains to be completely independent), then this procedure is no different from classical burn-in.
However, it can accelerate convergence if you allow your chains to interact. Start from a random position, and let all your chains run independently for $T$ steps. Then, set all the walkers of all the chains to the same position (the point with maximum log probability seen so far across all chains), and let them run again independently. Hence, the chain that happened to be the closest to the high likelihood area will "guide" the others towards this position.
One possible drawback of this procedure is that it might increase the risk of getting stuck in a local optimum.
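As an illustration only (not from the original question), here is a minimal Python sketch of this warm-up scheme with random-walk Metropolis chains on a toy 1-D target; the chain count, proposal scale, and round lengths are all made-up choices:

```python
import math
import random

def metropolis_step(x, logp, scale=0.5):
    # one random-walk Metropolis update
    prop = x + random.gauss(0, scale)
    if math.log(random.random()) < logp(prop) - logp(x):
        return prop
    return x

def warm_up_with_resets(logp, n_chains=4, n_rounds=3, steps=200):
    # run chains independently, then restart them all from the best point seen
    xs = [random.uniform(-20, 20) for _ in range(n_chains)]
    for _ in range(n_rounds):
        best_x, best_lp = xs[0], logp(xs[0])
        for k in range(n_chains):
            for _ in range(steps):
                xs[k] = metropolis_step(xs[k], logp)
                lp = logp(xs[k])
                if lp > best_lp:
                    best_x, best_lp = xs[k], lp
        xs = [best_x] * n_chains   # every walker jumps to the best point so far
    return xs

random.seed(1)
# toy target: a standard normal log-density (up to an additive constant)
starts = warm_up_with_resets(lambda x: -0.5 * x * x)
print(starts)
```

After the warm-up, all chains sit at the same high-probability point near the mode, even though some started far out in the tails — which is exactly the "guiding" behaviour described above.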
|
44,791
|
MCMC: long burn in vs re-initialization of the chain?
|
The difference with the standard burn-in step in MCMC is that the latter is usually done blindly, as a fixed fraction of the overall number of iterations, e.g., 20%. Here the burn-in or warm-up step is more actively looking for a reasonable starting point, that is, one that is compatible with the target density. The performance of the approach however depends on the mixing behaviour of the MCMC chain. If it mixes quite slowly relative to the contemplated horizon of a few hundred iterations, the chains will have likely stayed within the attraction basin of the mode found in the previous round. It would be better to consider annealed or relaxed targets during the warm-up, where flatter targets consisting of powered densities (with powers less than one) or partial posteriors (using only a fraction of the data) would be run (and the value of the actual target density monitored nonetheless). The preliminary exploration would thus be more effective and more likely to escape attraction basins.
|
44,792
|
Regression in Causal Inference
|
Just to add to the excellent answers by Adrian and Noah, there is the residual question of:
how to establish which of the three sets of variables given above should be conditioned on.
First, let's recap how the backdoor criterion is applied to this particular DAG, which I'm reposting here:
Usually we are interested in the "average causal effect" (ACE), which is the expected increase in $Y$ for a unit change in $X$. This means that we must allow all causal paths from $X$ to $Y$ to remain open, but we must block any backdoor paths between $X$ and $Y$.
What makes this DAG quite intriguing is that $U_3$ appears to be a confounder for $X \rightarrow Y$ but is also a collider (having 2 direct causes, $U_1$ and $U_2$). So a simplistic approach would be to say that we need to condition on it to block the backdoor path $Y \leftarrow U_3 \rightarrow X$, but then we don't want to condition on it, because that will open up the backdoor path $Y \leftarrow U_2 \rightarrow U_3 \leftarrow U_1 \rightarrow X$. This is easily resolved by blocking that path by additionally conditioning on either $U_2$ or $U_1$, or indeed both.
Thus we have arrived at the 3 candidate adjustment sets $\lbrace U_1, U_3\rbrace$, $\lbrace U_2, U_3\rbrace$ and $\lbrace U_1, U_2, U_3\rbrace$.
All 3 sets will give us an unbiased estimate of the causal effect, so how do we choose between them?
We could reject the larger set $\lbrace U_1, U_2, U_3\rbrace$ on two grounds. First, model parsimony. Second, $U_2$ and $U_3$ are correlated, and this correlation could be very high, leading to instability in the estimation procedure used to fit the model. If they are not highly correlated then we might still consider this set, but with the additional consideration below:
we choose the set which gives us the most precise estimate of the causal effect - in a multivariable regression model this would be the estimate with the smallest standard error.
$\lbrace U_2, U_3\rbrace$ will yield the most precise estimate because, conditional on them, $U_1$ is an instrument and therefore should not be adjusted for. Adjusting for $U_2$ would reduce the residual variance of $Y$ more than adjusting for $U_1$ would. Thanks to Noah for pointing this out in the comments. Here is a Monte Carlo simulation in R of this DAG that demonstrates this:
library(magrittr)  # for the %>% pipe
library(ggplot2)

set.seed(15)
nsim <- 1000
se_1 <- numeric(nsim)
se_2 <- numeric(nsim)
N <- 500
for (i in 1:nsim) {
  # simulate the DAG
  U1 <- rnorm(N, 10, 2)
  U2 <- -U1 + rnorm(N, 10, 2)
  U3 <- U1 + U2 + rnorm(N, 10, 2)
  X <- U1 + U3 + rnorm(N, 10, 2)
  Y <- X + U3 + U2 + rnorm(N, 10, 2)
  # standard error of the coefficient on X, adjusting for {U1, U3}
  coefs_1 <- lm(Y ~ X + U3 + U1) %>% summary() %>% coef()
  se_1[i] <- coefs_1["X", "Std. Error"]
  # standard error of the coefficient on X, adjusting for {U2, U3}
  coefs_2 <- lm(Y ~ X + U3 + U2) %>% summary() %>% coef()
  se_2[i] <- coefs_2["X", "Std. Error"]
}
df <- data.frame(SE = c(se_1, se_2),
                 U = rep(c("U1", "U2"), each = nsim))
ggplot(df, aes(x = SE, group = U, color = U)) +
  geom_histogram(aes(y = ..density..), alpha = 0.7, position = "identity", bins = 30) +
  geom_density()
As we can see, conditioning on $U_2$ gives consistently lower standard errors than conditioning on $U_1$.
|
44,793
|
Regression in Causal Inference
|
In a regression model, conditioning on a variable simply means including it in your equation. For your graph (thank you for including a causal diagram!), let's say you wanted to condition on $\{U_1,U_3\}.$ Then in a regression setting, NOT conditioning on those variables would mean you would regress $Y=aX+\varepsilon.$ Here $\varepsilon$ is an error term (residual) to account for whatever. (Always plot your residuals!) Conditioning on $\{U_1,U_3\}$ would mean regressing $Y=aX+b_1U_1+b_3U_3+\varepsilon.$
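To make that concrete, here is a small simulated illustration (Python/NumPy rather than R, with made-up coefficients): the true effect of $X$ on $Y$ is 2, a confounder $U_1$ biases the unadjusted regression, and including $U_1$ in the equation — conditioning on it — recovers the causal slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
U1 = rng.normal(size=n)
X = U1 + rng.normal(size=n)
Y = 2 * X + 3 * U1 + rng.normal(size=n)   # true causal effect of X is 2

# NOT conditioning on U1: Y = aX + error -> confounded slope (about 3.5 here)
a_unadj = np.linalg.lstsq(np.column_stack([np.ones(n), X]), Y, rcond=None)[0][1]

# conditioning on U1: Y = aX + b1*U1 + error -> slope close to the true 2
a_adj = np.linalg.lstsq(np.column_stack([np.ones(n), X, U1]), Y, rcond=None)[0][1]

print(a_unadj, a_adj)
```

The unadjusted slope picks up the path through $U_1$; the adjusted one isolates the direct effect.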
In other settings, conditioning on a variable $U_1$ might mean running your analysis for certain known values of $U_1.$ For example, if $U_1\in\{0,1\},$ then you run your analysis for $U_1=0$ and for $U_1=1$ separately, and you DON'T aggregate the data.
Finally, you can also condition on a variable using the back-door adjustment formula, which I imagine you'll see soon, if you haven't already.
Your question as to how to know which variables to condition on is a great one! The answer is: whichever set of variables will isolate the true causal effect of $X$ on $Y.$ In your case, any of the three sets you mentioned satisfies the backdoor criterion, and thus you could use any of them. You might find, in such a circumstance, that conditioning on one particular set gives you slightly more accuracy on the test set. So pick that one. In other situations, sometimes there's only one choice.
|
44,794
|
Regression in Causal Inference
|
There are a few important distinctions I would like to make in this answer. The first is between a DAG and a parametric model. A DAG is a nonparametric system of structural equations, meaning that arrows do not necessarily represent main effects in a linear regression of an outcome on its causes. $X$, $U_2$, and $U_3$ may come together to form $Y$ in any number of ways, including linear or nonlinear forms, interacting or not. That is, the arrows from $X$, $U_2$, and $U_3$ to $Y$ represent the structural equation
$$Y=f(X, U_2, U_3)$$
but they say nothing about what $f(.)$ looks like. It's possible that $f(X, U_2, U_3)$ is $\beta_0 + \beta_1 X + \beta_2 U_2 + \beta_3 U_3$, but it could be any other form as well. Nothing about the DAG implies it is of this form or another. Statistical theory for causal inference does not depend on the functional form of $f(.)$ or of other relations in the DAG.
The implications of the DAG, such as that the backdoor path from $X$ to $Y$ is closed by conditioning on $U_2$ and $U_3$, for example, are nonparametric. That means that by nonparametrically conditioning on the adjustment sets, the nonparametric association between $X$ and $Y$ is unbiased. Your question amounts to, "What does it mean to nonparametrically condition on an adjustment set?" The answer is not linear regression. There are two ways of nonparametric conditioning to recover causal relationships: standardization and inverse probability weighting (IPW). See Hernán and Robins (2006) for a nice introduction to these techniques. I'll briefly describe them here. Importantly, what I'm about to describe is not what you should do in your dataset. These methods in their purest form assume you have population data.
Standardization involves conditioning on an adjustment set by creating strata based on a complete cross of every unique level of the variables in the set. For example, if $U_2$ had two unique values, and $U_3$ had three unique values, you would create six strata based on a complete cross of their levels. From here, you can compute any association between $X$ and $Y$ within each stratum, and that association represents a causal relationship. For example, you could compute the difference between the mean of $Y$ for those with $X=1$ and the mean of $Y$ for those with $X=0$. You could also compute a risk ratio or an odds ratio if $Y$ was binary. In each stratum, the association is unbiased. You can think of the phrase "conditional on" to mean "within strata of". If you want a single number that represents the marginal causal association (i.e., as opposed to six numbers that each represents a conditional association), you can take the sum of the conditional associations weighted by the proportion of individuals within each stratum (assuming the measure of association is collapsible).
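A toy numeric sketch of standardization (Python, with a made-up binary confounder and a true effect of 1):

```python
import random

random.seed(0)
# toy population: binary confounder U, binary treatment X, true effect of X is 1
data = []
for _ in range(100_000):
    u = random.random() < 0.4
    x = random.random() < (0.7 if u else 0.3)   # treatment depends on U
    y = x + 2.0 * u + random.gauss(0, 1)
    data.append((u, x, y))

def mean_y(u, x):
    ys = [y_i for u_i, x_i, y_i in data if u_i == u and x_i == x]
    return sum(ys) / len(ys)

# stratum-specific contrasts, weighted by how common each stratum is
ate = sum(
    (sum(1 for row in data if row[0] == u) / len(data))
    * (mean_y(u, True) - mean_y(u, False))
    for u in (False, True)
)
print(ate)
```

The weighted sum of within-stratum differences lands close to the true effect of 1, while a naive pooled difference in means would not.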
With IPW, you again form strata of the adjustment set. In each stratum, you compute the proportion of units at each level of the treatment. This is called the propensity score (PS). You can use a formula to turn the PS into inverse probability weights and then compute an association between $X$ and $Y$ using the weights (e.g., a difference in weighted means, or a ratio of weighted odds). The weighted association is unbiased for the marginal causal relationship between $X$ and $Y$.
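And a matching sketch of IPW on the same kind of toy population (here the true propensity score is known by construction; in practice it would be computed within strata or estimated):

```python
import random

random.seed(1)
# same kind of toy population; p is the known propensity score P(X=1 | U)
rows = []
for _ in range(100_000):
    u = random.random() < 0.4
    p = 0.7 if u else 0.3
    x = random.random() < p
    y = x + 2.0 * u + random.gauss(0, 1)
    rows.append((x, y, p))

# Hajek-style IPW means: weight treated by 1/p, controls by 1/(1 - p)
mean_treated = (sum(y / p for x, y, p in rows if x)
                / sum(1 / p for x, y, p in rows if x))
mean_control = (sum(y / (1 - p) for x, y, p in rows if not x)
                / sum(1 / (1 - p) for x, y, p in rows if not x))
ipw_ate = mean_treated - mean_control
print(ipw_ate)
```

The weighted difference in means again recovers a value near the true marginal effect of 1.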
Everything I've described so far is about populations and is only somewhat related to how you would arrive at an unbiased estimate of the causal relationship between $X$ and $Y$ with sample data. Generally, the nonparametric population versions of standardization and IPW are not available in your sample, so you have to use sample versions of them, and often it's not possible to apply the nonparametric formulas because there are not enough units within each stratum of a full cross of every covariate to estimate either the association between the treatment and outcome or the probability of treatment (this is called the "curse of dimensionality"). Instead, you have to make some simplifying functional form assumptions, which may be based in theory or on the data itself. Linear regression is a parametric, sample version of standardization that makes extremely strict assumptions about functional form. The traditional parametric sample form of IPW, which involves using logistic regression to estimate propensity scores, also makes extremely strict functional form assumptions. There is an entire field of statistics devoted to figuring out new ways of enhancing the sample versions of standardization and IPW, which I briefly discuss in this answer.
I highly recommend Hernán and Robins' (2020) book, which is what I read to learn about this topic. They make very clear the distinction between what a DAG tells you about causal relationships between variables and how to use models to estimate measures of association in a sample, which I guess is the distinction that I want you to take away from this.
In summary, a DAG makes implications about what variables you need to condition on to recover causal associations nonparametrically in the population. Standardization and IPW are two ways of conditioning on variables to nonparametrically recover a causal association in the population. In sample data, there are a variety of statistical methods that can be used to estimate a conditional association, including OLS and versions of IPW, both of which often make extremely strict and likely incorrect functional form assumptions.
|
Regression in Causal Inference
|
There are a few important distinctions I would like to make in this answer. The first is between a DAG and a parametric model. A DAG is a nonparametric system of structural equations, meaning that arr
|
Regression in Causal Inference
There are a few important distinctions I would like to make in this answer. The first is between a DAG and a parametric model. A DAG is a nonparametric system of structural equations, meaning that arrows do not necessarily represent main effects in a linear regression of an outcome on its causes. $X$, $U_2$, and $U_3$ may come together to form $Y$ in any number of ways, including linear or nonlinear forms, interacting or not. That is, the arrows from $X$, $U_2$, and $U_3$ to $Y$ represent the structural equation
$$Y=f(X, U_2, U_3)$$
but they say nothing about what $f(.)$ looks like. It's possible that $f(X, U_2, U_3)$ is $\beta_0 + \beta_1 X + \beta_2 U_2 + \beta_3 U_3$, but it could be any other form as well. Nothing about the DAG implies it is of this form or another. Statistical theory for causal inference does not depend on the functional form of $f(.)$ or of other relations in the DAG.
The implications of the DAG, such as the backdoor path from $X$ to $Y$ is closed by conditioning on $U_2$ and $U_3$, for example, are nonparametric. That means that by nonparametrically conditioning on the adjustment sets, the nonparametric association between is unbiased. Your question amounts to, "What does it mean to nonparmaterically condition on an adjustment set?" The answer is not linear regression. There are two ways of nonparametric conditioning to recover causal relationships: standardization and inverse probability weighting (IPW). See Hernán and Robins (2006) for a nice introduction to these techniques. I'll briefly describe them here. Importantly, what I'm about to describe is not what you should do in your dataset. These methods in their purest form assume you have population data.
Standardization involves conditioning on an adjustment set by creating strata based on a complete cross of every unique level of the variables in the set. For example, If $U_2$ had two unique values, and $U_3$ had three unique values, you would create six strata based on a complete cross of their levels. From here, you can compute any association between $X$ and $Y$ within each stratum, and that association represents a causal relationship. For example, you could compute the difference between the mean of $Y$ for those with $X=1$ in and the mean of $Y$ for those with $X=0$. You could also compute a risk ratio or an odds ratio if $Y$ was binary. In each stratum, the association is unbiased. You can think of the phrase "conditional on" to mean "within strata of". If you want a single number that represents the marginal causal association (i.e., as opposed to six numbers that each represents a conditional association), you can take the sum of the conditional associations weighted by the proportion of individuals within each stratum (assuming the measure of association is collapsible).
With IPW, you again form strata of the adjustment set. In each stratum, you compute the proportion of units at each level of the treatment. This is called the propensity score (PS). You can use a formula to turn the PS into inverse probability weights and then compute an association between $X$ and $Y$ using the weights (e.g., a difference in weighted means, or a ratio of weighted odds). The weighted association is unbiased for the marginal causal relationship between $X$ and $Y$.
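To make the two procedures concrete, here is a minimal Python sketch (my own illustration, not part of the original answer; the simulated data, the effect size of 2, and all variable names are invented). A single binary confounder stands in for the adjustment set, the sample is large enough to mimic population data, and both standardization and IPW recover the true effect while the naive contrast stays biased:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # "population-sized" so the nonparametric estimates are stable

# One binary confounder U stands in for the adjustment set {U2, U3}.
u = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.3 + 0.4 * u)           # treatment depends on U
y = 2.0 * x + 1.5 * u + rng.normal(0, 1, n)  # true causal effect of X is 2

# Naive difference in means is confounded (biased upward here).
naive = y[x == 1].mean() - y[x == 0].mean()

# Standardization: stratum-specific contrasts, weighted by P(U = v).
std_est = sum(
    (y[(x == 1) & (u == v)].mean() - y[(x == 0) & (u == v)].mean()) * (u == v).mean()
    for v in (0, 1)
)

# IPW: weight each unit by 1 / P(X = x_i | U = u_i), then contrast weighted means.
p1 = np.array([x[u == 0].mean(), x[u == 1].mean()])[u]  # P(X=1 | U) per unit
w = np.where(x == 1, 1.0 / p1, 1.0 / (1.0 - p1))
ipw_est = (np.average(y[x == 1], weights=w[x == 1])
           - np.average(y[x == 0], weights=w[x == 0]))

print(round(naive, 3), round(std_est, 3), round(ipw_est, 3))
```

Note that both adjusted estimates condition on the confounder without assuming any functional form for the outcome model, which is exactly the sense in which the DAG's implications are nonparametric.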
Everything I've described so far is about populations and is only somewhat related to how you would arrive at an unbiased estimate of the causal relationship between $X$ and $Y$ with sample data. Generally, the nonparametric population versions of standardization and IPW are not available in your sample, so you have to use sample versions of them, and often it's not possible to apply the nonparametric formulas because there are not enough units within each stratum of a full cross of every covariate to estimate either the association between the treatment and outcome or the probability of treatment (this is called the "curse of dimensionality"). Instead, you have to make some simplifying functional form assumptions, which may be based in theory or on the data itself. Linear regression is a parametric, sample version of standardization that makes extremely strict assumptions about functional form. The traditional parametric sample form of IPW, which involves using logistic regression to estimate propensity scores, also makes extremely strict functional form assumptions. There is an entire field of statistics devoted to figuring out new ways of enhancing the sample versions of standardization and IPW, which I briefly discuss in this answer.
I highly recommend Hernán and Robins' (2020) book, which is what I read to learn about this topic. They make very clear the distinction between what a DAG tells you about causal relationships between variables and how to use models to estimate measures of association in a sample, which I guess is the distinction that I want you to take away from this.
In summary, a DAG makes implications about what variables you need to condition on to recover causal associations nonparametrically in the population. Standardization and IPW are two ways of conditioning on variables to nonparametrically recover a causal association in the population. In sample data, there are a variety of statistical methods that can be used to estimate a conditional association, including OLS and versions of IPW, both of which often make extremely strict and likely incorrect functional form assumptions.
|
44,795
|
What is finite precision arithmetic and how does it affect SVD when computed by computers?
|
Floating point arithmetic is an approximation to arithmetic with real numbers. It's an approximation in the sense that not all digits of a number are stored; instead they are truncated to a certain level of precision. This creates errors, because values like $\sqrt{2}$, which have an unending sequence of digits, can't be stored (you don't have enough memory to store an unending sequence of digits). This is what is meant by "finite precision": only the most significant digits are stored.
Floating point values are represented to within some tolerance, called machine epsilon or $\epsilon$, which is the upper bound of the relative error due to rounding.
When you compose multiple operations which have finite precision, these rounding errors can accumulate, resulting in larger differences.
In the case of zero singular values, this means that due to rounding error, some singular values which are truly zero will be stored as a nonzero value.
An example: some matrix $A$ has singular values $[2,1,0.5,0]$. But your SVD algorithm may return singular values 2.0, 1.0, 0.5, 2.2e-16 or a similarly small number. That final value is numerically zero; it's zero to within the numerical tolerance of the algorithm.
The floating point standard is governed by IEEE 754.
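Both phenomena are easy to see with NumPy (my own sketch, not part of the original answer): simple decimal fractions are already stored inexactly, and the SVD of an exactly rank-deficient matrix returns a smallest singular value that is only numerically zero:

```python
import numpy as np

# 0.1 and 0.2 have no exact binary representation, so their sum rounds.
print(0.1 + 0.2 == 0.3)          # False
print(np.finfo(np.float64).eps)  # machine epsilon, about 2.22e-16

# Rank-2 matrix: the third row is the sum of the first two, so one
# singular value is exactly zero in exact arithmetic.
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
s = np.linalg.svd(a, compute_uv=False)
print(s)  # the smallest value is numerically zero (on the order of eps)
```

In practice this is why rank computations threshold the singular values against a tolerance tied to machine epsilon rather than comparing them to exactly zero.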
|
44,796
|
What is finite precision arithmetic and how does it affect SVD when computed by computers?
|
TLDR;
In computers, numbers are stored in finite slots of memory. For instance, an integer number in mathematics is a whole number such as ...,-2,-1,0,1,2,3,... that extends in both directions, from negative infinity to positive infinity. In a computer, such a number can be represented by a type like int8_t (in C++), which spans only from -128 to 127. The situation is even worse with real numbers, such as $\pi$ or $\sqrt 2$. That's what is meant by the author.
The long answer can be as long as you have time for. For instance,
"What Every Computer Scientist Should Know About Floating-Point Arithmetic" is a required read for anyone who does numbers on a computer.
I'll touch on three subjects.
Computer Integers lack some properties of mathematical integral numbers
Not only are integer types bounded, they also lack some properties you expect from integers. For instance, in math you expect that if $a>0$ and $b>0$, then $a+b>0$ too. Yet this may not hold in computer math. The following code outputs 110 and not the 111 you'd expect, because the sum 34000 does not fit in a typical 16-bit short int:
#include <iostream>

int main() {
    short int a = 17000, b = 17000, r;
    std::cout << (a > 0);  // prints 1: a is positive
    std::cout << (b > 0);  // prints 1: b is positive
    r = a + b;             // 34000 does not fit in a 16-bit short int
    std::cout << (r > 0);  // typically prints 0: r wrapped to a negative value
}
Computer "real" numbers are countable
The real numbers in mathematics are not countable. That's the big difference between the reals and the integers or rationals. It was a breakthrough for European math when Stevin introduced the notion of real numbers such as $\sqrt 2$; they fill the gaps between rational numbers such as 1/3.
Although there are infinitely many of both, there are strictly more real numbers than integers. Weirder still, in math there are exactly as many positive whole numbers as negative ones :)
These properties are not preserved in computer math. For instance, in C++ there is exactly the same, and finite!, number of bit patterns for double-precision reals as for long integers: $2^{64}$ of each. So the cardinality of what is supposed to be a continuum equals that of the (whole) integers!
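A quick way to see this finiteness (my own Python illustration, not from the answer) is to ask for the next representable double after 1.0. Between two mathematical reals there is no "next" number, but for float64 there is, and anything falling in the gap silently rounds away:

```python
import numpy as np

# The very next float64 after 1.0: the gap is exactly machine epsilon.
gap = np.nextafter(1.0, 2.0) - 1.0
print(gap)                              # 2.220446049250313e-16
print(gap == np.finfo(np.float64).eps)  # True

# Any real number inside that gap, e.g. 1 + eps/3, rounds back to 1.0:
print(1.0 + gap / 3 == 1.0)             # True
```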
Arbitrary precision math
Due to these limitations, some esoteric math problems are impossible to work on using standard machine arithmetic. So mathematicians have created so-called arbitrary-precision arithmetic libraries, which can greatly expand the range of numbers stored in a computer. However, "arbitrary" is still a finite notion: for real numbers, such libraries approximate the mathematical concept better than standard machine arithmetic does, but they don't fully implement it.
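For instance (my own sketch), Python's built-in integers are already arbitrary precision, and the standard-library decimal module lets you dial the working precision up, though it always stays finite:

```python
from decimal import Decimal, getcontext

# Python ints never overflow: 2**100 is exact, unlike a 64-bit long.
print(2 ** 100)  # 1267650600228229401496703205376

# decimal lets us choose a working precision, here 50 significant digits.
getcontext().prec = 50
root2 = Decimal(2).sqrt()
print(root2)     # sqrt(2) to 50 digits, still only an approximation
```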
|
44,797
|
Advice on running random forests on a large dataset
|
Some hints:
500k rows with 100 columns pose no problem to load and prepare, even on a normal laptop. There is no need for big data tools like Spark, which is good in situations with hundreds of millions of rows.
Good random forest implementations like ranger (available in caret) are fully parallelized. The more cores, the better.
Random forests do not scale too well to large data. Why? Their basic idea is to pool a lot of very deep trees, but growing deep trees eats a lot of resources. Playing with parameters like max.depth and num.trees helps to reduce computational time. Still, they are not ideal. In your situation, maybe 20 minutes with ranger on a normal laptop would be sufficient (a rough guess).
library(ranger)
n <- 500000
p <- 100
df <- data.frame(matrix(rnorm(n * p), ncol = p))
df$y <- factor(sample(0:1, n, TRUE))
object.size(df) # 400 MB
head(df)
fit <- ranger(y ~ .,
data = df,
num.trees = 500,
max.depth = 8,
probability = TRUE)
fit
With higher max.depth, quite a lot of additional time will be required.
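For readers working in Python rather than R, the same advice translates to scikit-learn (a sketch under the assumption that scikit-learn is installed; the random data here are a tiny stand-in for the real 500k x 100 problem, and all names and sizes are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 5_000, 20  # scaled-down stand-in for 500k rows x 100 columns
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)

clf = RandomForestClassifier(
    n_estimators=100,  # analogous to num.trees
    max_depth=8,       # analogous to max.depth; shallow trees fit much faster
    n_jobs=-1,         # use every available core, as ranger does
    random_state=0,
)
clf.fit(X, y)
proba = clf.predict_proba(X)
print(proba.shape)  # one probability column per class
```

The same two levers apply: capping tree depth and reducing the number of trees trade a little accuracy for a large drop in fit time and memory.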
|
44,798
|
Advice on running random forests on a large dataset
|
The answer is already given in the other answer (+1): the dataset you describe is not that big and should not need any specialized software or hardware to handle it. The only thing I'd add is that you should rather not use Spark. You can check those benchmarks: Spark "is slower and has a larger memory footprint", and for some versions of Spark "random forests having low prediction accuracy vs the other methods", so basically, the Spark implementation of random forest is poor.
|
44,799
|
Expected triangle area from normal distribution
|
This problem can be solved through a series of simplifications and then looking things up.
First, $\sigma$ merely establishes a unit of measurement: in a system where $\sigma$ is one unit, the covariance matrix is the identity and the unit of area is $\sigma^2:$ that's why the result is a multiple of $\sigma^2.$ So from now on we may take $\sigma=1.$
Second, let the three (independent) random points (each with coordinates from this trivariate standard Normal distribution) be $X,$ $Y,$ and $Z.$ Let $i$ denote one of the three components of these vectors. The triangle in question can be translated to the origin (without changing its area) by subtracting $Z,$ where it is determined by the vectors $U = X-Z$ and $V = Y-Z.$ The components of these vectors are Normal with zero means and covariances
$$\operatorname{Cov}(U_i,V_i) = \operatorname{Cov}(X_i-Z_i, Y_i-Z_i) = 1$$
and variances
$$\operatorname{Var}(V_i) = \operatorname{Var}(U_i) = \operatorname{Var}(X_i-Z_i) = 2.$$
Consequently the correlation of $U_i$ and $V_i$ is $\rho = 1/2.$
Third, we may exploit properties of Normal distributions to describe the distribution of $U,V$ in an equivalent way. Define $\rho^\prime = \sqrt{1-\rho^2}$ so that $\rho^2 + (\rho^\prime)^2 = 1.$
An equivalent description of the distribution of $(U,V)$ begins with independent components $U_i,W_i$ (all with zero mean and variance of $2.$) If we set
$$V = \rho^\prime\,W + \rho\,U$$
then
$$\operatorname{Var}(V) = (\rho^2 + (\rho^\prime)^2)(2) = 2$$
and
$$\operatorname{Cov}(U,V) = \rho\,(2) = 2\rho.$$
This version of $(U,V),$ which (in $n=3$ dimensions) also is $2n$-variate Normal, has exactly the same first and second moments as the original description: thus the distributions are the same.
Fourth, geometry tells us the area of the triangle $OVU$ is the same as the area of the triangle $O(\rho^\prime W)U$ and that, in turn, is $\rho^\prime$ times the area of triangle $OWU,$ which trigonometry tells us is
$$\operatorname{Area}(OWU) = \frac{1}{2} |W|\,|U|\,\sin(\theta_{UW}).$$
Here, $\theta_{UW}$ is the angle made between vectors $U$ and $W.$
Now we may call on well-known (simple) results:
$|U|/\sqrt{2}$ and $|W|/\sqrt{2}$ have $\chi(n)$ distributions.
$t = (1 + \cos(\theta_{UW}))/2$ has a Beta$((n-1)/2, (n-1)/2)$ distribution.
$|U|,|W|,$ and $\theta_{UW}$ are independent. (This follows directly from the spherical symmetry of the $n$-variate standard Normal distribution.)
This information is enough to work out the distribution of the area. (When $n=3$ it happens to have a Gamma distribution but in other dimensions its PDF is proportional to a modified Bessel $K$ function.)
The expected area is particularly easy to find. We can look up (or readily) compute the $\chi(n)$ expectation,
$$E\left[\frac{|U|}{\sqrt{2}}\right] = E\left[\frac{|W|}{\sqrt{2}}\right] = \sqrt{2} \frac{\Gamma((n+1)/2)}{\Gamma(n/2)},$$
and with almost no work we can find the expectation of $\sin(\theta_{UW}) = 2\sqrt{t(1-t)}$ as
$$\eqalign{
E\left[2t^{1/2}(1-t)^{1/2}\right] &= \frac{1}{B((n-1)/2,(n-1)/2)} \int_0^1 2t^{1/2}(1-t)^{1/2} t^{(n-1)/2-1}(1-t)^{(n-1)/2-1}\, \mathrm{d}t \\
&= \frac{2}{B((n-1)/2,(n-1)/2)} \int_0^1 t^{n/2-1}(1-t)^{n/2-1}\, \mathrm{d}t \\
&= \frac{2\,B(n/2,n/2)}{B((n-1)/2,(n-1)/2)}.
} $$
Plug everything into the area formula for triangle $OWU$ to obtain
$$\eqalign{
E[\operatorname{Area}(OWU)] &= E\left[\frac{1}{2} |W|\,|U|\,\sin(\theta_{UW})\right] \\
& = \frac{1}{2} \left((\sqrt{2})(\sqrt{2}) \frac{\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\right)^2\ \frac{2\,B\left(\frac{n}{2},\frac{n}{2}\right)}{B\left(\frac{n-1}{2},\frac{n-1}{2}\right)} \\
& = 4\frac{\Gamma\left(\frac{n+1}{2}\right)^2 \Gamma(n-1)}{\Gamma\left(\frac{n-1}{2}\right)^2 \Gamma(n)} \\
&= 4 \frac{\left(\frac{n-1}{2}\right)^2}{n-1} = n-1.
}$$
(The third line expanded the Beta functions in terms of Gamma functions and the last line used the defining relation $\Gamma(z+1) = z\Gamma(z)$ several times.)
We must remember the other two factors dropped along the way: this area has to be multiplied by $\rho^\prime$ (lost at step 4) and then by $\sigma^2$ (lost at step 1).
We have thereby obtained a general formula for the expectation of a triangular area in any number of dimensions and even when the components of the vectors $U$ and $V$ are correlated with correlation coefficient $\rho.$ (Bear in mind these components have variances of $2,$ not $1.$) It is
$$E[\operatorname{Area}(OVU)] = \rho^\prime\, (n-1)\, \sigma^2.$$
Earlier we saw that $\rho=1/2,$ so $\rho^\prime = \sqrt{3}/2$ (that is where the square root of $3$ comes from!) and for $n=3$ this yields
$$E[\operatorname{Area}(XYZ)] = \sqrt{3}\, \sigma^2.$$
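The final formula is easy to check by simulation (my own sketch, not part of the original derivation): draw three independent standard-normal points in $\mathbb{R}^3$, compute each triangle's area as half the norm of a cross product, and average:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Three independent standard-normal points in R^3 for each triangle.
x, y, z = rng.normal(size=(3, n, 3))
u, v = x - z, y - z

# Area of triangle XYZ = (1/2) |U x V|.
areas = 0.5 * np.linalg.norm(np.cross(u, v), axis=1)
print(areas.mean())  # close to sqrt(3) ~ 1.732, matching sigma^2 * sqrt(3)
```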
|
44,800
|
Expected triangle area from normal distribution
|
Rather than an answer I want to extend your speculation: with $\sigma=1$, the area follows a Gamma distribution with shape 2 and scale $\sqrt{3}/2$.
Why? First, a histogram of random samples looks very much like a Gamma distribution. (I'm using Mathematica here because I know the OP also uses Mathematica.)
(* Define the area of the triangle of 3 points in 3-space *)
x1 = {x[1], x[2], x[3]};
x2 = {x[4], x[5], x[6]};
x3 = {x[7], x[8], x[9]};
area = Area[Polygon[{x1, x2, x3}]]
$$\frac{1}{2} \sqrt{(x_2 (x_4-x_7)+x_5 x_7-x_4 x_8+x_1 (x_8-x_5))^2+(x_3 (x_4-x_7)+x_6 x_7-x_4 x_9+x_1 (x_9-x_6))^2+(x_3 (x_5-x_8)+x_6 x_8-x_5 x_9+x_2 (x_9-x_6))^2}$$
(* Look at the distribution of some random samples of area *)
n = 10000;
a = ConstantArray[0, n];
Do[a[[j]] = area /. Thread[Table[x[i], {i, 9}] ->
RandomVariate[NormalDistribution[0, 1], 9]], {j, n}]
Histogram[a, Automatic, "PDF"]
Fortunately all of the even moments of the random variable area are readily determined. So we'll match the 2nd and 4th moments of area with those of a Gamma distribution and determine the parameters of the Gamma distribution.
(* Expectation of 2nd and 4th moments of area *)
m2 = Expectation[area^2, Table[x[i] \[Distributed] NormalDistribution[0, 1], {i, 9}]]
(* 9/2 *)
m4 = Expectation[area^4, Table[x[i] \[Distributed] NormalDistribution[0, 1], {i, 9}]]
(* 135/2 *)
(* Expectation of 2nd and 4th moments of a gamma distribution *)
g2 = Expectation[z^2, z \[Distributed] GammaDistribution[a, b]]
(* a (1+a) b^2 *)
g4 = Expectation[z^4, z \[Distributed] GammaDistribution[a, b]]
(* a (1+a) (2+a) (3+a) b^4 *)
(* Get solution(s) for a and b where a > 0 and b > 0 *)
Select[{a, b} /. Solve[{m2 == g2, m4 == g4}, {a, b}], #[[1]] > 0 && #[[2]] > 0 &][[1]]
(* {2,Sqrt[3]/2} *)
So we have a Gamma distribution with parameters $2$ and $\sqrt{3}/2$ which has a mean of $2 \times \sqrt{3}/2=\sqrt{3}$.
But do higher order moments now match? Yes.
TableForm[
Table[{2 k, Expectation[area^(2 k),
Table[x[i] \[Distributed] NormalDistribution[0, 1], {i, 9}]],
Expectation[z^(2 k), z \[Distributed] GammaDistribution[2, Sqrt[3]/2]]}, {k, 1, 5}],
TableHeadings -> {None, {"\nk", "\nE[area^k]", "k-th moment of a\nGamma(2,3^(1/2)/2)"}}]
It looks like a Gamma and we can match (eventually) many even moments.
This doesn't get you a proof, but if the distribution of the area really is a multiple of a Gamma distribution, that might suggest to others some avenues for getting a proof. (This approach will almost certainly apply to your cross-product questions on several of these forums.)
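The same moment-matching check can be reproduced in Python (my own translation of the idea, not part of the answer): simulate the areas, then compare the even sample moments with the exact moments $E[X^k] = b^k\,\Gamma(a+k)/\Gamma(a)$ of a Gamma$(2, \sqrt{3}/2)$ variable:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Simulated triangle areas for three standard-normal points in R^3.
x, y, z = rng.normal(size=(3, n, 3))
areas = 0.5 * np.linalg.norm(np.cross(x - z, y - z), axis=1)

a, b = 2.0, np.sqrt(3.0) / 2.0  # conjectured Gamma shape and scale

def gamma_moment(k, a=a, b=b):
    # E[X^k] = b^k * a * (a+1) * ... * (a+k-1) for X ~ Gamma(a, b).
    return b ** k * np.prod(a + np.arange(k))

m2, m4 = np.mean(areas ** 2), np.mean(areas ** 4)
print(m2, gamma_moment(2))  # both close to 9/2
print(m4, gamma_moment(4))  # both close to 135/2
```

The exact values 9/2 and 135/2 agree with the Mathematica expectations above, lending further numerical support to the Gamma conjecture.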
|