Do we ever use maximum likelihood estimation?
Maximum likelihood estimation is often used in machine learning to train:
- neural networks, e.g. "Can we use MLE to estimate Neural Network weights?"
- linear, logistic, and multiclass logistic regression, e.g. "Why linear and logistic regression coefficients cannot be estimated using same method?"
- conditional random fields (CRF), e.g. https://www.coursera.org/learn/probabilistic-graphical-models-3-learning/lecture/oKJ1x/maximum-likelihood-for-conditional-random-fields
- hidden Markov models (HMM), e.g. https://en.wikipedia.org/w/index.php?title=Hidden_Markov_model&oldid=768811108#Learning
Note that in some cases one prefers to add some regularization, which is sometimes equivalent to maximum a posteriori estimation, e.g. "Why is Lasso penalty equivalent to the double exponential (Laplace) prior?"
Do we ever use maximum likelihood estimation? Can somebody tell me a simple case in which it is used?
A very typical case is logistic regression, a technique often used in machine learning to classify data points. For example, logistic regression can be used to classify whether an email is spam or not, or whether a person has a disease or not.
Specifically, the logistic regression model says that the probability that a data point $x_i$ is in class 1 is
$$h_\theta(x_i) = P[y_i = 1] = \frac{1}{1+e^{-\theta^T x_i}}.$$
The parameter vector $\theta$ is typically estimated using MLE. Specifically, using optimization methods, we find the estimator $\hat\theta$ that minimizes
$$-\sum_{i=1}^n \left[ y_i\log\bigl(h_{\hat\theta}(x_i)\bigr) + (1-y_i)\log\bigl(1-h_{\hat\theta}(x_i)\bigr) \right].$$
This expression is the negative log likelihood, so minimizing it is equivalent to maximizing the likelihood.
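This procedure can be sketched in a few lines of Python. The one-dimensional data set and the plain gradient-descent loop below are made up for illustration (a real fit would use a proper optimizer), but the negative log likelihood being minimized is exactly the expression above:

```python
import math

# Hypothetical 1-D data: feature values and binary class labels
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 1, 0, 1, 1]

def h(theta, x):
    """Logistic model: P(y = 1 | x) = 1 / (1 + exp(-theta * x))."""
    return 1.0 / (1.0 + math.exp(-theta * x))

def nll(theta):
    """Negative log likelihood of the data under the logistic model."""
    return -sum(y * math.log(h(theta, x)) + (1 - y) * math.log(1 - h(theta, x))
                for x, y in zip(xs, ys))

# Gradient descent on the NLL; its gradient is sum_i (h(theta, x_i) - y_i) * x_i
theta = 0.0
for _ in range(500):
    grad = sum((h(theta, x) - y) * x for x, y in zip(xs, ys))
    theta -= 0.1 * grad

print(theta, nll(theta))
```

The fitted slope is positive (larger $x$ pushes toward class 1 in this toy data), and the final NLL is lower than at the starting point $\theta = 0$.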
Do we ever use maximum likelihood estimation?
We are using MLE all the time, though we may not realize it. Here are two simple examples.
Example 1
If we observe coin flips, with $8$ heads out of $10$ flips (assumed i.i.d. Bernoulli), how do we guess the parameter $\theta$ (the probability of heads) of the coin? We may say $\theta=0.8$, by "counting".
Why does counting work? It is implicitly using MLE, where the problem is
$$
\underset{\theta}{\text{maximize}}~~~\theta^{8}(1-\theta)^{2}.
$$
Solving it takes some calculus, but the conclusion is the counting estimate.
Example 2
How would we estimate the parameters of a Gaussian distribution from data? We use the empirical mean as the estimated mean and the empirical variance as the estimated variance; these estimators also come from MLE.
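Example 1 is easy to check numerically; a grid search in Python (standing in for the calculus) recovers the counting answer:

```python
# Likelihood of theta given 8 heads out of 10 i.i.d. Bernoulli flips
def likelihood(theta):
    return theta**8 * (1 - theta)**2

# Maximize over a fine grid of candidate values in (0, 1)
grid = [i / 1000 for i in range(1, 1000)]
mle = max(grid, key=likelihood)
print(mle)  # 0.8, the "counting" answer
```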
Do we ever use maximum likelihood estimation?
Some maximum likelihood uses in wireless communication:
- Decoding of digital data from noisy received signals, with or without redundant codes.
- Estimation of time, phase, and frequency offsets in receivers.
- Estimation of (the parameters of) the propagation channel.
- Estimation of delay, angle of arrival, and Doppler shift (e.g., radar).
- Estimation of a mobile position (e.g., GPS).
- Estimation of clock offsets for synchronization in all kinds of distributed settings.
- A multitude of calibration procedures.
How to implement a mixed model using betareg function in R?
The current capabilities of betareg do not include random/mixed effects. In betareg() you can only include fixed effects, e.g., for your three-level pond variable. The betamix() function implements a finite mixture beta regression, not a mixed-effects beta regression.
In your case, I would first try to see what effect a fixed pond factor has. This "costs" you two degrees of freedom, while a random effect would be slightly cheaper at only one additional degree of freedom. But I would be surprised if the two approaches led to very different qualitative insights.
Finally, while glm() does not support beta regression, the mgcv package has a betar() family that can be used with the gam() function.
How to implement a mixed model using betareg function in R?
The package glmmTMB may be helpful for anyone with a similar question. For example, if you wanted to include pond from the above question as a random effect, the following code would do the trick:
glmmTMB(y ~ 1 + (1|pond), df, family=list(family="beta",link="logit"))
(More recent versions of glmmTMB specify this as family = beta_family(link = "logit").)
How to implement a mixed model using betareg function in R?
This started as a comment, but went long. I don't think a random effects model is appropriate here. There are only 3 ponds -- do you want to estimate a variance from 3 numbers? That's kinda what's going on with a random effects model. I'm guessing the ponds were chosen by reason of their convenience to the researcher, and not as a random sample of "Ponds of the Americas".
The advantage of a random effects model is that it allows you to construct a confidence interval on the response (activity level) that takes pond-to-pond variation into account. A fixed effects model -- in other words, treating pond like a block -- adjusts the response for the pond effect. If there were some additional treatment effect -- say two species of frog in each pond -- blocking reduces the mean square error (the denominator of the F test) and allows the effect of the treatment to shine forth.
In this example, there is no treatment effect and the number of ponds is too small for a random effects model (and probably too "non-random"), so I'm not sure what conclusions can be drawn from this study. One could get a nice estimate of the difference between the ponds, but that's about it. I don't see inferences being drawn to the wider population of frogs in other pond settings. One could frame it as a pilot study, I suppose.
Bear in mind that any use of a random effects model here is going to give a very unreliable estimate for the pond variance and must be Used With Caution.
But as to your original question -- isn't this more of a rate problem? The go-to distribution for events-per-unit-time is the Poisson. So you could do Poisson regression using the counts, with the time interval as an offset.
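A minimal sketch of that rate idea in Python (the counts and observation times below are made up): for a common rate with varying exposures, the Poisson MLE has the closed form "total events over total exposure", which the log likelihood confirms. In a full regression, the log of the time interval would enter as an offset on the log-link scale.

```python
import math

# Hypothetical data: movement counts and observation times per animal
counts = [3, 5, 2, 7, 4]
times = [10.0, 20.0, 8.0, 25.0, 15.0]

def loglik(rate):
    """Poisson log likelihood with exposure: count_i ~ Poisson(rate * time_i)."""
    return sum(y * math.log(rate * t) - rate * t - math.lgamma(y + 1)
               for y, t in zip(counts, times))

# MLE of a common rate: total events / total exposure
rate_hat = sum(counts) / sum(times)
print(rate_hat, loglik(rate_hat))
```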
How to implement a mixed model using betareg function in R?
I think you were right in guessing that you could use a binomial glm.
No movement = failure (0), movement = success (1).
I second @Placidia that you do not have enough ponds (3) to justify making a random effect out of it.
The advantage of using a binomial model comes from using more of your original data to answer your hypothesis (more statistical power). The possibilities are:
- using a mixed-effects extension (binomial glmer) with tadpole identity as an individual random intercept (or slope)
- looking into the effect of different time intervals (categorically).
It all depends on your hypothesis. Maybe you do not need anything more complex than a simple beta regression. It is not clear from your original question what your hypothesis is (but it is certainly not about comparing activities between ponds, as you said you are not interested in that).
Var(X) is known, how to calculate Var(1/X)?
It is impossible.
Consider a sequence $X_n$ of random variables, where
$$P(X_n=n-1)=P(X_n=n+1)=0.5$$
Then:
$$\newcommand{\Var}{\mathrm{Var}}\Var(X_n)=1 \quad \text{for all $n$}$$
But $\Var\left(\frac{1}{X_n}\right)$ approaches zero as $n$ goes to infinity:
$$\Var\left(\frac{1}{X_n}\right)=\left(0.5\left(\frac{1}{n+1}-\frac{1}{n-1}\right)\right)^2$$
This example uses the fact that $\Var(X)$ is invariant under translations of $X$, but $\Var\left(\frac{1}{X}\right)$ is not.
But even if we assume $\mathrm{E}(X)=0$, we can't compute $\Var\left(\frac{1}{X}\right)$:
Let
$$P(X_n=-1)=P(X_n=1)=0.5\left(1-\frac{1}{n}\right)$$
and
$$P(X_n=0)=\frac{1}{n} \quad \text{for $n>0$} $$
Then $\Var(X_n)$ approaches 1 as $n$ goes to infinity, but $\Var\left(\frac{1}{X_n}\right)=\infty$ for all $n$.
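The first counterexample is easy to verify numerically. A small helper (hypothetical, not from the original answer) computes the variance of a symmetric two-point distribution:

```python
def var_two_point(a, b):
    """Variance of a variable equal to a or b, each with probability 1/2."""
    mean = (a + b) / 2
    return ((a - mean)**2 + (b - mean)**2) / 2

# Var(X_n) is 1 for every n, but Var(1/X_n) shrinks toward zero
for n in (10, 100, 1000):
    print(n, var_two_point(n - 1, n + 1), var_two_point(1 / (n - 1), 1 / (n + 1)))
```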
Var(X) is known, how to calculate Var(1/X)?
You can use Taylor series to get an approximation of the low order moments of a transformed random variable. If the distribution is fairly 'tight' around the mean (in a particular sense), the approximation can be pretty good.
So for example
$$g(X) = g(\mu) + (X-\mu) g'(\mu) + \frac{(X-\mu)^2}{2} g''(\mu) + \ldots$$
so
\begin{eqnarray}
\text{Var}[g(X)] &=& \text{Var}[g(\mu) + (X-\mu) g'(\mu) + \frac{(X-\mu)^2}{2} g''(\mu) + \ldots]\\
&=& \text{Var}[(X-\mu) g'(\mu) + \frac{(X-\mu)^2}{2} g''(\mu) + \ldots]\\
&=& g'(\mu)^2 \text{Var}[(X-\mu)] + 2g'(\mu)\text{Cov}[(X-\mu),\frac{(X-\mu)^2}{2} g''(\mu) + \ldots] \\& &\quad+ \text{Var}[\frac{(X-\mu)^2}{2} g''(\mu) + \ldots]\\
\end{eqnarray}
Often, only the first term is taken:
$$\text{Var}[g(X)] \approx g'(\mu)^2 \text{Var}(X)$$
In this case (assuming I didn't make a mistake), with $g(X)=\frac{1}{X}$, $\text{Var}[\frac{1}{X}] \approx \frac{1}{\mu^4} \text{Var}(X)$.
Wikipedia: Taylor expansions for the moments of functions of random variables
---
Some examples to illustrate this. I'll generate two (gamma-distributed) samples in R, one with a 'not-so-tight' distribution about the mean and one a bit tighter.
a <- rgamma(1000,10,1) # mean and variance 10; the mean is not many sds from 0
var(a)
[1] 10.20819 # reasonably close to the population variance
The approximation suggests the variance of $1/a$ should be close to $(1/10)^4 \times 10 = 0.001$
var(1/a)
[1] 0.00147171
Algebraic calculation shows that the actual population variance is $1/648 \approx 0.00154$.
Now for the tighter one:
a <- rgamma(1000,100,10) # should have mean 10 and variance 1
var(a)
[1] 1.069147
The approximation suggests the variance of $1/a$ should be close to $(1/10)^4 \times 1 = 0.0001$
var(1/a)
[1] 0.0001122586
Algebraic calculation shows that the population variance of the reciprocal is $\frac{10^2}{99^2\times 98} \approx 0.000104$.
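Both algebraic values can be reproduced exactly: for $X \sim \text{Gamma}(\alpha, \lambda)$ (shape, rate) with $\alpha > 2$, the inverse moments are $E[1/X] = \lambda/(\alpha-1)$ and $E[1/X^2] = \lambda^2/((\alpha-1)(\alpha-2))$. A sketch in Python with exact rational arithmetic:

```python
from fractions import Fraction

def var_reciprocal_gamma(shape, rate):
    """Exact Var(1/X) for X ~ Gamma(shape, rate), valid for shape > 2."""
    e1 = Fraction(rate, shape - 1)                     # E[1/X]
    e2 = Fraction(rate**2, (shape - 1) * (shape - 2))  # E[1/X^2]
    return e2 - e1**2

print(var_reciprocal_gamma(10, 1))    # equals 1/648, the first example
print(var_reciprocal_gamma(100, 10))  # equals 10^2/(99^2 * 98), the second
```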
Sampling from $x^2\phi(x)$?
Some guesswork suggests that $X$ can perhaps be simulated by a suitable power transformation of a Gamma random variable $Y$ multiplied by a random sign to make the resulting density symmetric about zero. If $Y$ has density $$f_Y(y)=\frac{\lambda^\alpha}{\Gamma(\alpha)}y^{\alpha-1}e^{-\lambda y},$$
then the density of $X=Y^k I$ where $P(I=-1)=P(I=1)=1/2$ becomes
\begin{align}
f_X(x)
&=\frac12 f_Y(|x|^{1/k})\left|\frac{dy}{dx}\right|
\\&=\frac12 \frac{\lambda^\alpha}{\Gamma(\alpha)}|x|^{(\alpha-1)/k}e^{-\lambda |x|^{1/k}}\frac1k|x|^{1/k-1}.
\end{align}
So for $k=1/2$ (a square root transformation), the gamma rate parameter $\lambda=1/2$ and the gamma shape parameter $\alpha=3/2$, we obtain the desired $f_X$.
An R implementation follows below. Note that this involves using rgamma which uses "a modified rejection technique" (Ahrens and Dieter, 1982) so it is not clear if this is the most efficient method.
n <- 1e+4
y <- rgamma(n, shape=3/2, rate=1/2)
x <- sqrt(y)*sample(c(-1, 1), n, replace=TRUE)
hist(x, prob=TRUE, breaks=100)
curve(x^2*dnorm(x), add=TRUE)
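The same generator can be sketched in Python using the standard library's gammavariate (shape 3/2 with scale 2 matches the rate-1/2 gamma above), and checked against the known second moment of this density, $E[X^2] = E[Y] = 3$:

```python
import math
import random

random.seed(1)

def rmaxwell(n):
    """Draw n samples from f(x) = x^2 * phi(x) via the gamma transform."""
    out = []
    for _ in range(n):
        y = random.gammavariate(1.5, 2.0)  # shape 3/2, scale 2 (i.e. rate 1/2)
        sign = 1.0 if random.random() < 0.5 else -1.0
        out.append(sign * math.sqrt(y))
    return out

xs = rmaxwell(100_000)
m2 = sum(x * x for x in xs) / len(xs)  # should be close to E[X^2] = 3
print(m2)
```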
Sampling from $x^2\phi(x)$?
One could consider the following alternatives to Jarle Tufto's most efficient solution:
namely, transform back into $X$ a Gamma $\mathcal G(3/2,1/2)$ [or equivalently a $\chi^2_3$] variate
rjt=function(n)sqrt(rgamma(n,3/2)*2)*sample(c(-1,1),n,rep=TRUE)
use a numerical inverse of the cdf$$F(x)=\int_{-\infty}^x y^2\varphi(y)\,\text dy=\int_{-\infty}^x y\,\varphi^\prime(y)\,\text dy=\Phi(x)-x\varphi(x)$$
pf=function(x)pnorm(x)-x*dnorm(x)
df=function(x)x^2*dnorm(x)#the denominator is the Normal variance, 1
qf=function(u)uniroot(function(x)pf(x)-u,int=c(-10,10))$root
rf=function(n)apply(as.matrix(runif(n)),1,qf)
apply an accept-reject solution based on a Normal density with a larger variance$^1$ by noticing that$$f(x)=\frac{1}{\sqrt{2\pi}}x^2e^{-x^2/4}e^{-x^2/4}\le\frac{4}{e}\frac{\sqrt{2}}{\sqrt{4\pi}}e^{-x^2/4}=M \varphi(x;0,2)$$
arf=function(n,M=4*sqrt(2)/exp(1)){
z=rnorm(M*n,0,sd=sqrt(2))
z[runif(M*n)<df(z)/dnorm(z,sd=sqrt(2))/M]}
The correct fit of all three generators can be tested by the following qq-plot command:
plot(1:1e4,pf(sort(rf(1e4))))
This range of solutions produces the following relative and respective running times
test replications elapsed relative user.self
3 accept-reject 100 0.294 2.211 0.294
2 inverse 100 74.707 570.282 74.681
1 transform 100 0.131 1.000 0.131
which shows the superiority of the original transform$^2$ approach.
$^1$As an addendum, the accept-reject solution can be improved by optimising the break-up$$\exp\{-x^2/2\}=\exp\{-\alpha x^2/2\}\exp\{-(1-\alpha)x^2/2\}$$ as a function of $\alpha$, since it is easy to show that $\alpha=2/3$ minimises the upper bound $M=2/(\alpha\sqrt{1-\alpha}\,e)$. However, this finer tuning of accept-reject only brings a gain of less than 10%:
3 accept-reject 0.273 2.133 0.269
2 inverse 73.895 577.305 73.878
1 transform 0.128 1.000 0.128
$^2$Devroye (1986) mentions this distribution a few times in his Bible of simulation methods (p.119, p.176) as the Maxwell distribution. In particular, if $X$ follows this distribution and if $U$ is Uniform on $(0,1)$, $Y=UX$ follows a Normal distribution. Sadly, the reciprocal does not work: simulating a Normal variate and dividing by an independent Uniform variate does not return a Maxwell variate! However, inverting the joint distribution of $(U,X)$ into a joint distribution of $(X,Y)$ leads to the (conditional on $Y$) representation
$$X=\frac{Y}{|Y|}\sqrt{Y^2-2\log(V)}\qquad Y\sim\mathcal N(0,1)\quad V\sim\mathcal U(0,1)$$
i.e.
rmax=function(n)sqrt((y<-rnorm(n))**2-2*log(runif(n)))*y/abs(y)
which proves to be 20% faster than the Gamma transform in rjt :
test replications elapsed relative user.self
2 devroye 100 0.991 1.000 0.992
1 tufto 100 1.246 1.237 1.245
despite $Y^2-2\log(V)$ being just another representation of a Gamma $\mathcal G(3/2,1/2)$ variate.
|
Sampling from $x^2\phi(x)$?
|
One could consider the following alternatives to Jarle Tufto's most efficient solution:
namely, transform back into $X$ a Gamma $\mathcal G(3/2,1/2)$ [or equivalently a $\chi^2_3$] variate
rjt=funct
|
Sampling from $x^2\phi(x)$?
One could consider the following alternatives to Jarle Tufto's most efficient solution:
namely, transform back into $X$ a Gamma $\mathcal G(3/2,1/2)$ [or equivalently a $\chi^2_3$] variate
rjt=function(n)sqrt(rgamma(n,3/2)*2)*sample(c(-1,1),n,rep=TRUE)
use a numerical inverse of the cdf$$F(x)=\int_{-\infty}^x y^2\varphi(y)\,\text dy=\int_{-\infty}^x y\,\varphi^\prime(y)\,\text dy=\Phi(x)-x\varphi(x)$$
pf=function(x)pnorm(x)-x*dnorm(x)
df=function(x)x^2*dnorm(x)#the denominator is the Normal variance, 1
qf=function(u)uniroot(function(x)pf(x)-u,int=c(-10,10))$root
rf=function(n)apply(as.matrix(runif(n)),1,qf)
apply an accept-reject solution based on a Normal density with a larger variance$^1$ by noticing that$$f(x)=\frac{1}{\sqrt{2\pi}}x^2e^{-x^2/4}e^{-x^2/4}\le\frac{4}{e}\frac{\sqrt{2}}{\sqrt{4\pi}}e^{-x^2/4}=M \varphi(x;0,2)$$
arf=function(n,M=4*sqrt(2)/exp(1)){
z=rnorm(M*n,0,sd=sqrt(2))
z[runif(M*n)<df(z)/dnorm(z,sd=sqrt(2))/M]}
The correct fit of all three generators can be tested by the following qq-plot command:
plot(1:1e4,pf(sort(rf(1e4))))
This range of solutions produces the following relative and respective running times
test replications elapsed relative user.self
3 accept-reject 100 0.294 2.211 0.294
2 inverse 100 74.707 570.282 74.681
1 transform 100 0.131 1.000 0.131
which shows the superiority of the original transform$^2$ approach.
$^1$As an addendum, the accept-reject solution can be improved by optimising the break-up$$\exp\{-x^2/2\}=\exp\{-\alpha x^2/2\}\exp\{-(1-\alpha)x^2/2\}$$ as a function of $\alpha$, since it is easy to show that $\alpha=2/3$ minimises the upper bound $M=\dfrac{2}{\alpha\sqrt{1-\alpha}\,e}$. However, this finer tuning of accept-reject only brings a gain of less than 10%:
3 accept-reject 0.273 2.133 0.269
2 inverse 73.895 577.305 73.878
1 transform 0.128 1.000 0.128
$^2$Devroye (1986) mentions this distribution a few times in his Bible of simulation methods (p.119, p.176) as the Maxwell distribution. In particular, if $X$ follows this distribution and if $U$ is Uniform on $(0,1)$, $Y=UX$ follows a Normal distribution. Sadly, the converse does not work: simulating a Normal variate and dividing by an independent Uniform variate does not return a Maxwell variate! However, inverting the joint distribution of $(U,X)$ into a joint distribution of $(X,Y)$ leads to the (conditional on $Y$) representation
$$X=\frac{Y}{|Y|}\sqrt{Y^2-2\log(V)}\qquad Y\sim\mathcal N(0,1)\quad V\sim\mathcal U(0,1)$$
ie
rmax=function(n)sqrt((y<-rnorm(n))**2-2*log(runif(n)))*y/abs(y)
which proves to be 20% faster than the Gamma transform in rjt :
test replications elapsed relative user.self
2 devroye 100 0.991 1.000 0.992
1 tufto 100 1.246 1.237 1.245
despite $Y^2-2\log(V)$ being just another representation of a Gamma $\mathcal G(3/2,1/2)$ variate.
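Outside R, the footnote's representation is easy to sanity-check. Here is a plain-Python sketch (the function name rmax_py and the sample size are my own choices) of $X=\operatorname{sign}(Y)\sqrt{Y^2-2\log V}$, checked via the moments $\mathbb E[X]=0$ and $\mathbb E[X^2]=3$ implied by $X^2\sim\chi^2_3$:

```python
import math
import random

def rmax_py(n, rng):
    """Devroye-style Maxwell sampler: X = sign(Y)*sqrt(Y^2 - 2*log(V)),
    with Y standard normal and V uniform on (0, 1]."""
    out = []
    for _ in range(n):
        y = rng.gauss(0.0, 1.0)
        v = 1.0 - rng.random()  # shift [0, 1) to (0, 1] so log(v) is finite
        out.append(math.copysign(math.sqrt(y * y - 2.0 * math.log(v)), y))
    return out

rng = random.Random(0)
xs = rmax_py(200000, rng)
m1 = sum(xs) / len(xs)                 # density is symmetric, so close to 0
m2 = sum(x * x for x in xs) / len(xs)  # X^2 ~ chi-square(3), so close to 3
```

Under these moment checks the sampler agrees with the $\mathcal G(3/2,1/2)$ transform; timings, of course, depend on language and hardware.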
|
16,013
|
Sampling from $x^2\phi(x)$?
|
First of all, it is worth noting that the scaling constant in this case is the second raw moment of the standard normal distribution, which is:
$$\int_{-\infty}^\infty x^2 \phi(x) \ dx = 1.$$
Consequently, your density function is:
$$f(x) = \frac{x^2}{\sqrt{2 \pi}} \cdot \exp \Big( -\frac{x^2}{2} \Big)
\quad \quad \quad \text{for all } x \in \mathbb{R}.$$
You can sample from this density without rejection using the transformation (see below for proof):
$$X = \text{SGN} \cdot \chi
\quad \quad \quad
\text{SGN} \sim 1 - 2 \cdot \text{Bern}(\tfrac{1}{2})
\quad \quad \quad
\chi \sim \text{Chi}(\text{df} = 3).$$
We can easily implement this transformation method in R to produce the following simulation function (which is vectorised to allow you to produce any number of simulations).
rtransnormdist <- function(n) {
CHI <- sqrt(rchisq(n, df = 3))
SGN <- sample(c(-1, 1), size = n, replace = TRUE)
SGN*CHI }
We can confirm that this produces the required density as follows:
set.seed(1)
SIMS <- rtransnormdist(10^6)
plot(density(SIMS), lty = 2, lwd = 2, main = 'Simulated Density')
curve(x^2*dnorm(x), col = 'red', lty = 3, lwd = 2, add = TRUE)
Proof of density transformation: Using the stated transformation and applying the rules for density transformations we obtain:
$$\begin{align}
f_{|X|}(x)
= f_\chi(x) \cdot \Bigg| \frac{dx}{d \chi} \Bigg|
&= \text{Chi}(x|3) \times 1 \\[6pt]
&= \frac{x^2 \sqrt{2}}{\sqrt{\pi}} \cdot \exp \Big( -\frac{x^2}{2} \Big) \cdot \mathbb{I}(x \geqslant 0), \\[6pt]
\end{align}$$
which then gives the density:
$$\begin{align}
f_{X}(x)
= \frac{1}{2} \cdot f_{|X|}(|x|)
&= \frac{x^2}{\sqrt{2 \pi}} \cdot \exp \Big( -\frac{x^2}{2} \Big). \\[6pt]
\end{align}$$
This confirms the desired density function.
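The same transformation translates directly outside R. As a sketch (plain Python, with a $\chi_3$ draw built from three squared standard normals rather than a library call; the helper name is my own), one can verify the implied moments $\mathbb E[X]=0$ and $\mathbb E[X^2]=3$:

```python
import math
import random

def rtransnormdist_py(n, rng):
    """Sample X = SGN * CHI, with SGN uniform on {-1, +1} and CHI ~ Chi(3).
    A Chi(3) draw is the square root of a sum of three squared N(0,1)s."""
    out = []
    for _ in range(n):
        chi = math.sqrt(sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(3)))
        out.append(rng.choice((-1.0, 1.0)) * chi)
    return out

rng = random.Random(1)
xs = rtransnormdist_py(200000, rng)
mean = sum(xs) / len(xs)               # symmetric density: close to 0
m2 = sum(x * x for x in xs) / len(xs)  # X^2 ~ chi-square(3): close to 3
```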
|
16,014
|
What is the need of assumptions in linear regression?
|
You are correct - you do not need to satisfy these assumptions to fit a least squares line to the points. You need these assumptions to interpret the results. For example, assuming there was no relationship between an input $X_1$ and $Y$, what is the probability of getting a coefficient $\beta_1$ at least as great as what we saw from the regression?
|
16,015
|
What is the need of assumptions in linear regression?
|
Try the image of Anscombe's quartet from Wikipedia to get an idea of some of the potential issues with interpreting linear regression when some of those assumptions are clearly false: most of the basic descriptive statistics are the same in all four (and the individual $x_i$ values are identical in all but the bottom right)
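If you want to see the numbers rather than the picture, here is a plain-Python sketch (data transcribed from the classic quartet; the `pearson` helper is my own) showing that all four pairs share essentially the same means and correlation despite wildly different shapes:

```python
import math

# Anscombe's quartet, transcribed from the classic 1973 paper.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4   = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
ys = [
    [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
]
xs = [x123, x123, x123, x4]

def pearson(x, y):
    """Sample Pearson correlation of two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

cors = [pearson(x, y) for x, y in zip(xs, ys)]
ymeans = [sum(y) / len(y) for y in ys]
# all four correlations agree to two decimals (~0.816), all y-means are ~7.50
```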
|
16,016
|
What is the need of assumptions in linear regression?
|
You don't need those assumptions to fit a linear model. However, your parameter estimates could be biased or could fail to have minimum variance. Violating the assumptions makes it harder to interpret the regression results, for example, when constructing a confidence interval.
|
16,017
|
What is the need of assumptions in linear regression?
|
Ok, the answers so far go like this: If we violate the assumptions then bad things can happen. I believe that the interesting direction is: When all assumptions that we need (actually a little different from the ones above) are met, why and how can we be sure that linear regression is the best model?
I think the answer to that question goes like this: If we make the assumptions as in the answer of this question then we can compute the conditional density $p(y_i|x_i)$. From this we can compute $E[Y_i|X_i=x_i]$ (the factorization of the conditional expectation at $x_i$) and see that it is indeed the linear regression function. Then we use this in order to see that this is the best function with respect to the true risk.
|
16,018
|
What is the need of assumptions in linear regression?
|
The two key assumptions are
Independence of observations
Mean is not related to the variance
See the discussion in Julian Faraway's book.
If these are both true, OLS is surprisingly resistant to breaches in the other assumptions you have listed.
|
16,019
|
R: compute correlation by group
|
The package plyr is the way to go.
Here is a simple solution:
xx <- data.frame(group = rep(1:4, 100), a = rnorm(400) , b = rnorm(400) )
head(xx)
require(plyr)
func <- function(xx)
{
return(data.frame(COR = cor(xx$a, xx$b)))
}
ddply(xx, .(group), func)
The output will be:
group COR
1 1 0.05152923
2 2 -0.15066838
3 3 -0.04717481
4 4 0.07899114
|
16,020
|
R: compute correlation by group
|
If you are inclined to use functions in the base package, you can use the by function, then reassemble the data:
xx <- data.frame(group = rep(1:4, 100), a = rnorm(400) , b = rnorm(400) )
head(xx)
# This returns a "by" object
result <- by(xx[,2:3], xx$group, function(x) {cor(x$a, x$b)})
# You get pretty close to what you want if you coerce it into a data frame via a matrix
result.dataframe <- as.data.frame(as.matrix(result))
# Add the group column from the row names
result.dataframe$C <- rownames(result)
|
16,021
|
R: compute correlation by group
|
Another example using base packages and Tal's example data:
DataCov <- do.call( rbind, lapply( split(xx, xx$group),
function(x) data.frame(group=x$group[1], mCov=cov(x$a, x$b)) ) )
|
16,022
|
R: compute correlation by group
|
Using data.table is shorter than dplyr (column names adjusted to match the xx example used in the other answers):
library(data.table)
dt <- as.data.table(xx)
dtCor <- dt[, .(COR = cor(a, b)), by = group]
|
16,023
|
R: compute correlation by group
|
Here's a similar method that will give you a table with the n's and p values for each correlation as well (rounded to 3 decimal places for convenience), applied per group via ddply:
library(Hmisc)
library(plyr)
corrByGroup <- function(xx){
  RC <- rcorr(xx$a, xx$b)  # compute once, then extract r, n and P
  data.frame(correl = round(RC$r[1, 2], digits = 3),
             n      = RC$n[1, 2],
             pvalue = round(RC$P[1, 2], digits = 3))
}
ddply(xx, .(group), corrByGroup)
|
16,024
|
R: compute correlation by group
|
Here's a more modern solution, using the dplyr package (which didn't yet exist when the question was asked):
Construct the input:
xx <- data.frame(group = rep(1:4, 100), a = rnorm(400) , b = rnorm(400) )
Compute the correlations:
library(dplyr)
xx %>%
group_by(group) %>%
summarize(COR=cor(a,b))
The output:
Source: local data frame [4 x 2]
group COR
(int) (dbl)
1 1 0.05112400
2 2 0.14203033
3 3 -0.02334135
4 4 0.10626273
|
16,025
|
Nitpicking about the active/passive usage of "correlated"
|
Correlate is now commonly used as a verb. You pointed to the use of this word as transitive vs. intransitive, and stated that the latter is right and the former is, perhaps, wrong.
Note that, unlike you, I'm not framing this as a difference between active and passive forms, because that distinction is a red herring here. Consider this: the form you find more comfortable, "A is correlated to B," is indeed passive. However, it's not the passivity that makes it sound natural to you; it's that it's intransitive. The active intransitive form "A correlates to B" sounds right to you for the same reason, whereas the transitive form "we correlate A to B" does not.
I must agree that the intransitive form sounds more natural, both in passive and active forms. Moreover, when Galton first introduced the term, he used it only as an intransitive verb, in passive form, e.g. "the length of the arm is said to be correlated with that of the leg." According to Pearson, it was Galton who first defined the term as a statistical concept in "Co-relations and their Measurement, chiefly from Anthropometric Data” in 1888. Although the word itself was used before in other contexts. Pearson's paper "Notes on the History of Correlation" is here.
Now, I have to break a bad news: both forms have been in use for quite some time. Here's an example from The Standard American Encyclopedia of Arts... published in 1898!
-- verb intransitive – correlate, correlating. To have reciprocal relation, to be
reciprocally related, as father and son. -- verb transitive. To place in
reciprocal relation: to determine the relations between, as between
several objects or phenomena which bear a resemblance to one another
As you can see both intransitive and transitive forms are described, i.e. "A correlates to B" and "we correlate A to B" are both fine. See also this discussion.
The verb "correlate" was created by back-formation from the noun. For instance, apparently, a verb "translate" was created similarly from a noun "translation".
@kjetilbhalvorsen brought up an example "to google", but it's a different mechanism of word formation called verbing, and a special case of it too. Normally, verbing is making verbs from nouns like "medal" $\to$ "to medal." In this case we take an eponym "Google" and make a verb "to google." It's similar to "Xerox" $\to$ "to xerox", and even an older example of a guy named Charles Boycott $\to$ "to boycott."
What's even more interesting about Google case, is that it's made from a recently made up word "googol."
|
16,026
|
Nitpicking about the active/passive usage of "correlated"
|
I see where you are coming from with this – if you say something like "we correlated A with B", you might risk giving the impression that you introduced correlation between A and B where perhaps none existed before.
In my view, there are better ways to say this, such as: "we investigated whether A and B were correlated" or "we studied the (linear?) association/relationship between A and B".
Can you get away with using "we correlated A and B" from a grammatical and/or statistical viewpoint? The answer is yes. Is that the best way you can get your point across? My own answer to this last question would be No.
|
16,027
|
Nitpicking about the active/passive usage of "correlated"
|
I don't think this is a grammatical issue, just a question of how words are used, or should be best used, in practice.
A meta-lesson I have learned over several years is that a claim that something is ungrammatical is fragile. There is always another grammarian who can be found who will dispute the assertion. (I am of a generation firmly told never to split infinitives because, supposedly, the practice is totally ungrammatical; that was rebutted as bogus logic (a misconceived analogy with Latin) long before I was taught this in the 1960s; my teachers were, I guess now, just passing on what they had been told in their youth, and so forth. Nevertheless I still can't split an infinitive willingly.)
I would understand "we correlated $X$ and $Y$" easily as "we calculated the correlation between $X$ and $Y$". It's fairly common usage, I think. Even if it isn't common usage, I don't see what is ungrammatical about it. There is an associated question of how far the correlation exists as an inevitable consequence of the data, as a mathematical or even real fact, before its value is calculated, or indeed regardless of whether that is done. I can't say I have ever worried about that.
But I wouldn't want to write that in a paper or catch myself saying it in a presentation. That is mostly a question of personal style, and as always agreement and disagreement about style are both to be expected.
I can't imagine saying "We plotted $Y$ against $X$", because I would just say "Here is a plot..." or "Figure 2 is a plot ...". Similarly, at most, I would just say "The correlation is ...".
It's worth remembering that Francis Galton hijacked correlation, which was a fairly unusual but long-existing word, for the present statistical purpose. Now I guess that the statistical sense of correlation (or a more diluted or generalised sense of it) is primary usage.
Notes:
You want nit-pickers to comment, so in that vein I will say that $A$ and $B$ are not congenial notation for variables, even in complete abstraction.
Never heard of "decorrelated"!
|
16,028
|
Nitpicking about the active/passive usage of "correlated"
|
"Correlate" is a back formation of "correlation", which comes from "co" (with) and "relation". Which I suppose is a bit redundant, as a relation is always with something else. It would be acceptable to say "We related X to Y", so I think that from a "lay" perspective, it makes sense to say "We correlated X to Y". One could argue that in a math context, "correlate" has a specific meaning that precludes this use, but that raises questions such as "What is that meaning?" "How was it established?", and "In what circumstances is it reasonable to call for math specific usage?". For instance, there was a Jeopardy! clue along the lines of "It's the set of points within a fixed distance of a central point." The "correct" response was "What is sphere", but mathematically the correct response was "What is ball?" Even though they were discussing math, this is a program directed at the general populace, so making the distinction was reasonable.
So I would say that it is reasonable to make the distinction yourself, and even reasonable to expect someone speaking to a math audience to make the distinction, but it's acceptable in more lay contexts to not do so.
I might be wrong that this really constitutes active vs passive voice grammatically
I think you are. Generally speaking, if something is in the passive voice, then you can add a "by ..." at the end, e.g. "The passive voice is frequently used [by writers]".
but what I describe is the difference between doing something to A and B such that they each end up changed
I don't think that's an accurate description. If someone were to say "We compared A and B", would they be implying that A and B were changed? Just because something is grammatically the object of a verb, doesn't mean that anything was actually done to it.
|
16,029
|
Nitpicking about the active/passive usage of "correlated"
|
I don't think this is nitpicking at all.
The first time I heard someone say "We correlated A with B", the speaker had the ability to influence A. I took their saying to mean "A and B were first uncorrelated, but we then altered A so as to have it strongly correlated with B". I spent a lot of time trying to figure out why they had done this. Eventually I realized that they meant "we found a correlation between A and B", and their motivation became much more clear at that point.
|
16,030
|
Nitpicking about the active/passive usage of "correlated"
|
This usage of the verb correlate may be uncommon but it is grammatically correct since it can be used as a transitive verb.
correlate: to present or set forth so as to show relationship. "He correlates the findings of the scientists, the psychologists, and the mystics."
See this Definition for reference.
|
16,031
|
Is the average of positive-definite matrices also positive-definite?
|
Yes, it is. jth's answer is correct (+1), but I think you can get a much simpler explanation with just basic linear algebra.
Assume $A$ and $B$ are positive definite matrices of size $n$. By definition this means that for all nonzero $u \in \mathbb{R}^n$, $0 < u^TAu$ and $0 < u^TBu$.
This means that $0 < u^TAu + u^TBu$, or equivalently that $0 < u^T(A+B)u$, i.e., $(A+B)$ has to be positive definite.
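The argument is easy to check numerically. Here is a quick sketch in Python with NumPy (not part of the original answer); `random_pd` is a helper invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pd(n):
    # M @ M.T is positive semidefinite; adding n*I makes it safely positive definite
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

A = random_pd(4)
B = random_pd(4)

# u^T (A+B) u = u^T A u + u^T B u > 0 for all nonzero u,
# so the smallest eigenvalue of A + B must be strictly positive
eigs = np.linalg.eigvalsh(A + B)
print(eigs.min() > 0)  # True
```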
|
16,032
|
Is the average of positive-definite matrices also positive-definite?
|
Of course. The set of positive definite matrices forms a cone, meaning it is closed under positive linear combinations and scaling.
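The cone property can be illustrated with a small Monte Carlo sketch in Python with NumPy (not from the original answer): average several random positive definite matrices and confirm the average is still positive definite.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build k random positive definite matrices; their average is a positive
# combination (weights 1/k), so the cone property says it is again PD.
k, n = 5, 4
mats = []
for _ in range(k):
    m = rng.standard_normal((n, n))
    mats.append(m @ m.T + n * np.eye(n))

avg = sum(mats) / k
min_eig = np.linalg.eigvalsh(avg).min()
print(min_eig > 0)  # True
```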
|
16,033
|
The sum of two independent gamma random variables
|
The proof is as follows: (1) Remember that the characteristic function of the sum of independent random variables is the product of their individual characteristic functions; (2) Get the characteristic function of a gamma random variable here; (3) Do the simple algebra.
To get some intuition beyond this algebraic argument, check whuber's comment.
Note: The OP asked how to compute the characteristic function of a gamma random variable. If $X\sim\mathrm{Exp}(\lambda)$, then (you can treat $i$ as an ordinary constant, in this case)
$$\psi_X(t)=\mathrm{E}\left[e^{itX}\right]=\int_0^\infty e^{itx} \lambda\,e^{-\lambda x}\,dx = \frac{1}{1-it/\lambda}\, .$$
Now use whuber's tip: If $Y\sim\mathrm{Gamma}(k,\theta)$, then $Y=X_1+\dots+X_k$, where the $X_i$'s are independent $\mathrm{Exp}(\lambda = 1/\theta)$. Therefore, using property (1), we have
$$
\psi_Y(t) = \left( \frac{1}{1-it\theta}\right)^k \, .
$$
Tip: you won't learn these things staring at the results and proofs: stay hungry, compute everything, try to find your own proofs. Even if you fail, your appreciation of somebody else's answer will be at a much higher level. And, yes, failing is OK: nobody is looking! The only way to learn mathematics is by fist fighting for each concept and result.
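In that spirit of computing everything yourself, the result is easy to check by simulation; this sketch is in Python with NumPy (not part of the original answer), comparing the first two moments of $X+Y$ against a direct Gamma$(k_1+k_2,\theta)$ sample.

```python
import numpy as np

rng = np.random.default_rng(42)
k1, k2, theta = 2.0, 3.5, 1.5
n = 200_000

# If X ~ Gamma(k1, theta) and Y ~ Gamma(k2, theta) are independent,
# X + Y should behave like Gamma(k1 + k2, theta).
s = rng.gamma(k1, theta, n) + rng.gamma(k2, theta, n)
direct = rng.gamma(k1 + k2, theta, n)

# Gamma(k, theta) has mean k*theta and variance k*theta^2
print(round(s.mean(), 2), round(direct.mean(), 2))  # both near (k1+k2)*theta = 8.25
print(round(s.var(), 2), round(direct.var(), 2))    # both near (k1+k2)*theta^2 = 12.375
```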
|
16,034
|
The sum of two independent gamma random variables
|
Here is an answer that does not need to use characteristic
functions, but instead reinforces some ideas that have other
uses in statistics. The density of the sum of independent
random variables is the convolution of the densities. So,
taking $\theta = 1$ for ease of exposition, we have for $z > 0$,
$$\begin{align}
f_{X+Y}(z) &= \int_0^z f_X(x)f_Y(z-x)\,\mathrm dx\\
&=\int_0^z \frac{x^{a-1}e^{-x}}{\Gamma(a)}\frac{(z-x)^{b-1}e^{-(z-x)}}{\Gamma(b)}\,\mathrm dx\\
&= e^{-z}\int_0^z \frac{x^{a-1}(z-x)^{b-1}}{\Gamma(a)\Gamma(b)}\,\mathrm dx
&\scriptstyle{\text{now substitute}}~ x = zt~ \text{and think}\\
&= e^{-z}z^{a+b-1}\int_0^1 \frac{t^{a-1}(1-t)^{b-1}}{\Gamma(a)\Gamma(b)}\,\mathrm dt & \scriptstyle{\text{of Beta}}(a,b)~\text{random variables}\\
&= \frac{e^{-z}z^{a+b-1}}{\Gamma(a+b)}
\end{align}$$
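The convolution integral above can be evaluated numerically and compared with the closed-form Gamma$(a+b,1)$ density; this is a sketch in Python with NumPy (not part of the original answer), with $a$, $b$, and $z$ chosen arbitrarily.

```python
import math
import numpy as np

a, b, z = 2.5, 1.5, 3.0

# Trapezoidal evaluation of int_0^z f_X(x) f_Y(z - x) dx with theta = 1
x = np.linspace(0.0, z, 200_001)
integrand = (x**(a - 1) * np.exp(-x) / math.gamma(a)
             * (z - x)**(b - 1) * np.exp(-(z - x)) / math.gamma(b))
dx = x[1] - x[0]
conv = float((integrand[:-1] + integrand[1:]).sum() * dx / 2)

# Closed form: Gamma(a + b, 1) density at z
closed = z**(a + b - 1) * math.exp(-z) / math.gamma(a + b)
print(abs(conv - closed) < 1e-6)  # True
```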
|
16,035
|
The sum of two independent gamma random variables
|
On a more heuristic level: if $a$ and $b$ are integers, the Gamma distribution is an Erlang distribution, and so $X$ and $Y$ describe the waiting times for respectively $a$ and $b$ occurrences in a Poisson process with rate $\theta$. The two waiting times $X$ and $Y$
are independent,
sum to the waiting time for $a+b$ occurrences,
and the waiting time for $a+b$ occurrences is distributed Gamma($a+b,\theta$).
None of this is a mathematical proof, but it puts some flesh on the bones of the connection, and can be used if you want to flesh it out in a mathematical proof.
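The waiting-time picture can be simulated directly; this sketch is in Python with NumPy (not part of the original answer): each inter-arrival gap is Exp(mean $\theta$), the first $a$ gaps give $X$, the next $b$ gaps give $Y$, and $X+Y$ is the waiting time for occurrence $a+b$.

```python
import numpy as np

rng = np.random.default_rng(7)
a, b, theta = 3, 4, 2.0
n = 100_000

# Inter-arrival gaps of a Poisson process are iid exponential
gaps = rng.exponential(theta, (n, a + b))
X = gaps[:, :a].sum(axis=1)   # waiting time for the a-th occurrence
Y = gaps[:, a:].sum(axis=1)   # additional wait for b more occurrences
total = X + Y                 # waiting time for occurrence a + b

# Gamma(a+b, theta): mean (a+b)*theta = 14, variance (a+b)*theta^2 = 28
print(round(total.mean(), 1), round(total.var(), 1))
```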
|
16,036
|
Why is the marginal distribution/marginal probability described as "marginal"?
|
Consider the table below (copied from this website) representing joint probabilities of outcomes from rolling two dice:
In this common and natural way of showing the distribution, the marginal probabilities of the outcomes from the individual dice are written literally in the margins of the table (the highlighted row/column).
Of course we can't really construct such tables for continuous random variables, but anyway I'd guess that this is the origin of the term.
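For a discrete case we can actually build such a table in code; here is a sketch in Python (not part of the original answer) for two fair dice, where the marginals are literally the row and column sums of the joint table.

```python
from fractions import Fraction

# Joint pmf of two fair dice as a 6x6 table: each cell is 1/36
p = Fraction(1, 36)
joint = [[p for _ in range(6)] for _ in range(6)]

# The "margins": row sums give P(die 1 = i), column sums give P(die 2 = j)
row_margin = [sum(row) for row in joint]
col_margin = [sum(joint[i][j] for i in range(6)) for j in range(6)]

print(row_margin[0], col_margin[0])  # 1/6 1/6
```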
|
16,037
|
Why is the marginal distribution/marginal probability described as "marginal"?
|
To add to Jake Westfall's answer (https://stats.stackexchange.com/q/408410), we can consider the marginal density as integrating out the other variable. In detail, if we have $(X, Y)$ being two random variables, then the density of $X$ at $x$ is
$$
p(x) = \int p(x, y)dy = \int p(x | y)p(y)dy,
$$
which, when the variables are discrete, for example if $X$ and $Y$ only take on values $1, \dots, 6$, becomes a sum; the probability of $X = 1$ is
$$
p(X = 1) = \sum_{y = 1}^6 p(X = 1, Y = y)
$$
which is the same as summing the elements in the first row ($i = 1$) of his table.
I think it's easier to view this in terms of a plot though. Below is a plot of the joint density when sampling from a mixture of two Gaussians, the marginal of $X$ and $Y$ to the top and on right respectively
Same plot with smoothed densities (you can think of this as the same but with $X$ and $Y$ now being continuous, in which case you can still find the marginal, but we will use an integral instead of summing)
Both of these plots were generated using the jointplot function from seaborn (https://seaborn.pydata.org/generated/seaborn.jointplot.html#seaborn.jointplot).
Hope this helps!
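The "summing out the other variable" operation is a one-liner on a joint pmf stored as an array; this sketch is in Python with NumPy (not part of the original answer), using an arbitrary random joint pmf.

```python
import numpy as np

rng = np.random.default_rng(3)

# A random 6x6 joint pmf p(x, y), normalised to sum to 1
joint = rng.random((6, 6))
joint /= joint.sum()

# Marginalising is summing over the other variable
# (the discrete analogue of integrating it out)
p_x = joint.sum(axis=1)   # p(x) = sum_y p(x, y)
p_y = joint.sum(axis=0)   # p(y) = sum_x p(x, y)

print(np.isclose(p_x.sum(), 1.0), np.isclose(p_y.sum(), 1.0))  # True True
```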
|
16,038
|
What is the distribution for various polyhedral dice all rolled at once?
|
I wouldn't want to do it algebraically, but you can calculate the pmf simply enough (it's just convolution, which is really easy in a spreadsheet).
I calculated these in a spreadsheet*:
i n(i) 100 p(i)
5 1 0.0022
6 5 0.0109
7 15 0.0326
8 35 0.0760
9 69 0.1497
10 121 0.2626
11 194 0.4210
12 290 0.6293
13 409 0.8876
14 549 1.1914
15 707 1.5343
16 879 1.9076
17 1060 2.3003
18 1244 2.6997
19 1425 3.0924
20 1597 3.4657
21 1755 3.8086
22 1895 4.1124
23 2014 4.3707
24 2110 4.5790
25 2182 4.7352
26 2230 4.8394
27 2254 4.8915
28 2254 4.8915
29 2230 4.8394
30 2182 4.7352
31 2110 4.5790
32 2014 4.3707
33 1895 4.1124
34 1755 3.8086
35 1597 3.4657
36 1425 3.0924
37 1244 2.6997
38 1060 2.3003
39 879 1.9076
40 707 1.5343
41 549 1.1914
42 409 0.8876
43 290 0.6293
44 194 0.4210
45 121 0.2626
46 69 0.1497
47 35 0.0760
48 15 0.0326
49 5 0.0109
50 1 0.0022
Here $n(i)$ is the number of ways of getting each total $i$; $p(i)$ is the probability, where $p(i) = n(i)/46080$. The most likely outcomes happen less than 5% of the time.
The y-axis is probability expressed as a percentage.
* The method I used is similar to the procedure outlined here, though the exact mechanics involved in setting it up change as user interface details change (that post is about 5 years old now, though I updated it about a year ago), and I used a different package this time (LibreOffice's Calc). Still, that's the gist of it.
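As a cross-check on the spreadsheet, the same $n(i)$ column can be reproduced by discrete convolution; this sketch is in Python with NumPy (not part of the original answer).

```python
import numpy as np

# Each die contributes a vector of ones (one way to show each face);
# convolving the five vectors counts the ways to reach each total.
counts = np.array([1])
offset = 0
for sides in (4, 6, 8, 12, 20):
    counts = np.convolve(counts, np.ones(sides, dtype=int))
    offset += 1  # every die shows at least 1, so the minimum total grows by 1

# n[i] = number of ways the five dice total i, for i = 5..50
n = {offset + k: int(c) for k, c in enumerate(counts)}
print(n[5], n[27], n[28], sum(n.values()))  # 1 2254 2254 46080
```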
|
16,039
|
What is the distribution for various polyhedral dice all rolled at once?
|
So I made this code:
d4 <- 1:4 #the faces on a d4
d6 <- 1:6 #the faces on a d6
d8 <- 1:8 #the faces on a d8
d10 <- 1:10 #the faces on a d10 (not used)
d12 <- 1:12 #the faces on a d12
d20 <- 1:20 #the faces on a d20
N <- 2000000 #run it 2 million times
mysum <- numeric(length = N)
for (i in 1:N){
  mysum[i] <- sample(d4,1) +
    sample(d6,1) +
    sample(d8,1) +
    sample(d12,1) +
    sample(d20,1)
}
#make the plot
hist(mysum,breaks = 1000,freq = FALSE,ylim=c(0,1))
grid()
The result is this plot.
It is quite Gaussian looking. I think we (again) may have demonstrated a variation on the central limit theorem.
|
16,040
|
What is the distribution for various polyhedral dice all rolled at once?
|
I will show an approach to do this algebraically, with the aid of R.
Assume the different dice have probability distributions given by vectors
$$ \DeclareMathOperator{\P}{\mathbb{P}}
P(X=i)=p(i)
$$ where $X$ is the number of eyes seen on throwing the die, and $i$ is an integer in the range $0,1,\dots,n$. So the probability of two eyes, say, is in the third vector component. Then a standard die has distribution given by the vector $(0,1/6,1/6,1/6,1/6,1/6,1/6)$. The probability generating function (pgf) is then given by $p(t)=\sum_0^6 p(i) t^i$. Let the second die have distribution given by the vector $q(j)$ with $j$ in the range $0,1,\dots,m$. Then the distribution of the sum of eyes on two independent dice rolls is given by the product of the pgfs, $p(t)q(t)$. Writing out the product, we can see it is given by the convolution of the coefficient sequences, so it can be found by the R function convolve(). Let's test this with two throws of standard dice:
p <- q <- c(0, rep(1/6, 6))
pq <- convolve(p, rev(q), type="open")
zapsmall(pq)
[1] 0.00000000 0.00000000 0.02777778 0.05555556 0.08333333 0.11111111
[7] 0.13888889 0.16666667 0.13888889 0.11111111 0.08333333 0.05555556
[13] 0.02777778
and you can check that that is correct (by hand calculation). Now for the real question, five dice with 4, 6, 8, 12, 20 sides. I will do the calculation assuming uniform probabilities for each die. Then:
p1 <- c(0, rep(1/4, 4))
p2 <- c(0, rep(1/6, 6))
p3 <- c(0, rep(1/8, 8))
p4 <- c(0, rep(1/12, 12))
p5 <- c(0, rep(1/20, 20))
s2 <- convolve(p1, rev(p2), type="open")
s3 <- convolve(s2, rev(p3), type="open")
s4 <- convolve(s3, rev(p4), type="open")
s5 <- convolve(s4, rev(p5), type="open")
sum(s5)
[1] 1
zapsmall(s5)
[1] 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00002170
[7] 0.00010851 0.00032552 0.00075955 0.00149740 0.00262587 0.00421007
[13] 0.00629340 0.00887587 0.01191406 0.01534288 0.01907552 0.02300347
[19] 0.02699653 0.03092448 0.03465712 0.03808594 0.04112413 0.04370660
[25] 0.04578993 0.04735243 0.04839410 0.04891493 0.04891493 0.04839410
[31] 0.04735243 0.04578993 0.04370660 0.04112413 0.03808594 0.03465712
[37] 0.03092448 0.02699653 0.02300347 0.01907552 0.01534288 0.01191406
[43] 0.00887587 0.00629340 0.00421007 0.00262587 0.00149740 0.00075955
[49] 0.00032552 0.00010851 0.00002170
plot(0:50, zapsmall(s5))
The plot is shown below:
Now you can compare this exact solution with simulations.
|
16,041
|
What is the distribution for various polyhedral dice all rolled at once?
|
A little help to your intuition:
First, consider what happens if you add one to all the faces of one die, e.g. the d4. So, instead of 1,2,3,4, the faces now show 2,3,4,5.
Comparing this situation to the original, it is easy to see that the total sum is now one higher than it used to be. This means that the shape of the distribution is unchanged, it is just moved one step to the side.
Now subtract the average value of each die from every side of that die.
This gives dice marked
$-{3\over 2}$,$-{1\over 2}$,${1\over 2}$,${3\over 2}$
$-{5\over 2}$,$-{3\over 2}$,$-{1\over 2}$,${1\over 2}$,${3\over 2}$,${5\over 2}$
$-{7\over 2}$,$-{5\over 2}$,$-{3\over 2}$,$-{1\over 2}$,${1\over 2}$,${3\over 2}$,${5\over 2}$,${7\over 2}$
etc.
Now, the sum of these dice should still have the same shape as the original, only shifted downwards. It should be clear that this sum is symmetrical around zero. Therefore the original distribution is also symmetrical.
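The key fact behind this argument — that a symmetric pmf stays symmetric under convolution — is easy to verify numerically. A small Python sketch (my addition, using NumPy):

```python
import numpy as np

def is_symmetric(p):
    """A pmf on consecutive support is symmetric iff it equals its reversal."""
    return bool(np.allclose(p, p[::-1]))

# Centered dice have symmetric pmfs (uniform on consecutive values), and the
# convolution — the pmf of the sum — is again symmetric.
d4, d6 = np.full(4, 1 / 4), np.full(6, 1 / 6)
print(is_symmetric(np.convolve(d4, d6)))  # True
```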
|
16,042
|
What is the distribution for various polyhedral dice all rolled at once?
|
Using the R software I posted earlier at https://stats.stackexchange.com/a/116913/919 for solving problems like this, you can compute the solution in one line:
(all <- d(1,4) + d(1,6) + d(1,8) + d(1,12) + d(1,20))
The output gives all 46 probabilities (not shown), which can be plotted with another line:
plot(all, xlab="Value", yaxp=c(0,1,2), main=expression(d[4]+d[6]+d[8]+d[12]+d[20]))
To this plot I have added the graph of the Normal distribution with the same variance and mean (employing a continuity correction),
curve(pnorm(x, mean(all)-1/2, sqrt(var.die(all))), add=TRUE, col="Red")
If you prefer to see the probability function, here it is:
with(all, plot(value, prob, type="h", main="Probability Function", cex.main=1))
Clearly the Normal approximation is already good, so we may continue to use it to describe the sum of many rolls of this combination. But if you want to see it precisely computed, you may do so. For instance, here is the sum of four trials with its Normal approximation superimposed (no continuity correction needed),
with(all+all+all+all, plot(value, prob, type="h", main="Sum of Four Trials", cex.main=1))
curve(dnorm(x, mean(all)*4, sqrt(4*var.die(all))), add=TRUE, col="Red", lwd=2)
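To quantify how good the approximation is, one can compare the exact pmf with the continuity-corrected Normal probabilities. A Python sketch (my addition; the pmf is recomputed here by direct convolution rather than with the `d()` software):

```python
import numpy as np
from math import erf, sqrt

# Exact pmf of d4 + d6 + d8 + d12 + d20 by convolution of zero-padded pmfs.
dice = [np.concatenate(([0.0], np.full(n, 1.0 / n))) for n in (4, 6, 8, 12, 20)]
pmf = dice[0]
for d in dice[1:]:
    pmf = np.convolve(pmf, d)

k = np.arange(pmf.size)
mu = float(k @ pmf)
sigma = sqrt(float((k - mu) ** 2 @ pmf))

def Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Continuity-corrected Normal probability for each integer value.
approx = np.array([Phi((v + 0.5 - mu) / sigma) - Phi((v - 0.5 - mu) / sigma)
                   for v in k])
print(float(np.abs(pmf - approx).max()) < 0.01)  # True: pointwise errors are small
```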
|
16,043
|
What is the distribution for various polyhedral dice all rolled at once?
|
The Central Limit Theorem answers your question. Though its details and its proof (and that Wikipedia article) are somewhat brain-bending, the gist of it is simple. Per Wikipedia, it states that
the sum of a number of independent and identically distributed random variables with finite variances will tend to a normal distribution as the number of variables grows.
Sketch of a proof for your case:
When you say “roll all the dice at once,” each roll of all the dice is a random variable.
Your dice have finite numbers printed on them. The sum of their values therefore has finite variance.
Every time you roll all the dice, the probability distribution of the outcome is the same. (The dice don’t change between rolls.)
If you roll the dice fairly, then every time you roll them, the outcome is independent. (Previous rolls don’t affect future rolls.)
Independent? Check. Identically distributed? Check. Finite variance? Check. Therefore the sum tends toward a normal distribution.
It wouldn’t even matter if the distribution for one roll of all dice were lopsided toward the low end. It wouldn’t matter if there were cusps in that distribution. All the summing smooths it out and makes it a symmetrical Gaussian. You don’t even need to do any algebra or simulation to show it! That’s the surprising insight of the CLT.
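A quick simulation illustrates the point without any algebra. The sketch below (my addition) draws many rolls of all five dice and checks that the standardized sum has essentially zero skewness, as a Gaussian should:

```python
import numpy as np

# Each "roll of all the dice" is one i.i.d. draw from the same distribution.
rng = np.random.default_rng(42)
sides = (4, 6, 8, 12, 20)
totals = sum(rng.integers(1, n + 1, size=50_000) for n in sides)

z = (totals - totals.mean()) / totals.std()
skewness = float(np.mean(z ** 3))
print(abs(skewness) < 0.05)  # True: close to the Gaussian value of 0
```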
|
16,044
|
Philosophical question on logistic regression: why isn't the optimal threshold value trained?
|
A threshold isn't trained with the model because logistic regression isn't a classifier (cf., Why isn't Logistic Regression called Logistic Classification?). It is a model to estimate the parameter, $p$, that governs the behavior of the Bernoulli distribution. That is, you are assuming that the response distribution, conditional on the covariates, is Bernoulli, and so you want to estimate how the parameter that controls that variable changes as a function of the covariates. It is a direct probability model only. Of course, it can be used as a classifier subsequently, and sometimes is in certain contexts, but it is still a probability model.
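A minimal sketch of this view (my addition, with invented data): the model is fit by maximizing the Bernoulli log-likelihood, and what comes out are parameter estimates governing $p$ — no threshold appears anywhere in the fitting.

```python
import numpy as np

# Simulate Bernoulli responses whose parameter p depends on a covariate.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
p_true = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))
y = rng.binomial(1, p_true)

# Maximum likelihood by gradient ascent on the average log-likelihood.
X = np.column_stack([np.ones_like(x), x])
beta = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))             # modeled Bernoulli parameter
    beta += 1.0 * X.T @ (y - p) / len(y)        # gradient of the log-likelihood

print(beta)  # close to the true (0.5, 2.0); the model outputs p, not a class
```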
|
16,045
|
Philosophical question on logistic regression: why isn't the optimal threshold value trained?
|
It's because the optimal threshold is not only a function of the true positive rate (TPR), the false positive rate (FPR), accuracy or whatever else. The other crucial ingredient is the cost and the payoff of correct and wrong decisions.
If your target is a common cold, your response to a positive test is to prescribe two aspirin, and the cost of a true untreated positive is an unnecessary two days' worth of headaches, then your optimal decision (not classification!) threshold is quite different than if your target is some life-threatening disease, and your decision is (a) some comparatively simple procedure like an appendectomy, or (b) a major intervention like months of chemotherapy! And note that although your target variable may be binary (sick/healthy), your decisions may have more values (send home with two aspirin/run more tests/admit to hospital and watch/operate immediately).
Bottom line: if you know your cost structure and all the different decisions, you can certainly train a decision support system (DSS) directly, which includes a probabilistic classification or prediction. I would, however, strongly argue that discretizing predictions or classifications via thresholds is not the right way to go about this.
See also my answer to the earlier "Classification probability threshold" thread. Or this answer of mine. Or that one.
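Under expected-cost minimization the cost structure determines the threshold in closed form; a textbook sketch (my addition, not from the answer): with cost $c_{FP}$ for acting on a true negative and $c_{FN}$ for ignoring a true positive, acting is optimal exactly when the predicted probability exceeds $c_{FP}/(c_{FP}+c_{FN})$.

```python
def optimal_threshold(cost_fp: float, cost_fn: float) -> float:
    """Act on a case when P(positive) exceeds this value.

    Expected cost of acting is (1 - p) * cost_fp, of not acting p * cost_fn;
    acting wins when p > cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

# Common cold: cheap treatment, mild harm from missing it.
print(optimal_threshold(cost_fp=1, cost_fn=1))      # 0.5
# Life-threatening disease: a miss is far costlier than overtreatment.
print(optimal_threshold(cost_fp=10, cost_fn=1000))  # ~0.0099: act early
```

The two thresholds differ by a factor of fifty even though the underlying probability model is unchanged — which is exactly why the threshold does not belong inside the model.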
|
16,046
|
Philosophical question on logistic regression: why isn't the optimal threshold value trained?
|
Philosophical concerns aside, this would cause computational difficulties.
The reason is that functions with continuous output are relatively easy to optimize: you look for the direction in which the function increases, and then go that way. If we alter our loss function to include the "cutoff" step, our output becomes discrete, and our loss function is therefore also discrete. Now when we alter the parameters of our logistic function by "a little bit" and jointly alter the cutoff value by "a little bit", our loss gives an identical value, and optimization becomes difficult. Of course, it's not impossible (there's a whole field of study in discrete optimization), but continuous optimization is by far the easier problem to solve when you are optimizing many parameters. Conveniently, once the logistic model has been fit, finding the optimal cutoff, though still a discrete-output problem, is now in only one variable, and we can just do a grid search or some such, which is totally viable in one variable.
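The last point can be sketched in a few lines (my addition, with made-up predictions): accuracy as a function of the cutoff is piecewise constant, so once the probabilities are in hand a one-dimensional grid search suffices.

```python
import numpy as np

# Hypothetical fitted probabilities and their true labels.
probs  = np.array([0.1, 0.3, 0.4, 0.55, 0.6, 0.8, 0.9])
labels = np.array([0,   0,   1,   0,    1,   1,   1])

# Accuracy is piecewise constant in the cutoff: a grid search finds its max.
grid = np.linspace(0.0, 1.0, 101)
acc = np.array([((probs >= t).astype(int) == labels).mean() for t in grid])
best = float(grid[acc.argmax()])
print(round(best, 2), round(float(acc.max()), 3))  # 0.31 0.857
```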
|
16,047
|
Philosophical question on logistic regression: why isn't the optimal threshold value trained?
|
Regardless of the underlying model, we can work out the sampling distributions of TPR and FPR at a threshold. This implies that we can characterize the variability in TPR and FPR at some threshold, and we can back into a desired error rate trade-off.
A ROC curve is a little bit deceptive because the only thing that you control is the threshold; however, the plot displays TPR and FPR, which are functions of the threshold. Moreover, the TPR and FPR are both statistics, so they are subject to the vagaries of random sampling. This implies that if you were to repeat the procedure (say by cross-validation), you could come up with a different FPR and TPR at some specific threshold value.
However, if we can estimate the variability in the TPR and FPR, then repeating the ROC procedure is not necessary. We just pick a threshold such that the endpoints of a confidence interval (with some width) are acceptable. That is, pick the model so that the FPR is plausibly below some researcher-specified maximum, and/or the TPR is plausibly above some researcher-specified minimum. If your model can't attain your targets, you'll have to build a better model.
Of course, what TPR and FPR values are tolerable in your usage will be context-dependent.
For more information, see ROC Curves for Continuous Data
by Wojtek J. Krzanowski and David J. Hand.
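As a concrete illustration (my addition; a simple Wald interval, not necessarily the construction used in the book): at a fixed threshold the TPR is a binomial proportion, so its sampling variability can be summarized by a standard confidence interval.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% CI for a binomial proportion such as TPR or FPR."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Say 45 of 50 actual positives are flagged at the chosen threshold:
lo, hi = wald_ci(45, 50)
print(round(lo, 3), round(hi, 3))  # 0.817 0.983
```

One would then accept the threshold only if the lower endpoint clears the researcher-specified minimum TPR.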
|
16,048
|
Philosophical question on logistic regression: why isn't the optimal threshold value trained?
|
Usually in biomedical research, we don't use a training set---we just apply logistic regression on the full dataset to see which predictors are significant risk factors for the outcome we're looking at; or to look at one predictor of interest while controlling for the effect of other possible predictors on the outcome.
I'm not sure quite what you mean by threshold values, but there are various parameters that one may seek to optimize: AUC, cutoff values for dichotomizing a continuous predictor variable, positive and negative predictive values, confidence intervals and p-values, false positive and false negative rates.
Logistic regression looks at a population of subjects and assesses the strength and causal direction of risk factors that contribute to the outcome of interest in that population. It's also possible to "run it in reverse," so to speak, and determine an individual's risk of the outcome given the risk factors that the individual has. Logistic regression assigns each individual a risk of the outcome, based on their individual risk factors, and by default this is 0.5. If a subject's probability of having the outcome (based on all the data and subjects in your model) is 0.5 or above, it predicts he will have the outcome; if below 0.5 then it predicts he won't. But you can adjust this cutoff level, for example to flag more individuals who might be at risk of having the outcome, albeit at the price of having more false positives being predicted by the model. You can adjust this cutoff level to optimize screening decisions in order to predict which individuals would be advised to have further medical followup, for example; and to construct your positive predictive value, negative predictive value, and false negative and false positive rates for a screening test based on the logistic regression model. You can develop the model on half your dataset and test it on the other half, but you don't really have to (and doing so will cut your 'training' data in half and thus reduce the power to find significant predictors in the model). So yes, you can 'train the whole thing end to end'. Of course, in biomedical research, you would want to validate it on another population, another data set before saying your results can be generalized to a wider population. Another approach is to use a bootstrapping-type approach where you run your model on a subsample of your study population, then replace those subjects back into the pool and repeat with another sample, many times (typically 1000 times). If you get significant results a prescribed majority of the time (e.g. 
95% of the time) then your model can be deemed validated---at least on your own data. But again, the smaller the study population you run your model on, the less likely it will be that some predictors will be statistically significant risk factors for the outcome. This is especially true for biomedical studies with limited numbers of participants.
Using half of your data to 'train' your model and then 'validating' it on the other half is an unnecessary burden. You don't do that for t-tests or linear regression, so why do it in logistic regression? The most it will do is let you say 'yeah it works' but if you use your full dataset then you determine that anyway. Breaking your data into smaller datasets runs the risk of not detecting significant risk factors in the study population (OR the validation population) when they are in fact present, due to small sample size, having too many predictors for your study size, and the possibility that your 'validation sample' will show no associations just from chance. The logic behind the 'train then validate' approach seems to be that if the risk factors you identify as significant aren't strong enough, then they won't be statistically significant when modeled on some randomly-chosen half of your data. But that randomly-chosen sample might happen to show no association just by chance, or because it is too small for the risk factor(s) to be statistically significant. But it's the magnitude of the risk factor(s) AND their statistical significance which determine their importance and for that reason it's best to use your full dataset to build your model with. Statistical significance will become less significant with smaller sample sizes, as it does with most statistical tests.
Doing logistic regression is an art almost as much as a statistical science. There are different approaches to use and different parameters to optimize depending on your study design.
|
16,049
|
Can you infer causality from correlation in this example of dictator game?
|
In general you should not assume that correlation implies causality - even in cases where it seems that is the only possible reason.
Consider that there are other things that correlate with age - generational aspects of culture for example. Perhaps these three groups will remain the same even as they all age, but the next generation will buck the trend?
All that being said, you are probably right that younger people are more likely to keep a larger amount, but just be aware there are other possibilities.
|
16,050
|
Can you infer causality from correlation in this example of dictator game?
|
I can postulate several causalities from your data.
The age is measured and then the amount of money kept. Older participants prefer to keep more money (maybe they are smarter or less idealistic, but that's not the point).
The amount of money kept is measured and then the age. People who keep more money spend more time counting it and are therefore older when the age is measured.
Sick people keep more money because they need money for (possibly life-saving) medication or treatment. The actual correlation is between sickness and money kept, but this variable is "hidden" and we therefore jump to the wrong conclusion, because age and likelihood of sickness correlate in the demographic group of persons chosen for the experiment.
(Omitting 143 theories; I need to keep this reasonably short)
The experimenter spoke in an old, obscure dialect which the young people did not understand and therefore mistakenly chose the wrong option.
Conclusion: you are correct, but your classmate might claim to be 147 times correcter.
Another famous correlation is between low IQ and hours of TV watched daily. Does watching TV make one dumb, or do dumb people watch more TV? It could even be both.
|
16,051
|
Can you infer causality from correlation in this example of dictator game?
|
Inferring causation from correlation in general is problematic because there may be a number of other reasons for the correlation. For example, spurious correlations due to confounders, selection bias (e.g., only choosing participants with an income below a certain threshold), or the causal effect may simply go the other direction (e.g., a thermometer is correlated with temperature but certainly does not cause it). In each of these cases, your classmate's procedure might find a causal effect where there is none.
However, if the participants were randomly selected, we could rule out confounders and selection bias. In that case, either age must cause money kept or money kept must cause age. The latter would imply that forcing someone to keep a certain amount of money would somehow change their age. So we can safely assume that age causes money kept.
Note that the causal effect could be "direct" or "indirect". People of different age will have received a different education, have a different amount of wealth, etc., and for these reasons might choose to keep a different amount of the $100. Causal effects via these mediators are still causal effects but are indirect.
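To illustrate the confounding case numerically, here is a small simulation sketch (the confounder name "wealth" and every coefficient are invented for illustration) in which a third variable drives both age and the amount kept, producing a strong marginal correlation that disappears once the confounder is adjusted for:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical confounder: "wealth" influences both age at recruitment
# and the amount of money kept; age itself has no effect here.
wealth = rng.normal(0, 1, n)
age = 30 + 5 * wealth + rng.normal(0, 1, n)    # confounder -> apparent "cause"
kept = 50 + 10 * wealth + rng.normal(0, 1, n)  # confounder -> "effect"

# Marginally, age and amount kept are strongly correlated...
r_marginal = np.corrcoef(age, kept)[0, 1]

# ...but adjusting for the confounder (partial correlation via residuals)
# removes the association, revealing there is no direct age effect.
def residualize(y, x):
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return y - slope * (x - x.mean())

r_partial = np.corrcoef(residualize(age, wealth),
                        residualize(kept, wealth))[0, 1]
print(f"marginal correlation: {r_marginal:.2f}")  # close to 0.98
print(f"partial correlation:  {r_partial:.2f}")   # close to 0
```

Random selection of participants would break the wealth-age link in this toy world, which is why the answer's point about ruling out confounders through random selection matters.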
|
16,052
|
Can you infer causality from correlation in this example of dictator game?
|
Correlation is a mathematical concept; causality is a philosophical idea.
On the other hand, spurious correlation is a mostly technical (you won't find it in measure-theoretical probability textbooks) concept that can be defined in a way that's mostly actionable.
This idea is closely related to the idea of falsificationism in science -- where the goal is never to prove things, only to disprove them.
Statistics is to mathematics as medicine is to biology. You're asked to make your best judgement with the support of a wealth of technical knowledge, but this knowledge is never enough to cover the whole world. So if you're going to make judgements as a statistician and present them to others, you need to ensure certain standards of quality are met; i.e. that you're giving sound advice, giving them their money's worth. This also means taking the asymmetry of risks into consideration -- in medical testing, the cost of giving a false negative result (which may prevent people from getting early treatment) may be higher than the cost of giving a false positive (which causes distress).
In practice these standards will vary from field to field -- sometimes it's triple-blind RCTs, sometimes it's instrumental variables and other techniques to control for reverse causation and hidden common causes, sometimes it's Granger causality -- that something in the past consistently correlates with something else in the present, but not in the reverse direction. It might even be rigorous regularization and cross-validation.
|
16,053
|
Can you infer causality from correlation in this example of dictator game?
|
Causal claim for age would be inappropriate in this case
The problem with claiming causality in your exam question design can be boiled down to one simple fact: aging was not a treatment, age was not manipulated at all. The main reason to do controlled studies is precisely because, due to the manipulation and control over the variables of interest, you can say that the change in one variable causes the change in the outcome (under extremely specific experimental conditions and with a boat-load of other assumptions like random assignment and that the experimenter didn't screw up something in the execution details, which I casually gloss over here).
But that's not what the exam design describes - it simply has two groups of participants, with one specific known difference between them (their age); but you have no way of knowing any of the other ways the groups differ. Due to the lack of control, you cannot know whether it was the difference in age that caused the change in outcome, or whether the 40-year-olds joined the study because they needed the money while the 20-year-olds were students who were participating for class credit and so had different motivations - or any one of a thousand other possible natural differences in your groups.
Now, the technical terminology for these sorts of things varies by field. Common terms for things like participant age and gender are "participant attribute", "extraneous variable", "attribute independent variable", etc. Ultimately you end up with something that is not a "true experiment" or a "true controlled experiment", because the thing you want to make a claim about - like age - wasn't really in your control to change, so the most you can hope for without far more advanced methods (like causal inference, additional conditions, longitudinal data, etc.) is to claim there is a correlation.
This also happens to be one of the reasons why experiments in social science, and understanding hard-to-control attributes of people, is so tricky in practice - people differ in lots of ways, and when you can't change the things you want to learn about, you tend to need more complex experimental and inferential techniques or a different strategy entirely.
How could you change the design to make a causal claim?
Imagine a hypothetical scenario like this: Group A and B are both made up of participants who are 20 years old.
You have Group A play the dictatorship game as usual.
For Group B, you take out a Magical Aging Ray of Science (or perhaps you have a Ghost treat them with his horrifying visage), which you have carefully tuned to age all the participants in Group B so that they are now 40 years old while otherwise leaving them unchanged, and then have them play the dictator game just as Group A did.
For extra rigor you could get a Group C of naturally-aged 40-year-olds to confirm the synthetic aging is comparable to natural aging, but let's keep things simple and say we know that artificial aging is just like the real thing based on "prior work".
Now, if Group B keeps more money than Group A, you can claim that the experiment indicates that aging causes people to keep more of the money. Of course there are still approximately a thousand reasons why your claim could turn out to be wrong, but your experiment at least has a valid causal interpretation.
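As a sketch of why the (admittedly fantastical) aging-ray design works, here is a toy simulation with entirely made-up numbers. Age has no effect at all in this world, yet the observational comparison shows a large "effect" driven by a recruitment-related attribute ("need for money"), while the randomized comparison correctly finds nothing:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Hypothetical world: "need for money" raises both the chance that an older
# person joins the study and the amount kept; age itself does nothing.
need = rng.binomial(1, 0.5, n)

# Observational design: age is merely observed, and tracks "need" via recruitment.
age_obs = np.where(need == 1, 40, 20)  # crude: needy recruits happen to be older
kept_obs = 50 + 20 * need + rng.normal(0, 5, n)
naive_diff = kept_obs[age_obs == 40].mean() - kept_obs[age_obs == 20].mean()

# "Aging ray" design: age is assigned at random, independent of need.
age_rct = rng.choice([20, 40], n)
kept_rct = 50 + 20 * need + rng.normal(0, 5, n)  # age still has no effect
rct_diff = kept_rct[age_rct == 40].mean() - kept_rct[age_rct == 20].mean()

print(f"observational age 'effect': {naive_diff:.1f}")  # large, pure confounding
print(f"randomized age effect:      {rct_diff:.1f}")    # close to 0
```

Because the randomized design severs the link between age and every participant attribute, a nonzero difference there really would support a causal claim about age, which is exactly what manipulation buys you.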
|
16,054
|
Can you infer causality from correlation in this example of dictator game?
|
Generally you can't jump from correlation to causation. For example, there's a well-known social science phenomenon about social status/class, and propensity to spend/save. For many many years it was believed that this showed causation. Last year more intensive research showed it wasn't.
Classic "correlation isn't causation" - in this case, the confounding factor was that growing up in poverty teaches people to use money differently, and spend if there is a surplus, because it may not be there tomorrow even if saved for various reasons.
In your example, suppose the older people all lived through a war, which the younger people didn't. The link might be that people who grew up in social chaos, with real risk of harm and loss of life, learn to prioritise saving resources for themselves and against need, more than those who grow up in happier circumstances where the state, employers, or health insurers will take care of it, and survival isn't an issue that shaped their outlook. Then you would get the same apparent link - older people (including those closer to their generation) keep more, but it would only apparently be linked to age. In reality the causative element is the social situation one spent formative years in, and what habits that taught - not age per se.
|
16,055
|
Can you infer causality from correlation in this example of dictator game?
|
The relationship between correlation and causation has stumped philosophers and statisticians alike for centuries. Finally, over the last twenty years or so computer scientists claim to have sorted it all out. This does not seem to be widely known. Fortunately Judea Pearl, a prime mover in this field, has recently published a book explaining this work for a popular audience: The Book of Why.
https://www.amazon.com/Book-Why-Science-Cause-Effect/dp/046509760X
https://bigthink.com/errors-we-live-by/judea-pearls-the-book-of-why-brings-news-of-a-new-science-of-causes
Spoiler alert: You can infer causation from correlation in some circumstances if you know what you are doing. You need to make some causal assumptions to start with (a causal model, ideally based on science). And you need the tools to do counterfactual reasoning (The do-algebra). Sorry I can't distill this down to a few lines (I'm still reading the book myself), but I think the answer to your question is in there.
|
16,056
|
Can you infer causality from correlation in this example of dictator game?
|
No. There is a one-way logical relationship between causality and correlation.
Consider correlation a property you calculate on some data, e.g. the most common (linear) correlation as defined by Pearson. For this particular definition of correlation you can create random data points that will have a correlation of zero or of one without having any kind of causality between them, just by having certain (a)symmetries.
For any definition of correlation you can construct data that shows both behaviours: high values of correlation with no mathematical relation between the variables, and low values of correlation even when there is a fixed functional relationship.
Yes, the inference from "unrelated, but highly correlated" is weaker than from "no correlation despite being related". But the only thing (!) the presence of a correlation indicates is that you have to look harder for an explanation for it.
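Both behaviours are easy to reproduce with Pearson correlation. The sketch below (all distributions chosen arbitrarily for illustration) shows a perfectly deterministic relationship with near-zero correlation, and two series with no direct relation to each other that are nonetheless highly correlated simply because both drift over time:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1) Deterministic dependence, yet (near) zero Pearson correlation:
#    y = x**2 with x symmetric about 0 has no *linear* association with x.
x = rng.normal(0, 1, 100_000)
y = x**2
r_dependent = np.corrcoef(x, y)[0, 1]

# 2) High correlation without any direct relation: two noisy series that
#    each merely trend upward over time (two unrelated growing quantities).
t = np.arange(1000)
a = t + rng.normal(0, 50, t.size)
b = t + rng.normal(0, 50, t.size)
r_unrelated = np.corrcoef(a, b)[0, 1]

print(f"corr(x, x**2) for symmetric x: {r_dependent:.3f}")  # near 0
print(f"corr of two drifting series:   {r_unrelated:.3f}")  # near 1
```

The first case is the classic "related but uncorrelated" example; the second is the "highly correlated but unrelated" case, where the shared time trend does all the work.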
|
16,057
|
Can you infer causality from correlation in this example of dictator game?
|
There are a few reasons why this conclusion doesn't make sense.
It's not a prespecified hypothesis.
There is no control group.
Age is not a modifiable risk factor... depending on what question you're trying to ask.
A suggested improvement to the design is the following cross-over type study.
Same setting: random despots of any age who rule lands.
Design: Select matched pairs of young and old dictators. Give each a money pot and inspect the proportion-difference withheld (old - young = $p_1$). Steal the money back so the country and the ruler have basically the same assets as before. Depose them from their respective thrones and install each in the other's land. Repeat the pot-giving and inspect the proportion-difference withheld (old - young = $p_2$).
|
16,058
|
Can you infer causality from correlation in this example of dictator game?
|
That must be one heavy cat! Clearly he must be responsible for crushing the awning.
I found this one on LinkedIn. Just because you saw some things does not mean that one caused the other. We are free to assume and to entertain different hypotheses, but correlation does not imply causation.
|
16,059
|
Can you infer causality from correlation in this example of dictator game?
|
Causality and correlation are different categories of things. That is why correlation alone is not sufficient to infer causality.
For example, causality is directional, while correlation is not. When inferring causality, you need to establish what is cause and what is effect.
There are other things that might interfere with your inference: hidden or third variables, and all the usual questions of statistics (sample selection, sample size, etc.).
But assuming that your statistics are properly done, correlation can provide clues about causality. Typically, if you find a correlation, it means that there is some kind of causality somewhere and you should start looking for it.
You can absolutely start with a hypothesis derived from your correlation. But a hypothesis is not a causality, it is merely a possibility of a causality. You then need to test it. If your hypothesis resists sufficient falsification attempts, you may be on to something.
For example, in your age-causes-greed hypothesis, one alternative hypothesis would be that it is not age, but length of being a dictator. So you would look for old, but recently-empowered dictators as a control group, and young-but-dictator-since-childhood as a second one and check the results there.
|
16,060
|
Can you infer causality from correlation in this example of dictator game?
|
My thinking is that you can't infer causality from this because you
can't infer causation from correlation.
You cannot infer causation from correlation alone. To reach a causal conclusion you need causal assumptions.
So your question should be: are causal assumptions declared in the example? If so, what are they, and what statistical results do they imply in the data?
So:
I've just had an exam where we were presented with two variables. In a
dictator game where a dictator is given 100 USD, and can choose how
much to send or keep for himself, there was a positive correlation
between age and how much money the participants decided to keep.
No clear causal assumptions are given, so no clear causal conclusion can be reached. Maybe we can glimpse some causal assumptions here: "In a dictator game where a dictator is given ..." because this can look like a sort of experiment. However this connection is too crude and, among other problems, a control group is absent. In other words, causal assumptions are absent or, at best, unclear.
Therefore
My classmate thinks that you can because if you, for example, split
the participants up into three separate groups, you can see how they
differ in how much they keep and how much they share, and therefore
conclude that age causes them to keep more. Who is correct and why?
whether some measures differ among groups tells us little about causal relations. Your classmate is wrong.
|
16,061
|
Can you infer causality from correlation in this example of dictator game?
|
A number of readers of this post think that Pearl has authored the definitive opinion on causality. However entertaining this may be, it is flawed. That post is a highly rated answer, and in that light the downvotes for this post were mistakenly awarded. A better approach to this arises from mechanics, i.e., from physics as epitomized by Newton's laws, e.g., "For every action, there is an equal and opposite reaction," in that we refer to a cause as an 'action' and an effect as a 'reaction'. Note the atemporality correctly implied by the phrase 'equal and opposite;' that is, effect is not subordinate to cause, they are equal, simultaneous and opposite.
Why atemporality? To begin with, "Statistics means never having to say you're certain." Statistics can be used to screen for probable causes, but more is needed for a conviction regarding causality, both legally and in experiments. Taken alone, statistical arguments do not reduce to cause and effect because the Sir Arthur Conan Doyle criterion, "When you have eliminated the impossible, whatever remains, however improbable, must be the truth," does not adjudicate between multiple improbabilities, and neither does statistics. Furthermore, naive attempts at defining causality based upon untested assumptions lead to outright ridiculous statements. For example, in the Merriam Webster Dictionary one reads with dismay that Causality is "the relation between a cause and its effect or between regularly correlated events or phenomena." True enough, language is fluid and people use words without worrying about whether those words are used self-consistently, and if the reader is one of those, then my concern about defining causality in a self-consistent fashion is irrelevant, and subjects like Resolving the black hole causality paradox cannot be understood, because the definition of causality used there is strict and unambiguous, and cannot be interpreted using sloppy definitions of causality.
In that light, we turn to physics to investigate causality, and if we do not, we will never sort out just how confusing causality is. Regarding atemporality, there are those who claim, without proof, that cause must precede effect because that "seems" reasonable. Is it? "The arrow of time" is ambiguous at the quantum level, and effects can precede causes, e.g., see The arrow of causality and quantum gravity. If a cause is persistent in time, an effect may have temporal duration, but the effect may still precede the cause in temporal sequence at the quantum level, the durations can be in negative time, and the clock can run backwards. It takes a while staring at Feynman diagrams to understand why this is the case, e.g., "Feynman used Ernst Stueckelberg's interpretation of the positron as if it were an electron moving backward in time.[3] Thus, antiparticles are represented as moving backward along the time axis in Feynman diagrams."
The OP is correct from a physical sciences point of view. In simplest form, the possibility of a physical time-independent view of causality is at the basis of the deductive-nomological (D-N) view of scientific explanation, considering an event to be explained if it can be subsumed under a scientific law. In the D-N view, a physical state is considered to be explained if, applying the (deterministic) law, it can be derived from given initial conditions. (Such initial conditions could include the momenta and distance from each other of binary stars at any given moment.) Such 'explanation by determinism' is sometimes referred to as causal determinism.
Getting a bit more complete about this, one would include Hempel's inductive-statistical model to form a scientific explanation, which link offers a more complete discussion of causality.
As for the problem at hand, age can be related to experience, but the relationship is not simple; moreover, brain function at different ages is different (time demarcation dilates with age). Experience as a modifier of behaviour is quite variable, and just because a cohort in a certain territorial and temporal sense may have similar historical experiences does not imply that any behaviour resulting from those experiences can be extrapolated to other cohorts without fear of contradiction. With respect to a controlled trial, the commonality of experiences is an uncontrolled variable that introduces an unknown and unexplored amount of spurious correlation into any binary comparison, such that any difference found should not be thought of as revealing a probable causal linkage. Moreover, a probable cause, when found, would only constitute a suspicion and not something one can state with conviction; it is at best a working hypothesis, not a firm conclusion. Convictions concerning causality should only be drawn from a body of evidence that is inclusive enough for those convictions to be without reasonable doubt. That is not the case for the question above, for which there is not enough information to claim any causal relationship beyond a coincidental context from cohort grouping. One can, indeed, formulate so many hypotheses, for example, that the evolution of generosity with age is modified by cultural/historical epoch experience, that no firm conclusions can be drawn from the problem as stated.
|
16,062
|
Example of how the log-sum-exp trick works in Naive Bayes
|
In
$$
p(Y=C|\mathbf{x}) = \frac{p(\mathbf{x}|Y=C)p(Y=C)}{~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k)}
$$
both the denominator and the numerator can become very small, typically because each $p(x_i \vert C_k)$ can be close to 0 and we multiply many of them together. To prevent underflows, one can simply take the log of the numerator, but one needs to use the log-sum-exp trick for the denominator.
More specifically, in order to prevent underflows:
If we only care about knowing which class $(\hat{y})$ the input $(\mathbf{x}=x_1, \dots, x_n)$ most likely belongs to with the maximum a posteriori (MAP) decision rule, we don't have to apply the log-sum-exp trick, since we don't have to compute the denominator in that case. For the numerator one can simply take the log to prevent underflows: $\log \left( p(\mathbf{x}|Y=C)p(Y=C) \right)$. More specifically:
$$\hat{y} = \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}}p(C_k \vert x_1, \dots, x_n)
= \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}} \ p(C_k) \displaystyle\prod_{i=1}^n p(x_i \vert C_k)$$
which becomes after taking the log:
$$
\begin{align}
\hat{y} &= \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}} \log \left( p(C_k \vert x_1, \dots, x_n) \right)\\
&= \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}} \log \left( \ p(C_k) \displaystyle\prod_{i=1}^n p(x_i \vert C_k) \right) \\
&= \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}} \left( \log \left( p(C_k) \right) + \ \displaystyle\sum_{i=1}^n \log \left(p(x_i \vert C_k) \right) \right)
\end{align}$$
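The argmax above translates directly into code. Here is a minimal sketch of the MAP decision in log space; all priors and likelihoods are made-up illustrative numbers, not values from the question:

```python
import math

# Toy MAP decision in log space for two classes and three observed features.
log_prior = [math.log(0.6), math.log(0.4)]             # log p(C_k)
log_lik = [                                            # log p(x_i | C_k)
    [math.log(0.01), math.log(0.02), math.log(0.03)],  # class C_1
    [math.log(0.02), math.log(0.02), math.log(0.02)],  # class C_2
]

# score_k = log p(C_k) + sum_i log p(x_i | C_k): sums instead of products,
# so no underflow even with thousands of small factors.
scores = [lp + sum(ll) for lp, ll in zip(log_prior, log_lik)]
y_hat = max(range(len(scores)), key=lambda k: scores[k])
print(y_hat)  # index of the class with the highest log posterior score
```

Note that the denominator never appears: it is the same for every class, so it cannot change the argmax.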
If we want to compute the class probability $p(Y=C|\mathbf{x})$, we will need to compute the denominator:
$$ \begin{align}
\log \left( p(Y=C|\mathbf{x}) \right)
&= \log \left( \frac{p(\mathbf{x}|Y=C)p(Y=C)}{~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k)} \right)\\
&= \log \left( \underbrace{p(\mathbf{x}|Y=C)p(Y=C)}_{\text{numerator}} \right) - \log \left( \underbrace{~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k)}_{\text{denominator}} \right)\\
\end{align}
$$
The element $\log \left( ~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k) \right)$ may underflow because $p(x_i \vert C_k)$ can be very small: it is the same issue as in the numerator, but this time we have a summation inside the logarithm, which prevents us from transforming the $p(x_i \vert C_k)$ (can be close to 0) into $\log \left(p(x_i \vert C_k) \right)$ (negative and not close to 0 anymore, since $0 \leq p(x_i \vert C_k) \leq 1$). To circumvent this issue, we can use the fact that $p(x_i \vert C_k) = \exp \left( {\log \left(p(x_i \vert C_k) \right)} \right)$ to obtain:
$$\log \left( ~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k) \right) =\log \left( ~\sum_{k=1}^{|C|}{} \exp \left( \log \left( p(\mathbf{x}|Y=C_k)p(Y=C_k) \right) \right) \right)$$
At that point, a new issue arises: $\log \left( p(\mathbf{x}|Y=C_k)p(Y=C_k) \right)$ may be quite negative, which implies that $ \exp \left( \log \left( p(\mathbf{x}|Y=C_k)p(Y=C_k) \right) \right) $ may become very close to 0, i.e. underflow. This is where we use the log-sum-exp trick:
$$\log \sum_k e^{a_k} = \log \sum_k e^{a_k}e^{A-A} = A+ \log\sum_k e^{a_k -A}$$
with:
$a_k=\log \left( p(\mathbf{x}|Y=C_k)p(Y=C_k) \right)$,
$A = \underset{k \in \{1, \dots, |C|\}}{\max} a_k.$
We can see that introducing the variable $A$ avoids underflows. E.g. with $k=2, a_1 = - 245, a_2 = - 255$, we have:
$\exp \left(a_1\right) = \exp \left(- 245\right) =3.96143\times 10^{- 107}$
$\exp \left(a_2\right) = \exp \left(- 255\right) =1.798486 \times 10^{-111}$
Using the log-sum-exp trick we avoid the underflow, with $A=\max ( -245, -255 )=-245$:
$\begin{align}\log \sum_k e^{a_k} &= \log \sum_k e^{a_k}e^{A-A} \\&= A+ \log\sum_k e^{a_k -A}\\ &= -245+ \log\sum_k e^{a_k +245}\\&= -245+ \log \left(e^{-245 +245}+e^{-255 +245}\right) \\&=-245+ \log \left(e^{0}+e^{-10}\right) \end{align}$
We avoided the underflow since $e^{-10}$ is much farther away from 0 than $3.96143\times 10^{- 107}$ or $1.798486 \times 10^{-111}$.
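The worked example above can be reproduced in a few lines of Python. This is a minimal sketch; the helper name `logsumexp` is our own definition here, not a library import:

```python
import math

def logsumexp(a):
    # Stable log(sum_k exp(a_k)): factor out A = max(a) so the largest
    # term inside the sum becomes exp(0) = 1 and cannot underflow.
    A = max(a)
    return A + math.log(sum(math.exp(x - A) for x in a))

# The values from the example: a_1 = -245, a_2 = -255
print(logsumexp([-245.0, -255.0]))    # -245 + log(1 + e^-10) ≈ -244.99995

# With more extreme log-probabilities the naive route fails outright,
# since math.exp(-1000) underflows to 0.0 and log(0) is undefined:
print(logsumexp([-1000.0, -1010.0]))  # still computed safely
```

The second call would crash if computed naively as `math.log(math.exp(-1000) + math.exp(-1010))`, which is exactly the underflow the trick avoids.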
|
16,063
|
Example of how the log-sum-exp trick works in Naive Bayes
|
Suppose we want to identify which of two databases is more likely to have generated a phrase (for example, which novel is this phrase more likely to have come from). We could assume independence of the words conditional on the database (Naive Bayes assumption).
Now look up the second link you have posted. There $a$ would represent the joint probability of observing the sentence given a database and the $e^{b_{t}}$s would represent the probability of observing each of the words in the sentence.
|
16,064
|
Example of how the log-sum-exp trick works in Naive Bayes
|
We can see from this answer that the smallest positive float in Python (taken just as an example) is 5e-324 due to IEEE 754, and this hardware limitation applies to other languages as well.
In [2]: np.nextafter(0, 1)
Out[2]: 5e-324
And any float smaller than that underflows to 0.
In [3]: np.nextafter(0, 1)/2
Out[3]: 0.0
And let's see the function of Naive Bayes with discrete features and two classes as you required:
$$
p(S=1|w_1, ... w_n) = \frac{p(S=1) \prod_{i=1}^n p(\mathbf{w_i}|S=1)}{~\sum_{s=\{0, 1\}}p(S=s)\prod_{i=1}^n p(\mathbf{w_i}|S=s)}
$$
Let me instantiate that function with a simple NLP task below.
We want to detect whether an incoming email is spam ($S=1$) or not ($S=0$). We have a word vocabulary of size 5,000 ($n=5{,}000$), and for simplicity the only concern is whether a word $w_i$ occurs in the email ($p(w_i|S=1)$) or not ($1-p(w_i|S=1)$) (Bernoulli naive Bayes).
In [1]: import numpy as np
In [2]: from sklearn.naive_bayes import BernoulliNB
# let's train our model with 200 samples
In [3]: X = np.random.randint(2, size=(200, 5000))
In [4]: y = np.random.randint(2, size=(200, 1)).ravel()
In [5]: clf = BernoulliNB()
In [6]: model = clf.fit(X, y)
We can see that $p(S=s)\prod_{i=1}^n p(\mathbf{w_i}|S=s)$ would be very small because of the probabilities (both $p(w_i|S=1)$ and $1-p(w_i|S=1)$ are between 0 and 1) in $\prod_{i=1}^{5000}$, and hence we can be sure the product would be smaller than 5e-324, so we just get $0/0$.
In [7]: (np.nextafter(0, 1)*2) / (np.nextafter(0, 1)*2)
Out[7]: 1.0
In [8]: (np.nextafter(0, 1)/2) / (np.nextafter(0, 1)/2)
/home/lerner/anaconda3/bin/ipython3:1: RuntimeWarning: invalid value encountered in double_scalars
#!/home/lerner/anaconda3/bin/python
Out[8]: nan
In [9]: l_cpt = model.feature_log_prob_
In [10]: x = np.random.randint(2, size=(1, 5000))
In [11]: cls_lp = model.class_log_prior_
In [12]: probs = np.where(x, np.exp(l_cpt[1]), 1-np.exp(l_cpt[1]))
In [13]: np.exp(cls_lp[1]) * np.prod(probs)
Out[13]: 0.0
Then the problem arises: how can we calculate the probability that the email is spam, $p(S=1|w_1, ... w_n)$? Or rather, how can we calculate the numerator and the denominator?
We can see the official implementation in sklearn:
jll = self._joint_log_likelihood(X)
# normalize by P(x) = P(f_1, ..., f_n)
log_prob_x = logsumexp(jll, axis=1)
return jll - np.atleast_2d(log_prob_x).T
For the numerator it converts the product of probabilities into a sum of log likelihoods, and for the denominator it uses logsumexp from scipy, which is:
out = log(sum(exp(a - a_max), axis=0))
out += a_max
We cannot add two joint probabilities by adding their joint log likelihoods, so we must leave log space and return to probability space. But we cannot add the two true probabilities directly because they are too small, so we scale them first and do the addition: $\sum_{s=\{0,1\}} e^{jll_s - max\_jll}$. We then take the result back into log space, $\log\sum_{s=\{0,1\}} e^{jll_s - max\_jll}$, and rescale it in log space by adding $max\_jll$ back: $max\_jll+ \log\sum_{s=\{0,1\}} e^{jll_s - max\_jll}$.
And here is the derivation:
$\begin{align}
\log \sum_{s=\{0,1\}} e^{jll_s} & =
\log \sum_{s=\{0,1\}} e^{jll_s}e^{max\_jll-max\_jll} \\& =
\log e ^{max\_jll}+ \log\sum_{s=\{0,1\}} e^{jll_s - max\_jll} \\& =
max\_jll+ \log\sum_{s=\{0,1\}} e^{jll_s - max\_jll}
\end{align}$
where $max\_jll$ is the $a\_max$ in the code.
Once we get both the numerator and the denominator in log space we can get the log conditional probability($\log p(S=1|w_1, ... w_n)$) by subtracting the denominator from the numerator:
return jll - np.atleast_2d(log_prob_x).T
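Putting the pieces together outside of sklearn, here is a minimal sketch of recovering the posterior from the joint log likelihoods; the `jll` numbers are hypothetical values chosen so that the raw probabilities would underflow:

```python
import numpy as np
from scipy.special import logsumexp

# Hypothetical joint log likelihoods log(p(x|S=s) p(S=s)) for s = 0, 1;
# the raw probabilities np.exp(jll) would underflow to 0.0 in float64.
jll = np.array([-900.0, -905.0])

log_prob_x = logsumexp(jll)        # log of the denominator p(w_1, ..., w_n)
log_posterior = jll - log_prob_x   # log p(S=s | w_1, ..., w_n)
posterior = np.exp(log_posterior)  # safe to exponentiate: values are O(1)
print(posterior)                   # roughly [0.9933, 0.0067]; sums to 1
```

This is the same subtraction as `jll - np.atleast_2d(log_prob_x).T` in the sklearn source, just for a single sample.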
Hope that helps.
Reference:
1. Bernoulli Naive Bayes Classifier
2. Spam Filtering with Naive Bayes – Which Naive Bayes?
|
16,065
|
How to use Levene test function in R?
|
Let's say that, in R, your 1st sample is stored in a vector named sample1 and your 2nd sample is stored in a vector named sample2.
You first have to combine your two samples into a single vector and create another vector defining the two groups:
y <- c(sample1, sample2)
and
group <- as.factor(c(rep(1, length(sample1)), rep(2, length(sample2))))
Now, you can call
library(car)
levene.test(y, group)
EDIT
When trying this in R, I got the following warning:
'levene.test' has now been removed. Use 'leveneTest' instead...
According to this, you should have a look at leveneTest instead...
|
16,066
|
How to use Levene test function in R?
|
Ocram's answer has all of the important pieces. However, you don't need to load all of Rcmdr if you don't want to. The relevant library is "car". But as ocram indicates, levene.test is deprecated. Note that the deprecation is not a change of functionality or code (at this point, 09/18/2011). It simply is a change in the function name. So levene.test and leveneTest will work the same. For the record I thought I'd provide an example using leveneTest and reusable reshaping code for this simple case:
#Creating example code
sample1 <- rnorm(20)
sample2 <- rnorm(20)
#General code to reshape two vectors into a long data.frame
twoVarWideToLong <- function(sample1, sample2) {
  data.frame(
    GroupID = as.factor(c(rep(1, length(sample1)), rep(2, length(sample2)))),
    DV = c(sample1, sample2)
  )
}
#Reshaping the example data
long.data <- twoVarWideToLong(sample1,sample2)
#There are many different calls here that will work... but here is an example
leveneTest(DV~GroupID,long.data)
|
16,067
|
How to use Levene test function in R?
|
The easiest way (in my opinion) to prepare the data is to use the reshape2 package:
#Load packages
library(reshape2)
library(car)
#Creating example data
sample1 <- rnorm(20)
sample2 <- rnorm(20)
#Combine data
sample <- as.data.frame(cbind(sample1, sample2))
#Melt data
dataset <- melt(sample)
#Compute test
leveneTest(value ~ variable, dataset)
|
16,068
|
Literature on IV quantile regression
|
I would take a gander at the 7 Chernozhukov and Hansen IVQR papers. The 2005 paper is often cited. They also provide links to data and code in MATLAB, OX and Stata.
Another frequently cited paper in this literature is Abadie, Angrist, and Imbens (2002).
Frolich and Melly (2010) and Kwak (2010) are also worth checking out, especially if you use Stata. Both provide code.
|
16,069
|
Literature on IV quantile regression
|
Even though this question already has an accepted answer, I think I can still contribute to this. The Koenker (2005) book will really not get you far because developments in IV quantile regression started to pick up around that time.
The early IV quantile regression techniques include the causal chain framework by Chesher (2003), which was further developed in the weighted average deviations approach (WAD) by Ma and Koenker (2006). In this paper they also introduce the control variate approach. A similar idea was used by Lee (2007) who derived an IV quantile regression estimator using control functions.
All of these estimators make use of an assumed triangular error structure which is necessary for identification. The problem with this is that this triangular structure is implausible for endogeneity problems that arise due to simultaneity. For instance, you cannot use these estimators for a supply-demand estimation problem.
The estimator by Abadie, Angrist and Imbens (2002), that Dimitriy V. Masterov mentioned, assumes that you have both a binary endogenous variable and a binary instrument. In general, this is a very restrictive framework but it extends the LATE approach from linear regression IV to quantile regressions. This is nice because many researchers, especially in economics, are familiar with the LATE concept and the interpretation of the resulting coefficients.
The seminal paper by Chernozhukov and Hansen (2005) really kicked off this literature, and these two guys have done a lot of work in this area. The IV quantile regression estimator (IVQR) provides a natural link to the 2SLS estimator in the quantile context. Their estimator is implemented in Matlab and Ox, as Dimitriy pointed out, but you can forget about the Kwak (2010) paper: it never made it into the Stata Journal, and the code does not run properly. I assume he abandoned this project.
Instead you should consider the smoothed estimating equations IVQR (SEE-IVQR) estimator by Kaplan and Sun (2012). This is a recent estimator which is an improvement over the original IVQR estimator in terms of computational speed (it avoids the burdensome grid search algorithm) and mean squared error. The Matlab code is available here.
The paper by Frölich and Melly (2010) is nice because it considers the difference between conditional and unconditional quantile regression. The problem with quantile regression in general is that once you include covariates in your regression, the interpretation changes. In OLS you can always go from the conditional to the unconditional expectation via the law of iterated expectations but for quantiles this is not available. This problem was first shown by Firpo (2007) and Firpo et al. (2009). He uses a re-centered influence function in order to marginalize conditional quantile regression coefficients such that they can be interpreted as the usual OLS coefficients. For your purpose, this estimator won't help much because it allows for exogenous variables only. If you are interested, Nicole Fortin makes the Stata code available on her website.
The most recent unconditional IV quantile regression estimator I know of is by Powell (2013). His generalized (IV) quantile regression estimator allows you to estimate marginal quantile treatment effects in the presence of endogeneity. Somewhere on the RAND website he also makes his Stata code available, I couldn't find it just now though. Since you asked for it: in an earlier paper he had implemented this estimator in the panel data context (see Powell, 2012). This estimator is great because unlike all previous panel data QR methods this estimator does not rely on large T asymptotics (which you usually don't have, at least not in microeconometric data).
Last but not least, a more exotic variant: the censored IVQR estimator (CQIV) by Chernozhukov et al. (2011) allows you to handle censored data, as the name suggests. It is an extension of the paper by Chernozhukov and Hong (2003), which I don't link because it is not for the IV context. This estimator is computationally heavy, but if you have censored data and no other way around it, this is the way to go. Amanda Kowalski has published the Stata code on her website, or you can download it from RePEc. This estimator (and, by the way, also the IVQR and SEE-IVQR) assumes that you have a continuous endogenous variable. I have used these estimators in the context of earnings regressions where education was my endogenous variable, which took on 18 to 20 distinct values, so not exactly continuous. But in simulation exercises I could always show that this is not a problem. However, this is probably application dependent, so if you decide to use this, double-check it.
|
16,070
|
Literature on IV quantile regression
|
The new Handbook of Quantile Regression has two excellent chapters on these topics:
"Instrumental Variable Quantile Regression" by Chernozhukov, Hansen, and Wüthrich (draft on Chris Hansen's website)
"Local Quantile Treatment Effects" by Melly and Wüthrich (draft on Blaise Melly's website)
|
16,071
|
Which to believe: Kolmogorov-Smirnov test or Q-Q plot?
|
I don't see any sense in not "believing" the Q-Q plot (if you've produced it properly); it's just a graphical representation of the reality of your data, juxtaposed with the definitional distribution. Clearly it's not a perfect match, but if it's good enough for your purposes, that may be more or less the end of the story. You may want to check out this related question: Is normality testing 'essentially useless'?
The $p$-value from the KS test is basically telling you that your sample size is large enough to give strong evidence against the null hypothesis that your data belong to exactly the same distribution as your reference distribution (I assume you referenced the gamma distribution; you may want to double-check that you did). That seems clear enough from the Q-Q plot as well (i.e., there are some small but seemingly systematic patterns of deviation), so I don't think there's truly any conflicting information here.
Whether your data are too different from a gamma distribution for your intended purposes is another question. The KS test alone can't answer it for you (because its outcome will depend on your sample size, among other reasons), but the Q-Q plot might help you decide. You might also want to look into robust alternatives to any other analyses you plan to run, and if you're particularly serious about minding the sensitivity of any subsequent analyses to deviations from the gamma distribution, you might want to consider doing some simulation testing too.
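The sample-size point is easy to see by simulation (a Python sketch with scipy; the gamma shape of 2.0 and the 10% contamination are invented purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# 45,000 draws from the reference gamma(shape=2) plus a 10% contamination:
# the data are *close* to the reference distribution, but not exactly it
x = np.concatenate([rng.gamma(2.0, size=45_000),
                    rng.gamma(2.5, size=5_000)])

# KS test against the fully specified reference distribution
stat, p = stats.kstest(x, "gamma", args=(2.0,))
print(stat, p)  # a small D statistic, yet p is far below 0.05 at this n
```

With an even larger sample the p-value only shrinks further, while the deviation itself, and the practical question of whether it matters, stays exactly the same.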
|
16,072
|
Which to believe: Kolmogorov-Smirnov test or Q-Q plot?
|
What you could do is create multiple samples from your theoretical distribution and plot those on the background of your QQ-plot. That will give you an idea of what kind of variability you can reasonably expect from just sampling.
You can extend that idea to create an envelope around the theoretical line, using the example from pages 86-89 of:
Venables, W.N. and Ripley, B.D. 2002. Modern Applied Statistics with S. New York: Springer.
This will be a point-wise envelope. You can extend that idea even further to create an overall envelope using the ideas from pages 151-154 of:
Davison, A.C. and Hinkley, D.V. 1997. Bootstrap methods and their application. Cambridge: Cambridge University Press.
However, for basic exploration I think just plotting a couple of reference samples in the background of your QQ-plot will be more than enough.
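The pointwise-envelope idea can be sketched in a few lines (Python/NumPy here rather than S; the gamma shape of 2 and the 999 replicates are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, shape = 200, 2.0                       # sample size and assumed gamma shape
data = np.sort(rng.gamma(shape, size=n))  # observed order statistics

# Simulate 999 reference samples from the theoretical distribution and sort
# each one; column j then holds 999 realisations of the j-th order statistic
sims = np.sort(rng.gamma(shape, size=(999, n)), axis=1)

# Pointwise 95% envelope for each order statistic
lo, hi = np.quantile(sims, [0.025, 0.975], axis=0)

# Order statistics of the data falling outside the band flag pointwise departures
n_outside = int(np.sum((data < lo) | (data > hi)))
```

In a QQ plot you would draw lo and hi against the theoretical quantiles; since the data here come from the reference distribution itself, few points should leave the band.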
|
16,073
|
Which to believe: Kolmogorov-Smirnov test or Q-Q plot?
|
The KS test assumes particular parameters of your distribution. It tests the hypothesis "the data are distributed according to this particular distribution". You might have specified these parameters somewhere. If not, defaults that may not match your data could have been used. Note that the KS test becomes conservative if estimated parameters are plugged into the hypothesized distribution.
However, most goodness-of-fit tests are used the wrong way round. If the KS test had not shown significance, that would not mean that the model you wanted to prove is appropriate. That's what @Nick Stauner said about too small sample size. This issue is similar to point hypothesis tests and equivalence tests.
So in the end: Only consider the QQ-plots.
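The conservativeness under estimated parameters is easy to check by simulation (a Python sketch; the normal model, sample size of 100, and 200 replications are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_rep = 200
rejections = 0
for _ in range(n_rep):
    x = rng.normal(loc=3.0, scale=2.0, size=100)
    # Plugging the *estimated* mean and sd into the hypothesized distribution:
    p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue
    rejections += p < 0.05

frac_rejected = rejections / n_rep
# The rejection rate lands far below the nominal 5% level: the test is
# conservative when the null parameters come from the same data.
```

A properly calibrated test would reject about 5% of the time here; correcting for the estimation is what tests like Lilliefors' do.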
|
16,074
|
Which to believe: Kolmogorov-Smirnov test or Q-Q plot?
|
A QQ plot is an exploratory data analysis technique and should be treated as such; so are all other EDA plots. They are only meant to give you preliminary insights into the data at hand. You should never conclude or stop your analysis based on EDA plots like the QQ plot. It is wrong advice to consider only QQ plots; you should definitely go by quantitative techniques like the KS test. Suppose you have another QQ plot for a similar data set: how would you compare the two without a quantitative tool? The right next step for you, after the EDA and the KS test, is to find out why the KS test is giving a low p-value (in your case, it could even be due to some error).
EDA techniques are NOT meant to serve as decision-making tools. In fact, I would say even inferential statistics are meant to be only exploratory: they give you pointers as to which direction your statistical analysis should proceed. For example, a t-test on a sample would only give you a confidence level that the sample may (or may not) belong to the population; you may still proceed further based on that insight as to what distribution your data belong to, what its parameters are, etc. In fact, some state that even the techniques implemented in machine learning libraries are exploratory in nature; I hope they mean it in this sense!
Making statistical decisions on the basis of plots or visualization techniques alone makes a mockery of the advances in statistical science. If you ask me, you should use these plots as tools for communicating the final conclusions of your quantitative statistical analysis.
|
16,075
|
Is there a law that says if you do enough trials, rare things happen?
|
Law of truly large numbers:
... with a sample size large enough, any outrageous thing is likely to happen.
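The quantitative core of that law: if an event has probability $p$ on each of $n$ independent trials, it happens at least once with probability $1-(1-p)^n$, which approaches 1 as $n$ grows no matter how small $p$ is. A tiny illustration (the numbers are just examples):

```python
# Probability that an event with per-trial probability p occurs at least once in n trials.
def at_least_once(p, n):
    return 1 - (1 - p) ** n

p = 1e-6                             # a "one in a million" event
for n in (10**6, 10**7, 10**8):
    print(n, at_least_once(p, n))    # climbs toward 1 as n grows
```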
|
16,076
|
Is there a law that says if you do enough trials, rare things happen?
|
You could explain that even as an event specified a priori, the probability that it occurs is not low. Indeed, it's not so hard to calculate the probability of 3 or more rolls of sixes in a row for at least one die out of 200.
[Incidentally, there's a nice approximate calculation you can use - if you have $n$ trials, each with probability $1/n$ of 'success' (for $n$ not too small), the chance of at least one 'success' is about $1-1/e$. More generally, for $kn$ trials, the probability is about $1-e^{-k}$. In your case you're looking at $m = kn$ trials, each with probability $1/n$ of success, where $n=216$ and $m=200$, so $k = 200/216$, giving a probability of around 60% that you'll see 3 sixes in a row at least once out of the 200 sets of 3 rolls.
I don't know that this specific calculation has a particular name, but the general area of rare events with many trials is related to the Poisson distribution. Indeed the Poisson distribution itself is sometimes called 'the law of rare events', and even occasionally 'the law of small numbers' (with 'law' in these cases meaning 'probability distribution').]
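The bracketed approximation is easy to check numerically; here is a quick sketch comparing the exact probability $1-(1-1/216)^{200}$ with the approximation $1-e^{-200/216}$:

```python
import math

n, m = 216, 200                # p = 1/216 per set of three rolls; 200 sets
exact = 1 - (1 - 1 / n) ** m   # at least one run of three sixes
approx = 1 - math.exp(-m / n)  # the 1 - e^{-k} approximation with k = m/n
print(exact, approx)           # both come out near 0.60
```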
--
However, if you didn't specify that particular event before the rolling and only say afterward 'Hey, wow, what are the chances of that?', then your probability calculation is wrong, because it ignores all the other events about which you'd say 'Hey, wow, what are the chances of that?'.
You've only specified the event after you observe it, for which 1/216 doesn't apply, even with only one die.
Imagine I have a wheelbarrow full of small, but distinguishable dice (maybe they have little serial numbers) - say I have ten thousand of them. I tip the wheelbarrow full of dice out:
die # result
00001 4
00002 1
00003 5
. .
. .
. .
09999 6
10000 6
... and I go "Hey! Wow, what are the chances I'd get '4' on die #1 and '1' on die #2 and ... and '6' on die #999 and '6' on die #10000?"
That probability is $\left(\frac{1}{6}\right)^{10000}$ or about $3.07\times 10^{-7782}$. That's an astonishingly rare event! Something amazing must be going on. Let me try again. I shovel them all back in, and tip the wheelbarrow out again. Again I say "hey, wow, what are the chances??" and again it turns out I have an event of such astonishing rarity it should only happen once in the lifetime of a universe or something. What's up?
Simply, I am doing nothing but trying to calculate the probability of an event specified after the fact as if it had been specified a priori. If you do that, you get crazy answers.
|
16,077
|
Is there a law that says if you do enough trials, rare things happen?
|
I think that your statement "If you do enough tests, even unlikely things are bound to happen", would be better expressed as "If you do enough tests, even unlikely things are likely to happen". "bound to happen" is a bit too definite for a probability issue and I think the association of unlikely with likely in this context makes the point you are trying to put over.
|
16,078
|
Is there a law that says if you do enough trials, rare things happen?
|
I think what you need is a zero-one law. The most famous of these is the Kolmogorov zero-one law, which states that any tail event - roughly, an event whose occurrence does not depend on any finite number of the trials - occurs either with probability 0 or with probability 1. That is to say, for such events there is no grey area: over an infinite sequence of trials they either almost surely happen or almost surely don't.
|
16,079
|
Is there a law that says if you do enough trials, rare things happen?
|
Glivenko–Cantelli theorem (Wikipedia Link)
This theorem says, loosely speaking, that as the number of samples grows, the empirical distribution tends toward ("converges") the true distribution.
In that sense, if there truly is a nonzero probability of an event happening, enough observations should lead to you seeing it happen, since your empirical CDF has to tend toward the true CDF that gives such an event positive, even if small, probability.
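A small numerical sketch of the theorem (uniform samples are an assumption for illustration): the Kolmogorov-Smirnov-style sup gap between the empirical CDF and the true CDF shrinks as the sample grows.

```python
import numpy as np

def ks_gap(n, rng):
    """Sup distance between the empirical CDF of n Uniform(0,1) draws and the true CDF F(x) = x."""
    x = np.sort(rng.uniform(size=n))
    i = np.arange(1, n + 1)
    # The supremum is attained just before or at a sample point.
    return max(np.max(i / n - x), np.max(x - (i - 1) / n))

rng = np.random.default_rng(0)
gap_small, gap_large = ks_gap(100, rng), ks_gap(100_000, rng)
print(gap_small, gap_large)   # the gap shrinks as the sample grows
```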
|
16,080
|
Precise meaning of and comparison between influential point, high leverage point, and outlier?
|
Imagine any regression line fitted to some data.
Now imagine an extra data point, an outlier some distance away from the main body of the data, but one which lies somewhere along that regression line.
If the regression line were to be refitted, the coefficients would not change. Conversely, deleting the extra outlier would have zero influence on the coefficients.
So, an outlier or leverage point would have zero influence if it were perfectly consistent with the rest of the data and the model that rest implies.
For "line" read "plane" or "hyperplane" if desired, but the simplest example of two variables and a scatter plot is enough here.
However, as you are fond of definitions -- often, it seems, tending to read too much into them -- here is my favourite definition of outliers:
"Outliers are sample values that cause surprise in relation to the majority of the sample" (W.N. Venables and B.D. Ripley. 2002. Modern applied
statistics with S. New York: Springer, p.119).
Crucially, surprise is in the mind of the beholder and is dependent on some tacit or explicit model of the data. There may be another model under which
the outlier is not surprising at all, say if the data really are lognormal or gamma rather than normal.
P.S. I don't think that leverage points necessarily lack neighbouring observations. For example, they may occur in pairs.
|
16,081
|
Precise meaning of and comparison between influential point, high leverage point, and outlier?
|
It's easy to illustrate how a high leverage point might not be influential in the case of a simple linear model:
The blue line is a regression line based on all the data, the red line ignores the point at the top right of the plot.
This point fits the definition of a high leverage point you just provided as it is far away from the rest of the data. Because of that, the regression line (the blue one) has to pass close to it. But since its position largely fits the pattern observed in the rest of the data, the other model would predict it very well (i.e. the red line already passes close to it in any case) and it is therefore not particularly influential.
Compare this to the following scatterplot:
Here, the point on the right of the plot is still a high leverage point but this time it does not really fit the pattern observed in the rest of the data. The blue line (the linear fit based on all the data) passes very close but the red line does not. Including or excluding this one point changes the parameter estimates dramatically: It has a lot of influence.
Note that the definitions you cited and the examples I just gave might seem to imply that high leverage/influential points are, in some sense, univariate “outliers” and that the fitted regression line will pass close to points with the highest influence but it need not be the case.
In this last example, the observation on the bottom right has a (relatively) large effect on the fit of the model (visible again through the difference between the red and the blue lines) but it still appears to be far away from the regression line while being undetectable in univariate distributions (represented here by the “rugs” along the axes).
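The first two scatterplots can be reproduced numerically rather than visually: add a far-out point that lies on the trend (high leverage, little influence), then one that lies off the trend (high leverage, large influence), and compare the fitted slopes. This is only an illustrative sketch; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 30)
y = 2 * x + rng.normal(0, 0.02, 30)      # points scattered tightly around y = 2x

def slope(xs, ys):
    return np.polyfit(xs, ys, 1)[0]

base = slope(x, y)
# High leverage, consistent with the trend (like the first plot): slope barely moves.
s_lever = slope(np.append(x, 10.0), np.append(y, 20.0))
# High leverage, off the trend (like the second plot): slope changes dramatically.
s_influ = slope(np.append(x, 10.0), np.append(y, 0.0))
print(base, s_lever, s_influ)
```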
|
16,082
|
Precise meaning of and comparison between influential point, high leverage point, and outlier?
|
Great intuitive and visual answers above; let me add a few formulas for the case of linear regression in two dimensions. The predictor at a given data point $x_i$ is
$\hat{y} = \beta_0 + \beta_1 x_i$.
Being a high-leverage point concerns only $x_i$: $|x_i - \bar{x}|$ is unusually large compared with the rest of the sample. Being an outlier concerns $y_i$: the residual $|\hat{y}_i - y_i|$ at a given $x_i$ is unusually large.
For the same residual $|\hat{y}_i - y_i|$, a high-leverage point has a stronger influence on the $\beta_1$ that best fits the data by minimizing $\sum_{i=1}^N (\beta_0 + \beta_1 x_i - y_i)^2$, because of the way $x_i$ multiplies $\beta_1$ in the predictor. My mental picture is that a point far away from the data mean has a "longer lever" with which to push the slope than a point near the mean.
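That lever arm can be made exact: the OLS slope is linear in the responses, so perturbing a single $y_i$ by $\delta$ shifts $\hat{\beta}_1$ by exactly $\delta\,(x_i - \bar{x}) / \sum_j (x_j - \bar{x})^2$. A quick numerical check with illustrative numbers:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 10.0])     # the last point sits far from the mean
y = 2.0 * x
ssx = ((x - x.mean()) ** 2).sum()

def slope(ys):
    return np.polyfit(x, ys, 1)[0]

# Nudge each y_i by +1 and record how far the fitted slope moves.
for i in range(len(x)):
    y2 = y.copy()
    y2[i] += 1.0
    shift = slope(y2) - slope(y)
    # The shift equals (x_i - mean(x)) / ssx: the "lever arm" of point i.
    print(i, shift, (x[i] - x.mean()) / ssx)
```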
|
16,083
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main stream?
|
Gradient-free learning is in the mainstream very heavily, but not used heavily in deep learning. Methods used for training neural networks that don't involve derivatives are typically called "metaheuristics." In computer science and pattern recognition (which largely originated in electrical engineering), metaheuristics are the go-to for NP-hard problems, such as airline flight scheduling, traffic route planning to optimize fuel consumption by delivery trucks, or the traveling salesman problem (annealing). As an example see swarm-based learning for neural networks or genetic algorithms for training neural networks or use of a metaheuristic for training a convolutional neural network. These are all neural networks which use metaheuristics for learning, and not derivatives.
While metaheuristics encompass a wide swath of the literature, they're just not strongly associated with deep learning, as these are different areas of optimization. Look up "solving NP-hard problems with metaheuristics." Last, recall that the gradients used for training neural networks have nothing to do with the derivatives of a function that a neural network might be used to minimize (or maximize). (That would be called function approximation using a neural network, as opposed to classification analysis via a neural network.) They're merely derivatives of the error or cross-entropy with respect to the connection weights within the network.
In addition, the derivatives of a function may not be known, or the problem can be too complex for using derivatives. Some of the newer optimization methods involve finite differencing as a replacement for derivatives, since compute times are getting faster, and derivative-free methods are becoming less computationally expensive in the time complexity.
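To make the contrast concrete, here is a minimal, illustrative sketch of simulated annealing in one dimension - one of the metaheuristics mentioned above - minimizing a non-differentiable toy objective using only function evaluations (the schedule and proposal scale are arbitrary choices, not a recommendation):

```python
import math
import random

def anneal(f, x0, steps=5000, t0=1.0, seed=0):
    """Minimal 1-D simulated annealing: no gradients, only function evaluations."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9      # linear cooling schedule
        cand = x + rng.gauss(0, 0.5)         # random neighbour proposal
        fc = f(cand)
        # Always accept downhill moves; accept uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# A non-differentiable objective (minimum at x = 3), where gradient descent
# is not directly applicable but annealing still works.
best, fbest = anneal(lambda x: abs(x - 3), x0=-4.0)
print(best, fbest)
```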
|
16,084
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main stream?
|
Great question! To put it briefly, "Gradient Free Learning" (i.e. "metaheuristics", as pointed out by @user0123456789) is usually used when the "gradient" (i.e. derivative) of the loss function cannot be evaluated. This can occur in instances such as:
The derivative of the loss function does not exist (e.g. contains "indicator functions", piecewise functions)
The derivative of the loss function exists, but is very costly to evaluate (e.g. I have heard talks in which gradient free optimization techniques were suggested for various problems involving reinforcement learning)
Discrete Combinatorics/Optimization problems (this is kind of related to the first point, but imagine trying to optimize functions in which the inputs are a set of discrete objects and the output is a value associated with different inputs - for example: travelling salesman problem, knapsack optimization, scheduling, etc.)
Gradient Free Optimization Techniques (e.g. Evolutionary Algorithms, Genetic Algorithm, Simulated Annealing, Particle Swarm, etc.) are sometimes preferred for certain types of problems, such as "games", in which optimal strategies are developed by mutating and combining random strategies according to their performance with respect to some target (e.g. https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_topologies , https://www.youtube.com/watch?v=OGHA-elMrxI)
The other note that I wanted to add is that in situations where the gradient of the loss function can be evaluated (e.g. classic MLP neural networks), I think there might be some theoretical results that guarantee the probabilistic convergence of Stochastic Gradient Descent (i.e. the opposite of gradient-free learning) to a global optimum given infinite iterations (I could be wrong about this). With gradient-free optimization techniques, as far as I know there is no such guarantee (over here, I myself asked a question about the "Schema Theorem", which uses Markov Chains to supposedly guarantee an improvement in results as the number of iterations of the Genetic Algorithm increases: https://math.stackexchange.com/questions/4295279/does-the-following-computer-science-optimization-theorem-have-a-proof).
To sum everything up - chances are that if the derivative of your loss function "exists", try using classical gradient based optimization techniques. If the derivative does not exist, consider using Gradient Free based techniques.
For example, over here I asked a question about identifying "clusters" in a dataset such that the "proportion of zeros in all columns for a given cluster" is minimized. As far as I can tell, there is no standard "gradient" in this problem, making it an ideal candidate for gradient-free optimization techniques: https://or.stackexchange.com/questions/7488/mixed-integer-programming-optimization-using-the-genetic-algorithm
Hope this helps!
|
16,085
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main stream?
|
The reason we don't use gradient-free methods for training neural nets is simple: gradient-free methods don't work as well as gradient-based methods.
Gradient-based methods converge faster, to better solutions. Gradient-free methods tend to scale poorly (for instance, one of the papers you cite only tests on MNIST, which is a tiny dataset and task; the other tests on CIFAR-10 but is a gradient-based method) and tend to yield inferior results (for instance, one of the papers you cite reports 97% accuracy on MNIST; but state-of-the-art accuracy on MNIST is well over 99%).
In general, when a gradient is available and the loss surface is not too messy, typically gradient-based methods work better than gradient-free methods. Gradient-free methods are useful when it is not easy to compute the gradient or when the loss function is not very smooth.
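To make the comparison concrete, here is a toy experiment (objective and budgets invented for illustration): with an equal budget, gradient descent on a smooth quadratic gets vastly closer to the optimum than greedy random search:

```python
import random

d = 50  # dimensionality
f = lambda w: sum((wi - 1.0) ** 2 for wi in w)    # smooth loss, optimum at w = 1
grad = lambda w: [2.0 * (wi - 1.0) for wi in w]   # its exact gradient

# Gradient descent: 100 gradient steps.
w = [0.0] * d
for _ in range(100):
    w = [wi - 0.1 * gi for wi, gi in zip(w, grad(w))]
loss_gd = f(w)

# Gradient-free greedy random search: 100 function evaluations.
random.seed(0)
w = [0.0] * d
loss_rs = f(w)
for _ in range(100):
    cand = [wi + random.gauss(0, 0.1) for wi in w]
    f_cand = f(cand)
    if f_cand < loss_rs:
        w, loss_rs = cand, f_cand

# loss_gd is essentially 0; loss_rs has barely moved from the starting loss of 50
```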
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main str
|
The reason we don't use gradient-free methods for training neural nets is simple: gradient-free methods don't work as well as gradient-based methods.
Gradient-based methods converge faster, to better
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main stream?
The reason we don't use gradient-free methods for training neural nets is simple: gradient-free methods don't work as well as gradient-based methods.
Gradient-based methods converge faster, to better solutions. Gradient-free methods tend to scale poorly (for instance, one of the papers you cite only tests on MNIST, which is a tiny dataset and task; the other tests on CIFAR-10 but is a gradient-based method) and tend to yield inferior results (for instance, one of the papers you cite reports 97% accuracy on MNIST; but state-of-the-art accuracy on MNIST is well over 99%).
In general, when a gradient is available and the loss surface is not too messy, typically gradient-based methods work better than gradient-free methods. Gradient-free methods are useful when it is not easy to compute the gradient or when the loss function is not very smooth.
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main str
The reason we don't use gradient-free methods for training neural nets is simple: gradient-free methods don't work as well as gradient-based methods.
Gradient-based methods converge faster, to better
|
16,086
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main stream?
|
In my understanding it's a consequence of the high number of variables that neural networks tend to require when tackling interesting problems. For simple tasks gradient-free methods work very well and are quite capable of beating gradient-based methods, as many of them deal with non-convex functions/local optima better than the grad-based methods and that tends to be the biggest issue for low dimensional problems.
However, as the number of dimensions/model variables increases, two things happen:
Local optima cease to be optima and become saddles instead. To be, say, a local minimum a zero-gradient point must be a minimum with respect to every dimension. If you have a million of these, it is practically guaranteed that it won't be a minimum in at least one. Modern gradient-based methods deal with saddles reasonably well, so as models scale up the functions become effectively convex for them.
A random perturbation of a solution candidate becomes increasingly unlikely to happen to have a direction similar to that of the gradient. That means that in grad-free methods that rely on such perturbations a lot of them will have to be made before the solution candidates move in the direction of the gradient, as opposed to just performing a random walk. Most grad-free methods fall into this category, and accordingly take a performance hit as models scale up.
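The second point is easy to check numerically (a sketch, not from the original answer): the average |cosine| between a fixed gradient direction and a random perturbation shrinks roughly like 1/sqrt(d):

```python
import math
import random

def avg_abs_cosine(d, trials=2000, seed=0):
    """Average |cos(angle)| between a fixed direction and Gaussian perturbations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v = [rng.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        total += abs(v[0]) / norm  # |cosine| with the direction (1, 0, ..., 0)
    return total / trials

c_2d = avg_abs_cosine(2)        # about 0.64
c_1000d = avg_abs_cosine(1000)  # about 0.025
```

So in 1000 dimensions a random perturbation is almost always nearly orthogonal to the gradient, which is why naive perturbation-based search slows down as models scale up.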
The exception to that rule are the methods of the evolutionary strategies family. The main idea of these is to accumulate the information about the gradient from multiple perturbations and skew the distribution of subsequent perturbations in a way that makes them more likely to be aligned with the gradient. Those perform reasonably well on deep learning tasks [1]. They require roughly a few times more resources than the amount required by the gradient-based family to do the same job, but offer a superior performance on deceptive problems and improved horizontal scalability. I think the main reason why this approach never attracted mainstream attention is because the amount of parallel hardware required to get to the point where they compare favorably to grad-based is available to very few people in the world. It's been a while, but from what I recall the breakeven point is somewhere in the hundreds of GPUs region.
[1] https://eng.uber.com/deep-neuroevolution/
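For concreteness, here is a bare-bones sketch of that idea, loosely in the spirit of such evolution strategies rather than any published implementation (the toy reward and all hyperparameters are invented):

```python
import random

def es_step(f, w, sigma=0.1, pop=50, lr=0.05, rng=random):
    """One evolution-strategies update: estimate a search gradient from the
    rewards of random perturbations (no derivatives), then step along it."""
    d = len(w)
    samples = []
    for _ in range(pop):
        eps = [rng.gauss(0, 1) for _ in range(d)]
        r = f([wi + sigma * e for wi, e in zip(w, eps)])
        samples.append((r, eps))
    r_mean = sum(r for r, _ in samples) / pop       # baseline reduces variance
    grad = [sum((r - r_mean) * eps[j] for r, eps in samples) / (pop * sigma)
            for j in range(d)]
    return [wi + lr * gj for wi, gj in zip(w, grad)]

# Maximize the reward f(w) = -||w - 1||^2 (optimum at w = 1, reward 0)
f = lambda w: -sum((wi - 1.0) ** 2 for wi in w)
rng = random.Random(0)
w = [0.0] * 5
for _ in range(200):
    w = es_step(f, w, rng=rng)
final_reward = f(w)
```

Each of the `pop` perturbations here could be evaluated on a separate worker, which is the horizontal scalability mentioned above.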
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main str
|
In my understanding it's a consequence of the high number of variables that neural networks tend to require when tackling interesting problems. For simple tasks gradient-free methods work very well a
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main stream?
In my understanding it's a consequence of the high number of variables that neural networks tend to require when tackling interesting problems. For simple tasks gradient-free methods work very well and are quite capable of beating gradient-based methods, as many of them deal with non-convex functions/local optima better than the grad-based methods and that tends to be the biggest issue for low dimensional problems.
However, as the number of dimensions/model variables increases, two things happen:
Local optima cease to be optima and become saddles instead. To be, say, a local minimum a zero-gradient point must be a minimum with respect to every dimension. If you have a million of these, it is practically guaranteed that it won't be a minimum in at least one. Modern gradient-based methods deal with saddles reasonably well, so as models scale up the functions become effectively convex for them.
A random perturbation of a solution candidate becomes increasingly unlikely to happen to have a direction similar to that of the gradient. That means that in grad-free methods that rely on such perturbations a lot of them will have to be made before the solution candidates move in the direction of the gradient, as opposed to just performing a random walk. Most grad-free methods fall into this category, and accordingly take a performance hit as models scale up.
The exception to that rule are the methods of the evolutionary strategies family. The main idea of these is to accumulate the information about the gradient from multiple perturbations and skew the distribution of subsequent perturbations in a way that makes them more likely to be aligned with the gradient. Those perform reasonably well on deep learning tasks [1]. They require roughly a few times more resources than the amount required by the gradient-based family to do the same job, but offer a superior performance on deceptive problems and improved horizontal scalability. I think the main reason why this approach never attracted mainstream attention is because the amount of parallel hardware required to get to the point where they compare favorably to grad-based is available to very few people in the world. It's been a while, but from what I recall the breakeven point is somewhere in the hundreds of GPUs region.
[1] https://eng.uber.com/deep-neuroevolution/
|
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main str
In my understanding it's a consequence of the high number of variables that neural networks tend to require when tackling interesting problems. For simple tasks gradient-free methods work very well a
|
16,087
|
How is it possible to obtain a good linear regression model when there is no substantial correlation between the output and the predictors?
|
A pair of variables may show high partial correlation (the correlation accounting for the impact of other variables) but low - or even zero - marginal correlation (pairwise correlation).
Which means that pairwise correlation between a response, y and some predictor, x may be of little value in identifying suitable variables with (linear) "predictive" value among a collection of other variables.
Consider the following data:
y x
1 6 6
2 12 12
3 18 18
4 24 24
5 1 42
6 7 48
7 13 54
8 19 60
The correlation between y and x is $0$. If I draw the least squares line, it's perfectly horizontal and the $R^2$ is naturally going to be $0$.
But when you add a new variable g, which indicates which of two groups the observations came from, x becomes extremely informative:
y x g
1 6 6 0
2 12 12 0
3 18 18 0
4 24 24 0
5 1 42 1
6 7 48 1
7 13 54 1
8 19 60 1
The $R^2$ of a linear regression model with both the x and g variables in it will be 1.
It's possible for this sort of thing to happen with every one of the variables in the model - that all have small pairwise correlation with the response, yet the model with them all in there is very good at predicting the response.
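The same data can be checked numerically; here is a Python/numpy sketch translating the example above:

```python
import numpy as np

y = np.array([6, 12, 18, 24, 1, 7, 13, 19], dtype=float)
x = np.array([6, 12, 18, 24, 42, 48, 54, 60], dtype=float)
g = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

r_xy = np.corrcoef(x, y)[0, 1]   # pairwise correlation: exactly 0

# Regress y on x and g together (with an intercept)
X = np.column_stack([np.ones_like(x), x, g])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
res = y - X @ beta
r2 = 1 - res @ res / np.sum((y - y.mean()) ** 2)   # multiple R^2: 1
# The fit is exact: y = x - 41*g
```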
Additional reading:
https://en.wikipedia.org/wiki/Omitted-variable_bias
https://en.wikipedia.org/wiki/Simpson%27s_paradox
|
How is it possible to obtain a good linear regression model when there is no substantial correlation
|
A pair of variables may show high partial correlation (the correlation accounting for the impact of other variables) but low - or even zero - marginal correlation (pairwise correlation).
Which means t
|
How is it possible to obtain a good linear regression model when there is no substantial correlation between the output and the predictors?
A pair of variables may show high partial correlation (the correlation accounting for the impact of other variables) but low - or even zero - marginal correlation (pairwise correlation).
Which means that pairwise correlation between a response, y and some predictor, x may be of little value in identifying suitable variables with (linear) "predictive" value among a collection of other variables.
Consider the following data:
y x
1 6 6
2 12 12
3 18 18
4 24 24
5 1 42
6 7 48
7 13 54
8 19 60
The correlation between y and x is $0$. If I draw the least squares line, it's perfectly horizontal and the $R^2$ is naturally going to be $0$.
But when you add a new variable g, which indicates which of two groups the observations came from, x becomes extremely informative:
y x g
1 6 6 0
2 12 12 0
3 18 18 0
4 24 24 0
5 1 42 1
6 7 48 1
7 13 54 1
8 19 60 1
The $R^2$ of a linear regression model with both the x and g variables in it will be 1.
It's possible for this sort of thing to happen with every one of the variables in the model - that all have small pairwise correlation with the response, yet the model with them all in there is very good at predicting the response.
Additional reading:
https://en.wikipedia.org/wiki/Omitted-variable_bias
https://en.wikipedia.org/wiki/Simpson%27s_paradox
|
How is it possible to obtain a good linear regression model when there is no substantial correlation
A pair of variables may show high partial correlation (the correlation accounting for the impact of other variables) but low - or even zero - marginal correlation (pairwise correlation).
Which means t
|
16,088
|
How is it possible to obtain a good linear regression model when there is no substantial correlation between the output and the predictors?
|
I assume you are training a multiple regression model, in which you have multiple independent variables $X_1$, $X_2$, ..., regressed on Y. The simple answer here is that a pairwise correlation is like running an underspecified regression model. As such, you have omitted important variables.
More specifically, when you state "there is no variable with a good correlation with the predicted variable", it sounds like you are checking the pairwise correlation between each independent variable and the dependent variable, Y. This is possible when $X_2$ brings in important, new information and helps clear up the confounding between $X_1$ and Y. With that confounding, though, we may not see a linear pair-wise correlation between $X_1$ and Y. You may also want to check the relationship between partial correlation $\rho_{x_{1},y|x_{2}}$ and multiple regression $y=\beta_1X_1 +\beta_2X_2 + \epsilon$. Multiple regression has a closer relationship with partial correlation than with pairwise correlation, $\rho_{x_{1},y}$.
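Here is a small simulated sketch of this (the data-generating process is invented for illustration): the marginal correlation of $X_1$ with Y is weak, while the partial correlation given $X_2$ is essentially 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)          # a confounder
e = 0.1 * rng.normal(size=n)    # the small part of x1 not shared with x2
x1 = z + e
x2 = z
y = x1 - x2                     # y depends on x1 only through e

r_marginal = np.corrcoef(x1, y)[0, 1]   # weak pairwise correlation

def resid(a, b):
    """Residual of a after regressing out b (with an intercept)."""
    B = np.column_stack([np.ones_like(b), b])
    return a - B @ np.linalg.lstsq(B, a, rcond=None)[0]

# Partial correlation of x1 and y given x2: correlate the two residuals
r_partial = np.corrcoef(resid(x1, x2), resid(y, x2))[0, 1]   # essentially 1
```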
|
How is it possible to obtain a good linear regression model when there is no substantial correlation
|
I assume you are training a multiple regression model, in which you have multiple independent variables $X_1$, $X_2$, ..., regressed on Y. The simple answer here is a pairwise correlation is like runn
|
How is it possible to obtain a good linear regression model when there is no substantial correlation between the output and the predictors?
I assume you are training a multiple regression model, in which you have multiple independent variables $X_1$, $X_2$, ..., regressed on Y. The simple answer here is that a pairwise correlation is like running an underspecified regression model. As such, you have omitted important variables.
More specifically, when you state "there is no variable with a good correlation with the predicted variable", it sounds like you are checking the pairwise correlation between each independent variable and the dependent variable, Y. This is possible when $X_2$ brings in important, new information and helps clear up the confounding between $X_1$ and Y. With that confounding, though, we may not see a linear pair-wise correlation between $X_1$ and Y. You may also want to check the relationship between partial correlation $\rho_{x_{1},y|x_{2}}$ and multiple regression $y=\beta_1X_1 +\beta_2X_2 + \epsilon$. Multiple regression has a closer relationship with partial correlation than with pairwise correlation, $\rho_{x_{1},y}$.
|
How is it possible to obtain a good linear regression model when there is no substantial correlation
I assume you are training a multiple regression model, in which you have multiple independent variables $X_1$, $X_2$, ..., regressed on Y. The simple answer here is a pairwise correlation is like runn
|
16,089
|
How is it possible to obtain a good linear regression model when there is no substantial correlation between the output and the predictors?
|
In vector terms, if you have a set of vectors $X$ and another vector y, then if y is orthogonal (zero correlation) to every vector in $X$, then it will also be orthogonal to any linear combination of vectors from $X$. However, if the vectors in $X$ have large uncorrelated components, and small correlated components, and the uncorrelated components are linearly dependent, then y can be correlated to a linear combination of $X$. That is, if $X={x_1,x_2 ...}$ and we take $o_i$ = component of x_i orthogonal to y, $p_i$ = component of x_i parallel to y, then if there exists $c_i$ such that $\sum c_io_i =0$, then $\sum c_ix_i$ will be parallel to y (i.e., a perfect predictor). If $\sum c_io_i$ is small, then $\sum c_ix_i$ will be a good predictor. So suppose we have $X_1$ and $X_2$ ~ N(0,1) and $E$ ~ N(0,100). Now we create new columns $X'_1$ and $X'_2$. For each row, we take a random sample from $E$, add that number to $X_1$ to get $X'_1$, and subtract it from $X_2$ to get $X'_2$. Since each row has the same sample of $E$ being added and subtracted, the $X'_1$ and $X'_2$ columns will be perfect predictors of $Y$, even though each one has just a tiny correlation with $Y$ individually.
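Here is a numpy sketch of this construction (note: the answer leaves $Y$ implicit, so I take $Y = X_1 + X_2$, and read N(0,100) as variance 100, i.e. sd 10):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 1, n)
e = rng.normal(0, 10, n)        # N(0, 100) read as variance 100, i.e. sd 10
y = x1 + x2                     # assumed response (left implicit in the text)

x1p = x1 + e                    # X'_1: X_1 plus the shared noise
x2p = x2 - e                    # X'_2: X_2 minus the same noise

corr_x1p = np.corrcoef(x1p, y)[0, 1]     # tiny individual correlation
corr_x2p = np.corrcoef(x2p, y)[0, 1]     # tiny individual correlation
max_err = np.max(np.abs(x1p + x2p - y))  # but x1p + x2p reproduces y exactly

# Regressing y on both together gives an essentially perfect fit
X = np.column_stack([np.ones(n), x1p, x2p])
res = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
r2_joint = 1 - res @ res / np.sum((y - y.mean()) ** 2)
```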
|
How is it possible to obtain a good linear regression model when there is no substantial correlation
|
In vector terms, if you have a set of vectors $X$ and another vector y, then if y is orthogonal (zero correlation) to every vector in $X$, then it will also be orthogonal to any linear combination of
|
How is it possible to obtain a good linear regression model when there is no substantial correlation between the output and the predictors?
In vector terms, if you have a set of vectors $X$ and another vector y, then if y is orthogonal (zero correlation) to every vector in $X$, then it will also be orthogonal to any linear combination of vectors from $X$. However, if the vectors in $X$ have large uncorrelated components, and small correlated components, and the uncorrelated components are linearly dependent, then y can be correlated to a linear combination of $X$. That is, if $X={x_1,x_2 ...}$ and we take $o_i$ = component of x_i orthogonal to y, $p_i$ = component of x_i parallel to y, then if there exists $c_i$ such that $\sum c_io_i =0$, then $\sum c_ix_i$ will be parallel to y (i.e., a perfect predictor). If $\sum c_io_i$ is small, then $\sum c_ix_i$ will be a good predictor. So suppose we have $X_1$ and $X_2$ ~ N(0,1) and $E$ ~ N(0,100). Now we create new columns $X'_1$ and $X'_2$. For each row, we take a random sample from $E$, add that number to $X_1$ to get $X'_1$, and subtract it from $X_2$ to get $X'_2$. Since each row has the same sample of $E$ being added and subtracted, the $X'_1$ and $X'_2$ columns will be perfect predictors of $Y$, even though each one has just a tiny correlation with $Y$ individually.
|
How is it possible to obtain a good linear regression model when there is no substantial correlation
In vector terms, if you have a set of vectors $X$ and another vector y, then if y is orthogonal (zero correlation) to every vector in $X$, then it will also be orthogonal to any linear combination of
|
16,090
|
Why is Pearson's ρ only an exhaustive measure of association if the joint distribution is multivariate normal?
|
It might be best to understand "measure of association" in a multivariate distribution to consist of all properties that remain the same when the values are arbitrarily rescaled and recentered. Doing so can change the means and variances to any theoretically allowable values (variances must be positive; means can be anything).
The correlation coefficients ("Pearson's $\rho$") then completely determine a multivariate Normal distribution. One way to see this is to look at any formulaic definition, such as formulas for the density function or characteristic function. They involve only means, variances, and covariances--but covariances and correlations can be deduced from one another when you know the variances.
The multivariate Normal family is not the only family of distributions that enjoys this property. For example, any Multivariate t distribution (for degrees of freedom exceeding $2$) has a well-defined correlation matrix and is completely determined by its first two moments, also.
|
Why is Pearson's ρ only an exhaustive measure of association if the joint distribution is multivaria
|
It might be best to understand "measure of association" in a multivariate distribution to consist of all properties that remain the same when the values are arbitrarily rescaled and recentered. Doing
|
Why is Pearson's ρ only an exhaustive measure of association if the joint distribution is multivariate normal?
It might be best to understand "measure of association" in a multivariate distribution to consist of all properties that remain the same when the values are arbitrarily rescaled and recentered. Doing so can change the means and variances to any theoretically allowable values (variances must be positive; means can be anything).
The correlation coefficients ("Pearson's $\rho$") then completely determine a multivariate Normal distribution. One way to see this is to look at any formulaic definition, such as formulas for the density function or characteristic function. They involve only means, variances, and covariances--but covariances and correlations can be deduced from one another when you know the variances.
The multivariate Normal family is not the only family of distributions that enjoys this property. For example, any Multivariate t distribution (for degrees of freedom exceeding $2$) has a well-defined correlation matrix and is completely determined by its first two moments, also.
|
Why is Pearson's ρ only an exhaustive measure of association if the joint distribution is multivaria
It might be best to understand "measure of association" in a multivariate distribution to consist of all properties that remain the same when the values are arbitrarily rescaled and recentered. Doing
|
16,091
|
Why is Pearson's ρ only an exhaustive measure of association if the joint distribution is multivariate normal?
|
Variates can be associated in ways that the Pearson correlation is completely blind to.
In the multivariate normal, the Pearson correlation is "exhaustive" in the sense that the only association possible is indexed by $\rho$. But for other distributions (even those with normal margins), there can be association without correlation. Here's a couple of plots of 3 normal random variates (x,y and x,z); they're highly associated (if you tell me the value of the $x$-variate, I'll tell you the other two, and if you tell me the $y$ I can tell you the $z$), but they are all uncorrelated.
Here's another example of associated but uncorrelated variates:
(The underlying point is being made about distributions, even though I'm illustrating it with data here.)
Even when the variates are correlated, the Pearson correlation in general doesn't tell you how -- you can get very different forms of association that have the same Pearson correlation, (but when the variates are multivariate normal, as soon as I tell you the correlation you can say exactly how standardized variates are related).
So the Pearson correlation doesn't "exhaust" the ways in which variates are associated -- they can be associated but uncorrelated, or they can be correlated but associated in quite distinct ways. [The variety of ways in which association not entirely captured by correlation can happen is quite large -- but if any of them happen, you can't have a multivariate normal. Note, however, that nothing in my discussion implies that this (that knowing $\rho$ defines the possible association) characterizes the multivariate normal, even though the title quote seems to suggest it.]
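A classic numerical instance of this (a sketch, with invented data): a variate and its square are perfectly associated, yet their Pearson correlation is essentially zero:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = x ** 2                    # y is a deterministic function of x ...

r = np.corrcoef(x, y)[0, 1]   # ... yet the Pearson correlation is ~0
```

(For a symmetric distribution of x, the population correlation between x and x^2 is exactly zero.)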
(A common way to address multivariate association is via copulas. There are numerous questions on site that relate to copulas; you may find some of them helpful)
|
Why is Pearson's ρ only an exhaustive measure of association if the joint distribution is multivaria
|
Variates can be associated in ways that the Pearson correlation is completely blind to.
In the multivariate normal, the Pearson correlation is "exhaustive" in the sense that the only association poss
|
Why is Pearson's ρ only an exhaustive measure of association if the joint distribution is multivariate normal?
Variates can be associated in ways that the Pearson correlation is completely blind to.
In the multivariate normal, the Pearson correlation is "exhaustive" in the sense that the only association possible is indexed by $\rho$. But for other distributions (even those with normal margins), there can be association without correlation. Here's a couple of plots of 3 normal random variates (x,y and x,z); they're highly associated (if you tell me the value of the $x$-variate, I'll tell you the other two, and if you tell me the $y$ I can tell you the $z$), but they are all uncorrelated.
Here's another example of associated but uncorrelated variates:
(The underlying point is being made about distributions, even though I'm illustrating it with data here.)
Even when the variates are correlated, the Pearson correlation in general doesn't tell you how -- you can get very different forms of association that have the same Pearson correlation, (but when the variates are multivariate normal, as soon as I tell you the correlation you can say exactly how standardized variates are related).
So the Pearson correlation doesn't "exhaust" the ways in which variates are associated -- they can be associated but uncorrelated, or they can be correlated but associated in quite distinct ways. [The variety of ways in which association not entirely captured by correlation can happen is quite large -- but if any of them happen, you can't have a multivariate normal. Note, however, that nothing in my discussion implies that this (that knowing $\rho$ defines the possible association) characterizes the multivariate normal, even though the title quote seems to suggest it.]
(A common way to address multivariate association is via copulas. There are numerous questions on site that relate to copulas; you may find some of them helpful)
|
Why is Pearson's ρ only an exhaustive measure of association if the joint distribution is multivaria
Variates can be associated in ways that the Pearson correlation is completely blind to.
In the multivariate normal, the Pearson correlation is "exhaustive" in the sense that the only association poss
|
16,092
|
Which variables explain which PCA components, and vice versa?
|
You are right, the loadings can help you here. They can be used to compute the correlation between the variables and the principal components. Moreover, the sum of the squared loadings of one variable over all principal components is equal to 1. Hence, the squared loadings tell you the proportion of variance of one variable explained by one principal component.
The problem with princomp is that it only shows the "very high" loadings. But since the loadings are just the eigenvectors of the covariance matrix, one can get all loadings using the eigen command in R:
loadings <- eigen(cov(USArrests))$vectors
explvar <- loadings^2
Now, you have the desired information in the matrix explvar.
|
Which variables explain which PCA components, and vice versa?
|
You are right, the loadings can help you here. They can be used to compute the correlation between the variables and the principal components. Moreover, the sum of the squared loadings of one variable
|
Which variables explain which PCA components, and vice versa?
You are right, the loadings can help you here. They can be used to compute the correlation between the variables and the principal components. Moreover, the sum of the squared loadings of one variable over all principal components is equal to 1. Hence, the squared loadings tell you the proportion of variance of one variable explained by one principal component.
The problem with princomp is that it only shows the "very high" loadings. But since the loadings are just the eigenvectors of the covariance matrix, one can get all loadings using the eigen command in R:
loadings <- eigen(cov(USArrests))$vectors
explvar <- loadings^2
Now, you have the desired information in the matrix explvar.
|
Which variables explain which PCA components, and vice versa?
You are right, the loadings can help you here. They can be used to compute the correlation between the variables and the principal components. Moreover, the sum of the squared loadings of one variable
|
16,093
|
Which variables explain which PCA components, and vice versa?
|
I think that the accepted answer can be dangerously misleading (-1). There are at least four different questions mixed together in the OP. I will consider them one after another.
Q1. How much of the variance of a given PC is explained by a given original variable? How much of the variance of a given original variable is explained by a given PC?
These two questions are equivalent and the answer is given by the square $r^2$ of the correlation coefficient between the variable and the PC. If PCA is done on the correlations, then the correlation coefficient $r$ is given (see here) by the corresponding element of the loadings. PC $i$ is associated with an eigenvector $\mathbf V_i$ of the correlation matrix and the corresponding eigenvalue $s_i$. A loadings vector $\mathbf L_i$ is given by $\mathbf L_i = (s_i)^{1/2} \mathbf V_i$. Its elements are correlations of this PC with the respective original variables.
Note that eigenvectors $\mathbf V_i$ and loadings $\mathbf L_i$ are two different things! In R, eigenvectors are confusingly called "loadings"; one should be careful: their elements are not the desired correlations. [The currently accepted answer in this thread confuses the two.]
In addition, if PCA is done on covariances (and not on correlations), then loadings will also give you covariances, not correlations. To obtain correlations, one needs to compute them manually, following PCA. [The currently accepted answer is unclear about that.]
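Here is a numpy sketch verifying the Q1 identity on simulated data (the data-generating step is invented for illustration): the loadings $\mathbf L_i = (s_i)^{1/2} \mathbf V_i$ match the variable-PC correlations, and each variable's squared loadings sum to 1:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))  # correlated variables

# PCA on the correlation matrix
Z = (X - X.mean(0)) / X.std(0, ddof=1)   # standardized variables
R = Z.T @ Z / (len(Z) - 1)               # correlation matrix
evals, V = np.linalg.eigh(R)             # V: eigenvectors (NOT the loadings)
order = np.argsort(evals)[::-1]
evals, V = np.clip(evals[order], 0, None), V[:, order]  # clip guards tiny negatives

L = V * np.sqrt(evals)                   # column i is the loadings vector L_i
scores = Z @ V                           # PC scores

# Loadings equal the variable-PC correlations:
corrs = np.array([[np.corrcoef(Z[:, j], scores[:, i])[0, 1] for i in range(4)]
                  for j in range(4)])
```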
Q2. How much of the variance of a given original variable is explained by a given subset of PCs? How to select this subset to explain e.g. $80\%$ of the variance?
Because PCs are orthogonal (i.e. uncorrelated), one can simply add up individual $r^2$ values (see Q1) to get the global $R^2$ value.
To select a subset, one can add PCs with the highest correlations ($r^2$) with a given original variable until the desired amount of explained variance ($R^2$) is reached.
Q3. How much of the variance of a given PC is explained by a given subset of original variables? How to select this subset to explain e.g. $80\%$ of the variance?
An answer to this question is not automatically given by PCA! E.g. if all original variables are very strongly inter-correlated with pairwise $r=0.9$, then correlations between the first PC and all the variables will be around $r=0.9$. One cannot add these $r^2$ numbers to compute the proportion of variance of this PC explained by, say, five original variables (this would result in a nonsensical result $R^2 = 0.9\cdot0.9\cdot5>1$). Instead, one would need to regress this PC on these variables and obtain the multiple $R^2$ value.
How to select a subset explaining given amount of variance, was suggested by @FrankHarrell (+1).
|
Which variables explain which PCA components, and vice versa?
|
I think that the accepted answer can be dangerously misleading (-1). There are at least four different questions mixed together in the OP. I will consider them one after another.
Q1. How much of the
|
Which variables explain which PCA components, and vice versa?
I think that the accepted answer can be dangerously misleading (-1). There are at least four different questions mixed together in the OP. I will consider them one after another.
Q1. How much of the variance of a given PC is explained by a given original variable? How much of the variance of a given original variable is explained by a given PC?
These two questions are equivalent and the answer is given by the square $r^2$ of the correlation coefficient between the variable and the PC. If PCA is done on the correlations, then the correlation coefficient $r$ is given (see here) by the corresponding element of the loadings. PC $i$ is associated with an eigenvector $\mathbf V_i$ of the correlation matrix and the corresponding eigenvalue $s_i$. A loadings vector $\mathbf L_i$ is given by $\mathbf L_i = (s_i)^{1/2} \mathbf V_i$. Its elements are correlations of this PC with the respective original variables.
Note that eigenvectors $\mathbf V_i$ and loadings $\mathbf L_i$ are two different things! In R, eigenvectors are confusingly called "loadings"; one should be careful: their elements are not the desired correlations. [The currently accepted answer in this thread confuses the two.]
In addition, if PCA is done on covariances (and not on correlations), then loadings will also give you covariances, not correlations. To obtain correlations, one needs to compute them manually, following PCA. [The currently accepted answer is unclear about that.]
Q2. How much of the variance of a given original variable is explained by a given subset of PCs? How to select this subset to explain e.g. $80\%$ of the variance?
Because PCs are orthogonal (i.e. uncorrelated), one can simply add up individual $r^2$ values (see Q1) to get the global $R^2$ value.
To select a subset, one can add PCs with the highest correlations ($r^2$) with a given original variable until the desired amount of explained variance ($R^2$) is reached.
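The Q2 bookkeeping can be sketched as follows (the $r^2$ values are hypothetical, as if read off a loadings matrix; because the PCs are orthogonal they sum to $1$ over the full set):

```python
# Hypothetical squared correlations (r^2) of ONE original variable
# with each of five PCs; they sum to 1 when all PCs are included.
r2_per_pc = {"PC1": 0.55, "PC2": 0.20, "PC3": 0.12, "PC4": 0.08, "PC5": 0.05}

total = sum(r2_per_pc.values())   # orthogonality lets us add r^2 directly

# Greedily add the PCs with the highest r^2 until R^2 >= 0.80.
chosen, explained = [], 0.0
for pc, r2 in sorted(r2_per_pc.items(), key=lambda kv: -kv[1]):
    if explained >= 0.80:
        break
    chosen.append(pc)
    explained += r2

print(chosen, round(explained, 2))   # ['PC1', 'PC2', 'PC3'] 0.87
```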
Q3. How much of the variance of a given PC is explained by a given subset of original variables? How to select this subset to explain e.g. $80\%$ of the variance?
An answer to this question is not automatically given by PCA! E.g. if all original variables are very strongly inter-correlated with pairwise $r=0.9$, then correlations between the first PC and all the variables will be around $r=0.9$. One cannot add these $r^2$ numbers to compute the proportion of variance of this PC explained by, say, five original variables (this would result in a nonsensical result $R^2 = 0.9\cdot0.9\cdot5>1$). Instead, one would need to regress this PC on these variables and obtain the multiple $R^2$ value.
How to select a subset explaining a given amount of variance was suggested by @FrankHarrell (+1).
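To see concretely why the naive sum fails in Q3, take just two of those variables. With each correlating $0.9$ with the PC and $0.9$ with each other, the standard two-predictor closed form $R^2 = (r_1^2 + r_2^2 - 2 r_1 r_2 r_{12})/(1 - r_{12}^2)$ gives a legal value, while the naive sum already exceeds $1$:

```python
# Correlations as in the example above: each variable correlates 0.9
# with the PC, and the two variables correlate 0.9 with each other.
r1, r2, r12 = 0.9, 0.9, 0.9

naive = r1**2 + r2**2            # 1.62 -- already an impossible "R^2"

# Multiple R^2 for two correlated predictors (standard closed form).
R2 = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)

print(round(naive, 2), round(R2, 3))   # 1.62 0.853
```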
|
16,094
|
Which variables explain which PCA components, and vice versa?
|
You can do a backwards or forwards stepwise variable selection predicting a component or a linear combination of components from their constituent variables. The $R^2$ will be 1.0 at the first step if you use backwards stepdown. Even though stepwise regression is pretty much of a disaster when predicting $Y$ it can work well when the prediction is mechanistic as is the case here. You can add or remove variables until you explain 0.8 or 0.9 (for example) of the information in the principal components.
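A minimal backward-stepdown sketch of this idea (all data, variable names, and the 0.95 threshold here are made up; the "component" is an exact linear combination of the variables, so the full model's $R^2$ is 1.0, and variables are dropped while the fit stays above the threshold):

```python
def ols_r2(columns, y):
    """R^2 of OLS of y on the given columns plus an intercept,
    solving the normal equations by Gaussian elimination."""
    X = [[1.0] + [c[i] for c in columns] for i in range(len(y))]
    p = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    for k in range(p):                      # forward elimination, partial pivoting
        piv = max(range(k, p), key=lambda q: abs(A[q][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for q in range(k + 1, p):
            f = A[q][k] / A[k][k]
            for c in range(k, p):
                A[q][c] -= f * A[k][c]
            b[q] -= f * b[k]
    beta = [0.0] * p
    for k in reversed(range(p)):            # back substitution
        beta[k] = (b[k] - sum(A[k][c] * beta[c] for c in range(k + 1, p))) / A[k][k]
    fitted = [sum(bc * xc for bc, xc in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1 - sse / sst

# Hypothetical variables; x2 nearly duplicates x1.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.1, 2.0, 2.9, 4.2, 5.0, 5.8]
x3 = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]
# The "component" is an exact linear combination, so full-model R^2 = 1.
pc = [0.6 * a + 0.6 * b + 0.5 * c for a, b, c in zip(x1, x2, x3)]

threshold = 0.95
kept = {"x1": x1, "x2": x2, "x3": x3}
while len(kept) > 1:
    # Drop the variable whose removal hurts R^2 the least...
    trials = {name: ols_r2([v for n, v in kept.items() if n != name], pc)
              for name in kept}
    best = max(trials, key=trials.get)
    if trials[best] < threshold:   # ...but stop before falling below threshold.
        break
    del kept[best]

print(sorted(kept))
```

Because x2 is nearly a copy of x1, the stepdown ends with a single variable still explaining over 95% of the component's variance.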
|
16,095
|
Which variables explain which PCA components, and vice versa?
|
The US arrests data bundled with R are just an example here, but I note that the loadings calculations in the question come from a PCA of the covariance matrix. That's somewhere between arbitrary and nonsensical, as the variables are measured on different scales.
Urban population looks like a percent. California is 91% and highest.
The three crime variables appear to be number of arrests for crimes expressed relative to population size (presumably for some time period). Presumably it's documented somewhere whether it's arrests per 1000 or 10000 or whatever.
The mean of the assault variable in the given units is about 171 and the mean murder is about 8. So, the explanation of your loadings is that in large part the pattern is an artefact: it depends on the very different variability of the variables.
So, although there is sense in the data in that there are many more arrests for assaults than for murders, etc., that known (or unsurprising) fact dominates the analysis.
This shows that, as anywhere else in statistics, you have to think about what you are doing in a PCA.
If you take this further:
I'd argue that percent urban is better left out of the analysis. It's not a crime to be urban; it might of course serve as a proxy for variables influencing crime.
A PCA based on a correlation matrix would make more sense in my view. Another possibility is to work with logarithms of arrest rates, not arrest rates (all values are positive; see below).
Note: @random_guy's answer deliberately uses the covariance matrix.
Here are some summary statistics. I used Stata, but that's quite immaterial.
Variable | Obs Mean Std. Dev. Min Max
-------------+--------------------------------------------------------
urban_pop | 50 65.54 14.47476 32 91
murder | 50 7.788 4.35551 .8 17.4
rape | 50 21.232 9.366384 7.3 46
assault | 50 170.76 83.33766 45 337
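A back-of-the-envelope check of the scale artefact, using only the standard deviations from the table above: the diagonal of the covariance matrix already shows that assault contributes roughly 96% of the total variance, so a covariance-matrix PCA's first component is essentially rescaled assault, whatever the substantive structure (a sketch, not a full PCA):

```python
# Standard deviations from the summary table above.
sd = {"urban_pop": 14.47476, "murder": 4.35551,
      "rape": 9.366384, "assault": 83.33766}

var = {k: s ** 2 for k, s in sd.items()}
total = sum(var.values())
share = {k: v / total for k, v in var.items()}

for k in sd:
    print(f"{k:10s} {share[k]:6.1%}")
# assault's share of total variance is about 95.6%, dominating the analysis
```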
|
16,096
|
How to choose between ANOVA and ANCOVA in a designed experiment?
|
As a fact of history, regression and ANOVA developed separately, and, due in part to tradition, are still often taught separately. In addition, people often think of ANOVA as appropriate for designed experiments (i.e., the manipulation of a variable / random assignment) and regression as appropriate for observational research (e.g., downloading data from a government website and looking for relationships). However, all of this is a little misleading. An ANOVA is a regression, just one where all of the covariates are categorical. An ANCOVA is a regression with qualitative and continuous covariates, but without interaction terms between the factors and the continuous explanatory variables (i.e., the so called 'parallel slopes assumption'). As for whether a study is experimental or observational, this is unrelated to the analysis itself.
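A tiny illustration of "an ANOVA is a regression" (hypothetical numbers): code a single two-level factor as a 0/1 dummy, and the regression intercept and slope are exactly the control-group mean and the difference in group means that a one-way ANOVA / t-test compares:

```python
# Hypothetical responses for the two levels of a categorical factor.
control = [4.0, 5.0, 6.0, 5.0]
treated = [7.0, 8.0, 6.0, 9.0]

# Regression encoding: dummy = 0 for control, 1 for treated.
x = [0] * len(control) + [1] * len(treated)
y = control + treated

xbar = sum(x) / len(x)
ybar = sum(y) / len(y)
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

mean_c = sum(control) / len(control)   # 5.0
mean_t = sum(treated) / len(treated)   # 7.5

print(intercept, slope)   # 5.0 2.5  (= mean_c, and mean_t - mean_c)
```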
Your experiment sounds good. I would analyze this as a regression (in my mind, I tend to call everything regression). I would include all the covariates if you are interested in them, and/or if the theories you are working with suggest they may be important. If you think the effect of some of the variables may depend on other variables, be sure to add in all of the requisite interaction terms. One thing to bear in mind is that each explanatory variable (including interaction terms!) will consume a degree of freedom, so make sure your sample size is adequate. I would not dichotomize, or otherwise make categorical, any of your continuous variables (it is unfortunate that this practice is widespread, it's really a bad thing to do). Otherwise, it sounds like you're on your way.
Update: There seems to be some concern here about whether or not to convert continuous variables into variables with just two (or more) categories. Let me address that here, rather than in a comment. I would keep all of your variables as continuous. There are several reasons to avoid categorizing continuous variables:
By categorizing you would be throwing information away--some observations are further from the dividing line & others are closer to it, but they're treated as though they were the same. In science, our goal is to gather more and better information and to better organize and integrate that information. Throwing information away is simply antithetical to good science in my opinion;
You tend to lose statistical power as @Florian points out (thanks for the link!);
You lose the ability to detect non-linear relationships as @rolando2 points out;
What if someone reads your work & wonders what would happen if we drew the line b/t categories in a different place? (For example, consider your BMI example, what if someone else 10 years from now, based on what's happening in the literature at that time, wants to also know about people who are underweight and those who are morbidly obese?) They would simply be out of luck, but if you keep everything in its original form, each reader can assess their own preferred categorization scheme;
There are rarely 'bright lines' in nature, and so by categorizing you fail to reflect the situation under study as it really is. If you are concerned that there may be an actual bright line at some point for a priori theoretical reasons, you could fit a spline to assess this. Imagine a variable, $X$, that runs from 0 to 1, and you think the relationship between this variable and a response variable suddenly and fundamentally changes at .7. Then you create a new variable (called a spline) like this:
$$
\begin{aligned}
X_{spline} &= 0 &\text{if } X\le{.7} \\
X_{spline} &= X-.7 &\text{if } X>.7
\end{aligned}
$$
then add this new $X_{spline}$ variable to your model in addition to your original $X$ variable. The model output will show a sharp break at .7, and you can assess whether this enhances our understanding of the data.
1 & 5 being the most important, in my opinion.
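The spline in point 5 is just a second regressor constructed from $X$; a minimal sketch of building it (the knot at .7 and the sample values are from the illustration above):

```python
knot = 0.7

def spline(x, knot=knot):
    # X_spline = 0 below the knot, X - knot above it.
    return max(0.0, x - knot)

xs = [0.1, 0.5, 0.7, 0.8, 0.95]
design = [(x, spline(x)) for x in xs]   # two columns: X and X_spline

for x, s in design:
    print(f"{x:4.2f} {s:5.2f}")
# Adding both columns to the model lets the slope change at the knot.
```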
|
16,097
|
Best books for an introduction to statistical data analysis?
|
I found neither How To Measure Anything nor Head First particularly good.
Statistics In Plain English (Urdan) is a good starter book.
Once you finish that, Multivariate Data Analysis (Joseph Hair et al.) is fantastic.
Good luck!
|
16,098
|
Best books for an introduction to statistical data analysis?
|
This book is dynamite:
George E. P. Box, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building
It starts from zero knowledge of Statistics but it doesn't insult the reader's intelligence. It's incredibly practical with no loss of rigour; in fact, it underscores the danger of ignoring the underlying assumptions of common tests, which are often false in real life.
It's out of print but it's very easy to find a copy. Follow the link for a few options.
|
16,099
|
Best books for an introduction to statistical data analysis?
|
I am a big fan of Statistical Models - Theory and Practice by David Freedman. It succeeds remarkably well in introducing and motivating the different concepts of statistical modeling through concrete and historically important problems (cholera in London, Yule on the causes of poverty, political repression in the McCarthy era...).
Freedman illustrates the principles of modeling, and the pitfalls. In some sense, the discussion shows how to think about the critical issues and is honest about the connection between statistical models and real-world phenomena.
|
16,100
|
Best books for an introduction to statistical data analysis?
|
The classic "orange horror" remains an excellent introduction: Exploratory Data Analysis by John Tukey.
http://www.amazon.com/Exploratory-Data-Analysis-John-Tukey/dp/0201076160
|